How applying cognitive diversity to LLMs could transform the user experience
Putting the Ghost in the Machine
As AI continues to evolve, so too does the experience of the people it serves.
Research by McKinsey shows that in 2025, 62% of organizations are at least experimenting with AI agents, while almost nine in ten now say they use them regularly.
Despite talk of an AI bubble, the market is currently booming: global adoption is predicted to be valued at $15 billion before the end of the decade, and ChatGPT alone is reported to reach over 500 million users worldwide every month.
Used properly, AI tools and LLMs can be invaluable. In fact, the same McKinsey survey reports that 39% of respondents attribute some level of operational income to AI, with other benefits including improvements across innovation (64%), customer satisfaction (45%), and profitability (36%).
Dean of the School of Engineering at Manhattan University, Professor Emerita of engineering design and mechanical engineering at Penn State University, and a KAI practitioner.
While these figures are encouraging, concerns about the technology remain consistent, especially around the quality and reliability of data and the inaccurate answers AI tools can generate. Inaccuracy is the risk most organizations are working to mitigate, according to McKinsey.
So, is there a way to improve the output of LLMs and get the answers we want, delivered in the way we need? Currently, the standard advice is simply to get better at writing prompts, but if we look at how humans interact with one another, there could be another solution.
Introducing cognitive diversity – and why it matters for LLMs
In humans, cognitive diversity refers to differences in how individuals think, solve problems, generate ideas and make decisions.
The KAI inventory suggests this diversity comes in the form of a natural, innate preference for the amount of structure we use as we generate solutions, organize our environment as we implement them, and respond to rules and group norms.
Adaption-Innovation Theory, on which the KAI is based, describes a spectrum that ranges from highly adaptive to highly innovative, with infinite variations in between.
Generally speaking, more adaptive individuals prefer more structure and prefer to leverage clear and consistent rules, while more innovative people prefer less structure and are more likely to ignore or change the rules to stay engaged.
One’s preference for more adaption or more innovation is not related to intelligence or motivation, and because of this there is no ideal position on the KAI spectrum.
Decades of research by Dr. M. J. Kirton into Adaption-Innovation Theory suggests that, when individuals understand their cognitive styles, solutions can be reached in more effective, actionable, and efficient ways – both alone and in teams.
But how can we apply this theory to technology, and can we train LLMs to work in a similar way? Research suggests the answer is ‘yes’.
What the research suggests
A recent paper by researchers at Carnegie Mellon University and Penn State University, Putting the Ghost in the Machine: Emulating Cognitive Style in Large Language Models, explored a fundamental question: can LLMs emulate cognitive styles if we teach them how?
The researchers taught an LLM about Adaption-Innovation Theory, giving it an understanding of cognitive diversity and how more adaptive and more innovative people behave. It was then tasked with solving three design problems using two different prompts, each specially designed with a different cognitive style in mind.
One prompt was adaptively framed, mirroring the thinking style of someone who is meticulous, attentive to detail and thrives when working with clear expectations; the other was innovatively framed, mirroring the thinking style of someone who is energized when expectations are more ambiguous and there is greater flexibility.
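The study's approach can be illustrated with a minimal sketch of style-framed prompting. The framing text below is hypothetical, written for illustration; it is not the researchers' actual wording, and the prompt-builder is a simple string helper rather than any published tool.

```python
# Illustrative cognitive-style framings inspired by Adaption-Innovation Theory.
# These descriptions are assumptions for demonstration, not the study's prompts.
STYLE_FRAMINGS = {
    "adaptive": (
        "You prefer structure and proven methods. Work within the existing "
        "paradigm, attend closely to detail, and propose solutions that are "
        "feasible and consistent with clear, established expectations."
    ),
    "innovative": (
        "You are comfortable with ambiguity and loose structure. Challenge "
        "the existing paradigm and propose unconventional solutions, even "
        "at some cost to immediate feasibility."
    ),
}

def build_prompt(design_problem: str, style: str) -> str:
    """Prefix a design problem with a cognitive-style framing."""
    if style not in STYLE_FRAMINGS:
        raise ValueError(f"unknown style: {style!r}")
    return f"{STYLE_FRAMINGS[style]}\n\nDesign problem: {design_problem}"

# The same problem, framed two ways, yields two different prompts.
problem = "Redesign a bicycle helmet to improve comfort."
adaptive_prompt = build_prompt(problem, "adaptive")
innovative_prompt = build_prompt(problem, "innovative")
```

Either string would then be sent as the instruction to an LLM; the point of the experiment is that the same underlying question, framed differently, draws out different kinds of answers.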
Answers were evaluated on feasibility (how workable and realistic the solutions were) and paradigm-relatedness (whether the ideas stayed within existing frameworks or shifted away from them).
The results revealed that the adaptive prompt resulted in more feasible, structured, traditional solutions. In contrast, the innovative prompt produced less feasible but more paradigm-challenging solutions.
Simply put, the LLM wasn't just generating solutions; it was generating the right kinds of solutions, based on its knowledge of cognitive diversity and the cognitive style expressed in the prompt. As a result, it provided a more adaptive or more innovative solution depending on how it was prompted and what the asker needed.
But what does this all mean for the future of LLMs?
Simply put, we’re wasting the power of LLMs if we don’t take cognitive diversity into account. If we want to get better, more relevant and more productive solutions from AI, and get them more efficiently, the next generation of the technology must have an understanding of cognitive diversity embedded into it.
In real life, we rarely preface a question by explaining in detail how we think or approach problems, but we know when an answer matches our way of thinking or not – and whether that is the type of answer we are seeking. If LLMs can offer us the same range of possible answers that the cognitive style spectrum represents, it will eliminate the endless cycle of prompting until we stumble on the answer we need.
This research suggests that, by integrating an understanding of human cognitive styles into the technology itself, we're giving ourselves, and our AI tools, a head start. From there, productivity, efficiency and user satisfaction all have the potential to skyrocket.