Events
Fundamentals of AI Reading Group: LLM and Human Modes of Representation
Centre for Fundamentals of AI and Computational Theory
Shalom Lappin
Queen Mary University of London, University of Gothenburg, and King's College London
Much work on the cognitive foundations of AI has focussed on comparisons between the
ways in which Large Language Models (LLMs) and humans process information and represent
it. One aspect of this comparison involves determining the extent to which LLMs can achieve
or surpass human performance on a variety of cognitively interesting tasks. A second explores
points of convergence and divergence between LLM and human systems for processing information.
Here, I consider some recent research that has addressed both issues in two informational
domains. The first is the representation of linguistic knowledge. The second is real-world reasoning
and planning. While LLMs frequently achieve impressive levels of performance and fluency
on linguistic applications, they tend to handle linguistic content in ways that are distinct from
human processing. They are also, for the most part, less efficient than humans in learning and
generalisation for reasoning tasks.
Contact: Frederik Dahlqvist
Email: f.dahlqvist@qmul.ac.uk
Updated by: Paul Curzon
