Scientists Go Serious About Large Language Models Mirroring Human Thinking
This article examines recent research merging neuroscience, psychology, and computer science to reveal surprising similarities, and crucial differences, between Large Language Models (LLMs) and the human brain, particularly in text processing and procedural reasoning.
Introduction:
The rise of LLMs has sparked intense debate about their potential to mimic human cognitive processes. Their advanced capabilities in language, reasoning, and problem-solving raise compelling questions about the principles underlying their operation. Previous articles in this series explored these questions, particularly the "Chinese room argument" and the parallels between LLM text processing and human language acquisition. Prior work also analyzed LLM "reasoning" and the impact of prompt engineering on problem-solving accuracy.
Recent Research Illuminates Striking Similarities:
This article reviews recent studies exploring the parallels and distinctions between LLMs and human brains, focusing on cognitive task performance, evaluation methodologies, and the very nature of intelligence. Five key research papers form the basis of this analysis:
Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences… This review (not yet peer-reviewed) examines the intersection of LLMs and cognitive science, detailing methods for comparing LLM and human information processing, including adaptations of cognitive psychology experiments and the use of neuroimaging data. It highlights similarities in language processing and sensory judgments while emphasizing differences in reasoning, particularly on novel problems.
Contextual feature extraction hierarchies converge in large language models and the brain – Nature… This paper analyzes twelve LLMs, assessing their ability to predict neural responses (intracranial EEG recordings) during speech comprehension. Higher-performing LLMs showed greater brain similarity, aligning their hierarchical feature extraction with the brain's speech pathways while needing fewer layers to do so. Contextual information significantly improved both model performance and brain-like processing. (A minimal sketch of this layer-wise encoding approach appears right after this list.)
Scale matters: Large language models with billions (rather than millions) of parameters better… This reviewed preprint in eLife investigates how LLM size relates to the prediction of human brain activity during natural language processing (measured with electrocorticography). Larger LLMs predicted neural activity more accurately, and the best-predicting layer shifted relatively earlier in larger models.
Shared computational principles for language processing in humans and deep language models – PubMed This 2022 study (using GPT-2) found empirical evidence that humans and LLMs share three computational principles: continuous next-word prediction, the use of pre-onset predictions to compute post-onset surprise (surprisal), and the representation of words via contextual embeddings. (A runnable surprisal sketch appears in the Similarities section below.)
Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models This preprint examines how LLMs learn to reason, contrasting reasoning strategies with factual-knowledge retrieval. It finds that reasoning is driven by procedural knowledge: models synthesize solutions from pretraining documents that demonstrate similar reasoning processes.
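The neural-alignment studies above share a common "encoding model" recipe: extract a layer's hidden states for each word of a stimulus, fit a regularized linear regression from those features to the recorded brain signal, and score each layer by cross-validated prediction accuracy. Below is a minimal sketch of that recipe, assuming GPT-2 as a conveniently small stand-in model and random numbers in place of real intracranial recordings; the studies themselves align iEEG/ECoG responses word-by-word to the stimulus.

```python
# Minimal layer-wise encoding-model sketch (illustrative, not the papers' pipeline).
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "After the storm passed, the children ran back outside to play in the sun."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Tuple of (n_layers + 1) tensors, each of shape (1, n_tokens, n_dims).
    hidden_states = model(**inputs).hidden_states

# Stand-in "neural" signal: one value per token (e.g., one electrode's response).
n_tokens = inputs.input_ids.shape[1]
neural = np.random.default_rng(0).standard_normal(n_tokens)

# Score each layer by how well its features linearly predict the signal.
for layer_idx, layer in enumerate(hidden_states):
    X = layer[0].numpy()                              # (n_tokens, n_dims)
    r2 = cross_val_score(RidgeCV(), X, neural, cv=3).mean()
    print(f"layer {layer_idx:2d}: cross-validated R^2 = {r2:.3f}")
```

With real recordings, which layer best predicts the signal, and how that layer shifts with model size, is precisely what the Nature and eLife papers quantify.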
Key Parallels and Divergences:
Similarities:
Hierarchical Language Processing: Both LLMs and the human brain process language hierarchically, with successive layers (or processing stages) extracting progressively more complex linguistic features. An LLM's task performance correlates with its ability to predict human brain activity during language processing.
Contextual Dependence: Both systems rely heavily on contextual information. Larger context windows in LLMs enhance their ability to predict human neural responses, mirroring the brain's reliance on context for comprehension (illustrated in the sketch below).
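To make next-word prediction and surprisal concrete, here is a minimal sketch, assuming GPT-2 and two illustrative sentences, that computes the surprisal of a word given its context. Richer context should lower the surprisal of a predictable continuation, which is the contextual dependence described above and the "pre-onset prediction" quantity the 2022 study compared against neural responses.

```python
# Surprisal sketch: -log2 P(word | context) under GPT-2 (illustrative).
import math
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, word: str) -> float:
    """Bits of surprise at seeing `word` immediately after `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    word_id = tokenizer(" " + word).input_ids[0]   # leading space matters in GPT-2's BPE
    with torch.no_grad():
        logits = model(ctx_ids).logits[0, -1]       # next-token distribution
    log_prob = F.log_softmax(logits, dim=-1)[word_id].item()
    return -log_prob / math.log(2)

# More context makes the same word more predictable (lower surprisal).
print(surprisal("She poured the", "coffee"))
print(surprisal("She filled her mug and poured the hot", "coffee"))
```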
Differences:
Functional vs. Formal Linguistic Competence: While LLMs excel at formal linguistic competence (grammar and syntax), they often struggle with functional competence (pragmatic, context-dependent aspects of language such as humor or sarcasm).
Memory Mechanisms: LLM memory differs significantly from human memory. Human memory is dynamic, adapting continuously with experience and association; an LLM's parametric memory is fixed after training, leaving only the context window to carry new information (see the sketch below).
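A small illustration of this difference, assuming GPT-2 as an arbitrary stand-in model: the weights are frozen at inference time, so a fact stated in one call leaves no trace in the next; any "memory" must be carried in the prompt.

```python
# Frozen-weights illustration (GPT-2 as an arbitrary example model).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token(prompt: str) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return tokenizer.decode(logits.argmax())

# The fact "Rex" is available only while it sits in the context window...
print(next_token("My dog's name is Rex. My dog's name is"))
# ...and is gone in a fresh call: same weights, no stored experience.
print(next_token("My dog's name is"))
```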
Evaluating LLMs as Cognitive Models:
Evaluating LLM cognitive abilities presents unique challenges. Researchers adapt classic cognitive psychology experiments (benchmarks such as CogBench systematize this) and use neuroimaging data to compare LLM representations with human brain activity. Interpreting these findings, however, requires caution given the fundamental architectural differences between the two systems.
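As a minimal illustration of adapting a cognitive psychology item to an LLM, consider the bat-and-ball problem from the Cognitive Reflection Test, where humans famously favor the intuitive wrong answer (10 cents) over the correct one (5 cents). The `query_llm` function below is a hypothetical stand-in for whatever model interface you use; suites like CogBench automate many such items and score them against human response data.

```python
# Sketch of a cognitive-psychology probe for an LLM (query_llm is hypothetical).
import re

PROMPT = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost? Answer with a number of cents."
)

def query_llm(prompt: str) -> str:
    """Hypothetical model call; replace with your API or local model."""
    raise NotImplementedError

def score_response(text: str) -> str:
    """Classify the answer: correct, the classic intuitive error, or other."""
    match = re.search(r"\d+", text)
    if match is None:
        return "no numeric answer"
    cents = int(match.group())
    if cents == 5:
        return "correct (5 cents)"
    if cents == 10:
        return "classic intuitive error (10 cents)"
    return f"other ({cents} cents)"

# Usage (requires a real query_llm):
# print(score_response(query_llm(PROMPT)))
```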
The Convergence Question:
The question of whether LLMs are developing true intelligence remains open. While their performance on cognitive tasks is impressive, fundamental differences from the human brain persist. The convergence of LLMs toward brain-like processing raises intriguing possibilities, but whether they will ever achieve human-level intelligence remains uncertain.
Conclusion:
The research reviewed here highlights the fascinating parallels and differences between LLMs and the human brain. This ongoing investigation not only advances our understanding of artificial intelligence but also deepens our knowledge of human cognition itself.