Understanding Large Language Models and the Path to AGI
Neural networks are often perceived by the general public as a form of magic, but at their core they are a structured sequence of mathematical transformations mapping an input tensor space to an output tensor space. Large Language Models (LLMs), such as ChatGPT, operate through a series of tensor algebra operations, leveraging vast amounts of data and computation. The true "magic" emerges not from individual calculations but from scaling: as models grow larger, they exhibit emergent properties that were not explicitly programmed.
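To make the phrase "sequence of mathematical transformations" concrete, here is a minimal sketch of a two-layer feed-forward network in NumPy. The layer sizes and the ReLU nonlinearity are illustrative assumptions, not the architecture of any particular LLM; the point is only that the network is a composition of tensor operations from input space to output space.

```python
import numpy as np

# A minimal sketch: a neural network as a composition of tensor
# transformations. Shapes and the nonlinearity are illustrative
# choices, not the design of any production model.

rng = np.random.default_rng(0)

# Parameters of two affine maps (weights and biases).
W1 = rng.standard_normal((16, 8))   # maps 8-dim inputs to a 16-dim hidden space
b1 = np.zeros(16)
W2 = rng.standard_normal((4, 16))   # maps the hidden space to 4-dim outputs
b2 = np.zeros(4)

def relu(t):
    """Elementwise nonlinearity applied between the linear maps."""
    return np.maximum(t, 0.0)

def forward(x):
    """Map an input tensor to an output tensor:
    f(x) = W2 @ relu(W1 @ x + b1) + b2."""
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

x = rng.standard_normal(8)   # a point in the input tensor space
y = forward(x)               # its image in the output tensor space
print(y.shape)               # (4,)
```

Nothing here is mysterious in isolation: each step is ordinary linear algebra. The claims about emergence concern what happens when such compositions are scaled up by many orders of magnitude.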
This talk explores the implications of scale in AI, drawing lessons from nature. Evolution did not grant humans roughly 86 billion neurons and 100 trillion synaptic connections by accident; nature is economical, and the complexity of human intelligence is deeply tied to its capacity. The human brain's encephalization quotient, the ratio of actual brain mass to the brain mass expected for an animal of its body size, exceeds that of any other primate, highlighting the importance of scale in biological intelligence.
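For reference, one widely used formulation of the encephalization quotient is Jerison's, which compares observed brain mass to an allometric expectation derived from body mass; the constants below follow that convention and are included only as an illustration:

```latex
% Jerison's encephalization quotient (one common formulation):
% E = observed brain mass, P = body mass (both in grams).
\[
  \mathrm{EQ} \;=\; \frac{E}{0.12\, P^{2/3}}
\]
% EQ > 1 means a larger brain than expected for the body size;
% humans score roughly 7--8 under this formula, the highest
% among primates.
```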
A central question arises: are human intelligence and consciousness Turing-computable? If intelligence is simply the product of sufficient capacity and complexity, then in principle AI models, when scaled, should be able to reach human-level Artificial General Intelligence (AGI). But does the nature of intelligence go beyond computation? What is the Kolmogorov complexity of human intelligence? The Chinese Room argument, proposed by the philosopher John Searle, challenges the idea that syntactic manipulation alone is sufficient for genuine understanding. Meanwhile, the philosopher and cognitive scientist Daniel Dennett argues that consciousness, and with it intelligence, is an emergent property of information processing, much like what we observe in modern AI models.
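To pin down the Kolmogorov-complexity question raised above: the standard definition measures the length of the shortest program that reproduces a given object. The notation below follows the usual textbook convention and is included purely for reference:

```latex
% Kolmogorov complexity of a string x, relative to a fixed
% universal Turing machine U: the length of the shortest
% program p that makes U output x.
\[
  K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
\]
% By the invariance theorem, changing the reference machine U
% shifts K_U(x) by at most an additive constant independent of x.
```

Asking for the Kolmogorov complexity of human intelligence is thus asking how compressible its description is: whether a short program could, in principle, generate behavior indistinguishable from it.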
This talk will critically examine these perspectives, discussing whether AI is on a trajectory toward human-like cognition or whether fundamental barriers prevent computational models from replicating consciousness. Ultimately, we will explore whether the rapid scaling of AI is bringing us closer to AGI or revealing the limits of algorithmic intelligence. Is the human brain capable of super-Turing computation?