Overly broad definition of AGI
The author's definition of AGI is quite broad and arguably sets a low bar: it focuses on functional capabilities and compares AI to *average* humans rather than to peak human performance, potentially overselling the current state of AI. Such a permissive definition invites premature declarations that AGI has been achieved.
Downplaying crucial limitations
While the paper acknowledges limitations such as the lack of physical grounding and causal reasoning, it downplays their significance for achieving true general intelligence. Attributing them merely to the models' "current stage and artificial nature" overlooks the profound difference between manipulating symbols and understanding the real world.
Over-reliance on LLM performance in limited domains
The core argument relies heavily on LLMs' performance on informational tasks such as language and code generation. While that performance is impressive, it neglects other crucial facets of general intelligence, such as creativity, social intelligence, and physical embodiment, giving a skewed picture of how close current systems are to AGI.
Narrow focus on Transformers
The paper draws sweeping generalizations about AGI based almost exclusively on the Transformer architecture and LLMs. This neglects other promising approaches, such as neurosymbolic systems, embodied robotics, and reinforcement-learning agents, and unnecessarily narrows the discussion of how general intelligence might be achieved. It presents a single narrative rather than reflecting the diversity of the field.
Lack of quantitative/experimental support
The discussion is largely philosophical: the paper offers no quantitative benchmarks or experimental evidence to support its claims.