At a packed lecture in Pembroke Hall at Brown University on April 1, AI pioneer Yann LeCun delivered a provocative and deeply skeptical assessment of the current state of artificial intelligence. Speaking as part of the Lemley Family Leadership Lecture Series, the New York University professor and Turing Award winner argued that today’s dominant AI systems — particularly large language models (LLMs) — are fundamentally flawed because they do not truly understand the world. In his characteristically blunt style, LeCun declared that “AI sucks,” explaining that current systems merely manipulate language convincingly enough to appear intelligent while lacking any real grasp of physical reality or causality.
LeCun’s central argument was that modern AI systems are incapable of safely acting in the world because they cannot predict the consequences of their actions. He criticized the industry’s growing enthusiasm for “agentic” AI, warning that agents unable to model outcomes could become dangerous. Instead, he advocated for the development of “world models” — AI systems capable of constructing abstract predictive models of reality. Such systems, he explained, would allow machines to simulate the effects of actions before taking them, making planning and reasoning possible.
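To make the idea concrete, here is a deliberately toy sketch (not LeCun’s actual architecture — the model, cost function, and action set are all invented for illustration) of what “simulate before acting” means: the agent runs each candidate action through a predictive model and picks the one whose predicted outcome looks best, rather than acting blindly.

```python
def world_model(state, action):
    """Predict the next state. Toy version: a 1-D position nudged by the action."""
    return state + action

def cost(state, goal):
    """How far a predicted state ends up from the goal."""
    return abs(goal - state)

def plan(state, goal, candidate_actions):
    """Choose the action whose *simulated* outcome best approaches the goal.

    This is the crux of model-based planning: consequences are evaluated
    inside the model before any action is taken in the real world.
    """
    return min(candidate_actions,
               key=lambda a: cost(world_model(state, a), goal))

best = plan(state=0.0, goal=3.0, candidate_actions=[-1.0, 0.0, 1.0, 2.0])
print(best)  # 2.0 — the action whose predicted outcome lands closest to the goal
```

In a real system the world model would be a learned neural network and the planner would search over multi-step action sequences, but the principle is the same: an agent that cannot predict outcomes has nothing to plan with.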
He also challenged the economic assumptions driving the AI boom, calling the belief that LLMs alone will achieve human-level intelligence “complete BS.” According to LeCun, future breakthroughs will require AI systems trained not just on text but on diverse streams of data including video, images, audio, and scientific information. He pointed to his new startup, AMI Labs, which recently raised over $1 billion to pursue this approach.
Despite his criticism of current AI, LeCun remained optimistic about the long-term potential of machine intelligence, particularly in accelerating scientific discovery. However, he cautioned that achieving human-like intelligence remains far more difficult than many in the industry assume.