What does it mean for AI to understand the world?

This was the central question of the debate between Professor Yann LeCun, Turing Award laureate, Executive Chairman of Advanced Machine Intelligence (AMI) and Professor at NYU, and Professor Eric Xing, President of MBZUAI and Professor at Carnegie Mellon University.

The conversation was recorded at the Spring School AI For Impact 2026, hosted at UM6P in Benguerir.

The debate explores two major directions for world models and next-generation AI architectures:

JEPA (Joint Embedding Predictive Architecture), associated with Yann LeCun's research direction, learns abstract representations of the world and makes predictions in that representation space rather than reconstructing raw inputs.

GLP, presented by Eric Xing and his team at MBZUAI, keeps reconstruction as a validation mechanism: not as the final goal, but as a way to test whether the model has truly understood the signal.

Behind this technical discussion lies a broader question: will future AI systems be built mainly as abstraction-driven reasoners, or as models that ground and validate their understanding against the real world?

Spring School AI For Impact 2026 was co-organized by EMINES (UM6P), École Polytechnique (Industrial Processes Chair), and EMSI, with the support of OCP. The event brought together 337 in-person participants (with more than 200 joining this debate online), 27 speakers, 13 conferences, and 11 workshops, drawing attendees from 4 continents.

It was a rare exchange between two major voices in AI, at a moment when the field is actively rethinking its foundations.
