Li Jing
Head of Pretraining
AMI Labs · New York
Biography

Li Jing is a Chinese-American researcher, entrepreneur, and machine learning scientist currently serving as Head of Pretraining. Li earned a Ph.D. in Physics from the Massachusetts Institute of Technology (MIT), where his doctoral research spanned theoretical physics, applied mathematics, and computing systems. His academic trajectory was distinguished from an early stage: he was a gold medalist at the International Physics Olympiad (IPhO), placing him among the top competitive science students globally.

Following his doctorate, he completed a postdoctoral fellowship at Meta AI Research (FAIR) in New York, one of the world's leading industrial AI research laboratories, where he contributed to foundational machine learning research. He was subsequently named to the Forbes 30 Under 30 list, reflecting his early-career impact across both research and entrepreneurship.

Prior to his research roles, Li co-founded Lightelligence, a Boston-based startup focused on photonic computing — using light-based hardware accelerators to perform artificial intelligence computations with significantly greater energy efficiency than conventional silicon-based systems. The venture brought together his physics expertise and applied machine learning, attracting notable investment and research attention in AI hardware.

He later joined OpenAI as a researcher, where he contributed to work on large-scale model development and training systems before transitioning to AMI Labs.

At AMI Labs, Li leads pretraining efforts—the computationally intensive, foundational phase of large language and multimodal model development in which models are trained on broad datasets to acquire general capabilities. This role places him at the core of the company's technical mission.

Career History
2022–2024
OpenAI
Research Scientist
2020–2022
Meta AI (FAIR)
Postdoctoral Researcher
2017–2020
Lightelligence
Co-Founder
Key Papers
Video generation models as world simulators
We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high fidelity video. Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.
2024 · OpenAI
1,299 citations