Meta Says Its New AI Model Can Understand the Physical World
June 11, 2025
Meta says a new generative AI model it released Wednesday could change how machines understand the physical world, opening up opportunities for smarter robots and more.
The new open-source model, called Video Joint Embedding Predictive Architecture 2, or V-JEPA 2, is designed to help artificial intelligence understand things like gravity and object permanence, Meta said.
Current models that allow AI to interact with the physical world rely on labeled data or video to mimic reality; the new approach instead emphasizes the logic of the physical world, including how objects move and interact. The model could allow AI to grasp, for example, that a ball rolling off a table will fall.
Meta said the model could be useful for devices like autonomous vehicles and robots because they wouldn’t need to be trained on every possible situation. The company called it a step toward AI that can adapt the way humans can.
One struggle in the space of physical AI has been the need for significant amounts of training data, which takes time, money and resources. At SXSW earlier this year, experts said synthetic data — training data created by AI — could help prepare a more traditional learning model for unexpected situations. (In Austin, the example used was the emergence of bats from the city’s famed Congress Avenue Bridge.)
Meta said its new model simplifies the process and makes it more efficient for real-world applications because it doesn’t rely on all of that training data.