
1X Neo robots learn from video to understand the physical world
Robotics company 1X has released a new artificial intelligence system designed to help its Neo humanoid robots better understand the physical world.
The system, called the 1X World Model, uses video paired with written prompts to help robots learn new skills. The company says the model captures how objects move and interact, allowing robots to improve their behavior over time.
A world model is a type of AI that predicts how the real world works. In simple terms, it helps a robot guess what will happen next if it moves an arm, picks up an object, or walks through a room.
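The core idea can be illustrated with a minimal sketch. This is not 1X's system, which the company has not published; the function and state names here are hypothetical, and a real world model would be a learned neural network operating on video frames rather than hand-written rules.

```python
# Illustrative sketch of a world model as a next-state predictor.
# All names are hypothetical; real world models are learned from data.

def predict_next_state(state: dict, action: str) -> dict:
    """Guess what the world looks like after an action — the core
    job of a world model."""
    next_state = dict(state)
    if action == "pick_up_cup" and state.get("cup_on_table"):
        # Predicted outcome: the cup leaves the table and ends up in hand.
        next_state["cup_on_table"] = False
        next_state["cup_in_hand"] = True
    elif action == "move_arm_forward":
        # Predicted outcome: the arm advances one step.
        next_state["arm_position"] = state.get("arm_position", 0) + 1
    return next_state

state = {"cup_on_table": True, "arm_position": 0}
predicted = predict_next_state(state, "pick_up_cup")
print(predicted["cup_in_hand"])
```

A robot with such a predictor can evaluate candidate actions before executing them, choosing the one whose predicted outcome best matches the instruction.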
According to 1X, Neo robots can use video to study tasks they were not explicitly trained to perform. The model connects what the robot sees with instructions, then updates how the robot understands its surroundings.
The release comes as 1X prepares to bring Neo into homes. The company opened preorders in October and plans to ship units this year. A spokesperson said preorders exceeded expectations but declined to share shipment timing or order numbers.
Bernt Børnich, founder and chief executive of 1X, said the model allows Neo to learn from large amounts of online video and apply that knowledge in the real world. He said the system can turn prompts into actions, even when the robot has not seen an exact example before.
The company later clarified how that learning works. Neo robots do not instantly perform a new task after watching a video and receiving a prompt. Instead, the robot collects video linked to specific instructions and sends that data back to the world model.
The updated model then feeds information back into the wider robot system. Over time, this process improves how Neo understands physical space and how actions lead to outcomes.
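The loop described above can be sketched roughly as follows. The class and function names are illustrative assumptions, not 1X's actual pipeline; the point is only the flow of data from robot to model and back.

```python
# Hypothetical sketch of the described feedback loop: robots log video
# paired with instructions, that data updates the world model, and the
# updated model is distributed back to the fleet. Names are illustrative.

class WorldModel:
    def __init__(self):
        self.examples = []

    def update(self, batch):
        # Stand-in for retraining on (video, instruction) pairs.
        self.examples.extend(batch)

def collect_episode(instruction: str) -> dict:
    # Stand-in for a robot recording video while following an instruction.
    return {"video": f"<frames for '{instruction}'>",
            "instruction": instruction}

model = WorldModel()
batch = [collect_episode("put the cup in the sink")]
model.update(batch)           # data flows back to the world model
print(len(model.examples))    # the model now holds one more example
```

In practice each pass around this loop is slow and batched, which is consistent with the company's clarification that Neo does not instantly perform a new task after watching a single video.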
The model also shows how a robot plans to respond to a prompt. That visibility helps engineers see how Neo interprets instructions and predicts results.
1X says this feedback loop could eventually allow robots to respond to requests involving tasks they have never performed before, a step toward robots that learn from video rather than relying only on preprogrammed behaviors. The company did not say when those capabilities might reach consumer-ready systems.