Xiaomi launches open-source “MiMo-Embodied” AI model for autonomous driving & robotics
News, 24 November 2025
Xiaomi has unveiled its latest release: MiMo-Embodied, an open-source foundation model designed to power both autonomous driving and embodied robotics. According to the company, this vision-language model combines perception, planning, spatial understanding and driving decision-making, and is now available to developers on platforms such as Hugging Face and GitHub.
What makes MiMo-Embodied particularly noteworthy is its cross-domain design. Rather than limiting itself to either self-driving vehicles or robotics, the model bridges both, enabling tasks such as task planning for robots and driving-path prediction for vehicles within a shared architecture. Xiaomi asserts the model achieves state-of-the-art results across 29 benchmarks covering areas such as affordance prediction (robotics) and driving planning (autonomous driving).
The strategic implications are substantial: as Xiaomi expands its EV business and robotics ambitions, releasing an open-source model signals an intent to build a broader ecosystem and tap external developers. The model's dual focus means vehicles and robots could effectively learn from each other, with indoor robot navigation improving road-vehicle perception and road-vehicle planning enhancing robot action planning. The release marks a significant step in bringing highly capable AI to real-world machines, both on the road and in factory or home environments.
In short, Xiaomi is positioning itself not just as an EV or consumer-electronics brand, but as a foundational AI player bridging physical mobility and embodied intelligence.
Compiled using AI