Unitree H2 Humanoid Appears at IROS 2025, No LiDAR in Sight

Fresh off its official unveiling on October 20th, the new Unitree H2 humanoid robot is already making a splash at the IROS 2025 conference in Hangzhou, China. The bipedal bot, standing an imposing 180cm tall and weighing 70kg, is turning heads not just for its surprisingly lifelike, if slightly unsettling, “bionic face,” but also for what it apparently lacks. Initial observations and the company’s own spec sheets suggest Unitree may be taking a page from Tesla’s playbook, forgoing expensive LiDAR sensors in favor of a “Humanoid Binocular Camera with Wide Field of View.”

The Unitree H2 humanoid robot being observed up close at IROS 2025.

The H2 is an evolution of the company’s previous H1 model, boasting 31 degrees of freedom and a newly confirmed 2-degree-of-freedom neck that allows for more nuanced head movements. This added flexibility was on full display in launch videos showing the robot performing martial arts and dance routines with an unnerving degree of fluidity. While Unitree has a history of developing its own 4D LiDAR for its quadruped robots, the decision to omit it from its flagship humanoid is a significant, and likely cost-saving, gamble. The focus is clearly on vision-based AI, with the H2’s onboard compute built around Intel Core i5/i7 processors and support for up to three Nvidia Jetson Orin NX modules for developers.
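For readers unfamiliar with how a “binocular camera” can stand in for LiDAR, the basic principle is stereo triangulation: the same point appears shifted horizontally between the two cameras, and that shift (disparity) is inversely proportional to depth. The snippet below is a minimal, generic OpenCV sketch of that idea, not Unitree’s actual perception stack; the image filenames, focal length, and baseline are placeholder values for illustration only.

```python
# Generic stereo-depth sketch (NOT Unitree's pipeline): recover per-pixel
# depth from a rectified left/right camera pair using OpenCV block matching.
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical filenames).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: for each left-image patch, search along the
# same row in the right image to find its horizontal shift (disparity).
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # disparity search range in pixels, multiple of 16
    blockSize=5,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth from similar triangles: Z = f * B / d, where f is focal length (px),
# B is the distance between the two cameras (m), d is disparity (px).
focal_px = 700.0    # placeholder calibration value
baseline_m = 0.12   # placeholder camera baseline
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

print("median scene depth (m):", np.median(depth_m[valid]))
```

The trade-off this illustrates is exactly the one Unitree appears to be betting on: stereo depth is cheap in hardware but shifts the burden onto software, since its accuracy degrades with distance, texture-poor surfaces, and poor lighting, conditions where LiDAR is more forgiving.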

Why is this important?

Unitree is making a bold statement by prioritizing advanced computer vision over the industry’s safety blanket, LiDAR. This “Tesla approach” could drastically lower the H2’s price point, potentially accelerating the adoption of general-purpose humanoids if, and it’s a big if, the software can reliably navigate complex environments using cameras alone. By manufacturing most of its key components in-house, from motors to sensors, Unitree already holds a significant cost advantage. If its vision-only system proves robust, it could force competitors to reconsider their reliance on expensive sensor suites, shifting the entire industry’s focus from high-cost hardware to the brutal complexity of AI-driven perception.