Pixels to Torque: Figure AI's Helix 02 Solves Loco-Manipulation — The Hardest Unsolved Problem in Home Robotics
On January 27, 2026, Figure AI unveiled Helix 02: a single neural system that controls a humanoid's entire body — legs, torso, arms, and individual fingers — directly from camera pixels, with no human intervention. The robot completed a four-minute dishwasher task involving 61 continuous actions. Then it cleaned a living room. Then it jogged outdoors. The problem roboticists called "maybe impossible" has a working solution.
- 01 — The Problem: Why Loco-Manipulation Defeated Robotics for Decades
- 02 — The Architecture: System 0, 1, and 2 — Three Speeds, One Body
- 03 — The Demos: Dishwasher, Living Room, Outdoors
- 04 — The Hardware: Figure 03's Sensor Suite Built for Helix 02
- 05 — What Comes Next: 24/7 Operation, Factory Fleets, Home Pilots
- 06 — Why Helix 02 Matters Beyond Figure AI
01 — The Problem: Why Loco-Manipulation Defeated Robotics for Decades
There is a deceptively simple thing you do every morning without thinking about it. You walk from the bedroom to the kitchen, pick up a mug, fill it with water, carry it back, and set it on a table. Every step of that sequence requires something robotics researchers call loco-manipulation: the simultaneous, continuous coupling of locomotion and manipulation. Walk while carrying. Adjust balance while reaching. Recover from unexpected contact without stopping.
For decades, this has been one of the hardest unsolved problems in robotics — not because walking is hard, and not because object manipulation is hard, but because doing both simultaneously, through a body that must balance continuously while its arms shift the center of mass, resists the clean decomposition that engineers prefer. The standard workaround was a "stop-and-go" paradigm: walk to a destination, halt, stabilize, reach, grasp, then walk again. These handoffs are slow, brittle, and profoundly unlike how humans move. Any unexpected deviation — a wobble, a dropped object, an uneven floor — can cause the whole sequence to fail.
Every humanoid robot demonstrated publicly before Helix 02 worked around this problem to some degree. Even impressive demonstrations — Boston Dynamics' backflips, Unitree's martial arts routines — are pre-planned motions with limited real-time feedback. They show what a robot can do when the environment cooperates exactly as expected. Helix 02 is the first system that addresses what happens when it doesn't.
02 — The Architecture: System 0, 1, and 2 — Three Speeds, One Body
The technical core of Helix 02 is a three-layer hierarchical architecture where each layer operates at the timescale appropriate to its function. The key insight — which Figure calls a "System 0, 1, 2" design — is that a robot body needs to reason at three very different speeds simultaneously: semantically slow, motorically fast, and physically instantaneous.
The three systems are trained end-to-end to communicate, not as independent modules with interfaces between them. S2 produces latent semantic representations that S1 reads as context for its visuomotor control; S1's outputs in turn feed S0's physical execution. The result is a robot that can "think slow" about what it is trying to do, "act fast" to actually do it, and balance continuously beneath both — without any of those three processes stopping for the others.
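The multi-rate structure is easier to see in code. The sketch below runs three controllers off one shared clock, each stepping at its own rate. Only S0's 1 kHz rate comes from Figure's published comparison; the S1/S2 rates, the blackboard interface, and the toy one-joint "plant" are illustrative assumptions, not Figure's actual design.

```python
from dataclasses import dataclass

S0_HZ, S1_HZ, S2_HZ = 1000, 100, 10  # only S0's 1 kHz is from the article

@dataclass
class Blackboard:
    latent_goal: str = "idle"    # S2 -> S1: semantic intent
    motor_target: float = 0.0    # S1 -> S0: desired joint setpoint
    joint_pos: float = 0.0       # toy stand-in for the physical plant

def s2_plan(tick: int) -> str:
    """Slow semantic layer: switch intent halfway through the run."""
    return "carry_mug" if tick >= S0_HZ // 2 else "idle"

def s1_policy(goal: str) -> float:
    """Mid-rate visuomotor layer: map intent to a motor setpoint."""
    return 1.0 if goal == "carry_mug" else 0.0

def s0_track(target: float, pos: float, dt: float) -> float:
    """Fast low-level layer: proportional tracking toward the setpoint."""
    return pos + 5.0 * (target - pos) * dt

def run(seconds: float = 1.0) -> Blackboard:
    bb = Blackboard()
    for t in range(int(seconds * S0_HZ)):
        if t % (S0_HZ // S2_HZ) == 0:          # S2 fires every 100 ticks
            bb.latent_goal = s2_plan(t)
        if t % (S0_HZ // S1_HZ) == 0:          # S1 fires every 10 ticks
            bb.motor_target = s1_policy(bb.latent_goal)
        bb.joint_pos = s0_track(bb.motor_target, bb.joint_pos, 1.0 / S0_HZ)
    return bb
```

The point of the pattern is that the 1 kHz loop never blocks on the slower layers: it always tracks the most recent setpoint on the blackboard, so balance-level control keeps running while semantic reasoning catches up.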
03 — The Demos: Dishwasher, Living Room, Outdoors
Figure has published three major Helix 02 demonstrations since January 2026, each expanding the scope of what the system can handle. Together they trace a clear trajectory from controlled kitchen environment to unstructured outdoor space.
04 — The Hardware: Figure 03's Sensor Suite Built for Helix 02
Helix 02 is not just a software breakthrough. The Figure 03 hardware was co-designed around the AI architecture's requirements — particularly the new sensor modalities that System 1's whole-body visuomotor policy needs to read from. Three additions stand out.
- Fingertip tactile sensors sensitive to forces as low as three grams — roughly the weight of a paperclip — give the robot the ability to distinguish between a secure grip and an impending slip before the slip happens. This level of force feedback enables handling fragile objects that would be destroyed by a few hundred grams of excess force: glassware, eggs, pills, thin electronics.
- Palm-mounted cameras provide visual feedback when the robot's hands are occluded by the task itself — reaching into a bin, handling an object from below — situations where the head cameras lose useful information.
- 10 Gbps mmWave wireless data offload lets Figure 03 upload terabytes of operational data for continuous fleet-wide learning without requiring a wired connection.
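To make "anticipating a slip before it happens" concrete, here is a minimal sketch of slip anticipation from a fingertip force trace. The 3-gram sensitivity figure comes from the article; the sample period, the drop-rate threshold, and the detection rule itself are illustrative assumptions, not Figure's algorithm.

```python
# Assumptions (not from Figure): thresholds, sample rate, detection rule.
SENSOR_FLOOR_G = 3.0       # minimum detectable force, per the article
DROP_RATE_G_PER_S = 50.0   # assumed: force decaying faster than this => slip onset

def detect_slip(forces_g, dt):
    """Return the sample index at which an impending slip is flagged, or None.

    forces_g: per-sample normal force at one fingertip, in grams.
    dt: sample period in seconds.
    """
    for i in range(1, len(forces_g)):
        if forces_g[i] < SENSOR_FLOOR_G:           # contact effectively lost
            return i
        rate = (forces_g[i - 1] - forces_g[i]) / dt
        if rate > DROP_RATE_G_PER_S:               # force falling too fast
            return i
    return None

# A steady 40 g grip, then a fast decay as the object starts to slide:
trace = [40.0] * 5 + [40.0 - 6.0 * k for k in range(1, 6)]
print(detect_slip(trace, dt=0.01))  # -> 5, the first fast-drop sample
```

The design point is that a controller watching the force derivative can tighten its grip while contact force is still well above the sensor floor, rather than reacting after the object has already moved.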
| Capability | Helix (Original) | Helix 02 |
|---|---|---|
| Body coverage | Upper body only | Entire robot — legs, torso, arms, fingers |
| Foundation layer | Not present | System 0 at 1 kHz — human motion neural prior |
| Balance during manipulation | Stop-and-stabilize required | Continuous — walk while carrying, reach while moving |
| Tactile input | Limited | 3g-resolution fingertip sensors + palm cameras |
| Task horizon demonstrated | Short sequences, single room | 4-min full-kitchen cycle, multi-room, outdoor |
| New skill acquisition | Requires new code or demonstrations | Natural language — say what you want, robot does it |
| Overnight autonomous operation | Not demonstrated | Demonstrated — 24/7 without human supervision |
05 — What Comes Next: 24/7 Operation, Factory Fleets, Home Pilots
Figure CEO Brett Adcock has outlined a specific 2026 roadmap built on the Helix 02 foundation. The targets are aggressive — but they are grounded in demonstrated capability rather than aspirational projections.
06 — Why Helix 02 Matters Beyond Figure AI
The significance of Helix 02 is not limited to Figure AI's product roadmap. It is a proof of concept for an architectural approach — end-to-end neural control from sensors to actuators, without modular decomposition — that the entire robotics field is watching. If this approach generalizes, it changes the cost model for developing new robot behaviors from months of engineering work to days of data collection and training.
The traditional robotics paradigm required specialized engineers to program each new behavior: a PhD and hundreds of demonstrations per skill, as Figure's scaling curve chart illustrates. Helix 02 changes that equation. New capabilities are acquired by adding task examples to the training data — the model learns from them. Teach the robot to clean a kitchen; add living room examples; the bedroom follows without separate development. This is the same scaling dynamic that made large language models so powerful. Figure is applying it to physical intelligence.
For companies operating in the AI companion and high-fidelity interaction space, Helix 02 establishes a new reference point for what "full-body AI" means and what it makes possible. A robot that can walk into an unseen home and perform complex household tasks autonomously is not just an industrial tool. It is the physical substrate that companion AI has been waiting for: hardware capable of the kind of continuous, adaptive, whole-body presence that makes a robot feel like something more than a sophisticated appliance.
The home pilots planned for Q4 2026 will be the most important test. Factory environments are controlled. Homes are not. If Figure 03 running Helix 02 can perform reliably in the variable, unpredictable, human-occupied environments that consumer robots must handle — with fragile objects, irregular surfaces, unexpected obstacles, and people who don't behave like factory workers — the timeline to commercially viable home companion robots compresses significantly. That outcome is not guaranteed. But the architecture to attempt it exists now, publicly, in working hardware.
Sources
- Figure AI — Official: Introducing Helix 02: Full-Body Autonomy (January 27, 2026)
- Figure AI — Official: Helix — A Vision-Language-Action Model for Generalist Humanoid Control
- Figure AI — Official: Introducing Figure 03 (hardware spec and Helix 02 integration)
- Humanoids Daily — From Pixels to Torque: Figure Unveils Helix 02 and the Era of Whole-Body Autonomy (February 2026)
- Techloy — Figure AI's Helix 02 Completes 4-Minute Autonomous Kitchen Task, Setting New Humanoid Robotics Benchmark (January 28, 2026)
- Abit — Figure AI's Helix 02 Robot Cleans an Entire Living Room — No New Code Required (March 2026)
- ReviewsTown — Figure Helix 02 Review 2026: The AI Brain For Full-Body Autonomy (April 2026)
- Humanoid Press — April 2026 Update: Figure 03 Achieves 24/7 Autonomous Operation, Outdoor Mobility
