Virtual limitations: Reinforcement learning has been used to train bots to walk inside simulations before, but transferring that ability to the real world is hard. “Many of the videos you see of virtual agents are not at all realistic,” said Chelsea Finn, an AI and robotics researcher at Stanford University who was not involved in the work. Small differences between the simulated physics inside a virtual environment and the real physics outside it, such as how friction works between a robot’s feet and the ground, can cause a robot to fail when it tries to apply what it has learned. A heavy two-legged robot can lose its balance and fall if its movements are even slightly off.
Double simulation: But training a large robot by trial and error in the real world would be dangerous. To sidestep these problems, the Berkeley team used a two-level virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. This simulation was then transferred to a second virtual environment called SimMechanics, which mirrors real-world physics with a much higher degree of accuracy, but which runs slower than real time. Only once Cassie seemed to walk well there was the learned walking model loaded into the real robot.
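The two-stage approach amounts to a gate: train cheaply in a fast, coarse simulator, then validate in a slower, higher-fidelity one, and only deploy to hardware if the policy survives the harsher check. The sketch below illustrates that control flow with placeholder functions; all names and numbers are illustrative assumptions, not the Berkeley team's actual code.

```python
# Illustrative sketch of a two-stage sim-to-real gate. The "policy" here is
# reduced to a single walking-quality score; real RL training would produce
# a neural-network controller instead.

def train_in_fast_sim(episodes):
    """Stand-in for RL training in a fast, low-fidelity simulator."""
    quality = 0.0
    for _ in range(episodes):
        quality = min(1.0, quality + 0.01)  # pretend each episode improves the policy
    return quality

def validate_in_high_fidelity_sim(policy_quality, physics_penalty=0.15):
    """The higher-fidelity simulator is harsher: unmodeled effects such as
    friction and motor dynamics (hypothetical penalty) reduce the score."""
    return policy_quality - physics_penalty

def deploy_if_ready(policy_quality, threshold=0.8):
    """Load the model onto the real robot only if it clears the validation bar."""
    return validate_in_high_fidelity_sim(policy_quality) >= threshold

policy = train_in_fast_sim(episodes=100)
print(deploy_if_ready(policy))  # True: the validated score clears the threshold
```

The key design point is that the expensive, accurate simulator is used only as a checkpoint, not for training, which keeps the trial-and-error phase fast while still catching policies that would fail under realistic physics.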
The real Cassie was able to walk using the model learned in simulation without any additional fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors on its right leg, but was able to adjust its movements to compensate. Finn thinks this is exciting work. Edward Johns, head of the Robot Learning Lab at Imperial College London, agrees. “This is the most successful example I’ve seen,” he says.
The Berkeley team hopes to use its approach to expand Cassie’s repertoire of movements. But don’t expect a dance routine anytime soon.