Google's AI Technology Learns Parkour By Itself

Keeping up with AI research can be a strange experience. On the one hand, you get a front-row seat to cutting-edge experimentation, with new papers outlining the ideas and methods that will probably (eventually) snowball into the biggest technological revolution of all time. On the other hand, sometimes it's just wacky and hilarious.

Case in point: a new paper from Google's AI lab DeepMind. The paper investigates how reinforcement learning (or RL) can be used to teach a computer to navigate unfamiliar and complex environments. It's the sort of fundamental AI research that is currently tested in virtual worlds, but that will one day help robots navigate the stairs in your home.

In this case, DeepMind put a simulated stick figure through an obstacle course, where it had to work out the fastest way to jump, climb, or crawl to get from point A to point B. All DeepMind's researchers gave the figure was a set of virtual sensors (so it could perceive its own body and the terrain around it) and an incentive to make forward progress. The computer worked out the rest for itself.
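To make that setup concrete, here is a minimal sketch of the kind of loop involved, assuming a toy one-dimensional course with a random policy standing in for the learning algorithm (DeepMind's actual work used full physics simulations and a distributed policy-gradient learner). The environment, its sensor readings, and the reward shape below are illustrative assumptions, not code from the paper; the point is only that the reward is forward progress and nothing else.

```python
import random

class ToyParkourEnv:
    """A toy 1-D stand-in for the paper's simulated obstacle courses."""

    def __init__(self, length=20):
        self.length = length
        # Random obstacle heights make every episode a slightly new course.
        self.obstacles = [random.choice([0, 1, 2]) for _ in range(length)]
        self.pos = 0

    def observe(self):
        # The agent's "virtual sensors": its position and the height ahead.
        ahead = self.obstacles[self.pos] if self.pos < self.length else 0
        return (self.pos, ahead)

    def step(self, action):
        # action = how high the agent tries to jump (0, 1, or 2).
        old_pos = self.pos
        if action >= self.obstacles[self.pos]:
            self.pos += 1  # cleared the obstacle, so it moves forward
        # The only reward signal is forward progress.
        reward = self.pos - old_pos
        done = self.pos >= self.length
        return self.observe(), reward, done


# A random policy stands in for the learner; the point is the reward shape.
env = ToyParkourEnv()
done, total = False, 0
while not done:
    action = random.choice([0, 1, 2])
    _, reward, done = env.step(action)
    total += reward
print("distance covered:", total)
```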

The novelty here is that the researchers are exploring how difficult environments can teach an AI complex and robust movements (e.g., using its knee to get a purchase on top of a high wall). Normally, reinforcement learning produces behaviour that's fragile and that breaks down in unfamiliar conditions, like a baby who has learned to climb the stairs in your home but can't make sense of an escalator. This research suggests that RL can be used to teach movements that hold up in settings the agent has never seen.
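The fragility problem is easy to picture with a toy example. Below, a hand-written policy stands in for an agent trained only on low obstacles: it clears a familiar course but fails the moment the terrain exceeds anything it has seen, which is the failure mode that training in richer, harder environments is meant to fix. The policy, the courses, and the heights are all illustrative assumptions, not code from the paper.

```python
import random

def trained_policy(obstacle_ahead, max_seen=1):
    # Stand-in for a learned policy: it only ever saw obstacles up to
    # height 1 in training, so it never jumps higher than that.
    return min(obstacle_ahead, max_seen)

def run_course(policy, heights):
    # Walk the course until the policy fails to clear an obstacle.
    cleared = 0
    for h in heights:
        if policy(h) < h:
            break
        cleared += 1
    return cleared

# A course like the training distribution (low obstacles)...
familiar = [random.choice([0, 1]) for _ in range(10)]
# ...and an unfamiliar, harsher one: the "escalator" case.
unfamiliar = [random.choice([0, 1, 2, 3]) for _ in range(10)]

print("familiar course:  ", run_course(trained_policy, familiar), "/ 10 cleared")
print("unfamiliar course:", run_course(trained_policy, unfamiliar), "/ 10 cleared")
```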