AI robots figure out how to play football in shambolic footage
Robots fitted with AI developed by Google’s DeepMind have figured out how to play football.
The miniature humanoid robots, which are about knee height, were able to make tackles, score goals and easily recover from falls when tripped.
To learn how to play, AI researchers first used DeepMind’s MuJoCo physics engine to train virtual versions of the robots across the equivalent of decades of simulated matches.
The simulated robots were rewarded if their movements led to improved performance, such as winning the ball from an opponent or scoring a goal.
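The reward-driven training described above can be sketched in miniature. The snippet below is an illustrative example of reward shaping only: the specific events, weights, and the `shaped_reward` function are hypothetical and are not DeepMind’s actual reward design.

```python
# Illustrative sketch of reward shaping for a simulated soccer agent.
# All reward terms and weights here are hypothetical examples,
# not DeepMind's actual training setup.

def shaped_reward(scored_goal: bool, won_ball: bool, fell_over: bool) -> float:
    """Combine match events at one timestep into a single scalar reward."""
    reward = 0.0
    if scored_goal:
        reward += 1.0    # main objective: scoring a goal
    if won_ball:
        reward += 0.1    # shaping term: encourage winning possession
    if fell_over:
        reward -= 0.05   # regularisation: discourage unsafe movement
    return reward

# An episode's return is the sum of per-step rewards; the learning
# algorithm adjusts the policy to increase this return over time.
episode_events = [
    dict(scored_goal=False, won_ball=True, fell_over=False),
    dict(scored_goal=False, won_ball=False, fell_over=True),
    dict(scored_goal=True, won_ball=False, fell_over=False),
]
episode_return = sum(shaped_reward(**e) for e in episode_events)
print(round(episode_return, 2))  # 0.1 - 0.05 + 1.0 = 1.05
```

In practice the reward signal feeds a deep reinforcement learning algorithm that updates the robot’s control policy, so behaviours that earn higher returns become more likely.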
Once they were sufficiently capable of performing the basic skills, DeepMind researchers transferred the AI into real-life versions of the bipedal bots, which were able to play one-on-one games of football against each other with no additional training required.
“The trained soccer players exhibit robust and dynamic movement skills, such as rapid fall recovery, walking, turning, kicking and more,” DeepMind noted in a blog post.
“The agents also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots.
“Although the robots are inherently fragile, minor hardware modifications, together with basic regularisation of the behaviour during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way.”
A paper detailing the research, titled ‘Learning agile soccer skills for a bipedal robot with deep reinforcement learning’, is currently under peer review.
Previous DeepMind research on football-playing AI has used different team set-ups, increasing the number of players in order to teach simulated humanoids how to work as a team.
The researchers say the work will not only advance coordination between AI systems, but also offer new pathways towards building artificial general intelligence (AGI) that matches or surpasses human-level ability.