Fluid-Motion Humanoid Robots Engage in Soccer Game

Humanoid robots play soccer with fluid movements

A team of scientists at DeepMind, an AI company, has leveraged deep reinforcement learning (Deep RL) to teach small humanoid robots how to play soccer. The robots were trained to stay focused on their objective and not lose sight of it despite external disturbances. The initial stages of the experiment were performed in simulation, and the trained behaviours were then transferred to real robots.

The researchers initially taught the robots individual skills in isolation, such as walking, kicking the ball, and falling fast. These skills were later combined and linked, enabling the robots to switch between different movement sequences. Over time, the robots also developed a basic strategic understanding of the game, anticipating ball movements and blocking opposing shots.
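To make the structure of that approach concrete, the sketch below is a toy illustration, not DeepMind's code: each "skill" stands in for a separately trained low-level policy, and a higher-level controller decides which skill to run at each step. All names, dimensions, and the hand-written switching rule are assumptions for exposition; in the actual work the high-level soccer behaviour is itself learned with Deep RL rather than scripted.

```python
import numpy as np

# Toy illustration of "skills first, combination later".
# Names, dimensions, and the switching rule are invented for exposition.
OBS_DIM, ACT_DIM = 12, 6
rng = np.random.default_rng(0)

class LinearSkillPolicy:
    """Stand-in for one separately trained low-level skill (walk, kick, get up)."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
    def act(self, obs):
        return np.tanh(self.W @ obs)

# Stage 1 (assumed): each skill policy is trained on its own.
skills = {name: LinearSkillPolicy() for name in ("walk", "kick", "get_up")}

def pick_skill(obs):
    """Hand-written rule standing in for the learned high-level soccer policy."""
    fallen, ball_close = obs[0] < 0.2, obs[1] < 0.5
    if fallen:
        return "get_up"
    return "kick" if ball_close else "walk"

# Stage 2 (assumed): the combined controller switches between skills each step.
obs = rng.normal(size=OBS_DIM)
for step in range(5):
    skill = pick_skill(obs)
    action = skills[skill].act(obs)
    print(step, skill, np.round(action[:3], 2))
    obs = obs + 0.1 * rng.normal(size=OBS_DIM)  # toy dynamics update
```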

However, the experiments, which involved repeatedly knocking the little robots over or holding them back, drew criticism on social media, with users urging the team not to "abuse" the robots. The researchers countered that these disturbances were necessary to train the robots to stand back up and to execute safer movements.
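The training rationale behind the pushes can be sketched in a few lines. The following is a minimal, purely illustrative example: random disturbances are injected into simulated episodes, so only behaviour that recovers and keeps earning reward does well. The toy environment, reward, and recovery rule are invented for exposition and are not the interfaces used in the actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyRobotEnv:
    """Stand-in for a physics simulator of a small humanoid (assumed interface)."""
    def reset(self):
        self.upright, self.t = 1.0, 0      # 1.0 = standing, 0.0 = fallen
        return np.array([self.upright])

    def apply_push(self, force):
        if abs(force) > 0.5:               # a strong push knocks the robot over
            self.upright = 0.0

    def step(self, action):
        if action > 0:                     # "get up" action restores balance
            self.upright = 1.0
        self.t += 1
        reward = self.upright              # reward staying on its feet
        return np.array([self.upright]), reward, self.t >= 20

def policy(obs):
    # Trivial recovery rule: try to get up whenever not upright.
    return 1.0 if obs[0] < 1.0 else 0.0

def episode_with_pushes(env, push_prob=0.3):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        if rng.random() < push_prob:
            env.apply_push(rng.uniform(-1.0, 1.0))  # random external disturbance
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

print("return with pushes:", episode_with_pushes(ToyRobotEnv()))
```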

The Deep RL approach has been successful in teaching the robots complex movements and strategic thinking, even when they face external disturbances. As robots become increasingly sophisticated, they are expected to take on more complex tasks in various fields, and this research takes us one step closer to that reality.
