Four-legged robot system plays soccer on varied terrain


Researchers created DribbleBot, a system for dribbling a soccer ball in the wild over a variety of natural terrains including sand, gravel, mud, and snow, using onboard sensing and computing. In addition to these soccer feats, such robots could one day assist humans in search-and-rescue missions. Photo: Mike Grimmett/MIT CSAIL

By Rachel Gordon | MIT CSAIL

If you’ve ever played soccer against a robot, it’s a familiar feeling. The sun shines on your face as the scent of grass permeates the air. You look around. A four-legged robot rushes at you, dribbling with determination.

While the bot doesn’t exhibit the level of skill that Lionel Messi does, it is an impressive dribbling system. Researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robotic system that can dribble a soccer ball under the same conditions as humans. The bot uses a mix of onboard sensing and computing to traverse a variety of natural terrains such as sand, gravel, mud, and snow, and adapts to their varying effects on the ball’s motion. Like every committed athlete, “DribbleBot” can get back up and recover the ball after it’s been dropped.

Programming robots to play soccer has been an active area of research for some time. However, the team wanted the robot to learn automatically how to actuate its legs during dribbling, enabling the discovery of hard-to-script skills for responding to a variety of terrains such as snow, gravel, sand, grass, and pavement. Enter, simulation.

Robots, balls, and fields all exist inside simulations — a digital twin of the natural world. You can load in the bot and other assets, set physics parameters, and then roll the dynamics forward from there. Four thousand versions of the robot are simulated in parallel in real time, enabling data collection 4,000 times faster than with a single robot. That’s a lot of data.
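The speedup from parallel simulation can be sketched in a few lines. The following toy model is an illustration only — the actual simulator, its physics, and its state layout are not described in the article; what carries over is the idea that a batched state lets one vectorized update advance all 4,000 robot copies per tick:

```python
import numpy as np

# Hypothetical sketch of batched simulation: 4,000 robot copies stepped at once.
NUM_ENVS = 4000   # parallel copies of the robot, as described in the article
DT = 0.02         # illustrative control timestep (50 Hz)

# Toy state: 2-D position and velocity for each simulated robot.
pos = np.zeros((NUM_ENVS, 2))
vel = np.zeros((NUM_ENVS, 2))

def step(actions):
    """Advance every environment by one timestep with a single vectorized update."""
    global pos, vel
    vel = vel + DT * actions      # actions act as accelerations in this toy model
    pos = pos + DT * vel
    return pos

# One call advances all 4,000 simulations; one physical robot would need
# 4,000 separate real-time steps to gather the same amount of experience.
step(np.ones((NUM_ENVS, 2)))
```

Each `step` call produces one transition per environment, which is exactly where the 4,000x data-collection multiplier comes from.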


The robot starts off without knowing how to dribble — it receives a reward only when it succeeds, and negative reinforcement when it makes a mistake. So it’s essentially trying to figure out what sequence of forces to apply with its legs. “One aspect of this reinforcement learning approach is that we have to design good rewards to facilitate the robot learning successful dribbling behaviors,” says MIT PhD student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a few days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the ball to match the desired velocity.”
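A reward of the kind Margolis describes can be sketched as a function of how well the ball tracks a commanded velocity. The article gives no formula, so the terms and weights below are illustrative assumptions: reward velocity tracking, lightly penalize actuator effort.

```python
import numpy as np

# Hypothetical dribbling reward: the robot earns most when the ball's velocity
# matches the commanded target, minus a small energy penalty. Term shapes and
# weights are made up for illustration, not taken from the paper.
def dribble_reward(ball_vel, target_vel, joint_torques,
                   w_track=1.0, w_energy=0.001):
    tracking_err = np.linalg.norm(ball_vel - target_vel)
    tracking_reward = np.exp(-tracking_err**2)       # 1.0 when tracking is perfect
    energy_penalty = w_energy * np.sum(joint_torques**2)
    return w_track * tracking_reward - energy_penalty

# Perfect tracking with zero torque yields the maximum reward of 1.0;
# mis-tracking the commanded velocity earns strictly less.
r_good = dribble_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.zeros(12))
r_bad = dribble_reward(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.zeros(12))
```

The exponential shaping keeps the reward smooth and bounded, which is a common choice when a policy has to discover a behavior from scratch.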

The bot can also navigate unfamiliar terrain and recover from falls thanks to a recovery controller the team built into its system. This controller lets the robot get back up after a fall and switch back to its dribbling controller to continue chasing the ball, helping it handle disturbances and out-of-distribution terrain.
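The switch between the two controllers can be pictured as a tiny state machine. This is a sketch of the hand-off logic the paragraph describes, not the authors’ implementation — the mode names and the fall/upright tests are placeholders:

```python
# Hypothetical two-mode supervisor: dribble until a fall is detected, hand
# control to the recovery controller, then resume dribbling once upright.
DRIBBLE, RECOVER = "dribble", "recover"

def next_mode(mode, fallen, upright):
    """Pick which controller runs on the next control tick."""
    if mode == DRIBBLE and fallen:
        return RECOVER        # fall detected: recovery controller takes over
    if mode == RECOVER and upright:
        return DRIBBLE        # back on its feet: resume chasing the ball
    return mode               # otherwise keep the current controller

mode = DRIBBLE
mode = next_mode(mode, fallen=True, upright=False)    # falls over
mode = next_mode(mode, fallen=False, upright=True)    # recovers, dribbles again
```

Keeping recovery as a separate controller means the dribbling policy never has to learn to stand up, which simplifies both training problems.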

“If you look around today, most robots are wheeled. But imagine that there’s a disaster scenario, a flood or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need machines to traverse uneven terrain, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, MIT professor, CSAIL principal investigator, and director of the Improbable AI Lab. “The whole point of studying legged robots is to go to terrains beyond the reach of current robotic systems,” he adds. “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

The fascination with robots and soccer runs deep — Canadian professor Alan Mackworth first noted the idea in a paper titled “On Seeing Robots,” presented at VI-92 in 1992. Japanese researchers then organized a workshop on “Grand Challenges in Artificial Intelligence,” which led to discussions about using soccer to promote science and technology. The project launched as the Robot J-League a year later, and global excitement was immediate. Shortly after that, “RoboCup” was born.

Compared to walking alone, dribbling a soccer ball places more constraints on DribbleBot’s movement and on what terrain it can traverse. The robot must adjust its locomotion to apply force to the ball in order to dribble. The interaction between the ball and the landscape can differ from the interaction between the robot and the landscape, as with thick grass or pavement. For example, a soccer ball experiences a drag force on grass that is absent on pavement, and an incline applies an accelerating force, changing the ball’s typical path. However, the bot’s ability to traverse different terrains is often unaffected by these differences in dynamics – as long as it doesn’t slip – so the soccer test can be sensitive to variations in terrain that locomotion alone isn’t.
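The terrain effects described above — drag that differs between grass and pavement, plus gravity along an incline — can be written as a simple illustrative model. The coefficients below are invented for illustration; in the actual system these dynamics are experienced in simulation rather than hand-modeled:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def ball_accel(vel, drag_coeff, slope_rad=0.0):
    """Toy ball dynamics: terrain drag opposing motion, plus slope gravity."""
    drag = -drag_coeff * vel                          # grass: large coeff; pavement: small
    slope = np.array([G * np.sin(slope_rad), 0.0])    # downhill accelerating force
    return drag + slope

v = np.array([2.0, 0.0])                 # ball rolling forward at 2 m/s
a_grass = ball_accel(v, drag_coeff=0.8)  # strong pull in thick grass
a_paved = ball_accel(v, drag_coeff=0.05) # ball rolls much more freely
a_hill = ball_accel(v, drag_coeff=0.05, slope_rad=-0.1)  # downhill speeds it up
```

Under this toy model the same kick produces very different ball trajectories on grass versus pavement, which is exactly why dribbling exposes terrain differences that walking alone does not.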

“Past approaches simplified the dribbling problem, making a modeling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously,” said Ji. “That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task which combines aspects of locomotion and dexterous manipulation together.”

On the hardware side, the robot has a set of sensors that allow it to perceive its environment, letting it feel where it is, “understand” its position, and “see” some of its surroundings. It has a set of actuators that let it apply forces and move itself and objects. In between the sensors and actuators sits the computer, or “brain,” tasked with converting sensor data into actions, which it applies through the motors. When the robot is running on snow, it doesn’t see the snow but can feel it through its motor sensors. But soccer is a trickier feat than walking — so the team leveraged cameras on the robot’s head and body to add a new sensory modality of vision, in addition to the new motor skills. And then — we dribble.
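The sense-compute-act pipeline just described is the standard robot control loop. The sketch below is generic scaffolding, not the robot’s real API — the function names and the 12-joint assumption (typical for a quadruped) are placeholders:

```python
# Hypothetical control loop: each tick, sensor data flows into the "brain",
# which emits motor commands. On DribbleBot the brain is a learned controller;
# here it is stubbed out to keep the sketch self-contained.
NUM_JOINTS = 12  # assumed: three actuated joints per leg on a quadruped

def read_sensors():
    # Placeholder for joint encoders, IMU readings, and camera features.
    return {"joint_pos": [0.0] * NUM_JOINTS, "imu": (0.0, 0.0, 0.0)}

def compute_action(obs):
    # Placeholder "brain": maps observations to one target per joint.
    return [0.0] * NUM_JOINTS

commands = []
for tick in range(3):                 # three control ticks of the loop
    obs = read_sensors()              # sense
    action = compute_action(obs)      # compute
    commands.append(action)           # act: these would be sent to the motors
```

Running this loop onboard, at a fixed rate, is what the quoted researchers mean by carrying the sensing and compute on the robot itself.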

“Our robot can go out into the wild because it carries all of its sensors, cameras, and compute on board. That required some innovations in terms of getting the whole controller to fit onto this onboard compute,” said Margolis. “That’s one area where learning helps, because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This is in stark contrast with most robots today: Typically a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robot arm itself! So, the whole thing is heavy and hard to move.”
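A “lightweight neural network” of the kind Margolis mentions can be as small as a two-layer perceptron mapping sensor readings to joint targets. The sizes, random weights, and observation dimension below are illustrative assumptions — the paper’s actual architecture is not described in this article:

```python
import numpy as np

# Hypothetical onboard policy: a tiny MLP cheap enough for real-time inference
# on the robot's own computer. All dimensions and weights are illustrative.
rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, ACT_DIM = 48, 64, 12   # e.g. 12 joint targets for a quadruped

W1 = rng.normal(0.0, 0.1, (HIDDEN, OBS_DIM))
W2 = rng.normal(0.0, 0.1, (ACT_DIM, HIDDEN))

def policy(obs):
    """Two-layer tanh MLP: noisy observations in, bounded joint targets out."""
    h = np.tanh(W1 @ obs)
    return np.tanh(W2 @ h)              # tanh keeps commands in [-1, 1]

noisy_obs = rng.normal(0.0, 1.0, OBS_DIM)   # sensor readings with noise
action = policy(noisy_obs)
```

A network this size runs in microseconds on embedded hardware, which is why learned controllers pair well with fully onboard sensing and compute.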

There’s still a long way to go in making these robots as agile as their counterparts in nature, and some terrains remain a challenge for DribbleBot. Currently, the controller isn’t trained in simulated environments that include slopes or stairs. The robot isn’t perceiving the geometry of the terrain; it’s only estimating its material contact properties, such as friction. If there’s a step up, for example, the robot will get stuck — it won’t be able to lift the ball over the step, an area the team wants to explore in the future. The researchers are also excited to apply lessons learned during the development of DribbleBot to other tasks that involve combined locomotion and object manipulation, such as quickly transporting diverse objects from place to place using the legs or arms.

This research was supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the US Air Force Research Laboratory, and the US Air Force Artificial Intelligence Accelerator. The paper will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

MIT News

