Humans have a unique ability to understand the goals, desires, and beliefs of others, which is essential for anticipating actions and collaborating effectively. This skill, known as “theory of mind”, is innate to us but remains a challenge for robots. However, if robots are to truly become collaborative helpers in manufacturing and everyday life, they need to learn these abilities too.
In a new paper, a finalist for the best paper award at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), computer science researchers from USC Viterbi aim to teach robots to predict human preferences in assembly tasks. One day, this could allow robots to assist in a variety of tasks, from building satellites to setting tables.
“When working with a human, the robot needs to constantly guess what the person is going to do next,” said lead author Heramb Nemlekar, a USC computer science PhD student supervised by Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, the robot can get the screwdriver earlier so the person doesn’t have to wait. This way the robots can help people complete assembly more quickly.”
A New Approach to Predicting Human Actions
Predicting human actions can be challenging, because different people prefer to accomplish the same task in different ways. Current techniques require people to demonstrate how they want to perform the assembly, which can be time-consuming and counterproductive. To address this issue, the researchers identified similarities in how individuals assemble different products and used this knowledge to predict their preferences.
Instead of asking individuals to “show” the robot their preferences in complex tasks, the researchers created small assembly tasks (referred to as “canonical” tasks) that can be performed quickly and easily. The robot then “watches” the human complete the task using a camera and leverages machine learning to infer the person’s preferences from their sequence of actions in the canonical task.
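The transfer idea can be sketched in a toy form: model a person’s preference as weights over action features, fit those weights to the action order observed in the short canonical task, then use them to predict the action order in a new task. The feature names, example tasks, and the simple grid-search learner below are illustrative assumptions for this sketch, not the paper’s actual model.

```python
import itertools

def simulate_sequence(actions, weights):
    """Predict an action order: the person greedily picks the
    remaining action with the highest weighted feature score."""
    remaining = dict(actions)
    sequence = []
    while remaining:
        best = max(remaining,
                   key=lambda a: sum(w * f for w, f in zip(weights, remaining[a])))
        sequence.append(best)
        del remaining[best]
    return sequence

def learn_weights(actions, observed, candidates):
    """Pick the candidate weights that best reproduce the
    sequence observed in the canonical task."""
    def agreement(w):
        return sum(a == b for a, b in zip(simulate_sequence(actions, w), observed))
    return max(candidates, key=agreement)

# Short "canonical" task: each action -> (effort, part_size) features (made up).
canonical = {"screw": (0.9, 0.2), "insert": (0.3, 0.8), "align": (0.1, 0.5)}
observed = ["align", "insert", "screw"]  # this person did low-effort actions first

grid = list(itertools.product((-1, 0, 1), repeat=2))
weights = learn_weights(canonical, observed, grid)

# Transfer the learned preference to a new, more complex task.
complex_task = {"bolt": (0.8, 0.4), "clip": (0.2, 0.6), "slot": (0.5, 0.1)}
print(weights)                                   # (-1, 0): avoids high-effort actions
print(simulate_sequence(complex_task, weights))  # ['clip', 'slot', 'bolt']
```

Because the weights are tied to features rather than to specific parts, a preference learned on the small task carries over to products the robot has never seen the person assemble.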
In user studies, the researchers’ system was able to predict human actions with approximately 82% accuracy. This approach not only saves time and effort, but also helps build trust between humans and robots. It can be useful in industrial settings, where workers assemble products on a large scale, as well as for people with disabilities or limited mobility who need assistance with product assembly.
Towards an Enhanced Future of Human-Robot Collaboration
The researchers’ goal is not to replace human workers but to improve safety and productivity in human-robot hybrid factories by having robots perform non-value-added or ergonomically challenging tasks. Future research will focus on automatically designing canonical tasks for different types of assembly, and on evaluating the benefits of learning preferences from short tasks and predicting actions in complex tasks in other contexts, such as personal assistance at home.
“Robots that can quickly learn our preferences can help us prepare food, rearrange furniture or do home repairs, which would have a significant impact on our daily lives,” says Nikolaidis.