Robot hand rotates objects using touch, not sight


July 25, 2023

(Nanowerk News) Inspired by the effortless way humans handle objects without looking at them, a team led by engineers at the University of California San Diego has developed a new approach that allows a robotic hand to rotate objects by touch alone, without relying on vision.

Using their technique, the researchers built a robotic hand that could smoothly rotate a variety of objects, from small toys to cans, and even fruits and vegetables, without bruising or crushing them. The hand accomplishes these tasks using only tactile information.

The work could help develop robots that can manipulate objects in the dark.


The team recently presented their work at the 2023 Robotics: Science and Systems (RSS) conference (“Rotating without Seeing: Towards In-hand Dexterity through Touch”).

To build their system, the researchers attached 16 touch sensors to the palm and fingers of a four-fingered robotic hand. Each sensor costs about $12 and has a simple function: it detects whether an object is touching it or not.

What makes this approach unique is that it relies on many inexpensive, low-resolution touch sensors that use simple binary signals (touch or no touch) to perform in-hand rotations. These sensors are spread over a large area of the robot hand.
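As a rough illustration, the binary sensing described above can be represented as a vector of touch/no-touch readings. This sketch is hypothetical (the article does not describe the sensor interface, and the threshold and function names here are invented for illustration):

```python
def read_contact_vector(raw_readings, threshold=0.5):
    """Convert raw sensor values into binary touch / no-touch signals.

    Each of the 16 sensors contributes one bit: 1 if an object is
    pressing on it, 0 otherwise. No texture or force detail is kept.
    """
    return [1 if r > threshold else 0 for r in raw_readings]

# Example: suppose only sensors 0 and 3 register contact.
raw = [0.9, 0.1, 0.0, 0.7] + [0.0] * 12
contacts = read_contact_vector(raw)
print(contacts)  # [1, 0, 0, 1, 0, 0, ..., 0]
```

Discarding everything but the contact bit is what makes the signal cheap to simulate, which in turn eases transfer from simulation to real hardware.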

This contrasts with other approaches, which rely on a few high-resolution touch sensors affixed to small areas of the robot’s hand, particularly the fingertips.

There are several problems with that approach, explains Xiaolong Wang, a professor of electrical and computer engineering at UC San Diego who led the study. First, having a small number of sensors on the robot’s hand reduces the chance of making contact with an object, which limits the system’s sensing capability. Second, high-resolution touch sensors that provide information about texture are very difficult to simulate, not to mention very expensive, which makes them harder to use in real-world experiments. Lastly, many of these approaches still rely heavily on vision.

“Here, we used a very simple solution,” said Wang. “We showed that we don’t need details about an object’s texture to do this task. We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world.”

The researchers further noted that the broad coverage of binary touch sensors gives the robot hand enough information about an object’s 3D structure and orientation to rotate it successfully without vision.

They first trained their system by running simulations of a virtual robot hand rotating a diverse set of objects, including ones with irregular shapes. At each time step during rotation, the system observes which sensors on the hand the object is touching. It also takes in the current positions of the hand’s joints and its previous actions. Using this information, it tells the robot hand which joint positions to move to at the next time step.
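The sense-act loop described above can be sketched as follows. This is not the authors’ code: the policy here is a stand-in that merely nudges the joints (in the actual work it is a network trained in simulation), and the joint count is an assumption; the sketch only shows the shape of the inputs (binary touch, joint positions, previous action) and the output (next joint targets):

```python
import random

NUM_SENSORS = 16  # binary touch sensors on palm and fingers (from the article)
NUM_JOINTS = 16   # joint count for a four-fingered hand (an assumption)

def policy(touch, joint_positions, prev_action):
    """Stand-in for the trained policy.

    Observes the binary contact vector, current joint positions, and the
    previous action, and returns target joint positions for the next
    time step. Here it just nudges each joint slightly.
    """
    return [j + random.uniform(-0.01, 0.01) for j in joint_positions]

# One short rollout of the control loop.
joint_positions = [0.0] * NUM_JOINTS
prev_action = [0.0] * NUM_JOINTS
for _ in range(5):
    touch = [0] * NUM_SENSORS  # placeholder readings; real values come from the sensors
    action = policy(touch, joint_positions, prev_action)
    joint_positions, prev_action = action, action
print(len(joint_positions))
```

Because the policy only ever sees these low-dimensional signals, the same loop can run identically in simulation and on the physical hand.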

The researchers then tested their system on a real robot hand with objects it had not encountered during training. The hand rotated a variety of objects without stalling or losing its grip, including a tomato, a pepper, a can of peanut butter, and a toy rubber duck, the most challenging object because of its shape. Objects with more complex shapes took longer to rotate. The hand could also rotate objects around different axes.

Wang and his team are now working to extend their approach to more complex manipulation tasks. They are developing techniques to enable robotic hands to catch, throw, and juggle objects, for example.

“Hand manipulation is a very common skill for humans to have, but very complicated for robots to master,” said Wang. “If we can give robots these skills, it will open doors for the kinds of tasks they can do.”

