Robotic hand can identify objects with just one grip
By Adam Zewe | MIT News Office
Inspired by human fingers, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify objects after gripping them just once.
Many robotic hands pack all of their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take many grips. Other designs spread lower-resolution sensors along the entire finger, but these don’t capture much detail, so multiple regrasps are often required.
Instead, the MIT team created a robotic finger with a rigid skeleton encased in a soft outer layer, with multiple high-resolution sensors incorporated under its transparent “skin.” The sensors, each of which uses a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of the object simultaneously.
Using this design, the researchers created a three-finger robotic hand that could identify objects with just one grip, with about 85 percent accuracy. The rigid shell makes the fingers strong enough to pick up heavy objects, such as a drill, while the soft skin allows the fingers to grip a flexible object securely, such as an empty plastic water bottle, without crushing it.
Fingers that are both soft and stiff could be useful in home-care robots designed to interact with elderly people: the same hand could lift heavy items off a shelf and then help someone bathe.
“Having both soft and rigid elements is essential in any hand, but so is being able to perform great sensing over a very wide area, especially if we want to do the kinds of highly complex manipulation tasks that our own hands can do. Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do,” says Sandra Liu, a mechanical engineering graduate student and co-lead author of a research paper on the robotic finger.
Liu wrote the paper with co-lead author and mechanical engineering undergraduate student Leonardo Zamora Yañez and her advisor, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the RoboSoft Conference.
Human-inspired finger
The robotic finger consists of a rigid, 3D-printed endoskeleton that is placed in a mold and encased in a transparent silicone “skin.” Making the finger in a mold removes the need for fasteners or adhesives to hold the silicone in place.
The researchers designed the mold with a curved shape so that the robot’s fingers curved slightly at rest, like human fingers.
“Silicone will wrinkle when it bends, so we thought that if we form the fingers in this curved position, when you bend them more to grip an object, you won’t induce as many wrinkles. Wrinkles are good in some ways — they can help the fingers slide along surfaces very smoothly and easily — but we don’t want wrinkles we can’t control,” says Liu.
The endoskeleton of each finger contains a pair of detailed touch sensors, known as GelSight sensors, embedded one at the top and one in the middle, under the transparent skin. The sensors are placed so the cameras’ fields of view overlap slightly, giving the finger continuous sensing along its entire length.
The GelSight sensor, based on technology pioneered in the Adelson group, consists of a camera and three colored LEDs. When a finger grips an object, the camera captures images as the colored LEDs illuminate the skin from the inside.
Using the illumination contours that appear on the soft skin, an algorithm performs a backward calculation to map the contours onto the surface of the grasped object. The researchers trained a machine-learning model to identify objects using raw camera-image data.
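The backward calculation described above is closely related to classic photometric stereo: with three differently colored lights in known positions, each color channel of a single image constrains the surface orientation at every pixel. Below is a minimal illustrative sketch under a simple Lambertian intensity model; the function and variable names are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def normals_from_rgb(image, light_dirs):
    """Recover per-pixel surface normals from one RGB image lit by three
    colored LEDs, assuming each color channel corresponds to one known
    light direction and intensity follows I_c = L_c . n (Lambertian).

    image:      (H, W, 3) float array of pixel intensities
    light_dirs: (3, 3) array; row c is the unit direction of LED c
    """
    h, w, _ = image.shape
    # Solve the 3x3 linear system  L @ n = I  for every pixel at once.
    intensities = image.reshape(-1, 3).T            # shape (3, H*W)
    n = np.linalg.solve(light_dirs, intensities)    # shape (3, H*W)
    # Normalize to unit normals (guard against zero-length vectors).
    norms = np.linalg.norm(n, axis=0, keepdims=True)
    n = n / np.clip(norms, 1e-8, None)
    return n.T.reshape(h, w, 3)
```

Integrating the recovered normal field then yields a height map of the contact patch, which is the kind of geometric detail the classifier can exploit.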
As they perfected the finger-making process, the researchers ran into a few hurdles.
First, the silicone tends to peel away from the endoskeleton over time. Liu and her collaborators found they could limit this peeling by adding tiny indentations along the hinges between the joints of the endoskeleton.
When the finger bends, the bending of the silicone is distributed along the tiny indentations, which reduces stress and prevents peeling. They also added creases at the joints so the silicone is not squashed as much when the finger bends.
While solving their design problems, the researchers realized the wrinkles in the silicone prevented the skin from tearing.
“The usefulness of the wrinkles was an accidental discovery on our part. When we made them on the surface of the fingers, we found that they actually made the fingers more durable than we expected,” she says.
Getting a good grip
After they perfected the design, the researchers built a robotic hand using two fingers arranged in a Y pattern with a third finger as the opposable thumb. The hand captures six images while holding the object (two from each finger) and sends these images to a machine learning algorithm that uses them as input to identify the object.
Because the hand has tactile sensing covering all of its fingers, it can gather rich tactile data from a single grip.
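The single-grip identification pipeline above (six tactile images in, one object label out) can be sketched with a toy stand-in. This example assumes the six images are flattened into one feature vector and classified by a simple nearest-centroid rule; the researchers' actual system uses a trained machine-learning model, and all names here are illustrative.

```python
import numpy as np

def grip_features(images):
    """Concatenate the six tactile images from one grip (two per finger)
    into a single feature vector. `images` is a list of six (H, W) arrays."""
    assert len(images) == 6, "one grip yields six images"
    return np.concatenate([img.ravel() for img in images])

class NearestCentroidClassifier:
    """Toy stand-in for a learned model: store one mean feature vector
    per object class and label new grips by the nearest centroid."""

    def fit(self, feats, labels):
        self.classes_ = sorted(set(labels))
        self.centroids_ = np.stack([
            np.mean([f for f, l in zip(feats, labels) if l == c], axis=0)
            for c in self.classes_
        ])
        return self

    def predict(self, feat):
        dists = np.linalg.norm(self.centroids_ - feat, axis=1)
        return self.classes_[int(np.argmin(dists))]
```

The key design point the article highlights survives even in this sketch: because every grip produces six overlapping views of the object, a single grasp supplies enough input for classification, with no regrasping loop.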
“While we have a lot of sensing in the fingers, maybe adding a palm with sensors would help make tactile distinctions even better,” says Liu.
In the future, the researchers also want to improve the hardware to reduce wear and tear on the silicone over time, and add more actuation to the thumb so it can perform a wider range of tasks.
This work was partially supported by the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST project.