Industry Research

Giving Robots a Sense of Touch


GelSight Technology Lets Robots Gauge Objects’ Hardness and Manipulate Small Tools.

Larry Hardesty | MIT News Office
June 5, 2017

Two papers from MIT using GelSight sensing technology were published at ICRA 2017. The GelSight sensor was first introduced eight years ago [Fig 1]. It consists of a piece of transparent synthetic rubber coated on one side with metallic paint. The working principle is as follows: the rubber conforms to any object pressed against it, and the light reflected off the metallic paint varies with the material and shape of that object. As the demonstration images show [Fig 2], the rubber deforms easily, capturing the pressure, shape, and surface pattern of the contacting object at high resolution. Moreover, because only machine vision algorithms are needed to compute the 3D reconstruction, the sensor is low-cost yet robust. One natural application is tactile sensing for robots: mounted on a robot hand, the sensor makes 3D measurement of object shapes much easier and more accurate. A detailed demonstration video is available on YouTube: https://www.youtube.com/watch?v=aKoKVA4Vcu0.

Fig 1. The building blocks of the GelSight sensor

Fig 2. Demonstrations of how the GelSight sensor reconstructs tactile information
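The post does not include code, but the machine-vision reconstruction step can be sketched compactly. The snippet below is a minimal, hypothetical Python/NumPy illustration of the photometric-stereo idea behind sensors like GelSight: assuming each color channel is lit from one known direction (the real sensor uses a calibrated mapping), per-pixel surface normals come from a least-squares solve, and a depth map is recovered by Frankot-Chellappa integration of the normal field. The function names and light directions are illustrative, not the sensor's actual pipeline.

```python
import numpy as np

def normals_from_rgb(img, light_dirs):
    """Per-pixel photometric stereo: solve L @ n = i in a least-squares
    sense, assuming each color channel is lit from one known direction
    (a hypothetical stand-in for the sensor's calibrated mapping)."""
    h, w, _ = img.shape
    L = np.asarray(light_dirs, dtype=np.float64)     # (3, 3) light directions
    i = img.reshape(-1, 3).T                         # (3, h*w) intensities
    n = np.linalg.lstsq(L, i, rcond=None)[0].T       # (h*w, 3) raw normals
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-8
    return n.reshape(h, w, 3)

def depth_from_normals(n):
    """Integrate the gradient field (-nx/nz, -ny/nz) with the
    Frankot-Chellappa FFT method to recover a smooth depth map."""
    nz = np.clip(n[..., 2], 1e-3, None)
    p, q = -n[..., 0] / nz, -n[..., 1] / nz
    h, w = p.shape
    wx, wy = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                         np.fft.fftfreq(h) * 2 * np.pi)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                                # avoid divide-by-zero at DC
    Z = (-1j * wx * np.fft.fft2(p) - 1j * wy * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                    # pin the arbitrary depth offset
    return np.real(np.fft.ifft2(Z))

# Usage with a synthetic 32x32 "tactile image" and three tilted lights:
rgb = np.random.rand(32, 32, 3)
dirs = [[0.5, 0.0, 0.87], [-0.25, 0.43, 0.87], [-0.25, -0.43, 0.87]]
depth = depth_from_normals(normals_from_rgb(rgb, dirs))
```

The appeal of this approach is that everything after image capture is ordinary image processing, which is why a standard camera plus colored LEDs can stand in for expensive tactile hardware.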

Right now, with more advanced algorithms and faster machines, the GelSight sensor plays an ever more important role in state-of-the-art tactile sensing.

In one ICRA paper [1], Prof. Adelson and his colleagues used the GelSight sensor to enable a robot to judge the hardness of the surfaces it touches. Knowing the hardness of the objects they handle is a crucial ability for robots, giving them a sensing modality much like human touch. To measure hardness, the GelSight sensor provides high-resolution tactile images of the contact geometry, as well as the contact force and slip conditions. Pressed against objects of the same shape but different hardness, the sensor records different image sequences of the deformation as contact occurs, as shown in Fig. 3. Consistent with intuition, a soft object deforms more during contact, its ridges flattening into a smoother surface; a hard sample deforms less, and its ridges remain sharp. These deformations are captured in the image sequences recorded by the GelSight sensor. To obtain a quantitative estimate of hardness, the sequences are further processed by a convolutional neural network (CNN) followed by an LSTM recurrent network.

Fig 3. Comparison of GelSight contact with objects of different hardness. During contact, a soft object deforms more, its ridges flattening into a smoother surface; a hard sample deforms less, and its ridges remain sharp.
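As a rough illustration of that CNN-plus-LSTM pipeline (not the authors' actual architecture; their layer sizes and training details are in [1]), a minimal PyTorch sketch might look like this: a small convolutional encoder embeds each frame of the contact sequence, an LSTM aggregates the sequence, and a linear head regresses a scalar hardness value.

```python
import torch
import torch.nn as nn

class HardnessNet(nn.Module):
    """CNN encoder per frame + LSTM over the contact sequence, regressing
    a scalar hardness value. Layer sizes here are illustrative only."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar hardness estimate

    def forward(self, frames):             # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])       # predict from the final time step

# Usage: a batch of 2 sequences, 8 frames each, of 64x64 tactile images
pred = HardnessNet()(torch.randn(2, 8, 3, 64, 64))
```

The recurrent stage matters because hardness shows up in how the deformation evolves over the press, not in any single frame.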

Another paper [2] used the contact geometry measured by the GelSight to improve pose accuracy during grasping. It showed that the occlusion problem for small objects can be solved, enabling a robot to manipulate smaller objects such as a needle. This was achieved by fusing data from the GelSight with data from an RGB-D sensor: employing an Extended Kalman Filter, the tracking algorithm takes as input a continuous stream of RGB-D images together with depth images from the GelSight sensor. Fusing the two sensing streams while resolving occlusion is itself a challenge, because the underlying optimization is highly non-linear and the state estimate that best fits both sensors varies with the object's shape. The authors addressed this by iteratively constructing and solving approximate unconstrained quadratic programs (QPs). The experiments are quite impressive (Fig 4): compared with an earlier manipulation experiment in which a USB cable was inserted, the improved algorithm enabled the robot to handle a much smaller object.

Fig 4. A GelSight sensor on a gripper manipulates a small screwdriver via teleoperation of the arm, aided by point-cloud information from the RGB-D camera. The GelSight surface is shown in red where the deformation is below a threshold (indicating no contact) and in green where it is above the threshold (indicating contact).
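To make the fusion idea concrete, here is a toy Kalman-style measurement update in Python/NumPy, fusing a noisy "camera" observation with a more precise "touch" observation of the same 2D position. This is only a linear, two-dimensional stand-in for the paper's far more complex non-linear pose tracker; all names and noise values are invented for illustration.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    measurement model h with Jacobian H, and noise covariance R."""
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy fusion: both sensors observe the 2D object position directly,
# so h is linear and H is constant (the real system is far more nonlinear).
x = np.zeros(2)                        # initial position estimate
P = np.eye(2)                          # initial uncertainty
H = np.eye(2)
h = lambda s: s
x, P = ekf_update(x, P, np.array([0.9, 1.1]), h, H, R=0.5 * np.eye(2))   # camera
x, P = ekf_update(x, P, np.array([1.0, 1.0]), h, H, R=0.05 * np.eye(2))  # touch
```

Because the touch measurement's noise covariance is smaller, the filter weights it more heavily, which mirrors how contact geometry sharpens the pose estimate exactly where the camera view is occluded by the gripper.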

GelSight is not the first tactile sensor mounted on a robot. Earlier piezoresistive [3], PCB-based [4], and capacitive [5] sensors also measure tactile force, but they are difficult to fabricate and have limited spatial resolution or sensing area. The optical sensing technique that GelSight is based on appears to offer higher resolution at lower cost. Prof. Daniel Lee from the University of Pennsylvania’s GRASP robotics lab says, “having a fast optical sensor to do this kind of touch sensing is a novel idea, and I think the way that they’re doing it with such low-cost components — using just basically colored LEDs and a standard camera — is quite interesting.” Prof. Sergey Levine from UC Berkeley likewise anticipates that such high-bandwidth sensors will have a big impact on robotic platforms, especially by enabling more dexterous manipulation.

In their book “How the Body Shapes the Way We Think” [6], Pfeifer and Bongard argue that complex robot cognition tasks are highly constrained by hardware, so embodiment, such as dedicated hardware design, simplifies the information-processing demands of many tasks. For instance, a self-stabilizing design based on compliant pneumatic actuators enables rapid hexapod locomotion [7]. In the case of GelSight, accurate tactile images of the contacted object make grasping more robust. We look forward to cheaper and more robust hardware that will ease the development of robot intelligence.

Source: http://news.mit.edu/2017/gelsight-robots-sense-touch-0605

[1] Yuan, Wenzhen, et al. “Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor.” arXiv preprint arXiv:1704.03955 (2017).
[2] Izatt, Gregory, Geronimo Mirano, Edward Adelson, and Russ Tedrake. “Tracking Objects with Point Clouds from Vision and Touch.” Proceedings of the International Conference on Robotics and Automation (ICRA), Singapore, May 2017.
[3] Kerpa, Oliver, Karsten Weiss, and Heinz Wörn. “Development of a flexible tactile sensor system for a humanoid robot.” Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 1. IEEE, 2003.
[4] Jamali, Nawid, Giorgio Metta, and Lorenzo Natale. “iCub Tactile Sensing System: Current State and Future Directions.” IEEE Transactions on Robotics 24.2 (2008): 505-512.
[5] Papakostas, Thomas V., Julian Lima, and Mark Lowe. “A large area force sensor for smart skin applications.” Proceedings of IEEE Sensors, Vol. 2. IEEE, 2002.
[6] Pfeifer, Rolf, and Josh Bongard. How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press, 2006.
[7] Cham, Jorge G., Jonathan K. Karpick, and Mark R. Cutkosky. “Stride period adaptation of a biomimetic running hexapod.” The International Journal of Robotics Research 23.2 (2004): 141-153.


Author: Joni Chung | Reviewer: Hao Wang
