For today’s large-scale manufacturers, deep learning is like a thorny rose. The AI technology has achieved attractive results in machine vision and robotic applications, saving manufacturing time and costs, for example by automating assembly or spotting product defects. Many manufacturers, however, soon realize they lack the talent and resources to properly handle complicated deep learning software unless they hire dedicated AI experts.
At last week’s Collaborative Robotics and Advanced Vision Conference in San Jose, Cognex Director of Marketing for Vision Software John Petry spoke on the application of deep learning in advancing vision-guided robotics (VGR), stressing the need for robotic technology companies to provide maintainable deep learning-based solutions for their manufacturing customers.
The AI talent pool is shallow, and tech companies tend to scoop up talent with generous salaries and stock options. Petry believes manufacturers prefer deep learning applications that can be operated by their own engineers. “If you’re going to have a viable solution, it can’t be something where you need a PhD from Stanford to configure the system,” he says.
State-of-the-art robotic technology, which theoretically does not require complicated manuals and adjustments, is increasingly accommodating manufacturers’ expectations. Berkeley-based startup Embodied Intelligence, founded two months ago, is prototyping a machine learning system that allows humans to teach robots using virtual reality (VR). After a human performs a 30-minute VR demonstration, the robot can learn to mimic and perform that task.
In addition, Petry pointed out that industrial deep learning solutions are quite different from research lab prototypes, which researchers can develop using massive datasets and cloud-based training.
Petry says that in the real world, deep learning solutions should be custom built for the environment where they will be deployed. For example, robots on a massive production line may not have independent access to cloud servers. Manufacturers are more likely to opt for an application they can run cheaply on a commercial PC.
Manufacturers also don’t always have access to rich labeled data, and so robotic companies must make their applications effective with very limited datasets. “It’s not to say that over time you won’t be adding to it. But at least to get that first system deployed, to convince the customer, you’re going to have to work with tens of images, not thousands,” says Petry.
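One common way to stretch tens of labeled images into a usable training set is simple data augmentation. The sketch below is a minimal illustration of that idea, not anything from Petry’s talk or Cognex’s software; the function names and the numpy-only geometric transforms are assumptions chosen for brevity.

```python
import numpy as np

def augment(image, n_variants=8, seed=0):
    """Generate simple geometric variants (flips, 90-degree rotations)
    of one labeled image to stretch a tiny training set."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        out = image
        if rng.random() < 0.5:          # random horizontal flip
            out = np.fliplr(out)
        out = np.rot90(out, k=rng.integers(0, 4))  # random 90° rotation
        variants.append(out)
    return variants

# A "dataset" of tens of images becomes hundreds of training samples.
tiny_dataset = [np.arange(16).reshape(4, 4) for _ in range(20)]
augmented = [v for img in tiny_dataset for v in augment(img)]
print(len(tiny_dataset), "->", len(augmented))  # 20 -> 160
```

Real deployments would use richer, domain-appropriate transforms (lighting jitter, small affine warps), but the principle is the same: multiply each scarce labeled example into many plausible training views.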
Vision-based deep learning solutions must also be flexible enough to accommodate different camera inputs. VGR equips robots with cameras to enable functionalities like picking, assembling and packaging. Many manufacturers want applications that can catch product defects. To that end, they might use multiple cameras to inspect the same product. A good deep learning solution should be able to deal with different camera positions, angles, lighting and resolutions.
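One standard way to cope with heterogeneous cameras is to normalize every frame to a common size and intensity range before it reaches the model. The sketch below illustrates that preprocessing step under assumptions of my own (grayscale frames, nearest-neighbour resampling, min-max intensity rescaling); it is not a description of any vendor’s pipeline.

```python
import numpy as np

def normalize_frame(frame, target_size=(64, 64)):
    """Bring frames from different cameras to a common resolution
    and intensity range before inference."""
    h, w = frame.shape
    th, tw = target_size
    # Nearest-neighbour resample to the target resolution.
    rows = (np.arange(th) * h) // th
    cols = (np.arange(tw) * w) // tw
    resized = frame[rows][:, cols]
    # Rescale intensities to [0, 1] to soften lighting/exposure differences.
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-8)

# Two cameras with different resolutions and exposure ranges
# yield frames the model can treat identically.
cam_a = np.random.default_rng(0).integers(0, 255, (480, 640)).astype(float)
cam_b = np.random.default_rng(1).integers(30, 200, (96, 128)).astype(float)
print(normalize_frame(cam_a).shape, normalize_frame(cam_b).shape)  # (64, 64) (64, 64)
```

Geometric differences such as camera position and viewing angle need more than this (calibration or pose-invariant training), but intensity and resolution normalization is the usual first step.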
Petry identified four major fields where current deep learning software offers superior adaptability and efficiency over traditional methods: part correctness and orientation, deformable part location, pre-picking, and post-placement.
Petry’s presentation was a highlight of the conference, attracting more attendees than the room’s capacity. While it’s clear that deep learning offers power and potential, challenges remain regarding its commercialization in robotics.
Journalist: Tony Peng | Editor: Michael Sarazen