I know why you're confused about bin picking
Forgive me for the clickbait title, but I just had to get your attention. Some robotics-related articles that appeared online recently really - how shall I put this gently - raised my eyebrows. They claimed 'robotic vision being solved', 'robots can pick anything' or even 'insects teach us that robots don't need 3D vision'... Clearly, I don't agree with these statements and I want to set the record straight. Allow me to clarify where state-of-the-art robotic vision really is today, what reliability you can expect and, indeed, whether 3D vision is overrated.
There are three major applications that combine robots with cameras. The first is vision-based quality control (also known as metrology): a robot-guided camera scans an area and the 2D picture and/or 3D scan is compared with a reference. The hardware and software in this application have matured a lot, and all established players (think Cognex, LMI, Faro, ...) offer solutions. I don't think there is much confusion there. But two new emerging applications are stirring the market: 'order picking' – collecting orders for an online shop – and 'machine tending' – feeding parts to a production machine. Both hold tremendous automation potential for vision-enabled pick-and-place robots.
Order picking versus machine tending
First of all, I want to make clear that 'order picking' and 'machine tending' are two totally different applications today, even if both involve a pick-and-place robot and a camera.
For order picking, the drop-off orientation or position of the part you're picking doesn't matter in most cases: you pick from one bin and drop off in another bin or box. The Amazon Picking Challenge (later renamed the Amazon Robotics Challenge) introduced this concept with a worldwide competition, and many present-day robotic vision companies have come across this challenge one way or another. (Note: Amazon built its own system in the end - creating that competition was a really smart move.)
This robot is picking and dropping parts in a bag
For machine tending, dropping the part couldn't be further from what the customer wants; orientation and position mean everything (people really react emotionally when the robot just drops the part in a machine tending demo!). The performance of the application is measured not only by cycle time, but also by the position and orientation accuracy with which the robot places the part. This is the real benchmark for the success of robotic vision in machine tending.
So let's explore what recent publications say about vision-based machine tending.
This robot is picking and presenting parts with exact position
'Bin picking is solved'
This is probably the oldest, yet most harmful, claim used to draw customers to bin picking solutions. Just last month, a Universal Robots blog post asked whether 'Automated bin picking is finally real?' and came to cautious conclusions. Others claim that deep learning/AI, motion planning libraries or laser-sharp 3D images finally 'solved' bin picking. At Pickit, we evaluate hundreds of customer requests each month, and today most of the hard cases come down to the gripper not being able to pick and present the part the way the customer wants.
Don't get me wrong, there is a lot of low-hanging fruit out there where the gripper is easy, but this ecosystem will have to find the ladder to climb up and reach the full market size. There is surprisingly little discussion of the three steps that lead to a solution:
- Detect each part's position and orientation with high reliability.
- Select the ones your robot can reach and your gripper can handle.
- Execute the motion from pick-up to drop-off.
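The three steps above can be sketched in code. Below is a minimal, hypothetical Python sketch with made-up detections and a deliberately crude reachability check - this is not Pickit's actual software, just an illustration of how steps 1 and 2 narrow down the candidates before the robot software executes step 3.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float; y: float; z: float      # pick position in robot base frame (m)
    rx: float; ry: float; rz: float   # pick orientation (axis-angle, rad)
    score: float                      # detection confidence, 0..1

# Step 1 output: hypothetical detections from a 3D vision system.
detections = [
    Detection(0.40, 0.10, 0.25, 0.0, 3.14, 0.0, 0.95),
    Detection(0.90, 0.50, 0.20, 0.0, 3.14, 0.3, 0.88),   # outside robot reach
    Detection(0.35, -0.05, 0.22, 0.0, 3.14, 1.2, 0.60),  # low confidence
]

MIN_SCORE = 0.8   # reject unreliable detections (step 1)
REACH = 0.7       # assumed robot reach radius in metres (step 2)

def reachable(d: Detection) -> bool:
    """Crude reachability check: pick point inside the robot's work sphere."""
    return (d.x**2 + d.y**2 + d.z**2) ** 0.5 <= REACH

# Steps 1+2: keep reliable, reachable picks; step 3 (motion) is the robot's job.
candidates = [d for d in detections if d.score >= MIN_SCORE and reachable(d)]
best = max(candidates, key=lambda d: d.score)
print(best)
```

In this toy run, only the first detection survives both filters: the second is outside the assumed work sphere and the third falls below the confidence threshold.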
At Pickit, we implement many hundreds of cases every year by focusing on steps 1 and 2 and leaving step 3 to the robot software. One day we may go further and suggest which gripper works best for your part, or take control of your robot. Is bin picking then solved? I leave the answer up to the reader.
'You don't need a camera'
Then there are the creative efforts to play around with underperforming hardware and to fix it in software or with a 'smart trick'. For example, I saw this video: a force sensor was used to gently crash into a bin filled with parts, grab something with a magnet, drop it off onto a white plate, and detect the part and its position with a 2D camera. The idea is that a force sensor + 2D camera is cheaper than a 3D camera. But this misses the whole point. Let me explain.
Frankly, everybody gets creative, and we have shown several demos where we 'randomly' grabbed and looked a second time, but it's not what the market is asking for. It slows down the cycle time and introduces another point of failure - and people just don't like crashing robots into bins. What is accepted are calibration fixtures: molds that align a part when it's placed into them. They're common in automation, and your gripper can perform this function as well. A well-designed gripper will align and clamp the picked part in such a way that remaining tolerances are removed and the part can be presented in the right way.
So instead of making things worse by first removing the camera and advocating a crash-into-bins approach, can we please start a discussion on the grippers our robots desperately need? For example, what about 3D printing grippers based on a 3D model of the part to pick? 3D printing company Materialise and gripper company Schunk have joined forces to do exactly that, and I have seen this application used in industrial automotive production to pick steel parts.
'Do Robots Need 3D Vision? One Insect Says No'
This article felt like it was written for (people like) me. It was 100% pure and uncut clickbait. It was published on March 28, 2019, and had it appeared 3 days later, I would have certainly thought it was an April Fools' joke. There is no quick way to cover all the reasoning errors in an article that takes a complex natural process and tries to apply it to industrial robotics.
Rule number one in logical reasoning: you can prove anything from a false assumption. So instead of departing from false assumptions, I prefer to make the case for why robots - pick-and-place robots in particular - do need 3D vision.
- Industry choice
The reality is that all bin picking robots, whether for piece picking or parts presentation, use 3D cameras. We do see the value of adding 2D imaging to improve reliability with deep learning algorithms, but that serves as a complement to the 3D data, not a replacement.
- Robustness to light changes
3D cameras bring their own light source (infrared or visible light) and have very little need for light conditioning. Direct sunlight on the parts should be avoided in all cases, and I have seen materials and setups where ambient light plays a role when working at the limits of the camera.
- Flexible to use in your existing production process
The range of parts, and of the situations and stackings from which they need to be picked, is close to limitless with 3D vision. People often think of boxes and totes, but any part dispensing (or delivery) system actually works for a 3D camera.
- Teaching is NOT complex
A 3D camera can do something few people realize: it can filter out the background of a scene, which allows teaching a part by showing it to the camera once. It can then recognize the part even when it lies on top of other parts, identical or different. 3D cameras can also learn how a part looks from an existing CAD model. For these reasons, teaching in new parts is a really easy step.
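To illustrate the background-filtering idea, here is a toy Python sketch of depth-based background subtraction: compare a depth image of the empty scene with one containing the newly shown part, and keep only the pixels that moved closer to the camera. The numbers and threshold are made up for illustration; real 3D cameras and vision software do considerably more than this.

```python
# Toy depth images (mm), row-major: the empty scene vs. the scene with a part.
background = [
    [500, 500, 500, 500],
    [500, 500, 500, 500],
    [500, 500, 500, 500],
]
scene = [
    [500, 500, 500, 500],
    [500, 430, 425, 500],   # the taught part sits ~70 mm above the bin floor
    [500, 500, 500, 500],
]

THRESHOLD = 20  # mm: ignore sensor noise, keep only real depth differences

def foreground_mask(scene, background, threshold=THRESHOLD):
    """Mark pixels where the scene is closer to the camera than the background."""
    return [
        [(b - s) > threshold for s, b in zip(srow, brow)]
        for srow, brow in zip(scene, background)
    ]

mask = foreground_mask(scene, background)
part_pixels = sum(cell for row in mask for cell in row)
print(part_pixels)  # number of pixels belonging to the newly shown part
```

Everything that matches the empty-scene depth is discarded as background, so the part can be taught from a single showing regardless of what the bin or table looks like.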
- Process reliability
It's the last item in this list, but it's top of mind for every process automation engineer and operator I have ever met. 3D vision increases the reliability of vision-based robotics tremendously.
"I’ve had the privilege of integrating one of Pickit’s earlier products and it runs flawlessly to this day. Bin picking is one of the biggest challenges in today’s manufacturing environment and this makes it feasible with today’s automation and robotics." - Production Engineer, KYB, USA
So neither insects nor blows at 2D vision are required to make the case for 3D vision.
I hope this inspired some and informed others. If you enjoyed reading this, feel free to share it. Want to find out more about Pickit's 3D vision solution? Check out our website www.pickit3d.com.