How Robots Are Revolutionizing Manufacturing


“The future is already here — it’s just not very evenly distributed.” — William Gibson

Current status of manufacturing automation

According to a recent report released by the International Federation of Robotics (IFR), global industrial robotic arm shipments set a new record in 2018, reaching 384,000 units. China remains the world’s largest market, accounting for 35% of shipments, followed by Japan and the United States.

Source: IFR Statistical Department

At this point, you might be wondering:

Industrial robotic arms were introduced to the manufacturing industry decades ago, and automation should already have been applied wherever it is possible. So what room is left for innovation?

Unexpectedly, even the automobile industry, which has the highest degree of automation, is still a long way from being a “lights-out factory”, that is, a fully automated one.

For example, most automobile assembly is still completed manually, and it is the most labor-intensive part of the process. On average, two-thirds of the employees in a car factory work on the assembly line.

Even Tesla CEO Elon Musk, who has always pursued innovation and advocates a high degree of automation, has had to publicly acknowledge that the progress of Tesla’s production line automation has not met expectations.

Why is automation so difficult? What technical limitations has automation been unable to overcome so far?

1. Flexibility And Adaptability

Today’s automated production lines are designed for mass production. Automation effectively reduces costs but also leads to a lack of flexibility. Shorter product life cycles and the growing demand for small-volume, highly customized production call for greater flexibility, and humans are often better at adapting to change than robots.

2. Dexterity And Task Complexity

Despite the rapid advancement of technology, humans still possess a higher level of dexterity than robots. From my conversations with electronic contract manufacturers, I learned that although the assembly process is already highly automated, the kitting procedure is still largely manual.

Kitting is common in both the manufacturing and warehousing industries. It is an important step in enhancing production efficiency. It refers to the process of gathering the various components required to assemble the product, packaging them, and placing them in a kit.

The robot then retrieves the parts from the kit and assembles them. Automation at this assembly stage is relatively easy because each part sits in a fixed position and at a fixed angle. During kitting, by contrast, the parts must be identified and picked out of a box in which they lie in disorder: each part is in a different position, and parts may overlap or become entangled, posing a challenge for conventional machine vision and robotics technology.

3. Visual And Non-Visual Feedback

Many complicated assembly operations rely on the experience or “intuition” of the human operator. Whether it’s installing a car seat or putting parts into a kit, these seemingly simple actions require the operator to adjust the angle and force of their action based on various visual and tactile signals.

Traditional robot programming is not useful for such fine-tuned tasks because no two instances of retrieving or placing an item are exactly alike. They require the human ability to learn and generalize from repeated attempts. Giving machines that ability, through deep learning and reinforcement learning in particular, can bring about the biggest change to robots!

Robotics 2.0: What previously unattainable tasks can AI robots perform?

The biggest change that AI brings to robotic arms is this: in the past, robotic arms could only repeat the motions an engineer had programmed. Despite their accuracy and precision, they could not cope with changes in the environment or the process.

Thanks to AI, machines can now learn to handle a wide range of objects and tasks on their own. Specifically, AI robots have achieved major breakthroughs in three major areas compared to traditional robotic arms:

1. Vision System

Even the most advanced 3D industrial cameras do not possess the accuracy of the human eye in determining depth and distance as well as in identifying transparent packaging, reflective surfaces, or deformable objects.

This explains why it is difficult to find a camera that can provide accurate depth and identify most packages and items. With AI, however, this is starting to change.

Machine vision has made tremendous progress in the past few years, with innovations from deep learning, semantic segmentation, and scene understanding.

These have improved the depth and image recognition using commodity cameras, allowing manufacturers to obtain accurate image information and to successfully identify transparent or reflective object packaging without the need for expensive cameras.
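As a rough illustration of what semantic segmentation means here, the toy sketch below labels each pixel with whichever class scores highest. The class names, image size, and scores are invented for the example; in a real system the per-pixel scores would come from a trained deep network run on an ordinary camera image.

```python
# Toy semantic segmentation: a trained network outputs a per-pixel score
# for each class, and the label map is the per-pixel argmax. The scores
# below are made up for illustration.

CLASSES = ["background", "part", "transparent_packaging"]

def segment(score_maps):
    """score_maps[c][y][x] is the score of class c at pixel (y, x).
    Returns a label map of class names, one per pixel."""
    height = len(score_maps[0])
    width = len(score_maps[0][0])
    labels = []
    for y in range(height):
        row = []
        for x in range(width):
            scores = [score_maps[c][y][x] for c in range(len(CLASSES))]
            row.append(CLASSES[scores.index(max(scores))])
        labels.append(row)
    return labels

# A 2x2 "image": hypothetical scores for each of the three classes.
scores = [
    [[0.9, 0.1], [0.2, 0.8]],   # background
    [[0.05, 0.7], [0.1, 0.1]],  # part
    [[0.05, 0.2], [0.7, 0.1]],  # transparent_packaging
]
label_map = segment(scores)
```

The resulting label map tells the robot which pixels belong to a graspable part and which belong to, say, transparent packaging, even when a depth camera alone would struggle.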

2. Scalability

Unlike traditional machine vision, deep learning does not require pre-registration or construction of a 3D CAD model of each item. The artificial neural network can automatically identify the object in the image after training.

Unsupervised or self-supervised learning can also be used to reduce the need to manually tag data or features, allowing the machine to more closely resemble human learning.

Machine learning reduces the need for human intervention and enables the robot to handle new parts without engineers rewriting the program. As the machine gathers more and more data through its operation, the accuracy of the machine learning model is further enhanced.

Currently, a typical production line usually has surrounding equipment such as a shaker table, a feeder, and a conveyor belt, to ensure that the robot can take the required components accurately.

If machine learning develops further and the robotic arm becomes even smarter, perhaps one day these peripherals, which can cost four or five times as much as the robotic arm itself, will no longer be needed.

On the other hand, because deep learning models are generally stored in the cloud, this also allows robots to learn from each other and share knowledge. For example, if a robotic arm learns to combine two parts overnight, it can then update the new model to the cloud and share it with other robots. This saves the learning time of other machines and also ensures consistency of quality.
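A minimal sketch of that kind of fleet learning, under the assumption of a shared, versioned model store (here an in-memory dict stands in for cloud storage, and all names are hypothetical): one arm pushes its newly learned model, and another pulls it if the shared version is newer than its own.

```python
# Hypothetical fleet-learning sketch: robots share one versioned model.
# A real deployment would use a networked model registry; the dict below
# only stands in for that cloud storage.

cloud = {"version": 0, "weights": None}  # stand-in for cloud model store

class Robot:
    def __init__(self, name):
        self.name = name
        self.version = 0      # version of the model this robot runs
        self.weights = None

    def push_model(self, weights):
        """Upload a newly learned model, bumping the shared version."""
        cloud["version"] += 1
        cloud["weights"] = weights
        self.version = cloud["version"]
        self.weights = weights

    def pull_model(self):
        """Download the shared model only if it is newer than the local one."""
        if cloud["version"] > self.version:
            self.version = cloud["version"]
            self.weights = cloud["weights"]

a, b = Robot("arm_a"), Robot("arm_b")
a.push_model({"skill": "mate_part_x"})  # arm_a learned a new skill overnight
b.pull_model()                          # arm_b picks it up without retraining
```

The version check is what keeps every arm on the same model, which is exactly the consistency-of-quality point above.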

3. Intelligent Placement

Some instructions that seem easy to us, such as carefully handling or neatly arranging items, represent a huge technical challenge for a robot. How is “careful handling” defined? Is it to stop applying force the instant the object touches the tabletop? Is it to lower the object to 6 cm above the table and then let it fall naturally? Or is it to gradually reduce speed as the gripper approaches the tabletop? And how do these different definitions affect the speed and accuracy of item placement?
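To make the last of those definitions concrete, here is one possible sketch of a descent controller that slows the gripper in proportion to its remaining height above the table and stops at contact. The speed limit and gain are made-up illustration values, not figures from any real robot.

```python
# "Careful placement" as a deceleration rule: descend slower as the
# gripper gets closer to the tabletop, stopping exactly at contact.
# V_MAX and GAIN are invented illustration values.

V_MAX = 0.20   # maximum descent speed, m/s
GAIN = 2.0     # proportional slow-down gain, 1/s

def descent_speed(height_above_table):
    """Descent speed that ramps down linearly with the remaining height,
    reaching zero at contact (height <= 0)."""
    if height_above_table <= 0.0:
        return 0.0                          # contact: stop applying force
    return min(V_MAX, GAIN * height_above_table)

# Far from the table the arm moves at full speed; near it, it slows down.
# descent_speed(0.50) -> 0.2, descent_speed(0.05) -> 0.1, descent_speed(0.0) -> 0.0
```

Tuning the gain trades placement speed against gentleness, which is precisely the speed-versus-accuracy tension the question above points at.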

Arranging items neatly is even more difficult. Even if we set aside the definition of “neat,” we must first pick up the item from the correct position in order to place it at the desired position and angle. The robotic arm is still not as dexterous as a human hand, and most current robotic arms rely on suction cups; there is plenty of room for improvement before they achieve fine motor skills like those of human joints and fingers.

Secondly, the robot needs to instantly determine the angle, position, and shape of the object to be gripped. Taking the cup in the figure above as an example, the robotic arm needs to know: Is the cup facing up or down? Should it be placed sideways or upright? Are there other items or obstacles in the way? Only then can it determine where to place the cup to make the most efficient use of space.

From birth, we continuously learn how to pick up and put down all kinds of items, so we can complete these complicated tasks instinctively. A machine has no such experience and must learn each task from scratch.

Leveraging AI, the robotic arm can now judge depth more accurately. It can also learn through training and determine if a cup is facing upwards or downwards or is in some other state.

Techniques such as object modeling and voxelization can be used to predict and reconstruct 3D objects. They enable the machine to render the size and shape of the actual item more accurately and to place it in the required position with greater precision.
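A minimal sketch of what voxelization means in this context: 3D points (for example from a depth camera) are snapped onto a regular grid, so the machine can reason about the object's rough shape and extent. The voxel size and the sample points below are illustrative values only.

```python
# Voxelization sketch: map each 3D point to the integer index of the
# voxel (grid cell) that contains it. The resulting set of occupied
# cells is a sparse occupancy grid approximating the object's shape.

VOXEL = 0.01  # voxel edge length in metres (assumed resolution)

def voxelize(points, voxel=VOXEL):
    """Map each (x, y, z) point to its voxel index; the returned set of
    indices is a sparse occupancy grid."""
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}

# Three sample points; the first two fall into the same 1 cm voxel.
points = [(0.003, 0.004, 0.001), (0.006, 0.002, 0.009), (0.023, 0.011, 0.000)]
occupied = voxelize(points)
```

Once the object is reduced to occupied voxels, questions like "will the cup fit in this gap?" become simple grid lookups rather than raw point-cloud geometry.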
