(Co)Robot (Bin)Picking Demonstrator

By Mahmoud Sakr

Summary


The Fontys Knowledge Centre for Mechatronics and Robotics runs several projects that contribute not only to developing solutions for industry but also to renewing vocational training with the latest technologies and solutions for industrial challenges. One of the running projects, the Cobot bin picking demonstrator, aims to build a demonstrator that shows the use of robots in the picking and placing of bolts. A framework that integrates a robot, a 3D vision system and a 2D vision system was developed using Ethernet communication. The framework can detect and pick bolts of size M16 and larger. The picked bolt is placed at an inspection point where the length and width of the picked bolt are checked. Afterwards, it is (re)picked by its head and placed in a rack according to its size. The robot program uses a state machine to keep the program modular and to allow modifications to be implemented easily. It contains four states, of which the fourth is allocated to handling errors. The other three states are used for, first, picking bolts under the guidance of 3D vision, second, picking the bolt by its head after the 2D inspection is done and, third, placing the bolt in a rack. The 2D vision system reliably locates the bolt's head and measures the diameter of the bolt. Moreover, it is ready for future upgrades. The robot and 2D vision systems can still deliver good results with smaller bolt sizes, down to M10. However, the limitation to M16 comes from the 3D vision system, which can hardly detect M16 bolts consistently.

Background information


The Fontys Knowledge Centre Mechatronics and Robotics works closely with industry, aiming to link current issues from companies with its own research. One of the projects that the Knowledge Centre is currently involved in is called "RAAK-mkb Aerobics", which focuses on a reconfigurable robot cell that can handle a challenge faced by Dutch High-Tech Small and Medium-sized Enterprises (SMEs). The challenge is that production at these SMEs is characterised by highly variable batch sizes of products that can change in a matter of hours, days or weeks, so-called high mix, low volume, high complexity (HMLVHC).
The task of the robot cell is bin picking of high-tech metal and plastic components. Bin picking comprises the sorted and oriented removal of components that are supplied unsorted, non-oriented and in varying numbers. It is extremely accurate but boring work that is still largely done by highly skilled people. Realising the requirements of HMLVHC (bin) picking will enhance the competitiveness of Dutch High-Tech SMEs in the global market. In connection with the RAAK-mkb Aerobics project, the Fontys Knowledge Centre launched several projects to study and explore possible solutions that meet the goal of the RAAK-mkb project. One of these projects is my internship project, the "(co)robot (bin)picking demonstrator": a demonstrator for a collaborative robot used in pick-and-place applications within the High-Tech Systems & Materials (HTSM) sector. The demonstrator combines 2D and 3D vision systems with a robotic system and aims to achieve adaptive, reconfigurable robots with low setup costs that can handle a wide variety of products.

System architecture


The top-level system architecture of the demonstrator is divided into three (sub)systems, see figure 3:
• Robot: The core system of the demonstrator, which performs all movements and the picking and placing of the parts. The robot (sub)system consists of three modules:
  ▪ Robot Arm: The module which moves around to execute actions as received from the Robot Controller.
  ▪ Gripper: The module that executes the grasping of objects (bolts).
  ▪ Robot Controller: Powers and handles the communication with the other modules and (sub)systems.

• 3D Vision [Pickit]: A system that guides the robot when performing the picking task. It enables the robot to pick objects that are overlapping or randomly positioned; that is why the third dimension (height) is needed. The system consists of two modules:
  ▪ Pickit Camera: This module grabs 3D images.
  ▪ Pickit Server: This module analyses the captured image and decides which bolts are valid to pick. Moreover, it handles the communication with the Robot Controller.

• 2D Vision: A system that helps to improve the accuracy of the tasks executed after the picking is done, especially when precision is needed in terms of orientation and position. The system consists of three modules:
  ▪ 2D Imager: This module grabs 2D images.
  ▪ 2D Vision Server: This module executes the 2D image processing and handles the communication with the Robot Controller.
  ▪ Backlight: This module is an external illumination source that enhances the contrast between the bolt and the background and provides consistent lighting conditions.

mahmout overveiw.PNG
Demonstrator top-level architecture



Communication between (sub)systems

Since the 2D vision and 3D vision (sub)systems need to exchange data with the robot, looking into the way of communication is essential. Several communication options are available with the robot controller, namely: Ethernet TCP/IP, Profinet, EtherNet/IP and Modbus TCP.
Table 2 shows which of these communication protocols are supported by each of the (sub)systems.

commu_mam.PNG

The Profinet and EtherNet/IP communication protocols are discarded since implementing them would result in unjustifiable additional costs. Consequently, a choice is made between Ethernet TCP/IP and Modbus TCP.
The choice for Ethernet TCP/IP is made because of reusability, since it is supported by all (sub)systems. Furthermore, TCP/IP offers more flexibility.
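To illustrate what such an exchange can look like in plain socket terms, below is a minimal sketch of a TCP/IP client in Python. The host address, port and message format are assumptions for illustration only, not the demonstrator's actual interface.

```python
import socket

# Hypothetical address/port of a vision server; the demonstrator's
# actual values are configured on the Robot Controller.
VISION_HOST = "192.168.0.10"
VISION_PORT = 5005

def request_vision_result(trigger: str = "Go!") -> str:
    """Send a trigger string and wait for the vision result string."""
    with socket.create_connection((VISION_HOST, VISION_PORT)) as conn:
        conn.sendall(trigger.encode("ascii"))
        # Read one reply; a real implementation would frame messages
        # (e.g. newline-terminated) instead of a single recv().
        return conn.recv(1024).decode("ascii")

if __name__ == "__main__":
    print(request_vision_result())
```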

Robot System


A modular robot program is of crucial importance to delivering a scalable and flexible demonstrator. Hence, a deliberate decision regarding the programming approach of the program was made.
Two approaches were analysed:
• The unstructured programming approach: the whole program is a single piece of code.
• The modular programming approach: the program is divided into smaller programs, so-called "modules".

Top-level design of the Robot's program


The top-level design of the robot's program is described using the diagram in the figure below.

topview.PNG
[Left] top-level design of the robot’s program, [Right] Subprograms that are called from the Robot Program


State machine of the Robot's program


The diagram in figure 5 describes the flow of the robot program.

flow.PNG
[Left] Flowchart for the state machine of the Robot program, [Right] implementation in the Robot
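To make the structure concrete, below is a minimal sketch of such a four-state machine in Python, based on the four states described in the summary (3D-guided picking, head (re)picking after 2D inspection, placing in the rack, and error handling). The state names and stub actions are illustrative assumptions; the actual program runs on the Robot Controller.

```python
from enum import Enum, auto

class State(Enum):
    PICK_3D = auto()    # pick a bolt from the bin under 3D vision guidance
    PICK_HEAD = auto()  # (re)pick the bolt by its head after 2D inspection
    PLACE = auto()      # place the bolt in the rack according to its size
    ERROR = auto()      # dedicated error-handling state

# Stubs standing in for the real robot actions on the controller.
def pick_with_3d_guidance(): print("picking with 3D guidance")
def repick_from_head():      print("(re)picking the bolt by its head")
def place_in_rack():         print("placing the bolt in the rack")
def handle_error():          print("handling the error")

TRANSITIONS = {
    State.PICK_3D:   (pick_with_3d_guidance, State.PICK_HEAD),
    State.PICK_HEAD: (repick_from_head,      State.PLACE),
    State.PLACE:     (place_in_rack,         None),  # cycle done
    State.ERROR:     (handle_error,          None),  # stop after recovery
}

state = State.PICK_3D
while state is not None:
    action, next_state = TRANSITIONS[state]
    try:
        action()
        state = next_state
    except RuntimeError:
        state = State.ERROR
```

Keeping every state in one transition table is what makes the program modular: adding or reordering a step only touches the table, not the other states.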


Gripper


The robot's arm cannot pick objects by itself but needs an additional tool mounted on its end effector. The gripper is the most common end-of-arm tooling.
The 2-finger 85 adaptive gripper from Robotiq was available among the existing hardware. Therefore, its specifications were analysed first to verify its compatibility, as it is existing hardware.

Concepts for the fingertip

Three concept designs were developed for a fingertip. The inspiration for the first concept comes from the chopsticks used in Asia for eating rice, where the bin is the bowl, the bolts are the rice and the chopsticks are the gripper's fingertips.
fingertips.PNG
fingertip, three concepts


The three concepts were 3D-printed since 3D-printing is a fast and cost-effective way for testing these concepts.
Concept 1 (figure a): Snapped very quickly due to the small diameter of the chopstick, especially at the bottom. When the size is increased, the access to the bin becomes a drawback.
Concept 2 (figure b): Showed good results in terms of gripping the bolt and firm holding of it but the access to the bin was not good enough because of the big footprint.
Concept 3 (figure c): Showed good results in terms of access to the bin, gripping of the bolt and firm holding of it. However, the size of the whole fingertip is quite big (bulky).
Table 5 shows a summary of the testing results:

table_finger.PNG

Design of the fingertip


The third concept of the fingertip was chosen and developed further into a final design with the following design considerations:
- Rubber pads of 3 millimetres thickness are added to increase the friction between the fingertip and the bolt, which ensures firm holding of the product during movement of the robot.
- Attachment to the phalanx of the gripper using two screws and an indexing pin.
- Access to bolts in a bin is achieved with the small footprint.

final_desgin.PNG
fingetip_motor.PNG

3D Vision System


The 3D vision system is used to localise unstructured parts in a bin and to plan which part is best to pick.

System setup:


The camera is mounted on the platform, i.e. fixed-mounted, at a distance of 440 mm to cover a region of interest (ROI) of 150 mm high.

mounting.PNG

2D Camera system


The 2D vision system is the third (sub)system in the demonstrator. It performs the 2D inspection process that occurs after an object is localised and picked up by the robot. The 2D inspection process involves four (sub)processes:
• Establishing a communication with the Robot Controller.
• Localisation of the bolt's head.
• Measuring the length and diameter of the picked bolt to determine its size.
• Sending the drop-location coordinates to the Robot Controller.

System setup:

A robust 2D vision solution always starts with grabbing a high-quality image. That is why it is imperative to set the system up in such a way that the camera consistently acquires images in which the features to be inspected are enhanced. The advantages of consistently acquiring a high-quality image are:
• A relatively easier image processing program.
• High repeatability of the results produced by the vision system.
The 2D vision system, in this case, consists of a frame grabber, a lens, a backlight and a PC. The crucial choices that were made concern the backlight and the lens.

Choice of illumination

External light is a must to assure high repeatability in image quality, rather than depending on the ambient light conditions. Several lighting techniques are used in machine vision, and the choice of one technique over another depends on the features to be inspected and on the material and colour of the part.
Since the features of interest in this application relate to the contour of the bolt, backlight illumination is chosen: it provides a very good contrast between the bolt and the background and is robust against texture, colour and ambient light.

Choice of Lens


A lens with a focal length of 25 mm has been chosen, as results from the calculation with: preferred field-of-view: 225 mm x 225 mm; distance-to-object: 550 mm.
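For reference, the common thin-lens approximation used for lens selection is: focal length ≈ sensor size × working distance / field-of-view. A minimal sketch of this calculation is shown below; the sensor dimension of roughly 10 mm is an assumption for illustration, as it is not stated here.

```python
# Thin-lens approximation for machine-vision lens selection:
#   focal_length ≈ sensor_size * working_distance / field_of_view
sensor_size_mm = 10.2      # assumed sensor dimension (not from the report)
working_distance_mm = 550  # distance-to-object from the report
field_of_view_mm = 225     # preferred field-of-view from the report

focal_length_mm = sensor_size_mm * working_distance_mm / field_of_view_mm
print(f"required focal length = {focal_length_mm:.1f} mm")  # 24.9 -> 25 mm
```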


2D Vision Program


The 2D vision program is developed in LabVIEW (LV) and handles the image processing and the communication with the robot system.
The Robot Controller connects to the PC (Brix) over Ethernet TCP/IP. The Brix waits for an incoming connection request for 10 seconds once the 2D vision program is started; otherwise, the connection times out and the program stops.
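In plain socket terms, this connection handling could look like the sketch below. Python is used here only to illustrate the behaviour; the actual implementation uses LabVIEW's TCP functions, and the port number is an assumption.

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", 5005))   # port chosen for illustration
listener.listen(1)
listener.settimeout(10.0)   # wait at most 10 s for the robot to connect

try:
    conn, robot_addr = listener.accept()
    print(f"Robot Controller connected from {robot_addr}")
except socket.timeout:
    # No connection request within 10 s: stop the program, as described above.
    print("Connection timed out, stopping the 2D vision program.")
    raise SystemExit(1)
```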

State Machine


The state machine diagram represents the different states of the program and how they interact. The ovals represent the states and arrows represent the possible transitions.

state.PNG
State machine diagram in LabVIEW for the data exchange


Initiate: Once a connection is established, the Brix waits to receive a string that contains "Go!" from the Robot Controller. When it is received, the program transits to the Vision state.
Vision: LabVIEW acquires and analyses an image using the Vision Acquisition and Vision Assistant VIs, then transits to the Vision results state. Further details about the image processing that occurs in the Vision state are described in section 5.3.
Vision results: In this state, the result is processed and compared against the acceptance criteria, after which the program transits to the Send data state. Further details about the Vision results state are explained in section 5.4.
Send data: The Brix sends the vision results to the robot. Then it transits back to the Initiate state.
The following functions are used to exchange data over the established connection.
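The state flow above can be summarised as the loop sketched below; this is a language-neutral Python rendering of the LabVIEW state machine, where the handlers are stubs standing in for the VIs described above.

```python
# Stubs standing in for the LabVIEW VIs described above.
def wait_for_go():         return True  # stub: "Go!" received from the robot
def acquire_and_analyse(): print("acquire image, run Vision Assistant")
def check_acceptance():    print("compare result against acceptance criteria")
def send_results():        print("send vision results to the robot")

def initiate():
    return "Vision" if wait_for_go() else "Initiate"

def vision():
    acquire_and_analyse()
    return "Vision results"

def vision_results():
    check_acceptance()
    return "Send data"

def send_data():
    send_results()
    return "Initiate"

STATES = {"Initiate": initiate, "Vision": vision,
          "Vision results": vision_results, "Send data": send_data}

state = "Initiate"
for _ in range(4):  # one full cycle for demonstration
    state = STATES[state]()
```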

Image processing in Vision State

The "Vision" state in the 2D vision system executes the image processing on the image taken by the 2D frame grabber. There are three outcomes that the processing program should deliver:
• Localisation of the bolt's head (position and orientation).
• Measurement of the bolt's diameter.
• Measurement of the bolt's length.

labview.PNG
Vision Assistant Process


The approach:
1. Grab an image.
2. Correct the image.
3. Remove/reduce noise in the image.
4. Inspect features.

Correct the image

In vision applications where features are required to be measured, it is necessary that the image is calibrated to compensate for errors introduced by misalignment of the camera or the lens’s distortion.
The Distortion Model (Grid) is used to:
• Compensate for the camera lens's distortion.
• Convert the results of inspections to real-world units rather than pixels.
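As a rough analogue outside LabVIEW, grid-based calibration and undistortion could look like the OpenCV sketch below. The checkerboard size and file name are assumptions, and LabVIEW's grid calibration additionally yields the pixel-to-millimetre mapping used for real-world measurements.

```python
import cv2
import numpy as np

# Detect a calibration grid (a 9x6 checkerboard, assumed for illustration).
img = cv2.imread("calibration_grid.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))
assert found, "calibration grid not detected"

# Reference grid coordinates in real-world units (one unit per square).
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Estimate the camera matrix and distortion coefficients, then undistort.
_, mtx, dist, _, _ = cv2.calibrateCamera([objp], [corners],
                                         gray.shape[::-1], None, None)
corrected = cv2.undistort(img, mtx, dist)
```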

Remove/ reduce noise in the image


In order to further improve the contrast between the bolt and the background, the colour plane extraction tool is used. The RGB Blue Plane (figure 24) is used since it provided the highest contrast among the tested methods.

rgb.PNG
[Left] original image, [Right] image after color plane extraction
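An equivalent of this colour plane extraction outside LabVIEW is simply keeping one channel of the image; a minimal OpenCV sketch (with assumed file names) is:

```python
import cv2

img = cv2.imread("bolt_backlit.png")   # assumed file name
blue_plane = img[:, :, 0]              # OpenCV stores channels as B, G, R
cv2.imwrite("bolt_blue_plane.png", blue_plane)
```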


Inspect features


The features that are inspected for every bolt are:
• Locating the bolt's head.
• Measuring the bolt's diameter and length.

Locating the head of the bolt


Geometric matching is used to locate the head of the bolt by matching a reference template to the current image, within pre-set thresholds for scaling and rotation relative to the reference template.
With the settings configured as in the figure, the geometric matching locates heads of different sizes and in different positions. It then returns the coordinates of the centre of mass of the found head.

geomatich.PNG
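For comparison, a much simplified analogue of template matching in OpenCV is sketched below. Unlike LabVIEW's geometric matching, plain cross-correlation matching is not scale- or rotation-invariant, and the file names are assumptions.

```python
import cv2

img = cv2.imread("bolt_blue_plane.png", cv2.IMREAD_GRAYSCALE)    # assumed names
template = cv2.imread("head_template.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation; the best match gives the head's location.
scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, top_left = cv2.minMaxLoc(scores)

h, w = template.shape
centre = (top_left[0] + w // 2, top_left[1] + h // 2)
print(f"head found at {centre} with score {best_score:.2f}")
```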

Measuring diameter
The diameter is measured using a clamp rake tool (max. horizontal distance), which measures the distance between the found edges of the bolt. These edges are found based on the contrast between the bolt and its background.

measuring_bolt.PNG
Measuring length
The same methodology as for measuring the diameter is used for measuring the length, but with the (max. vertical distance) tool.

measuring_lengt_bolt.PNG
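The underlying idea of both clamp measurements, finding the object's extreme edges and converting the pixel distance to millimetres, can be sketched as follows. The threshold value and the millimetres-per-pixel scale are assumptions; in practice the scale comes from the grid calibration.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.25  # assumed scale; in practice from the grid calibration

img = cv2.imread("bolt_blue_plane.png", cv2.IMREAD_GRAYSCALE)  # assumed name
# On a backlit image the bolt appears dark on a bright background.
bolt_mask = img < 100  # assumed threshold

ys, xs = np.nonzero(bolt_mask)
diameter_mm = (xs.max() - xs.min()) * MM_PER_PIXEL  # max. horizontal distance
length_mm = (ys.max() - ys.min()) * MM_PER_PIXEL    # max. vertical distance
print(f"diameter = {diameter_mm:.1f} mm, length = {length_mm:.1f} mm")
```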

"Vision results" State


The "Vision results" state prepares the image processing result of the 2D inspection for the robot system. The main functions of this state are to:
• Verify whether the measured length and diameter are within the acceptable range.
• Fetch the drop location corresponding to the found bolt's head.
• Prepare the result to be sent to the robot.

The custom VI is created in such a way that it can be used for the verification of both the diameter and the length. One instance of the VI is used for each verification, without any changes to the content of the VI itself. The lookup table connected to the VI's cluster determines whether it verifies the bolt's diameter or the bolt's length; the figure below shows how the lookup table for the bolt's diameter is connected to the VI.


custom_vi.PNG


Moreover, a new size can easily be added and/or the acceptable ranges can easily be changed by the customer (if necessary) by modifying the lookup table. Furthermore, the same VI can be (re)used to implement verification of the bolt's thread size, for instance, if needed in the future.
The VI works by retrieving the measurement (length or diameter) into the cluster and comparing it to the values retrieved from the lookup table. The VI iterates until either the measured value is found to be in range in one of the rows of the lookup table or the number of iterations equals the number of rows in the lookup table. It then returns the size for which a match was found, or "no match found" in case the value matches none of the sizes.

vi2.PNG
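Expressed outside LabVIEW, the same verification logic is a simple loop over the table rows; the table values below are hypothetical placeholders, since the real ranges live in the LabVIEW lookup table.

```python
# Hypothetical lookup table: size label -> (minimum, maximum) in millimetres.
DIAMETER_TABLE = [("M10", 9.5, 10.5), ("M12", 11.5, 12.5), ("M16", 15.5, 16.5)]

def verify(measured_mm, table):
    """Return the size whose range contains the measurement, like the VI."""
    for size, low, high in table:
        if low <= measured_mm <= high:
            return size
    return "no match found"

print(verify(16.1, DIAMETER_TABLE))  # -> M16
print(verify(20.0, DIAMETER_TABLE))  # -> no match found
```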

After a successful verification of the bolt's diameter and length, and once the bolt's head has been located as described in section 5.3.5, a drop location is fetched from a lookup table. This drop location contains the X and Y coordinates of the drop location for this specific bolt size; see the figure below for the implementation in LabVIEW.
These coordinates will be included in the result that is sent later to the robot system. The values in the lookup table for drop locations are decided by the system's user, based on the design of the rack where the bolts will be placed.

vi3.PNG

Another VI is created to assemble a five-element array which constructs the vision result that will be sent to the robot. This VI combines the data from two parts into one string:
• The first part is the relevant position information of the located bolt's head that results from the geometric matching.
• The second part is the drop location of the bolt.
All five elements are in real-world units [millimetres, degrees]; these elements are:
• X-position (centre of mass) of the located bolt's head.
• Y-position (centre of mass) of the located bolt's head.
• Orientation (angle) of the located bolt's head.
• X-position of the drop location corresponding to the bolt's product group.
• Y-position of the drop location corresponding to the bolt's product group.
The resulting string is ready for the next state, "Send data", which will send the data to the robot.
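Assembling such a result could look like the sketch below; the comma-separated formatting and the drop-location values are assumptions for illustration.

```python
# Hypothetical drop-location table: size -> (x, y) in millimetres.
DROP_LOCATIONS = {"M16": (120.0, 45.0), "M20": (120.0, 90.0)}

def build_vision_result(head_x, head_y, head_angle, size):
    """Combine the head pose and drop location into one result string."""
    drop_x, drop_y = DROP_LOCATIONS[size]
    elements = (head_x, head_y, head_angle, drop_x, drop_y)
    return ",".join(f"{value:.2f}" for value in elements)

print(build_vision_result(33.5, 78.2, 14.0, "M16"))
# -> "33.50,78.20,14.00,120.00,45.00"
```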


DEMO-video