In the manufacturing of mechanical parts and assemblies, proper thread engagement between a bolt and a nut is vital to the performance and reliability of the product. This is typically precision work requiring repetitive manual operations. In this paper, we explain how such assembly operations can be carried out by collaborative robots (co-bots) that monitor the position and orientation of the nut and bolt with an image sensor. We focus on the assembly operation of threading a nut onto a bolt with the grippers of a co-bot. Slips and misalignments, which lead to incorrect positioning of the nut and bolt, are identified by capturing images of the two components in real time with a Microsoft Kinect sensor. A 3D reconstruction of the captured scene is produced with the Kinect Fusion application. The reconstruction takes the form of a polygonal mesh, which is then converted to 3D point-cloud data, a representation less sensitive to noise. The point cloud is next segmented by dividing the scene into clusters so that the objects in it, namely the grippers and the nut and bolt, can be distinguished. These clusters can be used to train the co-bot for the proposed operation. This method of extracting object boundaries, and thereby recognizing objects, is a vital operation in the field of robotic vision. We also provide a baseline description of machine learning techniques that can be applied to achieve proper assembly of a nut and a bolt.
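The segmentation step described in the abstract, dividing a point cloud into clusters so that grippers and fasteners can be told apart, is commonly realized as Euclidean cluster extraction: points closer than a distance threshold are grouped together. The paper does not give its algorithm, so the following is only an illustrative sketch of that generic technique; the function name, tolerance value, and test points are all hypothetical.

```python
# Illustrative sketch of Euclidean cluster extraction (not the paper's
# actual code): grow a cluster from a seed point by repeatedly absorbing
# any unvisited point within a distance tolerance `tol`.
from collections import deque

def euclidean_clusters(points, tol=0.02, min_size=2):
    """Group 3D points (x, y, z tuples) into clusters. Two points end up
    in the same cluster when a chain of points links them, each pair
    closer than `tol` (units are whatever the point cloud uses)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        cluster = [seed]
        while queue:
            i = queue.popleft()
            for j in list(unvisited):
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                dz = points[i][2] - points[j][2]
                if dx * dx + dy * dy + dz * dz <= tol * tol:
                    unvisited.discard(j)
                    queue.append(j)
                    cluster.append(j)
        if len(cluster) >= min_size:  # drop isolated noise points
            clusters.append(sorted(cluster))
    return clusters

# Two well-separated blobs of points should yield two clusters.
pts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0),
       (1.0, 1.0, 1.0), (1.0, 1.01, 1.0)]
print(len(euclidean_clusters(pts)))  # → 2
```

In a real pipeline this step would run on the point cloud converted from the Kinect Fusion mesh, and production systems typically use a k-d tree for the neighbor search rather than the brute-force scan shown here.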
Cited by: 0. Conference: 7th IEEE International Conference on Communication and Signal Processing, ICCSP 2018; conference date: 3 April 2018 through 5 April 2018; conference code: 142123.
S. Prabhakaran, S. Veni, and V. Kathavate, “Implementation of Robotic Vision to Perform Threaded Assembly”, in Proceedings of the 2018 IEEE International Conference on Communication and Signal Processing, ICCSP 2018, 2018, pp. 358-364.