In today's industry, robots and sensors are used increasingly often, making possible tasks that previously were not. With sensors, we can detect errors faster and more accurately than with human supervision, which improves production efficiency.
This thesis presents the development of a program that detects errors in parts by calculating the difference between two models. The first model is a point cloud obtained from a Sick Trispector 1030 sensor mounted on a UR5e robot. The second is a reference CAD model representing the desired shape of the part. The program was developed in the Python programming language together with CloudCompare, which handles the manipulation and processing of point clouds. Before working with the actual part, we created a test model in Fusion 360 to verify the program's operation: we drew two models that differed by three steps.
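To illustrate the comparison step, the following is a minimal sketch of driving CloudCompare's command-line mode from Python to compute cloud-to-mesh distances between a scanned point cloud and a reference CAD mesh. The file names and executable path are placeholders, and the exact options used in the thesis may differ.

```python
# Sketch: compute cloud-to-mesh (C2M) distances with CloudCompare's CLI.
# Assumes the CloudCompare binary is on the PATH; file names are placeholders.
import subprocess

scan_path = "scan_trispector.ply"   # point cloud exported from the sensor
cad_path = "reference_model.obj"    # reference CAD mesh

cmd = [
    "CloudCompare",
    "-SILENT",               # run without opening the GUI
    "-O", scan_path,         # load the scanned point cloud first
    "-O", cad_path,          # then load the reference mesh
    "-C2M_DIST",             # cloud-to-mesh distance: adds a scalar field to the cloud
    "-C_EXPORT_FMT", "ASC",  # export the cloud as plain ASCII (x y z + distance)
    "-SAVE_CLOUDS",          # save the cloud with the computed distance field
]
subprocess.run(cmd, check=True)
```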
Transformations in space are mathematical operations that change the position, orientation, and size of objects in three-dimensional space. When aligning the models, it is essential that both of them (the point cloud and the CAD model) are expressed in the same coordinate system, since only then can they be compared and their differences computed.
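As a small illustration of such an alignment, the sketch below applies a 4x4 homogeneous transformation to express a scanned point cloud in the CAD model's coordinate frame. The rotation and translation values are illustrative, not the thesis's actual calibration result.

```python
# Sketch: express a point cloud in another coordinate frame with a 4x4
# homogeneous transformation (illustrative R and t, not calibrated values).
import numpy as np

def transform_points(points, R, t):
    """Apply x' = R @ x + t to an (N, 3) array of points."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

# Example: rotate 90 degrees about the z-axis and shift 100 mm along x.
angle = np.pi / 2
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([100.0, 0.0, 0.0])

scan = np.random.rand(1000, 3) * 50.0       # stand-in for the sensor point cloud
scan_in_cad_frame = transform_points(scan, R, t)
```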
Different types of transformations, such as translation, rotation, scaling, and mirroring, are used to align the models. These transformations can be calculated from reference points obtained with the Hough transform, which enables the detection of geometric shapes, such as circles, in the point cloud.
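One common way to obtain such a transformation from matched reference points, for example circle centres detected in both the scan and the CAD model, is the SVD-based (Kabsch) method sketched below. The thesis does not prescribe this exact method, and the point values are purely illustrative.

```python
# Sketch: estimate a rigid transformation (rotation + translation) from
# matched reference points using the SVD-based Kabsch method.
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimising ||R @ src_i + t - dst_i|| over matched (N, 3) points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection (mirroring)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Reference points in the scan frame and their counterparts in the CAD frame.
scan_refs = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
cad_refs = np.array([[5.0, 5.0, 0.0], [5.0, 15.0, 0.0], [-5.0, 5.0, 0.0]])
R, t = rigid_transform(scan_refs, cad_refs)
```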
In addition to spatial transformations, interpolation is also important for aligning the models; we use it to fill in missing points in the point cloud obtained from the sensor.
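A possible way to fill such gaps is sketched below, assuming the scan can be treated as a height map z = f(x, y); scipy's griddata is used here purely as an illustration and is not necessarily the method used in the thesis.

```python
# Sketch: fill holes in a scan by resampling it onto a regular grid and
# interpolating missing heights (assumes a height-map-like scan).
import numpy as np
from scipy.interpolate import griddata

def fill_missing(points, grid_step=0.5):
    """Resample scattered (x, y, z) points onto a regular grid, interpolating holes."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xi = np.arange(x.min(), x.max(), grid_step)
    yi = np.arange(y.min(), y.max(), grid_step)
    XI, YI = np.meshgrid(xi, yi)
    # Linear interpolation inside the convex hull; NaN remains where no data exists.
    ZI = griddata((x, y), z, (XI, YI), method="linear")
    filled = np.column_stack([XI.ravel(), YI.ravel(), ZI.ravel()])
    return filled[~np.isnan(filled[:, 2])]   # drop cells that could not be filled
```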
The main objectives of this thesis are to describe the theoretical basics of transformations between models, to design a system that correctly calculates the differences between the models, and to display those differences graphically.