The aim of this thesis was to automate the testing of a military simulator with the help of a collaborative robot. We first describe the manual testing process and highlight its shortcomings. We then review the theoretical background and the development environment: we present the company Techman Robot and its products, the ROS environment, and the SAS module. Next, we present the implementation of the proposed system, its architecture, and how it operates. A key element of the system is the TM12 robot manipulator, which positions itself in the global coordinate system using a TM Landmark label read by a camera mounted on the manipulator. The software is divided into a part for the robot manipulator, written in the TM Flow environment, and a part built on ROS.

After development, we focus on testing the system. In the first experiment, we measure the deviation of the manipulator's position after it has been localized in the world coordinate system with computer vision; reading the position back from the manipulator shows that the deviation is large. In the second experiment, we check whether computer vision is responsible for the error by moving the manipulator to the same position several times without using computer vision; we conclude that the robot itself is accurate and that the vision is the source of the inaccuracy. In the third experiment, we use a laser mounted on the manipulator to measure the scatter of the projected points on a flat surface over several repetitions and conclude that the dispersion is too large. We visualize the laser scatter on a graph and conclude that the system as currently implemented is not accurate enough for use.
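As an illustration of the dispersion measurement described above, the following is a minimal sketch, not taken from the thesis, of how the scatter of repeated laser points on a flat surface could be quantified and plotted. The point coordinates, their units, and the use of NumPy and Matplotlib are assumptions for the sake of the example.

```python
# Illustrative sketch only. Assumption: repeated laser hit points are given
# as (x, y) coordinates in millimetres on the flat target surface.
import numpy as np
import matplotlib.pyplot as plt

def dispersion_stats(points):
    """Return the centroid, per-point radial deviations, and their spread."""
    pts = np.asarray(points, dtype=float)            # shape (N, 2)
    centroid = pts.mean(axis=0)                      # mean laser hit position
    radial = np.linalg.norm(pts - centroid, axis=1)  # distance of each hit from the centroid
    return centroid, radial, radial.std(), radial.max()

# Hypothetical measurements from repeated placements of the manipulator.
points = [(0.0, 0.0), (1.8, -0.9), (-1.2, 2.1), (2.4, 1.5), (-0.6, -1.7)]

centroid, radial, spread, worst = dispersion_stats(points)
print(f"centroid: {centroid}, radial deviation std: {spread:.2f} mm, max: {worst:.2f} mm")

# Visualize the scatter of laser points around their centroid.
pts = np.asarray(points)
plt.scatter(pts[:, 0], pts[:, 1], label="laser hits")
plt.scatter(*centroid, marker="x", color="red", label="centroid")
plt.axis("equal")
plt.legend()
plt.show()
```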