This master's thesis presents the integration of two depth cameras on an autonomous mobile robot (AMR), together with the development and testing of an obstacle detection algorithm. The primary objectives were to determine the camera placement that best covers the area visible during driving and to develop an efficient algorithm for environmental perception.
First, we analyzed the market and existing solutions in the literature, defined the system requirements, studied the characteristics of depth cameras, and developed a mathematical model of the cameras' field of view. This model was used to compare candidate camera placements and determine the optimal positioning. A significant portion of the thesis is devoted to the development of the obstacle detection algorithm.
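To illustrate the kind of geometric field-of-view model described above, the following is a minimal pinhole-geometry sketch. The FOV values (~87° × 58°, typical of D435-class depth cameras) and the mounting height and tilt used in the example are illustrative assumptions, not the thesis's actual parameters.

```python
import math

def coverage_width(distance_m, hfov_deg):
    """Horizontal width of the area covered at a given distance ahead,
    assuming a simple pinhole camera model."""
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)

def nearest_visible_ground(mount_height_m, tilt_down_deg, vfov_deg):
    """Distance to the closest ground point the camera can see, for a
    camera mounted at mount_height_m and tilted down by tilt_down_deg."""
    lower_ray = math.radians(tilt_down_deg + vfov_deg / 2.0)
    if lower_ray <= 0:
        return float("inf")  # lower edge of the FOV never reaches the ground
    return mount_height_m / math.tan(lower_ray)

# Assumed example: ~87 deg horizontal, ~58 deg vertical FOV
print(coverage_width(2.0, 87.0))               # width covered 2 m ahead, ~3.80 m
print(nearest_visible_ground(0.5, 15.0, 58.0))  # blind zone ends ~0.52 m ahead
```

Sweeping such functions over candidate mounting heights and tilt angles is one way to compare placements for blind-zone size and ground coverage.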
We used Intel RealSense D435f stereo depth cameras to capture the environment. The architecture is built on the open-source ROS framework and the Python language. Camera data were processed primarily with the Open3D library, while visualization was handled in RViz. In addition, we explored a GPU-based method for plane detection, which enabled faster data processing. Based on these results, we developed an optimized obstacle detection algorithm, which was then tested on the AMR.
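The plane-detection step mentioned above can be sketched with a minimal RANSAC plane fit. This is a plain-NumPy illustration of the idea behind Open3D's `PointCloud.segment_plane`, not the thesis's actual implementation; the synthetic point cloud, thresholds, and function names are assumptions.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, iters=200, rng=None):
    """Minimal RANSAC plane fit. Returns ((a, b, c, d), inlier_indices)
    for the plane ax + by + cz + d = 0 with the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        # Fit a candidate plane through 3 randomly chosen points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample
        normal /= norm
        d = -normal @ p0
        # Points closer than the threshold to the plane count as inliers
        inliers = np.flatnonzero(np.abs(points @ normal + d) < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (*normal, d), inliers
    return best_model, best_inliers

# Synthetic scene: a noisy ground plane (z ~ 0) plus a raised obstacle
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-1, 1, (400, 2)),
                          rng.normal(0.0, 0.005, 400)])
obstacle = np.column_stack([rng.uniform(0.2, 0.4, (100, 2)),
                            rng.uniform(0.1, 0.3, 100)])
cloud = np.vstack([ground, obstacle])

model, inliers = ransac_plane(cloud)
# Everything off the dominant (ground) plane is an obstacle candidate
obstacle_idx = np.setdiff1d(np.arange(len(cloud)), inliers)
```

In a real Open3D pipeline, `pcd.segment_plane(distance_threshold, ransac_n, num_iterations)` performs this step directly on the captured point cloud, and a GPU implementation of the same search is what enables the faster processing noted above.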
The results include an analysis of obstacle detection in the environment: the standard deviation of measured distance, the resolution of ground obstacle detection, and the repeatability of obstacle detection within a specified zone. We also tested different obstacle materials and obstacle detection while driving.
The discussion addresses the issues that arose during the study, including glare, the detection of airborne obstacles, camera positioning, and possible improvements.