Agriculture is a sector that is constantly evolving towards a more sustainable future and has a persistent need for labour. Statistics from recent years, however, show a growing labour shortage in the sector. As a result, increasing research and development effort is being directed at smart, digitalised and robotic agriculture. Much of this work focuses on integrating mobile robots into different types of agricultural fields, since they can perform a variety of tasks at different locations within a field.
In the framework of this Master's thesis, we upgraded an existing mobile platform with additional sensor equipment in order to acquire a dataset from different agricultural environments and to integrate and test new approaches to the localisation of a mobile robot. In particular, we focused on localisation based on 3D LiDAR sensor data. We integrated some of the best-known algorithms for estimating odometry from point cloud data and compared them on the real-world dataset that we acquired, testing their performance in different agricultural scenarios to verify their robustness.
At the beginning of the thesis, we briefly introduce the field of agricultural robotics and odometry based on a 3D LiDAR sensor. We then present the mobile platform used during the research from a hardware point of view and explain its kinematics. In the next section, we describe the acquired dataset, presenting which environments were covered and which sensor data were captured. In total, we acquired data from five environments: an apple orchard, a plum orchard, an asparagus field, a hazelnut orchard and ground vegetables grown on PE foil. The captured sensor data comprise point clouds, depth images, RGB images, GNSS data and inertial measurements. The dataset is intended to support different types of research in agricultural robotics; since our own research focused solely on localisation based on 3D LiDAR data, we used only the point cloud and GNSS data.

In the following chapters, we explain the theory behind the algorithms used in our research and the process of their parameterisation. We implemented four well-known algorithms: RTAB-Map Scan-to-Scan (S2S), RTAB-Map Scan-to-Map (S2M), Keep-It-Small-and-Simple ICP (KISS-ICP) and LeGO-LOAM. Each of the four algorithms was tested on the apple orchard, plum orchard and hazelnut orchard data. The performance of each algorithm was evaluated using statistical metrics established in the field of LiDAR odometry evaluation, based on a comparison of the resulting trajectories with a reference trajectory derived from GNSS data. To improve the localisation of the mobile robot and to propose a localisation method that is robust across different data and environments, we additionally applied sensor data fusion, combining the 3D LiDAR odometry with noisy GNSS measurements using an Extended Kalman Filter.
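A widely used metric for the trajectory comparison described above is the absolute trajectory error (ATE), computed after rigidly aligning the estimated trajectory to the GNSS reference. The following is a minimal sketch of that idea, not the thesis's actual evaluation code; the function name `ate_rmse` and the choice of Umeyama/Kabsch alignment (rotation and translation, no scale) are our own illustrative assumptions.

```python
import numpy as np

def ate_rmse(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Absolute trajectory error (RMSE) after rigid alignment.

    Both inputs are (N, 3) arrays of time-synchronised positions.
    The estimated trajectory is aligned to the reference with the
    Kabsch/Umeyama method (rotation + translation, no scale) before
    the per-pose position errors are computed.
    """
    est_mean = estimated.mean(axis=0)
    ref_mean = reference.mean(axis=0)
    est_c = estimated - est_mean
    ref_c = reference - ref_mean
    # Optimal rotation from the SVD of the cross-covariance matrix,
    # with a reflection correction so that det(R) = +1.
    U, _, Vt = np.linalg.svd(est_c.T @ ref_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = Vt.T @ D @ U.T  # maps estimated frame -> reference frame
    aligned = est_c @ R.T + ref_mean
    errors = np.linalg.norm(aligned - reference, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```

Because the alignment removes any global rotation and translation, a trajectory that differs from the reference only by a rigid transform yields an ATE of zero; what remains is the drift and local error of the odometry itself.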
The discussion section provides a detailed overview of the results and explains the specific behaviour of the algorithms in each environment.