<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/"><rdf:Description rdf:about="https://repozitorij.uni-lj.si/IzpisGradiva.php?id=166848"><dc:title>Maritime obstacle avoidance using sensor data fusion</dc:title><dc:creator>Muhovič, Jon Natanael (Author)</dc:creator><dc:creator>Perš, Janez (Mentor)</dc:creator><dc:subject>obstacle detection</dc:subject><dc:subject>sensor fusion</dc:subject><dc:subject>unmanned surface vehicle</dc:subject><dc:subject>calibration</dc:subject><dc:description>The field of autonomous vehicles has been growing rapidly in recent years. Advances in machine learning and the wider availability of datasets and sensors have enabled a large variety of approaches to scene interpretation for autonomous navigation. While much of environment interpretation is based on color images, additional sensors can also be used to replace or improve upon visible-light color cameras. Moreover, using several different sensors in the same system requires precise alignment so that they can operate in unison. For any autonomous platform, careful planning must go into choosing appropriate sensors, given their capabilities, price, and power consumption.

While the majority of the research focuses on ground vehicles, aerial and water platforms are also being actively researched. Although many general image processing methods can be transferred between platforms with little adaptation, there are platform-specific situations that must be addressed if autonomy is to be achieved. Methods and sensors used on each type of autonomous platform must be adapted to its particular dynamics and to the situations that can be encountered during operation. This can range from using different complementary sensors and defining more fine-grained object classes to exploiting prior knowledge about the environment.

In our work, we focus on marine environments as seen from on board a small- to mid-size autonomous vessel. As a source of dense, informative data, we first used a stereo camera system paired with an IMU sensor to detect obstacles on the water surface. Such a system consumes relatively little power while producing color images and enabling the computation of a dense 3D point cloud. Additional information from the IMU can further simplify the interpretation of the captured data. We show that we can detect and track class-agnostic obstacles, which, coupled with GPS, enables safe navigation during robot operation.

Calibration of multimodal systems is an important and often overlooked aspect of using different sensors in a complementary way. Since the precise relative positions and orientations of sensors mounted on a platform cannot be measured directly, they must be established through a calibration process. This can be time-consuming, but it is crucial if methods such as stereo matching or camera image alignment are to be used. We present a method for correcting the calibration parameters of a stereo camera system during operation. We also present a novel approach for jointly calibrating multimodal systems and show how these methods can facilitate downstream tasks such as supervised learning.

Finally, we used our multimodal sensor system to gather and annotate the first maritime dataset that spans a wide range of modalities, including multiple cameras and LIDAR. We used the gathered data to develop and train a multimodal method that efficiently combines color images with thermal images and LIDAR data, enabling scene interpretation even in very challenging low-light conditions. The proposed dataset is a unique contribution to the field of multimodal maritime systems and can be used for further research into supervised multimodal segmentation and detection methods.</dc:description><dc:date>2025</dc:date><dc:date>2025-01-28 07:25:05</dc:date><dc:type>Doctoral dissertation</dc:type><dc:identifier>166848</dc:identifier><dc:language>sl</dc:language></rdf:Description></rdf:RDF>
