Maritime accidents involving oil spills have threatened the environment since the early days of the maritime industry. Even the smallest spill can seriously damage natural habitats, devastating the lives of those that depend on them. Major oil spills and illegal oil discharges, such as the Exxon Valdez, Deepwater Horizon, and Sea Empress disasters and the Irika Shipping illegal discharge case, signaled to political decision makers and industry authorities that stricter industry regulations and advances in oil spill monitoring and surveying techniques were required to protect the natural environment.
Modern oil spill surveillance and monitoring techniques rely on advanced remote sensing solutions that use passive or active sensors to detect radiation reflected or emitted from the sea surface. Active microwave sensors such as Synthetic Aperture Radar (SAR), widely used on satellites and ground stations, emit a signal toward the sea and observe the reflected return. Passive sensors, the more financially accessible technology, can be regular cameras or other specialized imagers. They do not emit any signal but instead detect radiation (visible, infrared, thermal infrared, microwave) originating from other sources, primarily the sun.
This thesis explores whether oil spills can be detected with a passive RGB sensor (an ordinary camera) mounted on a drone with the help of artificial intelligence (AI). Two further questions followed from this initial analysis: how reliable is a dataset of oil spill images that largely had to be downloaded from the internet, and can a pretrained neural network adequately cope with the irregular shapes and colors of oil spills and the inconsistency of sea shades.
The experimental setup consisted of a commercially available DJI Mavic 2 Enterprise Dual drone, its original camera, and computer vision algorithms. The AI models were trained to recognize three classes: clear sea, oil spill, and pier. Two computer vision algorithms were used: AlexNet for image classification and YOLOv7 for object detection. Both were trained on the same dataset, created in the Port of Koper and supplemented with oil spill images retrieved from the internet.
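To illustrate how a pretrained classification network such as AlexNet can be adapted to the three classes used here, the sketch below shows a minimal transfer-learning setup. This is an assumption-laden illustration, not the exact pipeline used in the thesis: the use of PyTorch/torchvision, the class names, and the hyperparameters are all assumptions.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed), not the
# exact pipeline used in the thesis. It adapts ImageNet-pretrained AlexNet
# to the three classes: clear sea, oil spill, pier.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["clear_sea", "oil_spill", "pier"]  # class names are illustrative

# Load AlexNet with ImageNet weights and replace the final fully connected
# layer (4096 -> 1000 ImageNet classes) with a three-class output layer.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, len(CLASSES))

# Standard fine-tuning ingredients; the data loader and training loop are
# omitted and would feed the drone and internet images of the three classes.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```

YOLOv7 would be retrained analogously on the same images, with bounding box annotations for the three classes instead of whole-image labels.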
The dataset focused on images taken in optimal weather conditions, with a calm sea and a clear sky. Training a neural network on such a dataset therefore carries several limitations, notably the inability to detect dark oil spills in dark environments or oil spills in rough seas. To simplify the problem of building a system able to detect oil spills in any weather, I proposed a basic system for detecting oil spills under these limited conditions, which can be further researched and upgraded.
This thesis demonstrates that oil spills can be identified very reliably in the selected environment using a regular camera attached to a drone and pretrained neural networks. Both algorithms achieved excellent object classification performance (a classification error of approximately 1%), while the harmonic mean of precision and recall surpassed 80%. Adequately trained models can be used for real-time oil spill detection with an RGB camera, the only limitations being that the system must operate during daylight and avoid rough weather (above Beaufort 5).
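For clarity, the reported harmonic mean of precision and recall is the F1 score; the short sketch below shows how it is computed. The numeric values are placeholders for illustration, not results from the thesis.

```python
# F1 score: harmonic mean of precision and recall. The values below are
# placeholders, not the thesis results.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Example with hypothetical detection metrics:
print(round(f1_score(0.85, 0.78), 2))  # -> 0.81
```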