According to World Health Organization reports, cancer is the second leading cause of death worldwide. Additionally, the aging of the general population in most developed countries is causing a rise in the number of newly diagnosed cancer cases reported each year. The majority of patients are treated with a combination of chemotherapy, radiotherapy and surgery. Recent advances in radiotherapy have made it possible to deliver radiation more precisely, allowing irradiation of tumors much closer to surrounding anatomical structures (i.e. organs at risk). To utilize the available dose delivery precision, radiotherapy planning must also be conducted with a high degree of precision, for which accurate segmentations of organs at risk from computed tomography (CT) images are a prerequisite.
Manual CT segmentation is a repetitive and time-consuming process. Published research reports successful automation of segmentation using convolutional neural networks, which appear to provide segmentations of sufficient quality that only minor manual corrections are needed, thereby rendering the overall contouring process more time-efficient. Shorter organs-at-risk segmentation and radiotherapy planning times could therefore reduce the problem of long waiting times, which are especially pressing in oncology departments, where the treatment success rate decreases with the time elapsed between diagnosis and treatment.
In recent years, many research articles have been published in which convolutional neural networks were developed for the segmentation of a single anatomical structure. Without extensive evaluation, it is difficult to predict how such a model or method would perform on other anatomical structures, or on many structures simultaneously, such as the organs at risk, many of which are poorly discernible from surrounding tissue. Therefore, the aims of this thesis were (i) to identify the state-of-the-art segmentation methods in the scientific literature, (ii) to adapt those methods for simultaneous segmentation of multiple organs at risk in CT images, and (iii) to perform an objective and comparative evaluation of their segmentation performance.
Four different methods for automatic 3D image segmentation, i.e. DeepMedic, U-Net, nnU-Net and InnerEye, all based on convolutional neural networks, were chosen based on their state-of-the-art performance as reported in the literature. The methods were adapted for multi-organ segmentation and their hyperparameters tuned using an independent validation dataset. The segmentations were evaluated using the Dice-Sørensen coefficient and its surface-based variant. For evaluation purposes, we applied the methods to organs-at-risk segmentation on collections of head and neck CT images and a collection of thorax CT images. Additionally, we evaluated gross tumor region segmentation on a collection of CT images of lung cancer cases. We also performed experiments using semi-supervised learning principles, in which CT images without reference segmentations were used to augment the training dataset and to further re-train the models and refine the segmentations.
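The volumetric Dice-Sørensen coefficient used for evaluation measures the overlap between a predicted and a reference binary mask, defined as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of how it can be computed for 3D masks is shown below; the function name and the handling of the empty-mask edge case are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice-Sørensen coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy 3D example: two 8-voxel cubes overlapping in 4 voxels.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[0:2, 0:2, 0:2] = True
b[1:3, 0:2, 0:2] = True
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

The surface-based variant mentioned above restricts the same overlap computation to voxels near the segmentation boundary, which makes it more sensitive to contour errors on large organs.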
Analysis of the segmentation results showed that all tested methods yield organs-at-risk contours that are usable in the sense that only limited additional manual editing is required, thereby substantially reducing the time needed for segmentation according to the published guidelines. In all current implementations of the tested methods, however, an expert would need to verify and correct the generated segmentations before they could be used in the radiotherapy planning process. The method that performed best, consistently across all tests, was nnU-Net. The modular structure of this method allows for easy modification of individual components. Furthermore, the results of lung tumor segmentation showed that the tested methods were not yet suitable for tumor segmentation, since correcting the obtained segmentations would take more time than creating them manually in the first place. The results of the semi-supervised learning experiments showed that CT images without reference segmentations could, in some cases, be used to enhance segmentation performance.
The performed experiments indicate that the tested methods, if incorporated at the start of the current manual segmentation process, could yield substantial time savings in organs-at-risk contouring. The methods do not, however, yet seem usable for lung tumor segmentation, possibly due to the limited number of training, validation and testing image collections. Consequently, the confidence intervals of these results were wide, and the priority in any further work should be to increase the number of cases.