This thesis describes the process of lesion segmentation in medical images using deep learning. Several segmentation methods are presented, beginning with the well-known and widely used U-Net, followed by its improved version, U-Net++, and finally the more recent SegFormer, which is based on the Transformer architecture. We tested segmentation on three types of input: plain PET images, CT images, and combined PET and CT images, and we show how segmentation difficulty differs among them. Our experiments show that PET images are the easiest to segment and yield the best results, CT images are considerably more challenging, and combining PET and CT images does not bring a significant improvement.