
Colonoscopy image annotation with foundation models
Lazić, Miha (Author), Emeršič, Žiga (Mentor)

PDF - Presentation file, Download (5.78 MB)
MD5: 1633FB8A5A7A07F029F545AAEFE25FBF

Abstract
Image annotation is a crucial but often time-consuming step in preparing image datasets. Besides the graphical interface of the annotation tool, annotation speed depends on the segmentation approaches the tool implements. Advances in deep learning for computer vision have made it possible to replace manual annotation and traditional segmentation algorithms with faster and more accurate approaches. One such approach is the foundation model Segment Anything, which we analyzed in several variants (ViT-b, ViT-l, ViT-h, MobileSAM, SAM-Med2D, MedSAM) and tested on the Kvasir-SEG dataset of colonoscopy images and the Kvasir-Instrument dataset of colonoscopy instruments. We evaluated the models' segmentation accuracy against ground-truth object masks, measured their runtime, and, based on the results, implemented the functionality of the best-performing model in a prototype annotation tool.
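
The abstract describes prompt-driven segmentation with Segment Anything but gives no implementation detail. As a rough illustration only, the sketch below shows how a single foreground click could be passed to a SAM variant through the official `segment-anything` Python package; the checkpoint filename is the official ViT-B release, but the image path and click coordinates are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch: click-prompted segmentation with Segment Anything (ViT-B).
# Assumes the `segment-anything`, `opencv-python`, and `numpy` packages and a
# locally downloaded checkpoint; image path and coordinates are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Load a colonoscopy frame (SAM expects RGB) and embed it once per image.
image = cv2.cvtColor(cv2.imread("polyp.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive click inside the region of interest acts as the prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # (x, y) pixel of the click
    point_labels=np.array([1]),           # 1 = foreground
    multimask_output=True,                # return several candidate masks
)
best_mask = masks[np.argmax(scores)]      # boolean H x W mask
```

The evaluation the abstract mentions (accuracy against ground-truth masks plus runtime) could be measured with standard overlap metrics such as IoU and Dice together with wall-clock timing; the helper below is a generic sketch of that kind of measurement, not the thesis's evaluation code.

```python
import time
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """IoU and Dice between two boolean masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = float(inter / union) if union else 1.0
    dice = float(2 * inter / total) if total else 1.0
    return iou, dice

# Illustrative use with the sketch above (gt loaded from a Kvasir-SEG mask):
# gt = cv2.imread("polyp_mask.jpg", cv2.IMREAD_GRAYSCALE) > 127
# t0 = time.perf_counter()
# masks, scores, _ = predictor.predict(...)
# print(time.perf_counter() - t0, iou_and_dice(masks[np.argmax(scores)], gt))
```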

Language: Slovenian
Keywords: annotation, segmentation, deep learning, Vision Transformer, foundation model, Segment Anything, colonoscopy
Work type: Bachelor thesis/paper
Organization: FRI - Faculty of Computer and Information Science
Year: 2024
PID: 20.500.12556/RUL-160506
COBISS.SI-ID: 208428803
Publication date in RUL: 29.08.2024
Views: 227
Downloads: 46

