
Visual tracking and segmentation of object parts
Mesarec, Jaka (Author), Lukežič, Alan (Mentor)

PDF - Presentation file, Download (10.61 MB)
MD5: 7E5692F33A172967CB997193672AA05C

Abstract
In recent years, modern methods in the field of visual tracking and segmentation have achieved excellent results. A major reason for this progress is the adoption of memory-based approaches. One particularly promising method is the Segment Anything Model 2 (SAM2). Despite their sophistication, the new methods struggle to track parts of objects, because they are trained on datasets containing annotations of entire objects. Examples of object part tracking are rare in existing datasets, and a specialized dataset for this task does not yet exist. In this work, we present the training dataset YT-VOS-PT (train) and the evaluation dataset YT-VOS-PT (eval), both based on the YouTube-VOS dataset and containing annotated examples of object part tracking. We use the training dataset to retrain the SAM2 method. We evaluate various training mechanisms of the method on the YT-VOS-PT (eval) dataset, where we demonstrate an improvement of the J&F score of up to 7%. On selected examples from the DiDi dataset, where we track object parts, we show tracking quality improvements of up to 16%, and up to 39% when integrated with DAM4SAM.

Language:Slovenian
Keywords:computer vision, machine learning, visual object tracking, video object segmentation, object part tracking
Work type:Master's thesis/paper
Typology:2.09 - Master's Thesis
Organization:FRI - Faculty of Computer and Information Science
Year:2025
PID:20.500.12556/RUL-176045
COBISS.SI-ID:257970435
Publication date in RUL:19.11.2025
Views:103
Downloads:16

Secondary language

Language:English
Title:Visual tracking and segmentation of object parts
Abstract:
In recent years, modern methods in the field of visual tracking and segmentation have achieved excellent results. A major reason for this progress is the adoption of memory-based approaches. One particularly promising method is the Segment Anything Model 2 (SAM2). Despite their sophistication, new methods face challenges when tracking parts of objects. The main reason for this is that the methods are trained on datasets containing annotations of entire objects. Examples of object part tracking are rare in existing datasets, and a specialized dataset for this task does not yet exist. In this work, we present a training dataset YT-VOS-PT (train) and an evaluation dataset YT-VOS-PT (eval), both based on the YouTube-VOS dataset and containing annotated examples of object part tracking. The training dataset is used to retrain the SAM2 method. Various training mechanisms of the method are evaluated on the YT-VOS-PT (eval) dataset, where we demonstrate an improvement of the J&F score by up to 7%. On selected examples from the DiDi dataset, where we track object parts, we show tracking quality improvements of up to 16%, and up to 39% when integrated with DAM4SAM.

Keywords:computer vision, machine learning, visual object tracking, video object segmentation, object part tracking
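The J&F score reported in the abstract is the standard video-object-segmentation metric: the mean of region similarity J (intersection-over-union of the predicted and ground-truth masks) and contour accuracy F (an F-measure between the two mask boundaries). A minimal sketch of a zero-tolerance variant, assuming binary NumPy masks (the official DAVIS-style evaluation additionally matches boundary pixels within a small radius, which this sketch omits):

```python
import numpy as np

def jaccard(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, gt).sum() / union

def boundary(mask):
    """4-connected boundary pixels of a binary mask."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, mode="constant")
    # a pixel is interior if all four neighbours are foreground
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    return m & ~interior

def boundary_f(pred, gt):
    """Contour accuracy F: F1-score between the two boundary pixel sets
    (strict, zero-tolerance variant)."""
    bp, bg = boundary(pred), boundary(gt)
    if bp.sum() == 0 and bg.sum() == 0:
        return 1.0
    tp = (bp & bg).sum()
    precision = tp / bp.sum() if bp.sum() else 0.0
    recall = tp / bg.sum() if bg.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def j_and_f(pred, gt):
    """J&F: mean of region similarity and contour accuracy."""
    return (jaccard(pred, gt) + boundary_f(pred, gt)) / 2
```

In benchmark evaluation the per-frame scores are averaged over all frames and sequences; a reported "improvement of up to 7%" refers to this aggregated J&F.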
