
Improving the quality of generated images using image-to-image translation models
TANKO, URBAN (Author), Skočaj, Danijel (Mentor)

PDF - Presentation file (12.15 MB)
MD5: C9617FCE10CED2E7FEEBE8EBC724EE14

Abstract
GAN methods are increasingly used for image generation. One of their weaknesses is long training time. In this thesis we attempt to mitigate it by using image-to-image translation models, with which we aim to improve the quality of the generated images. We do this by collecting a dataset and training the StyleGAN image generation model on it. The generated images are then passed through the following image-to-image translation models: SR-GAN, Pix2pix, CycleGAN, Pix2pixHD, U-GAT-IT and DeblurGAN. For each model we describe the generated images and evaluate them with the FID metric and a human score obtained through a survey. We also compare the obtained results with each other.

Language: Slovenian
Keywords: machine learning, artificial intelligence, neural networks, generative adversarial networks, image-to-image translation, dataset, data scraping
Work type: Bachelor thesis
Typology: 2.11 - Undergraduate Thesis
Organization: FRI - Faculty of Computer and Information Science
Year: 2020
PID: 20.500.12556/RUL-114979
COBISS.SI-ID: 1538565827
Publication date in RUL: 06.04.2020
Views: 1012
Downloads: 230
Metadata: XML RDF-CHPDL DC-XML DC-RDF

Secondary language

Language: English
Title: Improving the quality of generated images using image-to-image translation models
Abstract:
The application of GAN methods for image synthesis has grown considerably. One of their weaknesses is long training time. In this thesis we try to mitigate it by using image-to-image translation models to improve the quality of the generated images. We first gather a dataset and train the StyleGAN image synthesis model on it. We then feed the generated images into various image-to-image translation models: SR-GAN, Pix2pix, CycleGAN, Pix2pixHD, U-GAT-IT and DeblurGAN. For each model we describe the visual properties of the generated images. We also compute FID scores and human scores obtained with a survey. Finally, we compare the results across the models.
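The FID metric used in the evaluation compares the statistics of Inception features of real and generated images by fitting a Gaussian to each set and taking the Fréchet distance between the two. A minimal sketch of the distance computation, assuming the Inception-v3 feature vectors have already been extracted (the function name `fid` and the use of `scipy` for the matrix square root are illustrative choices, not the thesis's implementation):

```python
import numpy as np
from scipy import linalg

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    Each input is an (n_samples, n_features) array of Inception activations.
    FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrtm(C1 @ C2))
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; small imaginary
    # components can appear from numerical error and are discarded.
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))
```

Lower is better: identical feature distributions give a score near zero, and shifting one set away from the other increases it.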

Keywords: machine learning, artificial intelligence, neural networks, generative adversarial networks, image-to-image translation, dataset, data scraping
