The use of GANs for image synthesis has grown considerably in recent years. One of their weaknesses is long training time. In this thesis we try to mitigate this by using image-to-image translation models to improve the quality of the generated images. We first gather a dataset and train the image synthesis model StyleGAN. We then feed the generated images into several image-to-image translation models: SR-GAN, Pix2pix, CycleGAN, Pix2pixHD, U-GAT-IT, and DeblurGAN. For each model we describe the visual properties of the generated images. We also compute FID scores and human scores obtained through a survey. Finally, we compare the results of the models.