Recently, convolutional neural network models have achieved great success in super-resolution from a single input image, a task known as Single-Image Super-Resolution (SISR). Such models are flexible and efficient at learning the non-linear mapping from low-resolution images to high-resolution ones. In this work, we present a novel super-resolution procedure based on two autoencoders with coupled latent spaces. The first autoencoder reconstructs low-resolution images, while the second reconstructs high-resolution images. The latent spaces of the two autoencoders are connected by a linking network, which allows conversion between the low- and high-resolution latent spaces. Using the low-resolution encoder, the linking network, and the high-resolution decoder, it is possible to efficiently upscale an arbitrary low-resolution input image.
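To make the inference path concrete, the following is a minimal sketch of how the three trained components could be chained, assuming PyTorch; the class and component names (`SRPipeline`, `lr_encoder`, `linker`, `hr_decoder`) are hypothetical placeholders, and the actual architectures are those defined by the method.

```python
# Minimal sketch of the coupled-latent-space inference path (assumed
# PyTorch implementation; module names are illustrative placeholders).
import torch
import torch.nn as nn

class SRPipeline(nn.Module):
    """Upscale a low-resolution image via coupled latent spaces."""
    def __init__(self, lr_encoder: nn.Module, linker: nn.Module,
                 hr_decoder: nn.Module):
        super().__init__()
        self.lr_encoder = lr_encoder  # encoder of the low-resolution autoencoder
        self.linker = linker          # linking network: LR latent -> HR latent
        self.hr_decoder = hr_decoder  # decoder of the high-resolution autoencoder

    def forward(self, lr_image: torch.Tensor) -> torch.Tensor:
        z_lr = self.lr_encoder(lr_image)  # low-resolution latent code
        z_hr = self.linker(z_lr)          # converted high-resolution latent code
        return self.hr_decoder(z_hr)      # decoded high-resolution image

# Example usage with identity placeholders standing in for trained modules:
pipeline = SRPipeline(nn.Identity(), nn.Identity(), nn.Identity())
hr = pipeline(torch.randn(1, 3, 16, 16))
```

Note that the high-resolution encoder and the low-resolution decoder are only needed during training of the two autoencoders; at inference time the pipeline above is sufficient.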
The results of this method are evaluated on four datasets: CASIA-WebFace, LFW, QMUL-TinyFace, and QMUL-SurvFace. Part of the CASIA-WebFace database is used to train all models, and the remainder is used for testing. The QMUL-TinyFace and QMUL-SurvFace databases are used to verify the system's performance on real images for which no high-resolution pairs are available. Finally, the results of the super-resolution model are compared with existing approaches such as bicubic interpolation, SRCNN, and SRGAN. When frontal face images are used as input, our approach outperforms bicubic interpolation and the SRCNN model: the reconstructed faces are more pronounced and smoother, yet do not contain fewer high-resolution details than faces produced by SRGAN.