We implemented a deep neural network trained to generate image captions, a task that connects computer vision and natural language processing. Following existing architectures for this problem, we built our model with the Keras library in Python and retrieved data from the MS COCO dataset. Our solution is a bimodal architecture combining deep convolutional, recurrent, and fully connected neural networks: image features are extracted with the VGG16 architecture, and words are represented with GloVe embeddings. The final model was trained on 82,783 images and tested on 40,504 images, each paired with descriptions. We evaluated the model with the BLEU metric, obtaining a score of 49.0 and a classification accuracy of 60%. While current state-of-the-art models were not surpassed, we see many possibilities for improvement.
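The sketch below illustrates this kind of bimodal design in Keras: VGG16 supplies a fixed-length image feature vector, an embedding layer plus an LSTM encodes the partial caption, and the two branches are merged to predict the next word. The layer sizes, vocabulary size, and caption length here are illustrative assumptions, not our exact configuration.

```python
# A minimal sketch of a bimodal captioning model in Keras.
# VOCAB_SIZE, MAX_LEN, and layer widths are assumed values for illustration.
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

VOCAB_SIZE = 10000   # assumed vocabulary size
MAX_LEN = 34         # assumed maximum caption length (in tokens)
EMBED_DIM = 100      # GloVe 100-d vectors (an assumption; other sizes exist)

# Image feature extractor: VGG16 without its classifier head; the "fc2"
# layer yields a 4096-dimensional feature vector per image.
vgg = VGG16(weights="imagenet")
feature_extractor = Model(inputs=vgg.input, outputs=vgg.get_layer("fc2").output)

# Image branch: project the 4096-d VGG16 features down to 256 dimensions.
image_input = Input(shape=(4096,))
image_dense = Dense(256, activation="relu")(Dropout(0.5)(image_input))

# Text branch: embedding layer (initialized from GloVe in practice)
# followed by an LSTM that encodes the caption prefix.
caption_input = Input(shape=(MAX_LEN,))
caption_embed = Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(caption_input)
caption_lstm = LSTM(256)(Dropout(0.5)(caption_embed))

# Merge both modalities and predict the next word in the caption.
merged = add([image_dense, caption_lstm])
hidden = Dense(256, activation="relu")(merged)
output = Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time a caption is generated word by word: the model is fed the image features together with the caption produced so far, and the highest-probability word (or a beam-search candidate) is appended until an end token or the maximum length is reached.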