This master's thesis explores the development of an explainable recommender system for movies. It addresses the problem of information overload and the need for more transparent and understandable recommendations, particularly in the context of movie streaming platforms. The core purpose is to implement a system that not only provides recommendations but also explains why they are made.
The work uses two publicly available datasets, MovieLens and CoMoDa, and investigates two recommendation algorithms: matrix factorization and Bayesian matrix factorization. The study also examines the impact of removing user bias from the rating data. The thesis focuses on incorporating Shapley values to quantify the contribution of individual features to a given recommendation, enabling the generation of explanations in natural language.
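The Shapley-value idea mentioned above can be illustrated with a minimal sketch. This is a generic exact computation over feature subsets, not the thesis's actual implementation: the feature names, the additive value function, and the function signature are assumptions made purely for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a small feature set.

    `value` maps a set of features to the model's (partial) prediction;
    each feature's Shapley value is its weighted average marginal
    contribution over all subsets of the remaining features.
    """
    n = len(features)
    phis = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(subset) | {f}) - value(set(subset)))
        phis[f] = phi
    return phis

# Toy additive "recommendation score": each feature contributes independently,
# so the Shapley values recover each feature's own contribution.
contrib = {"genre": 0.9, "director": 0.3, "year": -0.1}
phi = shapley_values(list(contrib), lambda s: sum(contrib[f] for f in s))
```

Exhaustive enumeration is exponential in the number of features, so real systems typically rely on sampling-based approximations when the feature set is large.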
Experimental results are presented for both datasets, comparing the performance of both algorithms with and without user bias removal. The thesis evaluates not only the recommendation performance of the system but also its explainability, using entropy analysis. The findings suggest that while removing user bias improves certain aspects of explainability, it reduces overall recommendation accuracy. Furthermore, the results show that adding more context to the language models improves both the quality and diversity of the generated explanations.
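The entropy analysis of explanation diversity can be sketched with a generic Shannon-entropy computation over which features the explanations cite. The function name and toy data are assumptions for illustration, not the thesis's code:

```python
from collections import Counter
from math import log2

def explanation_entropy(feature_mentions):
    """Shannon entropy (in bits) of the feature distribution across explanations.

    Higher entropy means explanations draw on a more diverse set of
    features; zero means every explanation cites the same feature.
    """
    counts = Counter(feature_mentions)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Four features cited equally often -> maximal diversity: log2(4) = 2 bits.
diverse = explanation_entropy(["genre", "director", "actor", "year"])

# Every explanation cites the same feature -> 0 bits.
uniform = explanation_entropy(["genre", "genre", "genre"])
```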