In my thesis, I focus on the presence of stereotypes in large language models.
These models have become a key part of modern artificial intelligence, as their
ability to generate human-like text enables a wide range of applications,
such as chatbots, writing assistants, and translation tools. Since these models
are trained on vast amounts of data, primarily collected from the internet,
there is a risk that they adopt and reproduce the stereotypes present in these
sources. In this thesis, I explore how these stereotypes manifest in generated
text, where they appear, and what impact they have. The thesis covers data
collection, the definition of stereotypes, the testing of selected models, and
the analysis of the results.