Speaker diarization (also spelled diarisation) is the process of partitioning an audio recording into segments according to the identity of the speakers. In other words, diarization of sound recordings answers the question: who spoke when?
This thesis is devoted to the automatic segmentation of speakers in a variety of sound recordings. We prepare a test set of Slovenian audio obtained from field recordings. Each recording contains two or more speakers and often also includes background sounds, silence, and overlapping speech. We manually transcribe the number of speakers and the time intervals of each speaker's utterances, which serve as our ground truth. We run every diarization algorithm under evaluation on this test set. Since the algorithms produce results in different formats and with different types of representation, we write a program that takes their output as input and converts it to a common format. Finally, we evaluate the accuracy of the algorithms and analyse how well they perform in different situations.
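Once reference and hypothesis segmentations share a common format, accuracy is typically reported as the diarization error rate (DER): the fraction of reference speech time that is missed, falsely detected, or attributed to the wrong speaker, after speaker labels have been matched one-to-one. The sketch below, a minimal stdlib-only illustration and not the program developed in the thesis, approximates DER on a fixed time grid; the segment tuples, function names, and the no-overlap assumption are all illustrative.

```python
import itertools

def frame_labels(segments, end_time, step=0.01):
    """Discretize (start, end, speaker) segments onto a fixed time grid.
    Assumes no overlapping speech: later segments overwrite earlier ones."""
    n = int(round(end_time / step))
    labels = [None] * n  # None marks silence
    for start, end, speaker in segments:
        lo = int(round(start / step))
        hi = min(int(round(end / step)), n)
        for i in range(lo, hi):
            labels[i] = speaker
    return labels

def diarization_error_rate(reference, hypothesis, step=0.01):
    """Frame-level DER: (missed + false-alarm + confused frames) / reference speech frames.
    The speaker mapping is found by brute force over one-to-one assignments,
    which is only feasible for a handful of speakers."""
    end = max(e for _, e, _ in reference + hypothesis)
    ref = frame_labels(reference, end, step)
    hyp = frame_labels(hypothesis, end, step)
    ref_spk = sorted({s for s in ref if s is not None})
    hyp_spk = sorted({s for s in hyp if s is not None})
    # Pad so a reference speaker may stay unmatched when the hypothesis has fewer speakers.
    pad = hyp_spk + [None] * max(0, len(ref_spk) - len(hyp_spk))
    best = None
    for perm in itertools.permutations(pad, len(ref_spk)):
        mapping = {h: r for r, h in zip(ref_spk, perm) if h is not None}
        errors = 0
        for r, h in zip(ref, hyp):
            mapped = None if h is None else mapping.get(h, "<unmatched>")
            if mapped != r:  # counts misses, false alarms and speaker confusions
                errors += 1
        if best is None or errors < best:
            best = errors
    total = sum(1 for r in ref if r is not None)
    return best / total
```

For example, if the reference has speaker A in seconds 0-1 and speaker B in seconds 1-2, a hypothesis that assigns the whole two seconds to a single speaker scores a DER of 0.5: one of the two reference speakers is entirely confused, whatever label the single cluster is matched to. Production evaluations (e.g. NIST's md-eval) additionally apply a forgiveness collar around segment boundaries, which this sketch omits.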