In this thesis we explore the problem of automatic music transcription using deep neural networks, more specifically convolutional neural networks.
Automatic music transcription is the task of producing sheet music from audio recordings.
We analysed previous studies and found a lack of research on the size and shape of deep model architectures.
We explored the performance of four different convolutional neural network architectures on the MAPS dataset of piano recordings, a common benchmark for automatic music transcription.
We also compared two different normalization techniques for spectrograms: standardization and logarithmic compression.
We found that transcription performance correlates strongly with the number of convolutional layers.
Transcription is also 10\% more successful with logarithmic compression than with standardization.
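The two normalization techniques compared above can be sketched as follows. This is an illustrative example, not the thesis implementation; the dummy spectrogram shape and the compression factor gamma are assumptions.

```python
import numpy as np

# Dummy power spectrogram (frequency bins x time frames); the shape is
# a hypothetical placeholder, not the one used in the thesis.
rng = np.random.default_rng(0)
spec = np.abs(rng.normal(size=(229, 100))) ** 2

# Standardization: zero mean, unit variance over the whole spectrogram.
standardized = (spec - spec.mean()) / spec.std()

# Logarithmic compression: log(1 + gamma * S); gamma is an assumed
# compression factor, a common choice in the literature.
gamma = 1.0
log_compressed = np.log1p(gamma * spec)
```

Standardization rescales values globally, while logarithmic compression reduces the dynamic range of loud components relative to quiet ones, which is often helpful for audio spectrograms.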