Transformer-based models, such as the pretrained BERT model, are currently the most successful approach to text-processing tasks. When tuning BERT for a specific task, we usually fine-tune all of the model's parameters. We investigate fine-tuning methods that update only a fraction of the parameters for a specific task. We analyze results on Slovene classification tasks. We fine-tune the multilingual models CroSloEngual BERT and mBERT on named entity recognition and UPOS tagging. We compare four fine-tuning methods: full-model fine-tuning, tuning only the classification head, adapter tuning, and AdapterFusion fine-tuning. We show that adapter tuning achieves good results despite the small number of tuned parameters, and that AdapterFusion tuning can achieve better results than full-model fine-tuning. We discover that AdapterFusion tuning is more beneficial when solving higher-level classification tasks. The downside of this method is that it is time-consuming.
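To make the contrast with full-model fine-tuning concrete, the following is a minimal PyTorch sketch of the core idea behind adapter tuning: small bottleneck modules are trained while the pretrained Transformer weights stay frozen. The class name, the hidden and bottleneck sizes, and the parameter-name filter are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter inserted after a Transformer sub-layer."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # down-projection
        self.up = nn.Linear(bottleneck_size, hidden_size)    # up-projection
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pretrained representation intact;
        # only the small down/up projections add trainable parameters.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def train_only_adapters_and_head(model: nn.Module) -> None:
    """Freeze pretrained weights; train only adapter modules and the classification head.

    Assumes adapter parameters contain "adapter" and the task head contains
    "classifier" in their names (hypothetical naming convention).
    """
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or ("classifier" in name)
```

With this setup, only the adapter projections and the classification head receive gradient updates, which is why adapter tuning trains only a small fraction of the model's parameters.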