Catastrophic forgetting is a phenomenon in which an artificial neural network rapidly and almost completely forgets previously learned tasks when trained incrementally on new ones.
It is a well-known problem, and although many approaches to alleviating it exist, none of them solves it completely.
We experimentally investigate the main causes of catastrophic forgetting.
The analysis is performed on a deep convolutional neural network for image classification.
Results are interpreted through confusion matrices and classification-accuracy graphs; we also visualize the changes in the network's weights and biases.
These analytical findings serve as a basis for designing different approaches to updating network parameters, with the aim of preventing or alleviating catastrophic forgetting.
We also evaluate the effect of having access to an Oracle, capable of determining the relevant subset of all possible classes, when using the network for classification.
We implement one of the existing approaches to preventing catastrophic forgetting and adapt it to work without the Oracle.
The findings presented in this thesis serve as a starting point for the design of new approaches aimed at preventing catastrophic forgetting.