The most frequently used deep learning models are deep neural networks. Although they have been successfully applied to various problems, they require large training sets and careful parameter tuning. An alternative to deep neural networks is the deep forest model, which we independently implemented to verify the replicability of the results of Zhou and Feng (2017). We test whether the accuracy of deep forest can be improved by including random subspace forests or by using stacking to combine the predictions of the cascade forest's last layer. We evaluate the original implementation and our improvements on five data sets. The algorithm with added stacking achieves equal or better results on all five data sets, whereas the addition of random subspace forests yields worse results on three data sets and better results on two.
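The stacking variant can be illustrated with a minimal sketch, assuming scikit-learn; the forest types, hyperparameters, and data set here are hypothetical stand-ins, not the paper's implementation. A meta-learner is trained on the class-probability outputs of the final layer's forests instead of simply averaging them.

```python
# Illustrative sketch only: stacking the predictions of a final layer of
# forests with a logistic-regression meta-learner. Forest choices and
# hyperparameters are assumptions, not the paper's configuration.
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "last layer" here is a pair of forests; their class-probability
# outputs are combined by a logistic-regression meta-learner rather
# than averaged.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",  # stack probabilities, not hard labels
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The same idea applies to a cascade forest: the meta-learner replaces the usual averaging of the last cascade layer's probability vectors.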