Mobile robotic systems capable of autonomous navigation in unstructured environments rely on a vision module to navigate safely. The vision module provides perception of the surrounding area and is often required to identify particular objects of interest, which is done by classifying image segments into pre-learned semantic classes. Many methods achieve remarkable semantic segmentation results, but unfortunately only on specific datasets, which do not necessarily match the scenes observed by a mobile robot. To verify a dataset's ability to transfer knowledge to a new domain, we explore how well its classes generalise. We examine this knowledge transfer with a specific semantic segmentation method, which we adapt to best fit our needs.