This diploma thesis presents a method for semantic segmentation of driving scenes. Modern approaches to this task fall into three categories: the first uses only cameras, the second only LiDAR sensors, and the third fuses data from both sensor types. In this thesis, we focus on the fusion of LiDAR and RGB image data using a cross-attention mechanism. We develop SWINCrossFusion, a method based on the Swin Transformer architecture, and introduce a new Swin Transformer block that fuses the sensors via cross-attention: the block computes queries from the data of one sensor, and keys and values from the data of the other. This yields an efficient and fast fusion of the two sensors' measurements. We evaluate the method on the SemanticKITTI dataset and compare it against the reference PMF method. At 54 % mIoU, the developed method is two percentage points below the reference method, but it processes the input data 40 % faster and uses 1 GB less GPU memory.
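The cross-attention fusion summarized above can be sketched as follows. This is a minimal NumPy illustration of the general pattern (queries from one sensor's features, keys and values from the other's), not the thesis's actual implementation; all names and shapes (`feat_a`, `feat_b`, `d`, the projection matrices) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b, wq, wk, wv):
    """Queries from sensor A (e.g. RGB), keys/values from sensor B (e.g. LiDAR).

    feat_a: (n_a, d) token features of sensor A
    feat_b: (n_b, d) token features of sensor B
    wq, wk, wv: (d, d) learned projection matrices (random here)
    """
    q = feat_a @ wq                            # (n_a, d)
    k = feat_b @ wk                            # (n_b, d)
    v = feat_b @ wv                            # (n_b, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (n_a, n_b) scaled dot products
    return softmax(scores, axis=-1) @ v        # (n_a, d) fused features

rng = np.random.default_rng(0)
n_a, n_b, d = 4, 6, 8
fused = cross_attention(
    rng.normal(size=(n_a, d)), rng.normal(size=(n_b, d)),
    rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d)),
)
print(fused.shape)  # output keeps the query sensor's token count: (4, 8)
```

Note that the fused output inherits the spatial layout (token count) of the query sensor, which is what makes this asymmetric formulation cheap: the attention matrix is n_a x n_b rather than (n_a + n_b) squared, as it would be if both modalities were concatenated into one self-attention pass.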