Today, multi-camera systems are used in a large variety of fields, one of which is sports. There, the system covers the whole playing area, so that large-scale motion during a game can be captured (for example, an entire basketball team running across the court during an attack). This information is highly desirable in sports, because it can be used to analyze the game and discover a team's weaknesses. Extracting it, however, is not trivial. The first issue we run into is the synchronization of the whole system, because in most cases free-running cameras are used without a synchronizing signal. We also need an algorithm that can extract the required information from such a system.
In this work we present a method for synchronizing a multi-camera system, together with an algorithm that computes the 3D positions of players and their trajectories from the synchronized system.
We first interpolated the obtained videos, which allowed us to synchronize them. After synchronization, we used the deep neural network OpenPose to detect all players in the synchronized videos and obtain their 2D skeletal points. These data then served as the input to our algorithm (the tracker), which computes the players' 3D points and trajectories. Finally, we compared our results with those obtained by another tracker and evaluated the performance of our algorithm on that basis.
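The core geometric step in lifting 2D skeletal points to 3D is multi-view triangulation. As a minimal sketch (not the paper's actual tracker), the standard linear DLT triangulation of one joint seen in two calibrated, synchronized cameras could look as follows; the projection matrices and point coordinates below are hypothetical examples, not values from this work:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same joint in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical two-camera setup: identity intrinsics, second camera
# shifted one unit along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])        # ground-truth homogeneous point
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]      # project into camera 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]      # project into camera 2
print(triangulate(P1, P2, x1, x2))             # close to [0.5, 0.2, 4.0]
```

With noisy OpenPose detections and more than two cameras, the same least-squares system is simply extended with two rows per additional view, and per-joint confidence scores can be used to weight the rows.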