Quadcopters are becoming increasingly popular and integrated into modern society: from high-resolution video recording to autonomous high-speed navigation, they even serve as everyday toys. Controlling quadcopters via our mobile phones is now commonplace. In this work we set out to develop a gesture control system for quadcopters. We aimed for a system that can run on a low-cost quadcopter equipped with a simple RGB camera and a powerful embedded computer, and we also assembled such a quadcopter. The system is split into three modules: action detection with optical flow, human pose estimation with convolutional neural networks, and gesture classification with relational features computed on the estimated pose. The integrated system is built with OpenCV and the robot meta-operating system ROS. For development and evaluation we also assembled our own dataset, DS2017, in which 640 gestures are performed by 20 people. We show that the action detection module detects actions sufficiently well, the human pose estimation runs accurately at high speed, and the gesture classification achieves high accuracy.