This master's thesis presents a project that combines industrial robot motion planning with object recognition using a depth camera. The depth camera captures a colour-and-depth image of the robot's workspace, and a convolutional neural network detects the target object in the captured images. The identified region is then analyzed to determine the position and orientation of the object in space. Using the Dex-Net convolutional network model, potential gripping points are generated and evaluated by their predicted grip probabilities. The grasp locations and the object's pose are passed to the motion planner. Motion planning for the Franka Emika Panda robot is performed with the MoveIt software framework and is broken down into a series of subtasks. If the motion planner finds a solution for the pick-and-place movement, the movement is executed in a virtual environment based on the Gazebo robot simulator.
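The grasp-evaluation step described above can be illustrated with a minimal sketch: candidate grips, each carrying a predicted grip probability, are ranked and the best one is forwarded to the motion planner. All names here (`GraspCandidate`, `select_best_grasp`) and the numeric values are hypothetical, not taken from the thesis or the Dex-Net API.

```python
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    """Hypothetical container for one candidate grip."""
    position: tuple      # (x, y, z) gripping point in the camera frame, metres
    orientation: tuple   # gripper orientation as a quaternion (x, y, z, w)
    quality: float       # grip probability predicted by the grasp network

def select_best_grasp(candidates):
    """Return the candidate with the highest predicted grip probability."""
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.quality)

# Illustrative candidates, as might be produced for one detected object.
candidates = [
    GraspCandidate((0.40, 0.05, 0.12), (0.0, 0.0, 0.00, 1.00), 0.62),
    GraspCandidate((0.41, 0.03, 0.11), (0.0, 0.0, 0.38, 0.92), 0.87),
    GraspCandidate((0.39, 0.06, 0.13), (0.0, 0.0, 0.71, 0.71), 0.45),
]

best = select_best_grasp(candidates)
print(best.quality)  # the highest-probability grasp is handed to the planner
```

In the actual pipeline this selection would feed the chosen grasp pose, together with the object pose, to MoveIt for pick-and-place planning.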