
Deep Reinforcement Learning for Robotic Object Manipulation
Težak, Domen (Author), Podobnik, Janez (Mentor), Mihelj, Matjaž (Co-mentor)

PDF - Presentation file (8,59 MB)
MD5: 200AECA349EDA93FBF7C3620EE0BDD9B

Abstract
This master's thesis presents the development of a software framework for learning robot strategies using deep reinforcement learning. In robotics today, robot autonomy is in most cases limited by classical control methods, while the goal is to bring robots closer to the efficiency and adaptability of humans in complex and dynamic environments. This has led to a new field of research in which reinforcement learning has proven to be a promising approach. We first explored the use of reinforcement learning for non-prehensile object manipulation, specifically pushing. In the second part, we extended our software framework with a simulation environment for learning robot strategies with deep reinforcement learning on a model of the Franka Emika Panda robot, with the aim of transferring the learned strategies to the real robot. The first part of the thesis covered the review and selection of suitable open-source tools for implementing deep reinforcement learning algorithms. We chose Stable-Baselines3 (SB3), a repository of implemented deep reinforcement learning algorithms, the PyBox2D and MuJoCo libraries for building physics simulations, and the Gymnasium programming interface to connect all the selected tools. This was followed by building the physics simulation environments in the selected libraries and then learning pushing tasks in simulation through deep reinforcement learning. At the end of the first part, we transferred the learned models to the real robot and tested the performance of the whole system. In the second part, we built a simulation environment with a high-quality model of the Franka Emika Panda robot. In this simulation environment we then used deep reinforcement learning to teach the robot to pick up an object while avoiding an obstacle. We also implemented velocity control of the robot in joint coordinates. The learned model was then transferred from simulation to the real robot and its performance was tested on the real system. Finally, we present the results of comparing the selected deep reinforcement learning algorithms DQN (Deep Q Network) and TQC (Truncated Quantile Critics), the comparison of the physics simulation environments PyBox2D and MuJoCo, the success rates of the learned models for the pushing tasks in simulation and on the real robot, and the results of the object-picking task with our robot model. In the discussion we comment on the obtained results, point out the problems that arose during the work, and propose possible solutions and directions for future work.
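
As an illustration of the training setup described in the abstract, the sketch below shows how a TQC agent with a hindsight experience replay (HER) buffer can be trained on a goal-conditioned Gymnasium environment using Stable-Baselines3 and sb3-contrib. This is a minimal sketch under stated assumptions, not the thesis code; the environment ID "CustomPandaPush-v0" is a hypothetical placeholder for the custom MuJoCo pushing environment described above.

# Minimal sketch (not the thesis's actual code): TQC + HER training with
# Stable-Baselines3 / sb3-contrib on a goal-conditioned Gymnasium environment.
# "CustomPandaPush-v0" is a hypothetical ID standing in for the custom MuJoCo
# pushing environment described in the abstract.
import gymnasium as gym
from sb3_contrib import TQC                      # TQC is provided by sb3-contrib
from stable_baselines3 import HerReplayBuffer

env = gym.make("CustomPandaPush-v0")             # hypothetical custom goal environment

model = TQC(
    "MultiInputPolicy",                          # dict observations: observation / achieved_goal / desired_goal
    env,
    replay_buffer_class=HerReplayBuffer,         # hindsight experience replay
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
model.save("tqc_push_policy")

HER relabels stored transitions with goals that were actually achieved later in the episode, which is what makes sparse-reward pushing and picking tasks tractable for off-policy algorithms such as TQC.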

Language:Slovenian
Keywords:deep reinforcement learning, MuJoCo simulation, TQC learning algorithm, HER buffer, Franka Emika Panda robot
Work type:Master's thesis/paper
Organization:FE - Faculty of Electrical Engineering
Year:2024
PID:20.500.12556/RUL-156033
COBISS.SI-ID:194240259
Publication date in RUL:30.04.2024
Views:102
Downloads:29

Secondary language

Language:English
Title:Deep Reinforcement Learning for Robotic Object Manipulation
Abstract:
This master's thesis presents the development of a software framework for learning robot strategies using deep reinforcement learning. In the field of robotics today, the autonomy of robots is in most cases limited by classical control methods, but the desire is to bring robots closer to the efficiency and adaptability of humans, especially in complex and dynamic environments. This has led to a new field of research where reinforcement learning is proving to be a promising approach. In our work, we first explored the possibilities of using reinforcement learning in the field of non-prehensile manipulation of objects, more specifically by pushing them. In the second part, we upgraded our software framework with a simulation environment for learning robotic strategies with deep reinforcement learning on a Franka Emika Panda robot model, in order to transfer the learned strategies to a real robot. The first part included the review and selection of appropriate open-source tools for the implementation of deep reinforcement learning algorithms. We selected the Stable-Baselines3 (SB3) repository of implemented deep reinforcement learning algorithms, the PyBox2D and MuJoCo software libraries for building physics simulations, and the Gymnasium software interface for connecting all the selected tools. This was followed by building the physics simulation environments in the selected software libraries and then learning the pushing tasks in the simulation through deep reinforcement learning. At the end of the first part, we transferred the models to a real robot and tested the performance of the whole system. In the second part, we created a simulation environment with a high-quality model of the Franka Emika Panda robot. In this simulation environment, we then used deep reinforcement learning to teach the robot the task of picking up an object while avoiding an obstacle. We also implemented velocity control of the robot in its joint coordinates. We then transferred the learned model from simulation to a real robot and tested its performance on the real system. Finally, we present the results of comparing the selected deep reinforcement learning algorithms DQN (Deep Q Network) and TQC (Truncated Quantile Critics), the comparison of the physics simulation environments PyBox2D and MuJoCo, the success rates of the learned models for the pushing tasks, both in simulation and on the real robot, and the results of the object-picking task with our robot model. We also comment on the obtained results and the challenges encountered during the work, and propose possible solutions and directions for further work.
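
The joint-space velocity control mentioned in the abstract can be illustrated with the following minimal sketch using the MuJoCo Python bindings. It is an assumption made for illustration, not the thesis's actual implementation: the MJCF file "panda.xml" and its velocity actuators on the seven arm joints are hypothetical stand-ins for the high-quality Panda model described above.

# Minimal sketch (assumption, not the thesis's implementation): commanding
# joint velocities to a MuJoCo model of the Panda arm. Assumes "panda.xml"
# defines velocity actuators for the seven arm joints.
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("panda.xml")    # hypothetical model file
data = mujoco.MjData(model)

QD_MAX = 1.0                                          # illustrative joint-velocity limit [rad/s]

def step_with_joint_velocity(qd_cmd: np.ndarray) -> None:
    """Clamp a 7-DoF joint-velocity command, write it to the actuators, and advance the simulation."""
    data.ctrl[:7] = np.clip(qd_cmd, -QD_MAX, QD_MAX)
    mujoco.mj_step(model, data)

# Example: command a small velocity on the first joint for one simulation step.
step_with_joint_velocity(np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
print(data.qpos[:7])                                  # resulting joint positions

In such a setup the learned policy outputs the joint-velocity command directly, and the same clamped command interface can be reused when the policy is transferred from simulation to the real robot.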

Keywords:deep reinforcement learning, MuJoCo simulation, TQC learning algorithm, HER buffer, Franka Emika Panda robot
