
Deep reinforcement learning for target-driven robot navigation
Dobrevski, Matej (Author), Skočaj, Danijel (Mentor)

PDF - Presentation file (23.71 MB)
MD5: 6C6C27D04E8884FB0040D374C83B622C

Abstract
Mobile robots that operate in real-world environments need to be able to safely navigate their surroundings. Obstacle avoidance and path planning are crucial capabilities for achieving autonomy in such systems. However, in new or dynamic environments, navigation methods that rely on an explicit map of the environment can be impractical or impossible to use. The resurgence of neural networks has enabled great progress in reinforcement learning methods. In this thesis, we propose local navigation methods that do not rely on a map, are modeled by deep neural networks, and are trained using reinforcement learning in simulation. We combine the power of data-driven learning with the dynamic model of the robot, enabling adaptation to the current environment while guaranteeing collision-free movement and smooth trajectories of the mobile robot. We evaluate our navigation approaches in simulated navigation scenarios, comparing them with related work and with a standard map-based approach, and demonstrate that our methods can navigate the robot where the standard approaches fail and outperform the related work. We also show that our policy can be transferred to a real robot.
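
The following is a minimal illustrative sketch, not the dissertation's actual implementation: it assumes a PyTorch policy network that maps a simulated laser scan and the relative goal position to velocity commands, plus a hypothetical post-processing step that clips the commands to the robot's acceleration limits, as one simple way of combining a learned policy with the robot's dynamic model. The network sizes, observation layout, and limits below are assumptions made for illustration.

# Illustrative sketch only: a map-less navigation policy of the kind the
# abstract describes (laser scan + relative goal -> velocity command).
# All sizes and limits are assumptions, not the dissertation's architecture.
import torch
import torch.nn as nn

class NavigationPolicy(nn.Module):
    """Maps a 1-D laser scan and the goal (distance, heading) to (v, w)."""

    def __init__(self, n_beams: int = 180):
        super().__init__()
        self.scan_encoder = nn.Sequential(
            nn.Linear(n_beams, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128 + 2, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),          # normalised (v, w) in [-1, 1]
        )

    def forward(self, scan: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        features = self.scan_encoder(scan)
        return self.head(torch.cat([features, goal], dim=-1))

def clip_to_dynamic_window(cmd, v_prev, w_prev, dt=0.1, a_max=0.5, aw_max=1.0):
    """Hypothetical post-processing: restrict the network output to velocities
    reachable under the robot's acceleration limits within one control step."""
    v = torch.clamp(cmd[..., 0], v_prev - a_max * dt, v_prev + a_max * dt)
    w = torch.clamp(cmd[..., 1], w_prev - aw_max * dt, w_prev + aw_max * dt)
    return torch.stack([v, w], dim=-1)

if __name__ == "__main__":
    policy = NavigationPolicy()
    scan = torch.rand(1, 180)            # simulated range readings
    goal = torch.tensor([[2.0, 0.3]])    # distance (m) and heading (rad) to goal
    raw_cmd = policy(scan, goal)
    safe_cmd = clip_to_dynamic_window(raw_cmd, v_prev=0.0, w_prev=0.0)
    print(safe_cmd)

In an actual system, such a policy would be optimized in simulation with a reinforcement learning algorithm and the clipping step would use the specific robot's dynamic constraints.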

Language:English
Keywords:deep learning, reinforcement learning, mobile robotics, navigation, obstacle avoidance
Work type:Doctoral dissertation
Typology:2.08 - Doctoral Dissertation
Organization:FRI - Faculty of Computer and Information Science
Year:2024
PID:20.500.12556/RUL-154022
COBISS.SI-ID:182594819
Publication date in RUL:19.01.2024
Views:504
Downloads:89
Metadata:XML DC-XML DC-RDF

Secondary language

Language:Slovenian
Title:Deep reinforcement learning methods for target-driven robot navigation
Abstract:
To operate in real-world environments, mobile robots must be capable of safely navigating their surroundings. Obstacle avoidance and path planning are key capabilities for achieving autonomy in such systems. However, in new or dynamic environments, navigation methods that rely on an explicit map of the environment can be impractical or impossible to use. The resurgence of neural network research has enabled great progress in the development of reinforcement learning methods. In this dissertation, we propose local navigation methods for a mobile robot that do not rely on a map, are modeled by deep neural networks, and are trained with deep reinforcement learning in simulation. We combine the power of data-driven learning with the robot's dynamic model, which enables adaptation to the current environment and ensures collision-free movement of the mobile robot along smooth trajectories. We evaluate the proposed navigation methods and compare them with a standard map-based approach. We carry out the experiments in a simulated environment and show that our methods can successfully navigate the robot even in cases where the standard approaches fail. We also show that the learned neural network can be transferred to a real robot and demonstrate its use in a real-world environment.

Keywords:deep learning, reinforcement learning, mobile robotics, navigation, obstacle avoidance
