
Incremental Learning of Robot Assembly Tasks Based on Human-Robot Collaboration
SIMONIČ, MIHAEL (Author), NEMEC, BOJAN (Mentor)

PDF - Presentation file (16.58 MB)
MD5: F88F7C2797EF61D8BE9591D199C85E86

Abstract
Robot assembly is one of the key operations in manufacturing processes. It is also important for the next generation of robots intended for use outside industrial environments. Traditional robot programming for assembly tasks can be time-consuming: all possible situations must be anticipated in advance and encoded in a robot program that is usually complex. This requires solid programming skills and an understanding of robotics, which is one reason the use of robots remains largely limited to industrial applications with predictable, well-defined tasks. For the broader use of robots, fast, intuitive, and efficient solutions for transferring human knowledge to the robot system need to be developed. In this dissertation, we study how human-robot collaboration can be used for incremental learning of assembly tasks, focusing on exploiting the variable compliance of collaborative robots. We designed a robot learning system for assembly tasks that adapts to different conditions. The robot is taught in two complementary ways: from demonstrations and by following environmental constraints. The execution of assembly tasks can be improved over time through several collaborative procedures between the human operator and the robot: incremental refinement through kinesthetic guidance, and learning to recover from unforeseen situations that may arise during assembly. The first part of the dissertation focuses on improving kinesthetic guidance methods, since the quality of the captured demonstrations is essential for their effective use in assembly tasks. Our work builds on the observation that the execution of assembly tasks can be substantially improved within a few repetitions, much as a human refines a skill through practice and feedback. To this end, we developed a procedure for incrementally improving existing trajectories by kinesthetic guidance along virtual tunnels at arbitrary speeds.
The human operator can demonstrate initial movements to the robot and then refine them until the desired task execution is achieved. Because complex movements can be demonstrated more precisely at low speed, the system also supports separate refinement of the spatial and temporal parts of the task. The operator first refines the spatial part, i.e. the shape of the trajectory, at an arbitrary speed, and in the final stage of the procedure demonstrates the desired execution speed. Both the shape and the speed of the trajectory are modified by moving the robot's end-effector along the virtual tunnel. The second part of the dissertation deals with ensuring reliable execution of assembly tasks in unstructured environments, where unforeseen situations can cause errors even if the robot is carefully programmed and optimized. We propose a collaborative approach for handling such situations: a multi-stage procedure that, when an unforeseen situation occurs, selects an appropriate strategy for returning to normal operation. Initially, the robot system lacks the knowledge to choose a suitable strategy. It therefore memorizes the circumstances under which the error occurred and observes how the operator resolves the situation using our task refinement procedure. If a similar situation arises later, the robot system can use the acquired knowledge, together with statistical generalization techniques, to resolve the problem autonomously. The goal of our approach is to increase the reliability of the robot system while reducing the need for human intervention in assembly tasks. Learning assembly operations is a lengthy and demanding process. The goal of the third part of the dissertation is therefore to enable the robot to learn certain operations of the assembly process autonomously. One of the main challenges for autonomous learning in robotics is the large search space that the robot must explore before it learns to perform a task correctly.
During assembly, the robot is typically in physical contact with the environment. Learning robot tasks in contact with the environment is traditionally considered more demanding, since the interaction forces between the robot system and the environment must be taken into account, but under certain conditions it can be more efficient. The key insight is that environmental constraints leave fewer admissible directions of motion, which reduces the number of learning parameters and enables faster learning. Building on this insight, we introduce a hierarchical reinforcement learning scheme. At the lower level, it includes a controller that moves the robot along the environmental constraints; the intermediate level systematically searches for states in which motion is possible in several directions; and at the higher level, we optimize the sequence of transitions between the identified states. All three approaches for improving the learning of robot assembly were tested on a collaborative robot platform and evaluated in various experiments, demonstrating the effectiveness of the proposed methods.

Language:Slovenian
Keywords:robot assembly, learning from demonstration, dynamic movement primitives, incremental improvement of robot tasks, error handling, statistical generalization, multimodal data fusion, learning in contact with the environment, learning in a constrained environment
Work type:Doctoral dissertation
Organization:FE - Faculty of Electrical Engineering
Year:2023
PID:20.500.12556/RUL-147041
COBISS.SI-ID:156815107
Publication date in RUL:21.06.2023
Views:1479
Downloads:94

Secondary language

Language:English
Title:Incremental Learning of Robot Assembly Tasks Based on Human-Robot Collaboration
Abstract:
Robot assembly is an essential operation in manufacturing processes, and it is equally important for future generations of robots intended for use outside of industrial environments. Traditional robot programming for assembly tasks can be time-consuming, as it is necessary to anticipate all possible situations in advance and create a usually complex robot program. This requires profound programming skills and an in-depth understanding of robotics, which limits the use of robots to industrial applications with predictable, well-defined tasks. For the broader use of robots, rapid, intuitive, and effective solutions for transferring human knowledge to the robotic system should be developed. In this dissertation, we examine the potential of human-robot collaboration for incremental learning of assembly tasks, focusing on the exploitation of the variable stiffness of collaborative robots. We design a robot learning system adaptable to various tasks and settings, as it can be taught in two complementary ways: from demonstrations and by following environmental constraints. Assembly tasks can be improved over time through various collaborative processes between a human operator and a robot: gradual refinement of the robot control policies through kinesthetic guidance, and additional policies that teach the robot system how to handle unpredictable situations arising during assembly. In the first part of the dissertation, we focus on enhancing methods for effective kinesthetic teaching to facilitate their application in assembly tasks, since the quality of the demonstrations is crucial for accurate and correct assembly. Our work is based on the observation that the performance of assembly tasks can be significantly improved in a few iteration steps, much like humans improve their skills through repeated exercise and feedback.
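The practice-with-feedback loop above can be caricatured in a few lines of code. The linear blending rule and the gain `alpha` below are illustrative assumptions for this sketch, not the dissertation's actual refinement algorithm:

```python
# Illustrative sketch: incremental refinement of a stored trajectory,
# where each kinesthetic correction by the operator is blended into
# the current policy.  The linear blend and the gain alpha are
# assumptions made for illustration only.

def refine(trajectory, correction, alpha=0.5):
    """Blend an operator's corrected pass into the stored trajectory;
    alpha controls how strongly the new demonstration dominates."""
    return [(1 - alpha) * old + alpha * new
            for old, new in zip(trajectory, correction)]

traj = [0.0, 0.0, 0.0, 0.0]   # initial (poor) 1-D trajectory
demo = [0.0, 0.2, 0.4, 0.2]   # operator's corrected demonstration
for _ in range(3):            # a few refinement iterations
    traj = refine(traj, demo)
print(traj)                   # converges toward the demonstration
```

After three iterations the stored trajectory has moved 87.5 % of the way toward the demonstration, mirroring how execution quality improves over a handful of guided repetitions.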
To this end, we develop a procedure for gradually refining existing trajectories through kinesthetic guidance along virtual tunnels at arbitrary speeds: the human operator demonstrates initial movements and refines them until satisfactory robot execution is achieved. Since complex movements can be demonstrated more accurately at low speed, the procedure also allows separate refinement of the spatial and temporal parts of the task. The operator first refines the spatial part, i.e., the shape of the trajectory, at an arbitrary speed; the desired speed is learned in the last stage of the procedure. In our approach, both the shape and the speed of the trajectory are changed by moving the end-effector back and forth inside the virtual tunnel. In the second part of this dissertation, we address the challenge of ensuring robust execution of assembly tasks in unstructured environments, where unforeseen situations may cause errors even if the robot is carefully programmed and optimized. We propose a collaborative approach to handling such exceptions, a multi-stage process that enables the robot to recover from errors and continue with the assembly task. First, the system remembers the context in which the error occurred and observes how the operator handles the situation using our incremental policy refinement method. The robot then uses statistical learning based on previous human actions to apply an appropriate strategy and autonomously solve the problem. Our approach aims to increase the reliability of the robot system while reducing the need for human intervention in assembly tasks. Learning assembly operations is a time-consuming and challenging process; therefore, the last goal is to enable the robot to autonomously learn certain suboperations of the assembly process.
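The recovery stage described above — memorizing the context of an error and generalizing from operator demonstrations — can be sketched as a Gaussian-weighted nearest-neighbour vote over stored contexts. The context features and strategy names here are hypothetical, and this simple vote stands in for the statistical generalization techniques only schematically:

```python
import math

# Hypothetical sketch: selecting a recovery strategy for a new error
# context by Gaussian-weighted generalization over contexts in which
# a human operator previously demonstrated a recovery.  The feature
# vectors (e.g. contact force, part tilt) and strategy names are
# illustrative assumptions, not the dissertation's representation.

def select_recovery(demos, context, sigma=1.0):
    """demos: list of (context_vector, strategy_name) pairs recorded
    while observing the operator; returns the strategy whose stored
    contexts lie closest to the new context."""
    scores = {}
    for ctx, strategy in demos:
        d2 = sum((a - b) ** 2 for a, b in zip(ctx, context))
        w = math.exp(-d2 / (2 * sigma ** 2))
        scores[strategy] = scores.get(strategy, 0.0) + w
    return max(scores, key=scores.get)

demos = [
    ([0.9, 0.1], "retract_and_retry"),   # high axial force, small tilt
    ([0.2, 0.8], "wiggle_insertion"),    # low force, large tilt
]
print(select_recovery(demos, [0.85, 0.2]))  # → retract_and_retry
```

A new error whose context resembles a previously observed one is thus resolved autonomously, while genuinely novel contexts would still be deferred to the operator.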
One of the obstacles that autonomous learning methods in robotics must overcome is the large search space that the robot has to explore before it learns to perform the task correctly. During assembly, the robot is inherently in physical contact with the environment. Although learning contact tasks is traditionally considered more challenging, because the interaction forces between the robot system and the environment must be taken into account, learning physically constrained tasks can be more efficient. The key observation is that environmental constraints leave fewer admissible movement directions, reducing the number of learning parameters and enabling faster policy learning. In this respect, we propose a novel three-level hierarchical reinforcement learning scheme: a compliant controller at the lower level that moves safely along the constraints, an intermediate level that systematically searches for states where movement is possible in different directions, and a high-level optimization of the sequence of transitions between the identified states. All developed approaches have been validated on a collaborative robotic platform and evaluated in various experiments, demonstrating the effectiveness of the proposed methods.
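The claim that constraints shrink the search space can be illustrated on a toy example, with a small maze standing in for the constrained workspace (an assumption of this sketch, not the dissertation's setup). At most cells only one or two directions are admissible, and the rare branching states are exactly the kind of states the intermediate level would search for:

```python
# Illustrative sketch: when motion is restricted by environmental
# constraints, only a few directions are admissible at each state,
# so the learner explores far fewer options.  A toy maze stands in
# for the constrained workspace; '#' cells are blocked.

MAZE = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#.###",
    "#####",
]

def admissible_dirs(x, y):
    """Directions not blocked by the environment at free cell (x, y)."""
    moves = {"up": (0, -1), "down": (0, 1),
             "left": (-1, 0), "right": (1, 0)}
    return [d for d, (dx, dy) in moves.items()
            if MAZE[y + dy][x + dx] == "."]

# Branching states: movement possible in three or more directions --
# candidates for the intermediate search level.
branching = [(x, y)
             for y in range(len(MAZE))
             for x in range(len(MAZE[0]))
             if MAZE[y][x] == "." and len(admissible_dirs(x, y)) >= 3]
print(branching)  # → [(1, 3)]
```

Of nine free cells, only one is a branching state; the high level then only needs to optimize the sequence of transitions between such states rather than search the full workspace.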

Keywords:robot assembly, learning from demonstration, dynamic movement primitives, incremental policy improvement, error handling, statistical generalization, multimodal data fusion, learning in contact with the environment, reinforcement learning
