<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T06:20:54Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10256/28374" metadataPrefix="qdc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10256/28374</identifier><datestamp>2026-03-07T19:50:53Z</datestamp><setSpec>com_2072_452966</setSpec><setSpec>com_2072_2054</setSpec><setSpec>col_2072_452969</setSpec></header><metadata><qdc:qualifieddc xmlns:qdc="http://dspace.org/qualifieddc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
   <dc:title>Deep Reinforcement Learning for robot manipulation</dc:title>
   <dc:creator>Mulia, Vania Katherine</dc:creator>
   <dc:subject>DRL (Deep Reinforcement Learning)</dc:subject>
   <dc:subject>Deep learning (Machine learning)</dc:subject>
   <dc:subject>Aprenentatge profund (Aprenentatge automàtic)</dc:subject>
   <dc:subject>Robots -- Control systems</dc:subject>
   <dc:subject>Sim-to-real transfer</dc:subject>
   <dc:subject>Peg-in-hole task</dc:subject>
   <dc:subject>Robots -- Sistemes de control</dc:subject>
   <dcterms:abstract>Robotic manipulation continues to be an active area of research due to its broad range of real-world applications. Among its benchmark tasks, the peg-in-hole problem remains particularly challenging, requiring high-precision control under environmental uncertainty. This thesis presents a framework based on Deep Reinforcement Learning (DRL) to train a robotic manipulator to autonomously solve the peg-in-hole task. The proposed approach uses curriculum learning to train a single policy capable of handling all phases of the task: approach, contact-based hole search, and insertion. The curriculum is further extended to incorporate observation noise and force penalization, encouraging the emergence of compliant behaviors during contact. Training is conducted in a custom-designed, physics-based simulation environment. Simulation results demonstrate that the learned policy can complete the peg-in-hole task, though it faces difficulties in balancing task success with compliant interaction. To evaluate the potential for real-world deployment, the trained policy is transferred to a physical robot. Tests reveal several sources of sim-to-real discrepancy, particularly in the modeling of contact dynamics. Nonetheless, partial success in real-world trials suggests the viability of sim-to-real transfer for DRL-trained policies. Overall, this work contributes to the understanding of DRL’s capabilities and limitations in solving complex robotic manipulation tasks such as peg-in-hole assembly.</dcterms:abstract>
   <dcterms:dateAccepted>2026-03-07T19:50:53Z</dcterms:dateAccepted>
   <dcterms:available>2026-03-07T19:50:53Z</dcterms:available>
   <dcterms:created>2026-03-07T19:50:53Z</dcterms:created>
   <dcterms:issued>2025-06</dcterms:issued>
   <dc:type>info:eu-repo/semantics/masterThesis</dc:type>
   <dc:identifier>https://hdl.handle.net/10256/28374</dc:identifier>
   <dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
   <dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights>
   <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
   <dc:publisher>Universitat de Girona. Institut de Recerca en Visió per Computador i Robòtica</dc:publisher>
   <dc:source>Erasmus Mundus Joint Master in Intelligent Field Robotic Systems (IFROS)</dc:source>
</qdc:qualifieddc></metadata></record></GetRecord></OAI-PMH>