<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T07:09:47Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/69309" metadataPrefix="oai_dc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/69309</identifier><datestamp>2025-12-13T21:23:35Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
   <dc:title>Hierarchies of reward machines</dc:title>
   <dc:creator>Furelos Blanco, Daniel</dc:creator>
   <dc:creator>Law, Mark</dc:creator>
   <dc:creator>Jonsson, Anders</dc:creator>
   <dc:creator>Broda, Krysia</dc:creator>
   <dc:creator>Russo, Alessandra</dc:creator>
   <dc:subject>Reward machines</dc:subject>
   <dc:subject>Hierarchies</dc:subject>
   <dc:description>Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task through a finite-state machine whose edges encode subgoals of the task using high-level events. The structure of RMs enables the decomposition of a task into simpler and independently solvable subtasks that help tackle long-horizon and/or sparse reward tasks. We propose a formalism for further abstracting the subtask structure by endowing an RM with the ability to call other RMs, thus composing a hierarchy of RMs (HRM). We exploit HRMs by treating each call to an RM as an independently solvable subtask using the options framework, and describe a curriculum-based method to learn HRMs from traces observed by the agent. Our experiments reveal that exploiting a handcrafted HRM leads to faster convergence than with a flat HRM, and that learning an HRM is feasible in cases where its equivalent flat representation is not.</dc:description>
   <dc:description>Anders Jonsson is partially funded by TAILOR, AGAUR SGR, and Spanish grant PID2019-108141GB-I00</dc:description>
   <dc:date>2025-01-27T13:54:20Z</dc:date>
   <dc:date>2025-01-27T13:54:20Z</dc:date>
   <dc:date>2023</dc:date>
   <dc:type>info:eu-repo/semantics/conferenceObject</dc:type>
   <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
   <dc:identifier>Furelos-Blanco D, Law M, Jonsson A, Broda K, Russo A. Hierarchies of reward machines. In: Krause A, Brunskill E, Cho K, Engelhardt B, Sabato S, Scarlett J, editors. Proceedings of the 40th International Conference on Machine Learning, PMLR; 2023 Jul 23-29; Honolulu, Hawaii, USA. San Diego; 2023. p.10494-541</dc:identifier>
   <dc:identifier>http://hdl.handle.net/10230/69309</dc:identifier>
   <dc:identifier>https://doi.org/10.48550/arXiv.2205.15752</dc:identifier>
   <dc:language>eng</dc:language>
   <dc:relation>info:eu-repo/grantAgreement/ES/2PE/PID2019-108141GB-I00</dc:relation>
   <dc:rights>Copyright 2023 by the author(s).</dc:rights>
   <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
   <dc:format>application/pdf</dc:format>
   <dc:format>application/pdf</dc:format>
   <dc:publisher>PMLR</dc:publisher>
</oai_dc:dc></metadata></record></GetRecord></OAI-PMH>