<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-17T01:07:23Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:2117/460294" metadataPrefix="didl">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:2117/460294</identifier><datestamp>2026-04-15T05:50:18Z</datestamp><setSpec>com_2072_1033</setSpec><setSpec>col_2072_452950</setSpec></header><metadata><d:DIDL xmlns:d="urn:mpeg:mpeg21:2002:02-DIDL-NS" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
   <d:Item id="hdl_2117_460294">
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <dii:Identifier xmlns:dii="urn:mpeg:mpeg21:2002:01-DII-NS" xsi:schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:2117/460294</dii:Identifier>
         </d:Statement>
      </d:Descriptor>
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
               <dc:title>Ladder of intentions: unifying agent architectures for explainability and transferability</dc:title>
               <dc:creator>Giménez Ábalos, Víctor</dc:creator>
               <dc:creator>Tormos Llorente, Adrián</dc:creator>
               <dc:creator>Edström, Filip</dc:creator>
               <dc:creator>Álvarez Napagao, Sergio</dc:creator>
               <dc:creator>Vázquez Salceda, Javier</dc:creator>
               <dc:creator>Brännström, Mattias</dc:creator>
               <dc:creator>Lindqvist, John</dc:creator>
               <dc:subject>Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Agents intel·ligents</dc:subject>
               <dc:subject>XAI</dc:subject>
               <dc:subject>Intentions</dc:subject>
               <dc:subject>Agent explainability</dc:subject>
               <dc:subject>Knowledge representation</dc:subject>
               <dc:subject>Knowledge transfer</dc:subject>
               <dc:subject>Cognitive architecture</dc:subject>
               <dc:subject>Telic explanations</dc:subject>
               <dc:subject>Explainable agency</dc:subject>
               <dc:subject>RL</dc:subject>
               <dc:subject>BDI</dc:subject>
               <dc:subject>Agentic AI</dc:subject>
                <dc:description>Within the field of Autonomous Agents, the predominant paradigm is that agents perceive, reflect, reason, and act on an environment, employing some specific decision mechanism to select actions. Nonetheless, the process that originates these decisions may differ between agents, as this paradigm is agnostic to the concrete action-selection inference. At the same time, the need to explain these decisions is constantly increasing, and the heterogeneity of agents' internal processes has led to different ad hoc explanation techniques for each architecture, with disparate validation mechanisms that hinder comparison between them.&#xd;
&#xd;
To tackle this, in this contribution we propose a unifying architecture framework based on causality, beliefs, and intentions. This framework allows heterogeneous agents (from BDI and RL to LLM-based agents) to be examined without modification. The approach clearly decouples declarative from procedural knowledge, as well as designer-given from learnt representations. It categorises the kinds of questions each agent reasoning component can answer and enables a more seamless workflow for transferring knowledge between diverse agent architectures.</dc:description>
                <dc:description>This work has been partially supported by HUMANE (Grant agreement ID: 952026) and by V. Gimenez-Abalos’ fellowship within the “Generación D” initiative, Red.es, MTDFP, for talent attraction (C005/24-ED CV1). Funded by the European Union NextGenerationEU funds, through PRTR.</dc:description>
               <dc:description>Peer Reviewed</dc:description>
               <dc:description>Postprint (author's final draft)</dc:description>
               <dc:date>2025</dc:date>
               <dc:type>Conference lecture</dc:type>
               <dc:relation>https://link.springer.com/chapter/10.1007/978-3-032-01399-6_8</dc:relation>
               <dc:rights>Restricted access - publisher's policy</dc:rights>
               <dc:publisher>Springer</dc:publisher>
            </oai_dc:dc>
         </d:Statement>
      </d:Descriptor>
   </d:Item>
</d:DIDL></metadata></record></GetRecord></OAI-PMH>