<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-13T06:46:01Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:2117/381159" metadataPrefix="didl">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:2117/381159</identifier><datestamp>2025-07-16T22:36:20Z</datestamp><setSpec>com_2072_1033</setSpec><setSpec>col_2072_452949</setSpec></header><metadata><d:DIDL xmlns:d="urn:mpeg:mpeg21:2002:02-DIDL-NS" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
   <d:Item id="hdl_2117_381159">
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <dii:Identifier xmlns:dii="urn:mpeg:mpeg21:2002:01-DII-NS" xsi:schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:2117/381159</dii:Identifier>
         </d:Statement>
      </d:Descriptor>
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
                <dc:title>Focus! Rating XAI methods and finding biases</dc:title>
               <dc:creator>Arias Duart, Anna</dc:creator>
               <dc:creator>Parés, Ferran</dc:creator>
               <dc:creator>Garcia-Gasulla, Dario</dc:creator>
               <dc:subject>Àrees temàtiques de la UPC::Informàtica::Arquitectura de computadors</dc:subject>
               <dc:subject>High performance computing</dc:subject>
               <dc:subject>Bias</dc:subject>
               <dc:subject>Explainable AI</dc:subject>
               <dc:subject>Bias detection</dc:subject>
               <dc:subject>Image classification</dc:subject>
               <dc:subject>Càlcul intensiu (Informàtica)</dc:subject>
               <dc:subject>Intel·ligència artificial</dc:subject>
               <dc:subject>Artificial intelligence</dc:subject>
               <dc:description>Explainability has become a major topic of research in&#xd;
Artificial Intelligence (AI), aimed at increasing trust in models&#xd;
such as Deep Learning (DL) networks. However, trustworthy&#xd;
models cannot be achieved with explainable AI (XAI) methods&#xd;
unless the XAI methods themselves can be trusted.&#xd;
To evaluate XAI methods one may assess interpretability,&#xd;
a qualitative measure of how understandable an explanation is&#xd;
to humans [1]. While this is important to guarantee the proper&#xd;
interaction between humans and the model, interpretability&#xd;
generally involves end-users in the process [2], inducing strong&#xd;
biases. In fact, a qualitative evaluation alone cannot guarantee&#xd;
coherence with reality (i.e., model behavior), as false explanations&#xd;
can be more interpretable than accurate ones. To enable&#xd;
trust in XAI methods, we also need quantitative and objective&#xd;
evaluation metrics, which validate the relation between the&#xd;
explanations produced by the XAI method and the behavior&#xd;
of the trained model under assessment.&#xd;
In this work we propose a novel evaluation score for feature&#xd;
attribution methods, described in §I-A. Our input alteration&#xd;
approach induces in-distribution noise into samples, that is,&#xd;
alterations to the input that correspond to visual patterns&#xd;
found within the original data distribution. To do so we modify&#xd;
the context of the sample instead of the content, leaving the&#xd;
original pixel values untouched. In practice, we create a&#xd;
new sample, composed of samples of different classes, which&#xd;
we call a mosaic image (see examples in Figure 2). Using&#xd;
mosaics as input has a major benefit: each input quadrant is&#xd;
an image from the original distribution, producing blobs of&#xd;
activations in each quadrant which are consequently coherent.&#xd;
Only the pixels forming the borders between images, and&#xd;
the few corresponding activations, may be considered out of&#xd;
distribution.&#xd;
By inducing in-distribution noise, mosaic images introduce&#xd;
a problem in which XAI methods may objectively err (focus on&#xd;
something they should not be focusing on). On those composed&#xd;
mosaics we ask an XAI method to provide an explanation for just&#xd;
one of the contained classes, and follow its response. Then,&#xd;
we measure how much of the explanation generated by the&#xd;
XAI method is located in the areas corresponding to the target class,&#xd;
quantifying it through the Focus score. This score allows us to&#xd;
compare methods in terms of explanation precision, evaluating&#xd;
the capability of XAI methods to provide explanations related&#xd;
to the requested class. Using mosaics has another benefit. Since&#xd;
the noise introduced is in-distribution, the explanation errors&#xd;
identify and exemplify biases of the model. This facilitates&#xd;
the elimination of biases in models and datasets, potentially&#xd;
resulting in more reliable solutions. We illustrate how to do so&#xd;
in §I-C.</dc:description>
               <dc:date>2022-05</dc:date>
               <dc:type>Conference report</dc:type>
               <dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights>
               <dc:rights>Open Access</dc:rights>
               <dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
               <dc:publisher>Barcelona Supercomputing Center</dc:publisher>
            </oai_dc:dc>
         </d:Statement>
      </d:Descriptor>
   </d:Item>
</d:DIDL></metadata></record></GetRecord></OAI-PMH>