<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T04:05:52Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/46544" metadataPrefix="didl">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/46544</identifier><datestamp>2025-12-21T17:58:05Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><d:DIDL xmlns:d="urn:mpeg:mpeg21:2002:02-DIDL-NS" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
   <d:Item id="hdl_10230_46544">
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <dii:Identifier xmlns:dii="urn:mpeg:mpeg21:2002:01-DII-NS" xsi:schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:10230/46544</dii:Identifier>
         </d:Statement>
      </d:Descriptor>
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
               <dc:title>Color illusions also deceive CNNs for low-level vision tasks: analysis and implications</dc:title>
               <dc:creator>Gómez Villa, Alexander</dc:creator>
               <dc:creator>Martín, Adrian</dc:creator>
               <dc:creator>Vazquez-Corral, Javier</dc:creator>
               <dc:creator>Bertalmío, Marcelo</dc:creator>
               <dc:creator>Malo, Jesús</dc:creator>
                <dc:description>The study of visual illusions has proven to be a very useful approach in vision science. In this work we start by showing that, while convolutional neural networks (CNNs) trained for low-level visual tasks in natural images may be deceived by brightness and color illusions, some network illusions can be inconsistent with the perception of humans. Next, we analyze where these similarities and differences may come from. On the one hand, the proposed linear eigenanalysis explains the overall similarities: in simple CNNs trained for tasks like denoising or deblurring, the linear version of the network has center-surround receptive fields, and global transfer functions are very similar to the human achromatic and chromatic contrast sensitivity functions in human-like opponent color spaces. These similarities are consistent with the long-standing hypothesis that considers low-level visual illusions a by-product of optimization to natural environments; specifically, here human-like features emerge from error minimization. On the other hand, the observed differences must be due to behavior of the human visual system not explained by the linear approximation. However, our study also shows that more ‘flexible’ network architectures, with more layers and a higher degree of nonlinearity, may actually have a worse capability of reproducing visual illusions. This implies, in line with other works in the vision science literature, a word of caution on using CNNs to study human vision: on top of the intrinsic limitations of the L+NL formulation of artificial networks to model vision, the nonlinear behavior of flexible architectures may easily be markedly different from that of the visual system.</dc:description>
                <dc:description>This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement number 761544 (project HDR4EU) and under grant agreement number 780470 (project SAUCE), and by the Spanish government and FEDER Fund, grant Ref. PGC2018-099651-B-I00 (MCIU/AEI/FEDER, UE). The work of AM was supported by the Spanish government under Grant FJCI-2017-31758. JM has been supported by the Spanish government under the MINECO grant Ref. DPI2017-89867 and by the Generalitat Valenciana grant Ref. GrisoliaP-2019-035. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.</dc:description>
               <dc:date>2021-02-19T09:09:38Z</dc:date>
               <dc:date>2020</dc:date>
               <dc:type>info:eu-repo/semantics/article</dc:type>
               <dc:type>info:eu-repo/semantics/acceptedVersion</dc:type>
               <dc:relation>Vision Research. 2020 Nov;176:156-74.</dc:relation>
               <dc:relation>info:eu-repo/grantAgreement/EC/H2020/761544</dc:relation>
               <dc:relation>info:eu-repo/grantAgreement/EC/H2020/780470</dc:relation>
               <dc:relation>info:eu-repo/grantAgreement/ES/2PE/PGC2018-099651-B-I00</dc:relation>
               <dc:relation>info:eu-repo/grantAgreement/ES/2PE/FJCI-2017-31758</dc:relation>
               <dc:relation>info:eu-repo/grantAgreement/ES/2PE/DPI2017-89867</dc:relation>
               <dc:rights>© Elsevier http://dx.doi.org/10.1016/j.visres.2020.07.010</dc:rights>
               <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
               <dc:publisher>Elsevier</dc:publisher>
            </oai_dc:dc>
         </d:Statement>
      </d:Descriptor>
   </d:Item>
</d:DIDL></metadata></record></GetRecord></OAI-PMH>