<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-17T01:14:20Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:2117/449440" metadataPrefix="didl">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:2117/449440</identifier><datestamp>2026-01-16T04:10:54Z</datestamp><setSpec>com_2072_1033</setSpec><setSpec>col_2072_452950</setSpec></header><metadata><d:DIDL xmlns:d="urn:mpeg:mpeg21:2002:02-DIDL-NS" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
   <d:Item id="hdl_2117_449440">
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <dii:Identifier xmlns:dii="urn:mpeg:mpeg21:2002:01-DII-NS" xsi:schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:2117/449440</dii:Identifier>
         </d:Statement>
      </d:Descriptor>
      <d:Descriptor>
         <d:Statement mimeType="application/xml; charset=utf-8">
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
               <dc:title>A Single-neuron-per-class Readout for image-encoded sensor time series</dc:title>
               <dc:creator>Bernal Casas, David</dc:creator>
               <dc:creator>Gallego Vila, Jaime</dc:creator>
               <dc:subject>Àrees temàtiques de la UPC::Informàtica</dc:subject>
               <dc:subject>End-to-end learning</dc:subject>
               <dc:subject>Single-neuron-per-class readout</dc:subject>
               <dc:subject>Neuromorphic computing</dc:subject>
               <dc:subject>Image-encoded time series</dc:subject>
               <dc:subject>Neural networks</dc:subject>
               <dc:subject>Spiking neural networks</dc:subject>
               <dc:subject>Resonate-and-fire (RAF) neuron</dc:subject>
               <dc:subject>Noisy environments</dc:subject>
               <dc:description>We introduce an ultra-compact, single-neuron-per-class end-to-end readout for binary classification of noisy, image-encoded sensor time series. The approach compares a linear single-unit perceptron (E2E-MLP-1) with a resonate-and-fire (RAF) neuron (E2E-RAF-1), which merges feature selection and decision-making in a single block. Beyond empirical evaluation, we provide a mathematical analysis of the RAF readout: starting from its subthreshold ordinary differential equation, we derive the transfer function H(jω), characterize the frequency response, and relate the output signal-to-noise ratio (SNR) to |H(jω)|² and the noise power spectral density Sn(ω) ∝ ω^a (brown, pink, and blue noise). We present a stable discrete-time implementation compatible with surrogate gradient training and discuss the associated stability constraints. As a case study, we classify walk-in-place (WIP) in a virtual reality (VR) environment, using a vision-based motion encoding (72 × 56 grayscale) derived from 3D trajectories and comprising 44,084 samples from 15 participants. On clean data, both single-neuron-per-class models approach ceiling accuracy, while under colored noise the RAF readout yields consistent gains (typically +5–8% absolute accuracy at medium/high perturbations), indicative of intrinsic band-selective filtering induced by resonance. With ~8k parameters and sub-2 ms inference on commodity graphics processing units (GPUs), the RAF readout provides a mathematically grounded, robust, and efficient alternative for stochastic signal processing across domains, with virtual reality locomotion used here as an illustrative validation.</dc:description>
               <dc:description>Peer Reviewed</dc:description>
               <dc:description>Postprint (published version)</dc:description>
               <dc:date>2025-12-05</dc:date>
               <dc:type>Article</dc:type>
               <dc:relation>https://www.mdpi.com/2227-7390/13/24/3893</dc:relation>
               <dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights>
               <dc:rights>Open Access</dc:rights>
               <dc:rights>Attribution 4.0 International</dc:rights>
               <dc:publisher>Multidisciplinary Digital Publishing Institute (MDPI)</dc:publisher>
            </oai_dc:dc>
         </d:Statement>
      </d:Descriptor>
   </d:Item>
</d:DIDL></metadata></record></GetRecord></OAI-PMH>