<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T04:37:38Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/33115" metadataPrefix="mets">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/33115</identifier><datestamp>2025-12-22T13:46:15Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><mets xmlns="http://www.loc.gov/METS/" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" ID="&#xa;&#x9;&#x9;&#x9;&#x9;DSpace_ITEM_10230-33115" TYPE="DSpace ITEM" PROFILE="DSpace METS SIP Profile 1.0" xsi:schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd" OBJID="&#xa;&#x9;&#x9;&#x9;&#x9;hdl:10230/33115">
   <metsHdr CREATEDATE="2026-04-14T06:37:38Z">
      <agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
         <name>RECERCAT</name>
      </agent>
   </metsHdr>
   <dmdSec ID="DMD_10230_33115">
      <mdWrap MDTYPE="MODS">
         <xmlData xmlns:mods="http://www.loc.gov/mods/v3" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
            <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Nogueira, Waldo</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Gajecki, Tom</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Krüger, Benjamin</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Janer Mestres, Jordi</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Büchner, Andreas</mods:namePart>
               </mods:name>
               <mods:originInfo>
                   <mods:dateIssued encoding="iso8601">2016</mods:dateIssued>
               </mods:originInfo>
               <mods:identifier type="none"/>
                <mods:abstract>Paper presented at the 12th ITG Conference on Speech Communication, held 5–7 October 2016 in Paderborn, Germany. The aim of this study is to investigate whether a source separation algorithm based on a deep recurrent neural network (DRNN) can provide a speech perception benefit for cochlear implant users when speech signals are mixed with another competing voice. The DRNN is based on an existing architecture that is used in combination with an extra masking layer for optimization. The approach has been evaluated using the HSM sentence test (male voice) mixed with a competing voice (female voice) for a monaural speech separation task. Two DRNNs with two levels of complexity have been used. The algorithms have been evaluated in 8 normal hearing listeners using a Vocoder and in 3 CI users. Both DRNNs show a large and significant improvement in speech intelligibility using Vocoded speech. Preliminary results in 3 CI users seem to confirm the improvement observed using Vocoded simulations. This work was supported by the DFG Cluster of Excellence EXC 1077/1 Hearing4all.</mods:abstract>
               <mods:language>
                  <mods:languageTerm authority="rfc3066"/>
               </mods:language>
                <mods:accessCondition type="useAndReproduction">© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The final published article can be found at http://ieeexplore.ieee.org/document/7776166/ info:eu-repo/semantics/openAccess</mods:accessCondition>
               <mods:subject>
                  <mods:topic>Música -- Anàlisi</mods:topic>
               </mods:subject>
               <mods:titleInfo>
                  <mods:title>Development of a sound coding strategy based on a deep recurrent neural network for monaural source separation in cochlear implants</mods:title>
               </mods:titleInfo>
               <mods:genre>info:eu-repo/semantics/conferenceObject info:eu-repo/semantics/acceptedVersion</mods:genre>
            </mods:mods>
         </xmlData>
      </mdWrap>
   </dmdSec>
   <structMap LABEL="DSpace Object" TYPE="LOGICAL">
      <div TYPE="DSpace Object Contents" ADMID="DMD_10230_33115"/>
   </structMap>
</mets></metadata></record></GetRecord></OAI-PMH>