<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-17T11:46:21Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:2117/180457" metadataPrefix="qdc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:2117/180457</identifier><datestamp>2026-02-07T09:30:50Z</datestamp><setSpec>com_2072_1033</setSpec><setSpec>col_2072_452950</setSpec></header><metadata><qdc:qualifieddc xmlns:qdc="http://dspace.org/qualifieddc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
   <dc:title>Time-domain speech enhancement using generative adversarial networks</dc:title>
   <dc:creator>Pascual de la Puente, Santiago</dc:creator>
   <dc:creator>Serrà, Joan</dc:creator>
   <dc:creator>Bonafonte Cávez, Antonio</dc:creator>
   <dc:subject>Àrees temàtiques de la UPC::Enginyeria de la telecomunicació</dc:subject>
   <dc:subject>Speech processing systems</dc:subject>
   <dc:subject>Neural networks (Computer science)</dc:subject>
   <dc:subject>Speech enhancement</dc:subject>
   <dc:subject>Audio transformation</dc:subject>
   <dc:subject>Generative adversarial network</dc:subject>
   <dc:subject>Neural networks</dc:subject>
   <dc:subject>Processament de la parla</dc:subject>
   <dc:subject>Reconeixement automàtic de la parla</dc:subject>
   <dc:subject>Xarxes neuronals (Informàtica)</dc:subject>
   <dcterms:abstract>Speech enhancement improves recorded voice utterances to eliminate noise that might be impeding their intelligibility or compromising their quality. Typical speech enhancement systems are based on regression approaches that subtract noise or predict clean signals. Most of them do not operate directly on waveforms. In this work, we propose a generative approach to regenerate corrupted signals into a clean version by using generative adversarial networks on the raw signal. We also explore several variations of the proposed system, obtaining insights into proper architectural choices for an adversarially trained, convolutional autoencoder applied to speech. We conduct both objective and subjective evaluations to assess the performance of the proposed method. The former helps us choose among variations and better tune hyperparameters, while the latter is used in a listening experiment with 42 subjects, confirming the effectiveness of the approach in the real world. We also demonstrate the applicability of the approach for more generalized speech enhancement, where we have to regenerate voices from whispered signals.</dcterms:abstract>
   <dcterms:abstract>Peer Reviewed</dcterms:abstract>
   <dcterms:abstract>Postprint (author's final draft)</dcterms:abstract>
   <dcterms:issued>2019-11-01</dcterms:issued>
   <dc:type>Article</dc:type>
   <dc:relation>https://www.sciencedirect.com/science/article/abs/pii/S0167639319301359</dc:relation>
   <dc:relation>info:eu-repo/grantAgreement/MINECO//TEC2015-69266-P/ES/TECNOLOGIAS DE APRENDIZAJE PROFUNDO APLICADAS AL PROCESADO DE VOZ Y AUDIO/</dc:relation>
   <dc:rights>Open Access</dc:rights>
</qdc:qualifieddc></metadata></record></GetRecord></OAI-PMH>