<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-17T13:53:53Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/42380" metadataPrefix="qdc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/42380</identifier><datestamp>2025-12-22T13:46:23Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><qdc:qualifieddc xmlns:qdc="http://dspace.org/qualifieddc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
   <dc:title>Prosodic phrase alignment for machine dubbing</dc:title>
   <dc:creator>Öktem, Alp</dc:creator>
   <dc:creator>Farrús, Mireia</dc:creator>
   <dc:creator>Bonafonte Cávez, Antonio</dc:creator>
   <dc:subject>Audiovisual translation</dc:subject>
   <dc:subject>Dubbing</dc:subject>
   <dc:subject>Spoken machine translation</dc:subject>
   <dc:subject>Prosody</dc:subject>
   <dcterms:abstract>Paper presented at: Interspeech 2019, held September 15-19, 2019 in Graz, Austria.</dcterms:abstract>
   <dcterms:abstract>Dubbing is a type of audiovisual translation in which dialogues are translated and enacted so that they give the impression that the media is in the target language. It requires careful alignment of dubbed recordings with the lip movements of performers in order to achieve visual coherence. In this paper, we deal with the specific problem of prosodic phrase synchronization within the framework of machine dubbing. Our methodology exploits the attention mechanism output in neural machine translation to find plausible phrasings for the translated dialogue lines and then uses them to condition their synthesis. Our initial work in this field achieves a speech rate ratio comparable to professional dubbing translation, and improves the lip-syncing of long dialogue lines.</dcterms:abstract>
   <dcterms:issued>2019-10-04T09:39:14Z</dcterms:issued>
   <dcterms:issued>2019</dcterms:issued>
   <dc:type>info:eu-repo/semantics/conferenceObject</dc:type>
   <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
   <dc:relation>Interspeech 2019; 2019 Sep 15-19; Graz, Austria. [Baixas]: ISCA; 2019.</dc:relation>
   <dc:rights>© 2019 ISCA</dc:rights>
   <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
   <dc:publisher>International Speech Communication Association (ISCA)</dc:publisher>
</qdc:qualifieddc></metadata></record></GetRecord></OAI-PMH>