<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T02:05:13Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:2117/410161" metadataPrefix="oai_dc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:2117/410161</identifier><datestamp>2025-07-17T15:23:02Z</datestamp><setSpec>com_2072_1033</setSpec><setSpec>col_2072_452951</setSpec></header><metadata><oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
   <dc:title>Efficient finetuning strategies for multilingual neural machine translation</dc:title>
   <dc:creator>Sánchez I Maltas, Gerard</dc:creator>
   <dc:contributor>Universitat Politècnica de Catalunya. Departament de Ciències de la Computació</dc:contributor>
   <dc:contributor>Escolano Peinado, Carlos</dc:contributor>
   <dc:subject>Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial</dc:subject>
   <dc:subject>Machine translating</dc:subject>
   <dc:subject>Machine translation</dc:subject>
   <dc:subject>Machine Translation</dc:subject>
   <dc:subject>Multilingual Models</dc:subject>
   <dc:subject>M2M100</dc:subject>
   <dc:subject>Fine-Tuning</dc:subject>
   <dc:subject>LoRA</dc:subject>
   <dc:subject>Traducció automàtica</dc:subject>
   <dc:description>Driven by the ambition to eliminate language barriers worldwide, Machine Translation has become a central area of interest in today's artificial intelligence research. Despite significant advancements, research and resources have been concentrated predominantly on high-resource languages. This discrepancy in language coverage points to a critical gap in the field. Recent breakthroughs in Machine Translation have seen the emergence of multilingual large pre-trained models, which have set new benchmarks across the field by enabling low-resource languages to benefit from zero-shot translation. However, these models achieve high performance at the cost of requiring huge amounts of data and hardware resources. The focus of this thesis is to explore and formulate a fine-tuning strategy for a multilingual machine translation model, such as M2M100. Specifically, the project aims to extend its linguistic capabilities by incorporating new low-resource languages, fine-tuning specific language adapters using Low-Rank Adaptation methods. To evaluate the performance of the strategy, state-of-the-art techniques and evaluation metrics are employed, considering factors such as scalability, catastrophic forgetting and zero-shot translation. The implemented approach has successfully developed an M2M100 language translator in a low-resource context, resulting in a SacreBLEU score of 5.6 while training just 13% of the parameters, whereas the full fine-tuning methodology reaches a score of 7.43. Furthermore, this framework demonstrates its capability to produce more advanced and efficient machine translation models, which can deliver high-quality translations with reduced computational demands.</dc:description>
   <dc:date>2024-01-25</dc:date>
   <dc:type>Master thesis</dc:type>
   <dc:identifier>https://hdl.handle.net/2117/410161</dc:identifier>
   <dc:identifier>182832</dc:identifier>
   <dc:language>eng</dc:language>
   <dc:rights>Open Access</dc:rights>
   <dc:format>application/pdf</dc:format>
   <dc:publisher>Universitat Politècnica de Catalunya</dc:publisher>
</oai_dc:dc></metadata></record></GetRecord></OAI-PMH>