<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T03:14:51Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/35952" metadataPrefix="mets">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/35952</identifier><datestamp>2025-12-18T01:24:32Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><mets xmlns="http://www.loc.gov/METS/" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" ID="DSpace_ITEM_10230-35952" TYPE="DSpace ITEM" PROFILE="DSpace METS SIP Profile 1.0" xsi:schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd" OBJID="hdl:10230/35952">
   <metsHdr CREATEDATE="2026-04-14T05:14:51Z">
      <agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
         <name>RECERCAT</name>
      </agent>
   </metsHdr>
   <dmdSec ID="DMD_10230_35952">
      <mdWrap MDTYPE="MODS">
         <xmlData xmlns:mods="http://www.loc.gov/mods/v3" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
            <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Slizovskaia, Olga</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Gómez Gutiérrez, Emilia, 1975-</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Haro Ortega, Gloria</mods:namePart>
               </mods:name>
               <mods:originInfo>
                   <mods:dateIssued encoding="iso8601">2017</mods:dateIssued>
               </mods:originInfo>
               <mods:identifier type="none"/>
                <mods:abstract>Paper presented at the International Conference on Multimedia Retrieval, held 6–9 June 2017 in Bucharest, Romania. This paper presents a method for recognizing musical instruments in user-generated videos. Musical instrument recognition from music signals is a well-known task in the music information retrieval (MIR) field, where current approaches rely on the analysis of good-quality audio material. This work addresses a real-world scenario with several research challenges, i.e. the analysis of user-generated videos that vary in recording conditions and quality and may contain multiple instruments sounding simultaneously as well as background noise. Our approach does not focus solely on audio information; rather, we exploit the multimodal information embedded in the audio and visual domains. To do so, we develop a Convolutional Neural Network (CNN) architecture which combines learned representations from both modalities at a late fusion stage. Our approach is trained and evaluated on two large-scale video datasets: YouTube-8M and FCVID. The proposed architectures demonstrate state-of-the-art results in audio and video object recognition, provide additional robustness to missing modalities, and remain computationally cheap to train. This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), the CASAS Spanish research project (TIN2015-70816-R), and project TIN2015-70410-C2-1-R (MINECO/FEDER, UE). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research.</mods:abstract>
               <mods:language>
                  <mods:languageTerm authority="rfc3066"/>
               </mods:language>
                <mods:accessCondition type="useAndReproduction">© 2017 Association for Computing Machinery</mods:accessCondition>
                <mods:accessCondition type="useAndReproduction">info:eu-repo/semantics/openAccess</mods:accessCondition>
               <mods:subject>
                  <mods:topic>Multimodal musical instrument classification</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Convolutional neural networks</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Multimodal video analysis</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Feature fusion</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Multimedia information retrieval</mods:topic>
               </mods:subject>
               <mods:titleInfo>
                  <mods:title>Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture</mods:title>
               </mods:titleInfo>
                <mods:genre>info:eu-repo/semantics/conferenceObject</mods:genre>
                <mods:genre>info:eu-repo/semantics/acceptedVersion</mods:genre>
            </mods:mods>
         </xmlData>
      </mdWrap>
   </dmdSec>
   <structMap LABEL="DSpace Object" TYPE="LOGICAL">
      <div TYPE="DSpace Object Contents" ADMID="DMD_10230_35952"/>
   </structMap>
</mets></metadata></record></GetRecord></OAI-PMH>