<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T03:50:21Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/71510" metadataPrefix="marc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/71510</identifier><datestamp>2025-10-18T20:14:53Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452952</setSpec></header><metadata><record xmlns="http://www.loc.gov/MARC21/slim" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
   <leader>00925njm 22002777a 4500</leader>
   <datafield ind2=" " ind1=" " tag="042">
      <subfield code="a">dc</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="720">
      <subfield code="a">Miron, Marius</subfield>
      <subfield code="e">author</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="720">
      <subfield code="a">Cortès Sebastià, Guillem</subfield>
      <subfield code="e">author</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="720">
      <subfield code="a">Molina, Emilio</subfield>
      <subfield code="e">author</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="720">
      <subfield code="a">Ciurana, Alex</subfield>
      <subfield code="e">author</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="720">
      <subfield code="a">Serra, Xavier</subfield>
      <subfield code="e">author</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="260">
      <subfield code="c">2025-10-15T12:26:17Z</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="260">
      <subfield code="c">2025</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="520">
      <subfield code="a">Electronic publication date 13-10-2025</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="520">
      <subfield code="a">Music identification is crucial for distributing royalties in the music industry. This problem is typically solved with audio fingerprinting (AFP) algorithms. However, these methods often struggle in real-world scenarios such as TV broadcasting, where music plays in the background, masked by other sounds such as speech. While prior research has focused on improving AFP robustness to pitch and tempo variations, less attention has been given to enhancing robustness for background music identification. In this work, we assess whether source separation systems improve background music identification by recovering the music signal in these recordings. We present the first extensive study comprising 13 source separation algorithms and five AFP models. We evaluate them on a public dataset of TV recordings, assessing both music identification performance and computational cost. Our results show that source separation substantially improves peak-based AFP identifications, particularly when music is in the background. Additionally, this finding extends to foreground music, making the approach versatile for various music identification tasks, such as query-by-example. The deep-learning-based model NeuralFP* (tailored for background music identification) shows no substantial benefit from adding a separation model as preprocessing. This reproducible study provides a comprehensive evaluation framework, offering valuable insights into using source separation methods to improve music identification in real-world contexts.</subfield>
   </datafield>
   <datafield ind2=" " ind1=" " tag="520">
      <subfield code="a">This research is part of NextCore – New generation of music monitoring technology (RTC2019-007248-7), funded by the Spanish Ministerio de Ciencia e Innovación and the Agencia Estatal de Investigación; and resCUE – Smart system for automatic usage reporting of musical works in audiovisual productions (SAV20221147), funded by CDTI and the European Union – Next Generation EU, and supported by the Spanish Ministerio de Ciencia, Innovación y Universidades and the Ministerio para la Transformación Digital y de la Función Pública.</subfield>
   </datafield>
   <datafield tag="653" ind2=" " ind1=" ">
      <subfield code="a">Audio fingerprinting</subfield>
   </datafield>
   <datafield tag="653" ind2=" " ind1=" ">
      <subfield code="a">Monitoring</subfield>
   </datafield>
   <datafield tag="653" ind2=" " ind1=" ">
      <subfield code="a">Background music</subfield>
   </datafield>
   <datafield tag="653" ind2=" " ind1=" ">
      <subfield code="a">Music loudness</subfield>
   </datafield>
   <datafield tag="653" ind2=" " ind1=" ">
      <subfield code="a">Source separation</subfield>
   </datafield>
   <datafield ind2="0" ind1="0" tag="245">
      <subfield code="a">Enhanced television broadcast monitoring with source separation-assisted audio fingerprinting: a case study</subfield>
   </datafield>
</record></metadata></record></GetRecord></OAI-PMH>