<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T06:55:50Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:20.500.14342/6069" metadataPrefix="mets">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:20.500.14342/6069</identifier><datestamp>2026-03-19T19:59:01Z</datestamp><setSpec>com_2072_482405</setSpec><setSpec>com_2072_183628</setSpec><setSpec>col_2072_482408</setSpec></header><metadata><mets xmlns="http://www.loc.gov/METS/" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" ID="DSpace_ITEM_20.500.14342-6069" TYPE="DSpace ITEM" PROFILE="DSpace METS SIP Profile 1.0" xsi:schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd" OBJID="hdl:20.500.14342/6069">
   <metsHdr CREATEDATE="2026-04-14T08:55:50Z">
      <agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
         <name>RECERCAT</name>
      </agent>
   </metsHdr>
   <dmdSec ID="DMD_20.500.14342_6069">
      <mdWrap MDTYPE="MODS">
         <xmlData xmlns:mods="http://www.loc.gov/mods/v3" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
            <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Thomas, Llewellyn</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Romasanta, Angelo Kenneth</mods:namePart>
               </mods:name>
               <mods:name>
                  <mods:role>
                     <mods:roleTerm type="text">author</mods:roleTerm>
                  </mods:role>
                  <mods:namePart>Pujol Priego, Laia</mods:namePart>
               </mods:name>
               <mods:extension>
                  <mods:dateAccessioned encoding="iso8601">2026-03-19T19:59:01Z</mods:dateAccessioned>
               </mods:extension>
               <mods:extension>
                  <mods:dateAvailable encoding="iso8601">2026-03-19T19:59:01Z</mods:dateAvailable>
               </mods:extension>
               <mods:originInfo>
                  <mods:dateIssued encoding="iso8601">2026-01</mods:dateIssued>
               </mods:originInfo>
               <mods:identifier type="issn">0148-2963</mods:identifier>
               <mods:identifier type="uri">https://hdl.handle.net/20.500.14342/6069</mods:identifier>
               <mods:identifier type="doi">https://doi.org/10.1016/j.jbusres.2025.115804</mods:identifier>
                <mods:abstract>Large Language Models (LLMs) are increasingly viewed as a valuable tool for academic research. While LLMs offer clear benefits, a ‘crisis of replicability’ in management scholarship militates against their unrestrained use. In this paper we investigate the reproducibility of LLM analyses. We analyze three LLMs (ChatGPT, Claude, and Mistral) over fifteen weeks, testing their consistency, their accuracy, and the interaction between the two using the same prompts on the same data corpus. While our results demonstrate significant variation in reliability and consistency across the three LLMs, we also show that LLMs can exhibit deterministic and reliable behavior under specific, well-defined constraints. We argue that replicable LLM-based research depends on understanding and validating the task-specific operational boundaries of the LLM. To ensure the responsible integration of LLMs into management research, we highlight the need for robust frameworks, transparency, ethical guidelines, and ongoing evaluation. We conclude with actionable guidance for management researchers.</mods:abstract>
               <mods:language>
                  <mods:languageTerm authority="rfc3066">eng</mods:languageTerm>
               </mods:language>
                <mods:accessCondition type="useAndReproduction">© The author(s). Attribution-NonCommercial-NoDerivatives 4.0 International</mods:accessCondition>
               <mods:subject>
                  <mods:topic>Generative AI</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>LLM</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Replication</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Reproducibility</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Consistency</mods:topic>
               </mods:subject>
               <mods:subject>
                  <mods:topic>Accuracy</mods:topic>
               </mods:subject>
               <mods:titleInfo>
                  <mods:title>Jagged competencies: Measuring the reliability of generative AI in academic research</mods:title>
               </mods:titleInfo>
               <mods:genre>info:eu-repo/semantics/article</mods:genre>
            </mods:mods>
         </xmlData>
      </mdWrap>
   </dmdSec>
   <structMap LABEL="DSpace Object" TYPE="LOGICAL">
      <div TYPE="DSpace Object Contents" ADMID="DMD_20.500.14342_6069"/>
   </structMap>
</mets></metadata></record></GetRecord></OAI-PMH>