<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-14T05:07:22Z</responseDate><request verb="GetRecord" identifier="oai:www.recercat.cat:10230/54069" metadataPrefix="qdc">https://recercat.cat/oai/request</request><GetRecord><record><header><identifier>oai:recercat.cat:10230/54069</identifier><datestamp>2025-12-19T20:29:33Z</datestamp><setSpec>com_2072_6</setSpec><setSpec>col_2072_452954</setSpec></header><metadata><qdc:qualifieddc xmlns:qdc="http://dspace.org/qualifieddc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:doc="http://www.lyncode.com/xoai" xsi:schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
   <dc:title>Analyzing how context size and symmetry influence word embedding information</dc:title>
   <dc:creator>Gabanes Anuncibay, Inés</dc:creator>
   <dc:subject>Semantics</dc:subject>
   <dc:subject>Embeddings</dc:subject>
   <dc:subject>Context</dc:subject>
   <dc:subject>Distributional</dc:subject>
   <dc:subject>Similarity</dc:subject>
   <dc:subject>Relatedness</dc:subject>
   <dc:subject>GloVe</dc:subject>
   <dc:subject>WordSim-353</dc:subject>
   <dc:subject>SimLex-999</dc:subject>
   <dcterms:abstract>Master's thesis in Theoretical and Applied Linguistics. Supervisor: Dr. Thomas Brochhagen</dcterms:abstract>
   <dcterms:abstract>Word embeddings represent word meaning in the form of a vector; however,
the encoded information varies depending on the parameters the vector has been trained with.
This paper analyzes how two parameters, context size and symmetry, influence word embedding
information, and asks whether a single distributional parametrization exists that captures
both semantic similarity and relatedness. The models were trained with GloVe under different
parametrizations and then evaluated quantitatively on a similarity task, using WordSim-353
(for relatedness) and SimLex-999 (for semantic similarity) as benchmarks. The results show
minimal variation when some of the analyzed parameters are manipulated, in particular between
symmetric and asymmetric contexts, leading us to conclude that training models with large
contexts is not necessary to achieve good performance.</dcterms:abstract>
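   <!--
   The evaluation described in the abstract can be sketched in a few lines of
   Python. This is a minimal illustration under stated assumptions, not the
   thesis code: it assumes GloVe's plain-text vector output ("word v1 v2 ...")
   and a tab-separated benchmark file of (word1, word2, human score) rows, and
   scores a model by the Spearman correlation between the human judgments and
   the cosine similarities of its vectors. In the reference GloVe C
   implementation, context size and symmetry are set through the cooccur
   tool's window-size and symmetric options.

   import numpy as np
   from scipy.stats import spearmanr

   def load_vectors(path):
       # Parse GloVe text output: one word plus its vector components per line.
       vecs = {}
       with open(path, encoding="utf-8") as f:
           for line in f:
               parts = line.rstrip().split(" ")
               vecs[parts[0]] = np.array(parts[1:], dtype=float)
       return vecs

   def evaluate(vecs, benchmark_path):
       # Spearman correlation between human scores and cosine similarities,
       # computed over the word pairs covered by the model's vocabulary.
       gold, pred = [], []
       with open(benchmark_path, encoding="utf-8") as f:
           for line in f:
               w1, w2, score = line.strip().split("\t")
               if w1 in vecs and w2 in vecs:
                   a, b = vecs[w1], vecs[w2]
                   gold.append(float(score))
                   pred.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
       return spearmanr(gold, pred).correlation

   # Usage with hypothetical file names:
   # vecs = load_vectors("vectors.txt")
   # print(evaluate(vecs, "wordsim353.tsv"))  # relatedness benchmark
   # print(evaluate(vecs, "simlex999.tsv"))   # semantic similarity benchmark
   -->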
   <dcterms:issued>2022-09-14T17:43:30Z</dcterms:issued>
   <dcterms:issued>2022-09-14</dcterms:issued>
   <dc:type>info:eu-repo/semantics/masterThesis</dc:type>
   <dc:rights>CC Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0)</dc:rights>
   <dc:rights>https://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights>
   <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
</qdc:qualifieddc></metadata></record></GetRecord></OAI-PMH>