Other authors

Universitat Politècnica de Catalunya. Departament de Ciències de la Computació

Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group

Publication date

2026



Resumen

During extensive training and tuning of large language models (LLMs) and foundation models (FMs), researchers inevitably encounter machine learning (ML) bias and fairness questions, which cast a shadow over the FM development and deployment process. In an FM, bias manifests as an unfair preference or prejudice toward a specific class, distorting learning and ultimately compromising the model’s performance. Transparency is crucial for understanding the inner workings of foundation models. Equity metrics and fairness metrics in AI serve distinct purposes in evaluating the ethical, legal, socioeconomic, and cultural implications of FMs. However, current evaluation methods face several limitations, including the potential for overfitting to popular benchmarks, data contamination issues, and inadequate assessment of diversity, creativity, and real-world generalization.
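As an illustration of the kind of group fairness metric the abstract refers to, the following is a minimal sketch (the function name and toy data are hypothetical, not from the work itself) of demographic parity difference: the gap in positive-prediction rates between two demographic groups, where 0 indicates both groups receive positive predictions at the same rate.

```python
def demographic_parity_difference(preds, groups, positive=1,
                                  group_a="A", group_b="B"):
    """Absolute gap in positive-prediction rate between group_a and group_b."""
    def rate(g):
        # Share of members of group g that received the positive prediction.
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(1 for p in members if p == positive) / max(1, len(members))
    return abs(rate(group_a) - rate(group_b))

# Toy example: group A is predicted positive 3/4 of the time, group B 1/4.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

Equity metrics, by contrast, would weigh whether outcomes compensate for pre-existing disadvantage rather than merely equalize rates, which is why the abstract treats the two metric families as serving distinct purposes.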


Peer Reviewed


Postprint (published version)

Document type

Part of book or chapter of book

Language

English

Published by

Springer

Related documents

https://link.springer.com/referencework/10.1007/978-3-031-61050-9

Recommended citation

This citation has been generated automatically.

Rights

http://creativecommons.org/licenses/by/4.0/

Open Access

This item appears in the following collection(s)

E-prints [72263]