Other authors

Universitat Politècnica de Catalunya. Departament de Ciències de la Computació

Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group

Publication date

2026



Abstract

During the extensive training and tuning of large language models (LLMs) and foundation models (FMs), researchers inevitably encounter machine learning (ML) bias and fairness questions, which cast a shadow over the FM development and deployment process. In an FM, bias manifests as an unfair preference or prejudice toward a specific class, distorting learning and ultimately compromising the model’s performance. Transparency is crucial for understanding the inner workings of foundation models. Equity metrics and fairness metrics in AI serve distinct purposes in evaluating the ethical, legal, socioeconomic, and cultural implications of FMs. However, current evaluation methods face several limitations, including the potential for overfitting to popular benchmarks, data contamination issues, and inadequate assessment of diversity, creativity, and real-world generalization.
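The abstract's point that different fairness metrics serve distinct purposes can be made concrete with a minimal sketch. The functions and toy data below are illustrative assumptions, not taken from the chapter: demographic parity compares positive-prediction rates across two groups, while equal opportunity compares true-positive rates, and the two can disagree on the same predictions.

```python
# Illustrative sketch (not from the chapter): two common group-fairness
# metrics computed on toy binary predictions for two groups, 0 and 1.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_difference(y_true, y_pred, groups):
    """Absolute gap in true-positive rates (recall) between groups 0 and 1."""
    def tpr(g):
        # Restrict to actually-positive examples of group g.
        pos = [p for t, p, grp in zip(y_true, y_pred, groups)
               if grp == g and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Hypothetical data: group 1 receives positive predictions more often.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, groups))          # → 0.25
print(equal_opportunity_difference(y_true, y_pred, groups))   # → 0.333...
```

Here the model looks only mildly unequal under demographic parity (0.50 vs. 0.75 positive rates) but worse under equal opportunity (2/3 vs. 1.0 recall), which is why evaluating an FM against a single metric can understate the disparities the abstract describes.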


Peer Reviewed


Postprint (published version)

Document Type

Part of book or chapter of book

Language

English

Publisher

Springer

Related items

https://link.springer.com/referencework/10.1007/978-3-031-61050-9


Rights

http://creativecommons.org/licenses/by/4.0/

Open Access
