Other authors

Universitat Politècnica de Catalunya. Departament de Ciències de la Computació

Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group

Publication date

2026



Abstract

During extensive training and tuning of large language models (LLMs) and foundation models (FM), researchers inevitably encounter machine learning (ML) bias and fairness questions, which complicate the FM development and deployment process. In an FM, bias manifests as an unfair preference or prejudice toward a specific class, distorting learning and ultimately compromising the model's performance. Transparency is crucial for understanding the inner workings of foundation models. Equity metrics and fairness metrics in AI serve distinct purposes in evaluating the ethical, legal, socioeconomic, and cultural implications of FM. However, current evaluation methods face several limitations, including the potential for overfitting to popular benchmarks, data contamination issues, and inadequate assessment of diversity, creativity, and real-world generalization.
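The chapter abstract does not specify which fairness metrics are discussed; purely as an illustration of the kind of group-fairness measure the abstract refers to, the following is a minimal sketch of the demographic parity difference (the function name and the toy data are hypothetical):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A value of 0 means both groups receive the favorable outcome at the
    same rate; larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy predictions (1 = favorable outcome) and binary group membership.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

In this toy example, group 0 receives the favorable outcome 75% of the time and group 1 only 25% of the time, giving a disparity of 0.5; real evaluations would also consider metrics such as equalized odds, which condition on the true label.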


Peer Reviewed


Postprint (published version)

Document type

Part of book or chapter of book

Language

English

Published by

Springer

Related documents

https://link.springer.com/referencework/10.1007/978-3-031-61050-9

Recommended citation

This citation was generated automatically.

Rights

http://creativecommons.org/licenses/by/4.0/

Open Access

This item appears in the following collection(s)

E-prints [72263]