Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group
2026
During the extensive training and tuning of large language models (LLMs) and foundational models (FMs), researchers inevitably encounter machine learning (ML) bias and fairness questions, which cast a shadow over the FM development and deployment process. In an FM, bias manifests as an unfair preference or prejudice toward a specific class, distorting learning and ultimately compromising the model's performance. Transparency is crucial for understanding the inner workings of foundation models. Equity metrics and fairness metrics in AI serve distinct purposes in evaluating the ethical, legal, socioeconomic, and cultural implications of FMs. However, current evaluation methods face several limitations, including the potential for overfitting to popular benchmarks, data contamination issues, and inadequate assessment of diversity, creativity, and real-world generalization.
Peer Reviewed
Postprint (published version)
Part of book or chapter of book
English
UPC subject areas::Computer science::Artificial intelligence::Machine learning; UPC subject areas::Computer science::Social aspects; Transparency; Evaluation; Large language models; Foundational models
Springer
https://link.springer.com/referencework/10.1007/978-3-031-61050-9
http://creativecommons.org/licenses/by/4.0/
Open Access