Evaluating the Performance of Large Language Models for Spanish Language in Undergraduate Admissions Exams

Sabino Miranda, Obdulia Pichardo-Lagunas, Bella Martinez-Seis, Pierre Baldi

Abstract


This study evaluates the performance of large language models, specifically GPT-3.5 and BARD (powered by the Gemini Pro model), on undergraduate admissions exams administered by the National Polytechnic Institute in Mexico. The exams cover Engineering/Mathematical and Physical Sciences, Biological and Medical Sciences, and Social and Administrative Sciences. Both models demonstrated proficiency, exceeding the minimum acceptance scores of the respective academic programs and reaching scores of up to 75% in some programs. GPT-3.5 outperformed BARD in Mathematics and Physics, whereas BARD performed better in History and on questions involving factual information. Overall, GPT-3.5 marginally surpassed BARD, with scores of 60.94% and 60.42%, respectively.

Keywords


Large language models; ChatGPT; BARD; undergraduate admissions exams
