The Ethics Institute serves as the central hub for ethical guidance, collaboration, and community engagement around AI in research, education, and university practices at Michigan State University. From convening the university’s AI Summit to supporting the development of guidelines, research, and faculty working groups, the Institute has led efforts to ensure MSU’s approach to AI is thoughtful, inclusive, and aligned with our institutional values. By bringing together voices from across disciplines, the Ethics Institute fosters dialogue and shapes practices that prioritize equity, responsibility, and innovation in the age of automation.
The Evidence Driven Learning Innovation (EDLI) research center is a collaboration of educators and researchers in the Colleges of Arts and Letters, Business, and Natural Science, MSU Libraries, and MSU IT. Our mission is to humanize the digital learning experience and use a values-driven approach to develop and evaluate digital pedagogies and technologies for 21st-century learning.
The integration of Artificial Intelligence (AI) in STEM education marks a significant shift in pedagogical methods and learning outcomes, with AI playing a central role in customizing and enhancing educational experiences. The Center for Education and Emerging Technologies explores and advances the use of AI in STEM education.
The Michigan State University AI Research group (MAIR), housed within the Department of Computer Science and Engineering (CSE) at Michigan State University (MSU), holds a distinguished position in the dynamic field of Artificial Intelligence (AI). With a rich history of innovation and a commitment to pushing the boundaries of technology, MAIR provides a hub of creativity and discovery in AI research. Led by a diverse team of renowned experts, MAIR's work spans a wide range of research domains, including biometrics, computer vision, data mining, natural language processing, and machine learning.
Ph.D. Student, Computer Science and Engineering
My research, conducted under the supervision of Dr. Mohammad M. Ghassemi at the Human Augmentation and Artificial Intelligence Laboratory (HAAIL), focuses on the reliability and calibration of Large Language Models (LLMs). While AI tools are increasingly used for complex tasks, they often struggle with knowing when they do not know, sometimes providing incorrect information with high confidence. My work involves developing methods to better estimate and calibrate this confidence, effectively creating tools that can identify when a model’s output is likely stable and correct versus when it is unreliable.
I apply these theoretical frameworks through active research collaborations with JPMorgan AI Research and Henry Ford Health. By integrating confidence estimation into clinical and financial models, my work addresses the specific risks of deploying AI in high-stakes domains where a single error can have significant consequences. This research aims to ensure that AI systems accurately account for uncertainty and defer to human expert judgment in situations where precision is critical.
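To make the idea of calibration concrete: one standard way to measure whether a model's stated confidence matches its actual accuracy is the Expected Calibration Error (ECE). The sketch below is a minimal, illustrative implementation (not code from this research); the example data is hypothetical and simply shows how an overconfident model yields a large ECE.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the weighted average of
    |accuracy - mean confidence| across the bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece

# Hypothetical predictions: high confidence but mixed correctness,
# so confidence overstates accuracy and ECE is large.
confs = [0.95, 0.9, 0.92, 0.6, 0.55]
correct = [1, 0, 1, 1, 0]
print(round(expected_calibration_error(confs, correct), 3))  # → 0.344
```

A well-calibrated model would drive this number toward zero: within each confidence bin, the fraction of correct answers would match the average confidence, which is exactly the property that lets a downstream system decide when to trust the model and when to defer to a human expert.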
In addition to my doctoral work at Michigan State University, I am set to begin an Applied AI Research Scientist Internship at Microsoft in Redmond during the summer of 2026.
Recent articles:
Foundational AI and Confidence Estimation:
Clinical and Financial AI Applications:
Learn more about Reza and his work on his personal website.
