
Core Team
Maximilian May
Agentic AI in Education
PhD Researcher, HSG
About
Maximilian May is a PhD candidate at the University of St. Gallen researching how artificial intelligence can improve learning in higher education. His work sits at the intersection of digital learning, information systems, and agentic AI, with a particular focus on making student thinking more visible and feedback more effective at scale. Rather than treating AI as a shortcut for content generation, he explores how intelligent systems can deepen reasoning, strengthen engagement, and support meaningful learning outcomes in real educational settings. Drawing on field experiments and design science research, Maximilian builds and evaluates tools that are both scientifically rigorous and practically deployable.
Research Areas
Project
Scalable Oral AI for Higher Education Assessment
Higher education is entering a new assessment era. As generative AI becomes embedded in students’ everyday workflows, written submissions and conventional online exercises are becoming weaker signals of genuine understanding. Maximilian’s project develops an agentic formative assessment system that brings key strengths of oral examinations into a scalable digital format. Through interactive, oral exam-style dialogue, the system probes how students reason, surfaces misconceptions in real time, and provides targeted feedback that helps learners improve throughout a course. The aim is not only to assess answers, but to capture the quality of thought behind them.

Scientifically, the project addresses a timely and important question: how should AI-supported assessment systems be designed so that they are pedagogically sound, aligned with course goals, and capable of improving cognitive engagement? The research combines design science with experimental evaluation to iteratively develop an agentic oral examiner and test it in authentic higher-education settings. By examining both assessment quality and downstream learning outcomes, the work contributes new evidence on how conversational AI can support deeper conceptual understanding rather than superficial performance. It also advances broader research on trustworthy agentic systems in education.

From a practical perspective, the project responds to a major unmet need in universities and professional education: high-quality, individualized assessment that remains feasible at scale. Oral examinations are widely valued because they reveal reasoning, allow adaptive questioning, and mitigate the limitations of standardized testing, yet they are rarely deployable at scale due to staffing constraints. A robust AI-based alternative could offer institutions a new layer of continuous assessment, richer learning analytics, and more meaningful performance signals in AI-saturated environments. That creates clear relevance for educational platforms, universities, online programs, and workforce training providers seeking more reliable and engaging ways to evaluate learning.
Interested in collaborating?
We are always looking for talented students, researchers, and industry partners.