
Core Team
Alpay Hasanli
Agentic AI in Science
Master Student

About
Alpay Hasanli is a master’s student from the University of Twente completing his thesis at the Agentic Systems Lab at ETH Zurich. His work sits at the intersection of agentic AI, retrieval-augmented generation, and formal verification, with a particular focus on building systems that can support high-stakes reasoning tasks reliably. Before joining ASL, he worked across native iOS and Android development, distributed systems, and production-grade platform engineering, including Kubernetes-based infrastructure for e-commerce environments. He also developed a RAG-supported compliance analysis agent for ING Bank, where he translated advanced AI capabilities into a tool that could be demonstrated directly to business stakeholders. At ASL, he is now applying that product-oriented engineering mindset to one of the most important bottlenecks in science: ensuring that research claims are not only well written, but logically sound and verifiable.
Research Areas
Connect
Project
Verification-First AI for Scientific Peer Review
Scientific publishing is under growing pressure as journals face rising submission volumes and reviewers are asked to assess increasingly complex work under tight time constraints. This project tackles that challenge by developing a verification-first framework for AI-assisted peer review. Rather than focusing only on surface-level checks such as formatting, references, or style, the system is designed to evaluate whether a paper's core claims are logically consistent, internally supported, and free from contradictions. To do this, it combines language models with formal reasoning tools such as theorem provers, enabling a pipeline in which scientific statements can be structured, tested, and checked against the broader argument of the manuscript. The result is a new layer of review support aimed at improving rigor before a human referee even begins deep evaluation.

From a scientific perspective, the project explores an important frontier: how natural-language scientific arguments can be translated into representations that permit formal verification. This is interesting not only for peer review, but also for the broader question of whether AI systems can help make scientific reasoning itself more explicit, testable, and reproducible. By connecting LLM-based interpretation with symbolic reasoning, the work contributes to ongoing efforts in trustworthy AI, machine-assisted science, and computational epistemology. It also raises deeper methodological questions about how claims, assumptions, and evidence should be represented if science is to become more machine-interpretable without losing nuance.

Journals, publishers, and high-stakes R&D organizations all face the cost of flawed analysis, unsupported conclusions, and slow review cycles. A system that can flag weak reasoning early could reduce reviewer burden, improve publication quality, and lower the downstream risk of acting on unreliable findings.
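To make the pipeline idea concrete, here is a minimal, purely illustrative sketch of its final stage: checking whether a set of extracted claims is mutually consistent. All names and the example claims are hypothetical, and the hand-written propositional encoding stands in for the LLM-based translation and theorem-prover backend described above; a brute-force truth-table search replaces a real solver.

```python
from itertools import product

# Hypothetical claims extracted from a manuscript, each encoded by hand
# as a boolean function of two atomic propositions:
#   a = "method A outperforms the baseline"
#   r = "the reported effect replicates"
claims = [
    lambda a, r: a,             # C1: A outperforms the baseline
    lambda a, r: (not a) or r,  # C2: if A outperforms, the effect replicates
    lambda a, r: not r,         # C3: the effect does not replicate
]

def consistent(claims):
    """Return True if some truth assignment satisfies every claim."""
    return any(
        all(c(a, r) for c in claims)
        for a, r in product([True, False], repeat=2)
    )

print(consistent(claims))      # C1-C3 together are contradictory: False
print(consistent(claims[:2]))  # dropping C3 restores consistency: True
```

In the actual framework, the encoding step would be performed by a language model and the consistency check delegated to a theorem prover or SMT solver, but the core review signal is the same: a set of claims that admits no satisfying assignment contains an internal contradiction worth flagging to a referee.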
In sectors such as biotech, healthcare, and advanced engineering, where incorrect claims can have major financial and operational consequences, verification infrastructure of this kind could become an important part of the research workflow. Positioned correctly, this work points toward a highly valuable layer of scientific quality assurance that fits naturally into existing publishing and innovation pipelines.
Other team members
Students
Interested in collaborating?
We are always looking for talented students, researchers, and industry partners.