Gabor Hollbeck

Core Team

Agentic AI in Science

Master's Student

About

Gabor Hollbeck is a researcher working at the intersection of AI, complexity science, and computational social science. His work explores how intelligent systems shape reasoning, coordination, and decision-making in high-stakes scientific and civic contexts. Across projects in scientific infrastructure, democratic technology, and multi-agent systems, he studies how AI can make complex institutions more legible, trustworthy, and effective.

He combines technical research with institution-building. As the founder of OpenDemocracy and the creator of Diplomatica.ai, Gabor has developed AI-enabled systems for civic participation, dialogue, and public reasoning.

His broader research agenda spans scientific knowledge management, democratic resilience, agent interaction in strategic environments, and AI safety. Rather than treating AI as a narrow optimization tool, he is interested in how it can strengthen epistemic systems: the processes through which societies, institutions, and researchers form reliable beliefs and act on them.

Project 1

ScienceOS: Infrastructure for AI-Native Research

ScienceOS is a modular operating layer for modern scientific work: a suite of AI-native tools designed to support literature review, citation verification, peer review, research organization, and technical writing. The project addresses a growing mismatch between the increasing speed of scientific production and the limited tools available for tracking claims, checking evidence, and managing knowledge across complex research workflows. Rather than automating science away from researchers, ScienceOS aims to make scientific reasoning more navigable, auditable, and coherent. Planned components include AI-assisted literature synthesis, citation and claim verification, robustness checks for automated research agents, improved paper and reference management, and writing support tailored to technical research environments.

Scientifically, ScienceOS is interesting because it turns the research process itself into an object of investigation. It creates a practical framework for studying AI-assisted epistemology: how claims are validated, how evidence is synthesized, where citation drift and hallucinations emerge, and how human oversight can remain meaningful in increasingly agentic workflows. The project also contributes to meta-science by enabling research on reproducibility, interpretability, and trustworthy human-AI collaboration. In this sense, ScienceOS is not only infrastructure for science, but also a scientific instrument for understanding the future of scientific work.

As scientific publishing, R&D, and machine-assisted knowledge production scale, institutions increasingly need systems that improve trust, provenance, and verification rather than merely generating more text. ScienceOS is well positioned to support research labs, publishers, universities, and knowledge-intensive industries that depend on reliable evidence workflows. By providing an interoperable layer for verification, review, and scientific knowledge management, it offers a path toward becoming foundational infrastructure for AI-enabled research environments.
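The ScienceOS pipeline itself is not described in detail here, but the citation- and claim-verification component can be illustrated with a minimal sketch. The function names (`support_score`, `flag_unsupported`) and the lexical-overlap heuristic are assumptions for illustration only; a production system would use retrieval plus an entailment model, but the interface shape would be similar.

```python
import re


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's tokens that also appear in the cited source.

    A crude lexical-overlap proxy for 'does this source plausibly support
    this claim' -- enough to surface obvious citation drift for human review.
    """
    claim_toks = _tokens(claim)
    if not claim_toks:
        return 0.0
    return len(claim_toks & _tokens(source)) / len(claim_toks)


def flag_unsupported(claims: list[tuple[str, str]], threshold: float = 0.5) -> list[str]:
    """Return the claims whose cited source falls below the overlap threshold."""
    return [claim for claim, source in claims if support_score(claim, source) < threshold]
```

The key design point is that the checker flags claims for human attention rather than auto-correcting them, keeping oversight meaningful in the workflow.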

Project 2

OpenDemocracy: AI for Democratic Reasoning

OpenDemocracy is a civic technology initiative building the next generation of democratic infrastructure. The project began with AI-powered voting advice systems deployed across multiple elections and has expanded into a broader platform for political transparency, civic participation, and public reasoning. Its tools are designed to help citizens navigate political information more clearly by analyzing political texts, matching users with party positions, explaining issues conversationally, and making political content more accessible across languages and contexts. The long-term vision is to develop open, verifiable systems that support collective reasoning in democratic societies under conditions of technological acceleration.

From a scientific perspective, OpenDemocracy serves as a live research environment for studying how AI reshapes political systems and public discourse. It opens questions around bias in language models, reliability in AI-generated political information, retrieval-grounded reasoning in public contexts, and the design of pluralistic, auditable AI systems for institutional use. The project also creates a valuable empirical setting for understanding how AI affects collective sensemaking, whether it helps clarify disagreement or deepens fragmentation. This places it at the intersection of AI safety, political epistemology, and institutional design.

Governments, NGOs, educational institutions, civic organizations, and media platforms increasingly need robust tools for policy explanation, election guidance, multilingual information access, and public consultation. Because OpenDemocracy is modular and localizable, it can be adapted across jurisdictions and organizational settings. Its value lies in making political information more usable, transparent, and actionable, creating a compelling foundation for institutional partnerships, public-interest deployments, and tailored applied systems for governance environments.

Project 3

Multi-Agent Equilibria in Strategic AI Systems

This research investigates how agentic AI systems behave when many actors interact strategically inside simulated environments. The project focuses on emergent phenomena such as cooperation, collusion, coordination failure, market dynamics, and equilibrium formation under incomplete information and shifting incentives. Example environments include repeated games, business simulations, supply-chain coordination settings such as the MIT Beer Game, and adversarial benchmarks where agents interact in competitive or institution-like settings. The goal is to understand not only whether AI agents converge toward equilibria, but also what kinds of equilibria they produce and when individually rational behavior generates harmful system-level outcomes.

Scientifically, this work contributes to AI safety, game theory, complexity science, and computational social science. It provides a framework for examining how language-model agents negotiate, defect, cooperate, exploit, and adapt under different rules and informational constraints. The project is especially valuable because it bridges formal theoretical intuitions with empirical simulation, testing where classical equilibrium concepts remain useful and where agentic AI introduces qualitatively new dynamics. As multi-agent AI systems become more capable, understanding ecosystem-level behavior becomes as important as understanding any single model in isolation.

As companies experiment with autonomous agents in business operations, markets, logistics, and digital services, the ability to evaluate collective agent behavior becomes essential. This research helps identify when agent ecosystems remain stable, efficient, and aligned with institutional goals, and when they drift toward fragile or undesirable outcomes. Such evaluation capacity is highly relevant for organizations deploying multi-agent workflows in operational settings, especially where reliability, incentives, and governance matter at scale.
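The simplest environment of the kind described above is a repeated prisoner's dilemma. The sketch below (with conventional payoffs; the strategy and function names are illustrative, and real experiments would substitute language-model agents for the hand-coded strategies) shows how individually rational defection produces a worse joint outcome than sustained cooperation.

```python
from typing import Callable

# Row player's payoff for (my_move, their_move), with 'C' = cooperate,
# 'D' = defect: mutual cooperation beats mutual defection, but unilateral
# defection pays best -- the classic prisoner's dilemma structure.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A strategy sees the opponent's move history and returns the next move.
Strategy = Callable[[list[str]], str]


def tit_for_tat(opp_history: list[str]) -> str:
    """Cooperate first, then mirror the opponent's previous move."""
    return opp_history[-1] if opp_history else "C"


def always_defect(opp_history: list[str]) -> str:
    """Defect unconditionally."""
    return "D"


def play(a: Strategy, b: Strategy, rounds: int) -> tuple[int, int]:
    """Run a repeated game and return each player's total payoff."""
    hist_a: list[str] = []
    hist_b: list[str] = []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Over ten rounds, two tit-for-tat players earn 30 points each, while tit-for-tat against an unconditional defector collapses into mutual defection after the first round, leaving both players worse off than the cooperating pair. This is the system-level question the project scales up: which equilibria agent populations actually reach, not just which ones exist.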

Other team members

Students

Interested in collaborating?

We are always looking for talented students, researchers and industry partners.

Get in Touch