Discovery
How can AI agents search, hypothesize, and optimize across science, engineering, and infrastructure?
Inaugural ACM CAIS 2026 Workshop
Methods, evaluation, and deployment for AI agents used in real-world discovery.
AI agents are increasingly used to search over code, experiments, and designs to produce candidate discoveries. This workshop focuses on agents that operate beyond benchmarks — under expensive evaluations, noisy measurements, and real deployment constraints.
We invite work on discovery agents in science, engineering, and infrastructure, where validation is hard and human oversight matters.
How do you validate agent-driven discoveries when ground truth is limited and experiments are expensive?
What breaks when discovery agents move from benchmarks to real-world settings?
We welcome submissions on building, evaluating, and deploying AI agents that operate under real-world constraints. These areas are representative, not exhaustive.
Agents that explore design spaces, generate candidate solutions, and optimize over code, configurations, or experimental parameters — including evolutionary, iterative, and tool-augmented approaches.
AI agents applied to scheduling, capacity planning, database tuning, network architecture, cluster management, performance debugging, and other infrastructure operations.
Agents that assist with hypothesis generation, experiment design, proof construction, data analysis, or literature synthesis — across domains such as mathematics, natural sciences, and engineering.
Methods for validating agent behavior when ground truth is limited, experiments are expensive, feedback is noisy, or reliability requirements are strict.
Case studies from applied settings, including adoption challenges, failure modes, operational lessons, and what didn't work.
Workflows where domain experts direct, audit, or intervene in agent-driven processes — including trust calibration, delegation boundaries, and expert-in-the-loop design.
We invite submissions on AI agents for discovery in real-world settings. Deployment reports, case studies, and lessons from applied systems — including failures — are especially welcome.
Paper submission
We accept 4-page short papers and 9-page long papers in the official paper format.
Important dates · All deadlines follow the Anywhere on Earth (AoE) timezone.
Workshop: May 26, 2026 · San Jose, CA (subject to change)
Organizers are affiliated with UC Berkeley, Stanford, Databricks, and Bespoke Labs.
Bespoke Labs is a sponsor of this workshop.
For sponsorship inquiries or any workshop-related questions, contact cais26.ai.discovery@gmail.com.