Artificial Intelligence (AI) is beginning to measurably accelerate scientific discovery across materials science, the life sciences, and Earth systems. This workshop focuses on the principles, models, and methods that should form the foundation for AI for Science. We are interested in the next generation of scientific foundation models—systems that advance real scientific inquiries such as forecasting extreme climate events, accelerating materials discovery, and uncovering biological mechanisms—co-developed with domain experts and validated against real-world data, experiments, and downstream decision impact.
Rethinking foundation models for science. We argue that foundation models for science must be built differently from those for language and vision. Scientific data and workflows encode physical and causal structure, span space and time, and often combine heterogeneous modalities across multiple scales. This motivates models with the right inductive biases (invariances, constraints, and causal structure) for reliable generalization, and evaluations that prioritize mechanistic consistency, calibrated uncertainty quantification (UQ), and decision usefulness. Achieving this will require scientific priors, multi-modal architectures, and tight integration with classical scientific tools to build hybrid systems that are faster, more accurate, and more trustworthy than either component alone.
The workshop will surface principles, methods, and open problems unique to science, including enforcing conservation laws and symmetries; learning across vast spatial and temporal scales; representing extremes, rare events, and tipping points; calibrating and validating UQ; and developing evaluation protocols that reward mechanistic insight and actionable reliability. Our goal is to articulate a roadmap for building, training, and deploying scientific foundation models that accelerate discovery while respecting the structure of the natural world.
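To make one of these themes concrete, the minimal sketch below shows one common way a conservation law can enter training as a soft penalty on a learned field. It is an illustration only, not a prescribed method: the names (conservation_penalty, lambda_cons, etc.) are hypothetical, and hard constraints, equivariant architectures, or flux-form parameterizations are equally valid routes that the workshop welcomes.

```python
# Minimal sketch (illustrative only): a soft mass-conservation penalty added
# to a standard regression loss. All names here are hypothetical placeholders,
# not an API from the workshop or from any particular library.
import numpy as np

def conservation_penalty(pred_field, init_field, cell_volume=1.0):
    """Penalize drift in total 'mass' between the initial and predicted fields."""
    mass_pred = np.sum(pred_field) * cell_volume
    mass_init = np.sum(init_field) * cell_volume
    return (mass_pred - mass_init) ** 2

def total_loss(pred_field, target_field, init_field, lambda_cons=0.1):
    data_loss = np.mean((pred_field - target_field) ** 2)    # standard fit term
    cons_loss = conservation_penalty(pred_field, init_field)  # physics prior
    return data_loss + lambda_cons * cons_loss
```

The weight lambda_cons trades data fit against physical consistency; how to choose, validate, and report such trade-offs is itself one of the open problems listed above.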
There are three aims that shape the content and format of the workshop:
- Aim 1: Define the scientific questions that foundation models should address.
What classes of scientific problems, across the physical, life, and Earth sciences, are most amenable to foundation-model approaches? Which questions require learning from large, heterogeneous datasets, and where do current modeling or simulation pipelines fall short? We aim to clarify the scientific use cases that genuinely benefit from foundation models, beyond incremental performance improvements.
- Aim 2: Identify principles for building foundation models tailored to science.
What modeling assumptions, architectures, objectives, and training strategies are appropriate for scientific data that are physical, causal, spatiotemporal, and often sparse or biased? How should scientific priors, constraints, uncertainty quantification, and multi-modality be incorporated? This aim focuses on extracting common principles rather than prescribing a single paradigm.
- Aim 3: Bridge machine learning methods with scientific workflows and evaluation.
How can foundation models be integrated into real scientific workflows involving experiments, simulations, and decision-making? What evaluation protocols, validation strategies, and collaboration structures are needed to ensure scientific credibility, interpretability, and long-term impact? We aim to surface best practices and open challenges for deploying models in the loop of scientific discovery.
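As a small example of the kind of validation Aim 3 points to, the sketch below checks whether a model's Gaussian predictive intervals achieve their nominal coverage on held-out data. It is illustrative only: the arrays y_true, mu, and sigma are assumed to come from a hypothetical test set, and this is not a prescribed evaluation protocol.

```python
# Minimal sketch (illustrative only): empirical coverage of central prediction
# intervals as a basic calibration check for predictive uncertainty.
# y_true, mu, sigma are assumed NumPy arrays from a held-out test set.
import numpy as np
from scipy.stats import norm

def empirical_coverage(y_true, mu, sigma, level=0.9):
    """Fraction of observations inside the central `level` interval of N(mu, sigma^2)."""
    z = norm.ppf(0.5 + level / 2.0)            # interval half-width in std. deviations
    inside = np.abs(y_true - mu) <= z * sigma  # per-point indicator
    return inside.mean()

# A well-calibrated model should give coverage close to the nominal level,
# e.g. empirical_coverage(y, mu, sigma, level=0.9) close to 0.9 on held-out data.
```

Comparing empirical to nominal coverage across several levels yields a reliability curve; systematic over- or under-coverage signals miscalibrated uncertainty, which matters whenever predictions feed downstream scientific decisions.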
Scientific Domains. We invite submissions spanning a broad range of scientific domains, including (but not limited to) astrophysics and space science; biomedicine (e.g., proteins, biosequences, virtual screening); computational science (e.g., PDEs, forecasting); Earth science; materials science (e.g., batteries, chemical synthesis); quantum science (e.g., fusion and many-body systems); and small molecules. We also encourage applications-driven work in AI for Science and scientific machine learning (SciML), especially submissions grounded in real scientific workflows, data, and validation.
Short Paper Track. Following ICLR 2026 guidance, we will host a dedicated short paper track to encourage early-stage, high-potential ideas that may not yet be mature enough for full-length submissions. Papers in this track will receive a light but fair review focused on novelty, clarity, and potential impact. To ensure integrity and originality, AI-generated papers will not be permitted for this track. Selected short papers will be presented as posters or short talks to foster interactive discussion and community feedback.
Speakers / Panelists (A-Z by Last Name)
Steven Brunton
Professor of Mechanical Engineering, University of Washington
Aditi Krishnapriyan
Assistant Professor, Chemical Engineering and EECS, UC Berkeley
Michael Mahoney
Professor, University of California at Berkeley
Vice President, International Computer Science Institute (ICSI)
Group Lead, Machine Learning and Analytics Group, Lawrence Berkeley National Laboratory
Mahdi Soltanolkotabi
Director, USC Center on AI Foundations for Science (AIF4S)
Professor, Departments of Electrical and Computer Engineering, Computer Science, and Industrial and Systems Engineering, University of Southern California
Yuyang (Bernie) Wang
Principal Scientist, AWS AI
Rebecca Willett
Professor of Statistics and Computer Science, University of Chicago
Director of AI in the Data Science Institute, University of Chicago
Rose Yu
Associate Professor, University of California San Diego
Call for Papers
More submission details are available in the Guidance for CFP at FM4Science Workshop (ICLR 2026). OpenReview submission portal: https://openreview.net/group?id=ICLR.cc/2026/Workshop/FM4Science
- Abstract Submission Deadline: February 8, 2026
- Paper Submission Deadline: February 10, 2026
- Review Bidding Period: February 8 - February 11, 2026
- Review Deadline: February 24, 2026
- Acceptance/Rejection Notification Date: March 1, 2026
- Import Workshop Program and Accepted Papers to iclr.cc: March 11, 2026
- Camera-Ready Submission: April 1, 2026
- Workshop Date: April 26 or 27, 2026
Schedule
All times are in Rio de Janeiro Time (GMT-3).
| Rio de Janeiro Time (GMT-3) | Event |
|---|---|
| 8:55-9:00 | Opening Remarks |
| 9:00-9:40 | Invited Talk I |
| 9:45-10:25 | Invited Talk II |
| 10:25-10:55 | Poster/Break |
| 11:00-11:40 | Invited Talk III |
| 11:40-12:10 | Contributed Talks |
| 12:10-13:30 | Lunch |
| 13:30-14:10 | Invited Talk IV |
| 14:15-14:55 | Invited Talk V |
| 15:00-15:30 | Poster/Break |
| 15:35-16:15 | Invited Talk VI |
| 16:20-17:00 | Invited Talk VII |
| 17:00-17:30 | Contributed Talks |
| 17:30-17:35 | Closing Remarks |
Organizers
Wuyang Chen
Assistant Professor, Simon Fraser University
Ben Erichson
Senior Research Scientist, International Computer Science Institute (ICSI)
Research Scientist, Berkeley Lab
Yongji Wang
Research Scientist, Google DeepMind
Laurence Perreault-Levasseur
Associate Professor, University of Montreal
Bo Li
Associate Professor, University of Illinois at Urbana-Champaign
Damian Borth
Professor of Artificial Intelligence & Machine Learning, University of St. Gallen
Swarat Chaudhuri
Professor, The University of Texas at Austin