
In 2000, science fiction writer Ted Chiang imagined a future where “the frontiers of scientific inquiry have moved beyond the comprehensibility of humans.” In a new article published today in Science, James Evans (Faculty Co-Director of Novel Intelligence; Max Palevsky Professor of Sociology & Data Science; and Director of the Knowledge Lab) and co-author Eamon Duede suggest we may be on the threshold of that future. 

Their piece, “After Science,” explores how AI is fundamentally transforming scientific inquiry. Since the Enlightenment, a key way we’ve measured understanding has been through our ability to predict and control phenomena, from protein structures to fusion reactions. But with the integration of AI in science, our ability as humans to understand nature appears set to be outpaced by our ability to instrumentally control nature.

What does that mean for the future of scientific inquiry? Tracing the central role of human curiosity and diversity in driving discovery to date, Evans and Duede describe three novel challenges the integration of AI poses and how we can navigate them.    

Three Novel Challenges

  • Curiosity: Whereas scientific discovery has often come from a human drive to explore and explain, algorithms have more often been designed as tools, built to complete a task with high efficiency. Evans and Duede suggest encoding computational curiosity into models to allow for the possibility of unexpected or serendipitous discovery, encouraging models to seek out anomalies and surprises rather than just confirming existing knowledge.
  • Diversity: Prioritizing efficiency risks creating a monoculture in which dominant approaches crowd out alternative methods still under development. Just as human science thrives on diverse perspectives and disagreement, AI-driven science must explicitly generate and maintain diverse approaches to avoid premature convergence.
  • Confabulation: AI “hallucinations” and confabulations could flood science with low-quality findings faster than we can verify them. Pointing to cautionary tales like the beta-amyloid hypothesis in Alzheimer’s research, where manipulated images went undetected for 15 years, Evans and Duede argue that science must now invest in automating quality control to keep pace with AI’s generative capacity.
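The curiosity idea above can be made concrete with a toy sketch. The following is purely illustrative and is not drawn from the Science article: the function names and the use of absolute prediction error as a "surprise" signal are assumptions made for this example. The point is only to show the inversion the authors describe, where a system selects its next experiment to maximize anomaly rather than to confirm what its model already predicts.

```python
# Illustrative sketch only: a toy "computational curiosity" heuristic.
# The surprise metric and all names here are assumptions for this
# example, not the authors' method.

def surprise(prediction: float, observation: float) -> float:
    """Absolute prediction error as a crude surprise signal."""
    return abs(prediction - observation)

def pick_next_experiment(candidates, model, observe):
    """Choose the candidate whose outcome most defies the model's
    prediction, i.e. maximize surprise instead of confirmation."""
    return max(candidates, key=lambda x: surprise(model(x), observe(x)))

# Toy setup: the model believes y = 2x, but reality departs from that
# for large x, so curiosity steers the search toward the anomaly.
model = lambda x: 2 * x
reality = lambda x: 2 * x if x < 5 else x ** 2

candidates = [1, 3, 7, 9]
print(pick_next_experiment(candidates, model, reality))  # -> 9
```

In practice, of course, observing an outcome is the expensive step, so real curiosity-driven systems estimate expected surprise before acting; this sketch elides that to keep the selection principle visible.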

A New Kind of Science

If we successfully navigate these challenges, Evans and Duede envision a future where we’re able to tackle long-standing problems and advance understanding in unprecedented ways. But human scientists will increasingly shift from directly understanding nature to understanding the AI systems that understand nature, like Chiang’s prescient “hermeneutics of AI.”

“Science ‘After Science’ should still push the boundaries of human understanding,” they conclude, “while also turning [to] one seeking richer self-understanding about what humanity values and how to reach past human limits and capacity to achieve it.”

At the Data Science Institute, Professor Evans, in partnership with Professor Chenhao Tan, leads the Novel Intelligence Research Initiative, which focuses on how human intelligence and artificial intelligence can best complement one another’s unique strengths while mitigating one another’s weaknesses. You can learn more about their work here.

Read the full article in Science.
