Who we are

The Michigan AI Safety Initiative (MAISI) is a student organization at the University of Michigan. Our mission is to:

  • Build the AI safety community at the University of Michigan

  • Launch students into high-impact AI safety careers

  • Conduct research that reduces the risk of AI-related catastrophes

Will AI really cause a catastrophe?

Hopefully not! AI has tremendous potential for making the world a better place, especially as the technology continues to develop. We’re already seeing some beneficial applications of AI in healthcare, manufacturing, agriculture, accessibility, automotive safety, and earth science, to name just a few.

But as an incredibly powerful technology, AI also poses serious risks. At the very least, malicious actors could use AI to cause harm, for instance by building biological weapons, deploying hazardous malware, or empowering oppressive regimes.

Additionally, AI systems could become widespread and irreplaceable because of their potential for generating business revenue. Their actions and decisions would then exert significant influence over society, potentially in ways that conflict with human values. For instance, ubiquitous AI could have enduring negative effects on employment, human connection, and global stability. Some serious harms may already be arising through AI-powered social media feeds.

More speculatively, future AI systems could seek power over humans. AI is evolving rapidly, and we might see qualitatively different systems in the years ahead. Such systems may be able to form sophisticated plans and act autonomously to achieve their own goals. If that’s the case, they may try to acquire physical materials or resist shutdown attempts, as these strategies are useful for achieving a wide variety of goals. Highly capable AI systems might overcome human resistance to these efforts, similar to how modern chess machines defeat even the best human chess players.

These possibilities can sound like science fiction, and some AI practitioners are indeed skeptical. However, in one of the largest-ever surveys of AI researchers, well over half of respondents “considered extremely bad outcomes (e.g. human extinction) a nontrivial possibility (≥5% likely).” And in May 2023, hundreds of AI experts and notable figures signed a statement underscoring the severity of the risks: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Perhaps the biggest challenge is the breakneck pace of AI progress, which might accelerate as AI systems themselves become useful contributors to AI research, such as through writing code or designing hardware. Such rapid AI development could easily lead to rapid societal change—it’s even possible that future AI systems will cause “explosive” economic growth. Overall, as forthcoming AI breakthroughs create novel hazards, the world may have only a narrow timeframe to effectively intervene.

Introductory resources

The brief arguments above leave out many important considerations. For more details on how AI might cause a catastrophe, check out these readings:

A relevant textbook is Introduction to AI Safety, Ethics, and Society by Dan Hendrycks.

Finally, some thought-provoking books include The Alignment Problem by Brian Christian, Human Compatible by Stuart Russell, and Superintelligence by Nick Bostrom.