About Us

The AI 2027 scenario is the first major release from the AI Futures Project. We’re a new nonprofit forecasting the future of AI. We created this website in collaboration with Lightcone Infrastructure.

Get in touch

If you’re doing work relevant to our scenario or are interested in collaborating, we’d love to hear from you. We run AGI tabletop exercises and plan to work on policy research aimed at helping scenarios like AI 2027 end well. Please reach out to info@ai-futures.org.

Our team

Daniel Kokotajlo, Executive Director: Daniel oversees our research and policy recommendations. He previously worked as a governance researcher at OpenAI on scenario planning. When he left OpenAI, he called on top AI companies to commit to greater transparency. In 2021, he wrote What 2026 Looks Like, an AI scenario forecast for 2022–2026 that has held up well. See also his Time100 AI 2024 profile.

Eli Lifland, Researcher: Eli works on scenario forecasting and specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He previously worked on Elicit, an AI-powered research assistant, and co-created TextAttack, a Python framework for adversarial examples in text. He ranks first on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen, Researcher: Thomas works on scenario forecasting and focuses on understanding the goals and real-world impacts of AI agents. He previously founded the Center for AI Policy, an AI safety advocacy organization, and worked on AI safety research at the Machine Intelligence Research Institute.

Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage. He is completing a computer science master’s degree at Harvard University with a focus on hardware and machine learning. He was previously an AI Policy Fellow at the Institute for AI Policy and Strategy.

Jonas Vollmer, COO: Jonas focuses on our communications and operations. Separately, he also helps manage Macroscopic Ventures, an AI venture fund and philanthropic foundation. He previously co-founded the Atlas Fellowship, a global talent program, and the Center on Long-Term Risk, an AI safety research non-profit.

Contributions

Daniel Kokotajlo, Eli Lifland, Thomas Larsen, and Romeo Dean wrote the scenario and its endings. AI 2027 was informed by more than a dozen tabletop exercises with hundreds of participants. Jonas Vollmer gave feedback throughout the process, helped run the exercises, and helped build the website.

Oliver Habryka, Rafe Kennedy, and Raymond Arnold from Lightcone Infrastructure built and designed the website. Scott Alexander volunteered to rewrite our content in an engaging style; the fun parts of the text are his and the boring parts are ours.

FutureSearch provided independent forecasting for our timelines, takeoff, and revenue research. Thanks to Tom Liptay, Finn Hambly, Sergio Abriola, and Tolga Bilge for providing forecasts.

Many thanks to the many people who gave feedback on earlier drafts of the scenario and website; thanks especially to Lisa Thiergart and David Abecassis for contributing ideas and feedback early on. David wrote an alternate branch to the scenario which you can see here. Thanks also to Nikola Jurkovic for co-authoring the timelines forecast, and Jason Hausenloy for helping with various parts of the process.

Over the course of writing this, we had hundreds of people review the text. Of those, some agreed to be acknowledged by name on our website. Thanks to:

Ada Lin, Adam Binksmith, Aidan O'Gara, Ajeya Cotra, Alice Schwarze, Alvin Ånestrand, Andreas Stuhlmüller, Anton Korinek, Aryan Bhatt, Ben Hayum, Ben Hoskin, Buck Shlegeris, Carl Shulman, Caroline Jeanmaire, Cheryl Luo, Coby Joseph, Daan Juijn, David Kasten, Evan R. Murphy, Gary Marcus, George Adamopoulos, Gretta Duleba, Harrison Durland, Helen Toner, Holden Karnofsky, Ilene and John Pachter, Jack Morris, Jacob Lagerros, James Campbell, Jeffrey Wu, Joe Carlsmith, John-Clark Levin, Jonathan Happel, Jonathan Mann, Joseph Rogero, JS Denain, Julian Hazell, Jun Shern Chan, Ken Lifland, Kendrea Beers, Kyle Scott, Laura King, Lowe Lundin, Lukas Finnveden, Lukas Gloor, Malo Bourgon, Matt Chessen, Matthew Kenney, Mauricio Baker, Miles Brundage, Nate Foss, Neel Nanda, Nicky Case, Nikola Jurkovic, Nuño Sempere, Ollie Stephenson, Oscar Delaney, Peter Hartree, Ramana Kumar, Richard Ngo, Rishi Gupta, Rose Hadshar, Rosie Campbell, Ryan Greenblatt, Sam Bowman, Samuel Hammond, Sebastian Schmidt, Siméon Campos, Sören Mindermann, Steve Newman, Steven Adler, Tolga Bilge, William MacAskill, Yoshua Bengio, Zershaaneh Qureshi, Zvi Mowshowitz.

We encourage you to debate and counter this scenario. We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. To incentivize this, we’re announcing the bets and bounties program:

  • If you find an error in our work we’ll pay you $100.

  • If you change our mind on an important forecast such that we would have written the scenario substantially differently, we'll pay you at least $250.

  • If you disagree about a forecast, we’d love to find a bet.

  • If you can write a high-quality alternative scenario, we’ll pay you $2,500. Example past alternative scenarios that would meet our bar include How AI Might Take Over in 2 Years, A History of The Future, and AI and Leviathan.

More information can be found here.

We’ve developed and run over 30 iterations of a tabletop exercise (TTX) simulating the development of AGI. We’ve found it helpful for informing our own thinking about how AGI might go, and so have many of our participants. Past participants include researchers at OpenAI, Anthropic, and Google DeepMind, congressional staffers, and journalists.

The TTX starts in April 2027 of our scenario. A US AGI company has just built a superhuman coder. China is hot on its tail and has managed to steal the AI’s weights, allowing it to run its own version. Inside each company, research is being significantly accelerated.

We start at this point because it’s the last period in our scenario that feels anywhere close to a robust prediction: it’s unlikely that there will be major government intervention before anyone builds an automated research engineer, it’s unlikely that security at any AGI company will become good enough to stop China in the near future, and the AGI race appears to be tightening. After April 2027, however, major governments will consider extreme moves.

Every TTX is different. Most iterations of the exercise finish with the development of superintelligence. Sometimes the governments are asleep at the wheel, ceding power to the leading company. Sometimes the AGI follows human intentions, sometimes it is misaligned. There are always cyberattacks, sometimes Taiwan gets invaded, and in a few cases there’s escalation to a full World War 3.

If you would like to run this exercise (or have us run it for you), please reach out to us (info@ai-futures.org). The exercise takes 3–5 hours and works for 10–15 people.