About Us

The AI 2027 scenario is the first major release from the AI Futures Project. We’re a new nonprofit forecasting the future of AI. We created this website in collaboration with Lightcone Infrastructure.

Get in touch

You can reach us at info@ai-futures.org. We look forward to hearing from you.

If you would like to run our tabletop exercise (or have us run it for you), please fill in this form.

If you would like to get involved in steering toward a positive AGI future, we've written a blog post on What you can do about AI 2027.

If you are interested in working with us, see our available positions.

We accept donations online or through major DAF providers (EIN 99-4320292, AI Futures Project Inc, previously named Artificial Intelligence Forecasting Inc).

Our team

Daniel Kokotajlo, Executive Director: Daniel oversees our research and policy recommendations. He previously worked as a governance researcher at OpenAI on scenario planning. When he left OpenAI, he called for greater transparency from top AI companies. In 2021, he wrote What 2026 Looks Like, an AI scenario forecast for 2022–2026 that held up well. See also his Time100 AI 2024 profile.

Eli Lifland, Researcher: Eli works on scenario forecasting and specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He previously worked on Elicit, an AI-powered research assistant, and co-created TextAttack, a Python framework for adversarial examples in text. He ranks first on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen, Researcher: Thomas works on scenario forecasting and focuses on understanding the goals and real-world impacts of AI agents. He previously founded the Center for AI Policy, an AI safety advocacy organization, and worked on AI safety research at the Machine Intelligence Research Institute.

Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage. He graduated cum laude from Harvard University with a concurrent master’s in computer science and a research focus in security and hardware. He was previously an AI Policy Fellow at the Institute for AI Policy and Strategy.

Lauren Mangla, Head of Special Projects: Lauren works on the AI 2027 tabletop exercises, communications, and hiring. She previously managed all major fellowships and events at Constellation, an AI safety research center. Before that, she served as the Executive Director of the Supervised Program for Alignment Research (SPAR), and held internships at NASA, the Department of Transportation, and in New York City policy.

Contributions

Daniel Kokotajlo, Eli Lifland, Thomas Larsen, and Romeo Dean wrote the content of the scenario and endings. AI 2027 was informed by experience from more than a dozen tabletop exercises with hundreds of different people. Jonas Vollmer gave feedback throughout this process, helped run the exercises, and helped build the website.

Oliver Habryka, Rafe Kennedy, and Raymond Arnold from Lightcone Infrastructure built and designed the website. Scott Alexander volunteered to rewrite our content in an engaging style; the fun parts of the text are his and the boring parts are ours.

FutureSearch provided independent forecasting for our timelines, takeoff, and revenue research. Thanks to Tom Liptay, Finn Hambly, Sergio Abriola, and Tolga Bilge for providing forecasts.

Many thanks to the many people who gave feedback on earlier drafts of the scenario and website; thanks especially to Lisa Thiergart and David Abecassis for contributing ideas and feedback early on. David wrote an alternate branch to the scenario which you can see here. Thanks also to Nikola Jurkovic for co-authoring the timelines forecast, and Jason Hausenloy for helping with various parts of the process.

Over the course of writing this, we had hundreds of people review the text. Of those, some agreed to be acknowledged specifically on our website. Thanks to all of the following:

Ada Lin, Adam Binksmith, Aidan O'Gara, Ajeya Cotra, Alice Schwarze, Alvin Ånestrand, Andreas Stuhlmüller, Anton Korinek, Aryan Bhatt, Augene Park, Ben Hayum, Ben Hoskin, Buck Shlegeris, Carl Shulman, Caroline Jeanmaire, Cheryl Luo, Coby Joseph, Daan Juijn, David Kasten, Evan R. Murphy, Gary Marcus, George Adamopoulos, Gretta Duleba, Harrison Durland, Helen Toner, Holden Karnofsky, Ilene and John Pachter, Jack Morris, Jacob Lagerros, James Campbell, Jeffrey Wu, Joe Carlsmith, John-Clark Levin, Jonathan Happel, Jonathan Mann, Joseph Rogero, JS Denain, Julian Hazell, Jun Shern Chan, Ken Lifland, Kendrea Beers, Kyle Scott, Laura King, Lukas Finnveden, Lukas Gloor, Malo Bourgon, Matt Chessen, Matthew Kenney, Mauricio Baker, Miles Brundage, Nate Foss, Neel Nanda, Nicky Case, Nikola Jurkovic, Nuño Sempere, Ollie Stephenson, Oscar Delaney, Peter Hartree, Ramana Kumar, Richard Ngo, Rishi Gupta, Rose Hadshar, Rosie Campbell, Ryan Greenblatt, Sam Bowman, Samuel Hammond, Sebastian Schmidt, Siméon Campos, Sören Mindermann, Steve Newman, Steven Adler, Thomas Woodside, Tim Fist, Tolga Bilge, William MacAskill, Yoshua Bengio, Zershaaneh Qureshi, Zvi Mowshowitz.

Update August 2025: We are no longer accepting bets and bounties submissions.

We encourage you to debate and counter this scenario. We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. To incentivize this, we’re announcing the bets and bounties program:

  • If you find an error in our work, we’ll pay you $100.

  • If you change our mind on an important forecast such that we would have written the scenario substantially differently, we'll pay you at least $250.

  • If you disagree about a forecast, we’d love to find a bet.

  • If you can write a high-quality alternative scenario, we’ll pay you $2,500. Example past alternative scenarios that would meet our bar include How AI Might Take Over in 2 Years, A History of The Future, and AI and Leviathan.

More information can be found here.

We’ve developed and run over 30 iterations of a tabletop exercise (TTX) to simulate the development of AGI. We’ve found it helpful for informing our own thinking about how AGI might go, and so have many of our participants. Past participants have included researchers at OpenAI, Anthropic, and Google DeepMind, congressional staffers, and journalists.

The TTX starts in April 2027 of our scenario. A US AGI company has just built a superhuman coder. China is hot on their tail and has managed to steal the AI’s weights, allowing them to run their own version. Research on both sides is being significantly accelerated within the two companies.

We start at this state because it’s the last time period in our scenario that feels anywhere close to a robust prediction. It’s unlikely that there will be major government intervention before anyone builds an automated research engineer, it’s unlikely that security at any AGI company will become good enough to stop China in the near future, and the AGI race appears to be tightening. But after April 2027, major governments will consider extreme moves.

Every TTX is different. Most iterations of the exercise finish with the development of superintelligence. Sometimes the governments are asleep at the wheel, ceding power to the leading company. Sometimes the AGI follows human intentions, sometimes it is misaligned. There are always cyberattacks, sometimes Taiwan gets invaded, and in a few cases there’s escalation to a full World War 3.

If you would like to run this exercise (or have us run it for you), please fill in this form. The exercise takes 4 hours and works for 8–14 people.

Changelog

Changes made to our scenarios and research since initial publication.


January 2026

January 27th

Correct range of AGI medians when we published AI 2027.
In footnote 3, change "Specifically, our medians ranged from 2028 to 2035" to "Specifically, our medians ranged from 2028 to 2032." The 2035 was based on a mistaken understanding of what one co-author's view was at the time of publication.

January 26th

Add Daniel's forecasts to the link in the foreword that displays our latest forecasts.
In the foreword, change "For our updated Dec 2025 views, see here" to "For our latest forecasts, see here". The URL change makes it so the page contains both Daniel's and Eli's forecasts, while previously it only contained Eli's.

January 21st

Add this changelog.
Adjust grammar in timelines clarification.
Change "Added Nov 22 2025: To prevent misunderstandings: we don't know" to "Added Nov 22 2025, to prevent misunderstandings: we don't know"

January 19th

Acknowledge OpenBrain net approval rating mistake.
Add the following hover text to the net approval number on the side panel: "Added Jan 2026: We say OpenBrain has -25% net approval in Apr 2025, but we now believe the net approval was more like +15%, so our estimates were too low." We originally used a question from this poll which asked about people's approval of OpenAI, but before asking that question respondents were told negative things about OpenAI's safety behavior, so this isn't an accurate representation of people's unanchored views. Around the same time, this less leading poll had OpenAI at +17% approval.

January 7th

Edit team info on About page.
Replace Jonas Vollmer with Lauren Mangla.

January 5th

Update timelines clarification in the foreword to link to the AI Futures Model.
Change "For more detail on our views, see here." to: "For our updated Dec 2025 views, see here." Also move footnote 3 to the end of the previous sentence instead of the end of this one. Edit the second sentence in foonote 3 to: "See here for more information about what we were/are confident about, and what we aren't." (and change "moved it" to "added a clarification" in the first sentence.)

December 2025

December 31st

Add links to AI Futures Model at the top of the timelines and takeoff forecasts.
Add to the beginning of both the timelines and takeoff forecasts: "2025 Dec 31 update: We've published a revamped timelines and takeoff model at aifuturesmodel.com."

December 19th

Change DOD contracting scale-up language.
Change the wording from "Department of Defense (DOD) quietly begins contracting OpenBrain..." to "Department of Defense (DOD) quietly but significantly begins scaling up contracting,"
Add DPA uncertainty footnote for compute consolidation.
Add a footnote for the claim "if necessary, the government could use the Defense Production Act (DPA) to take trailing companies’ datacenters and give them to OpenBrain" to also say: "We aren't legal experts ourselves, and policy people we talk to have been divided about the legality of using the DPA to consolidate compute: some think it would be fine, others think it wouldn't fly. Our opinion is that there's probably a way to make it 'work' IF the CEOs of the companies are cooperative, and maybe even if not. Importantly, (a) the executive branch can just do things and wait for the courts to catch up later, and (b) POTUS wields many sticks and many carrots which he can use against big tech companies, and he can use the combination of sticks and carrots to pressure their CEOs into cooperating and e.g. not contesting his orders in court. Reminder that we are making predictions here not recommendations."
Correct Gigafactory Shanghai size estimate.
Add a footnote for the claim “Gigafactory Shanghai has an area of 4.5M sq ft”: “The 4.5M figure was based on the initial phase 1 size, but Gigafactory Shanghai is now about twice as large. This means our estimate was off by a factor of 2, though we still think the overall conclusion holds.”
Update Best-of-N reference paper.
Change the paper linked in the section "This also includes techniques like Best of N on verifiable tasks, and then keeping the best trajectories” from this paper to this paper.
Expand and clarify weights exfiltration scenario and add acknowledgment footnote.
Change the text in this dropdown from: “We imagine the theft of the weights as a series of coordinated small smash and grab thefts (meaning fast but non-covert) … (from the first server compromise to full weights exfiltration) is complete in under two hours.” to “We imagine the theft of the weights as a series of coordinated small smash and grab thefts (meaning fast but non-covert) across a series of Nvidia NVL72 GB300 servers running copies of the Agent-2 weights. The servers are compromised using legitimate employee access (a friendly, coerced, or unwitting insider with admin credentials helping the CCP theft effort). Insider credentials grant the attacker admin-level permissions to the servers. Using a microarchitectural side channel, the attacker extracts encryption keys from an Nvidia Confidential Computing-enabled Virtual Machine, allowing them to intercept model weights as the VM is provisioned or updated. They initiate (or wait for) a routine update and exfiltrate the checkpoint in many small fragments, e.g., ~25 distinct servers each leaking ~4% of the model (~100 GB chunks for a ~2.5 TB half-precision checkpoint). The egress bandwidth of the entire datacenter is in the 100 GB/second range, so throttling to under ~1 GB/s per server avoids a major spike in network traffic; at that rate, each ~100 GB chunk can leave the datacenter in a couple of minutes. Live monitoring is either fooled by the attackers’ efforts to mask and split the transfers or outright disabled. The weights are then routed through various parallel channels and layers of IP masking to China to be decrypted locally with the stolen session key(s). The entire active part of the operation (from the first server compromise to full weights exfiltration) is complete in under two hours.” Add a footnote at the end of the first paragraph above that says “Thanks to Tjaden Hess for pointing out errors in an earlier version of this and thereby helping us improve the realism.”
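As a sanity check on the bandwidth arithmetic in the new dropdown text, here is a minimal calculation using only the figures quoted above; the script and variable names are illustrative additions of ours, not part of the scenario text:

```python
# Sanity-check of the exfiltration arithmetic quoted in the dropdown text above.
checkpoint_tb = 2.5            # ~2.5 TB half-precision checkpoint
num_servers = 25               # ~25 distinct servers, each leaking ~4% of the model
chunk_gb = checkpoint_tb * 1000 / num_servers    # ~100 GB per server
per_server_rate_gbs = 1.0      # throttled to under ~1 GB/s per server
datacenter_egress_gbs = 100.0  # datacenter egress bandwidth in the ~100 GB/s range

chunk_time_min = chunk_gb / per_server_rate_gbs / 60
egress_fraction = num_servers * per_server_rate_gbs / datacenter_egress_gbs

print(f"Chunk per server: {chunk_gb:.0f} GB")                   # 100 GB
print(f"Time per chunk at 1 GB/s: {chunk_time_min:.1f} min")    # ~1.7 min ("a couple of minutes")
print(f"Share of egress if all servers leak at once: {egress_fraction:.0%}")  # 25%
```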

December 16th

Fix labeling and totals in the compute forecast (Fig. 15).
- Label the unlabelled sliver in the 2027 part of Fig 15 as “Rest of China”.
- Change “Rest of Meta” in Fig 15 to 2M to match the bullets between the figures and Fig 16.
- Change “Rest of Google” in Fig 15 to 2M to match the bullets between the figures and Fig 16.
- Change Anthropic’s 2027 compute % in bullet #1 from 11% to 14%, and the xAI compute % bullet to “2% to 9%”, …
- With these changes, total compute in Fig 15 adds up to 99M, which is compatible (with rounding) with the table in Section 1 that says total H100e available in 2027 is 100M, not 107M.
Clarify chip performance vs. efficiency trend (footnote 14).
Add footnote 14 to the compute forecast clarifying the difference between the chip performance and chip efficiency trends (which happen to be the same). New footnote 14: "This can be calculated by downloading Epoch's historical data and dividing the FP16 and FP32 performance by the die area of the ML hardware data, and plotting the trend. Originally I (Romeo) had eyeballed the ~flat die size trend and chip performance trends and assumed it was also 1.35x/yr, and forgotten to look into it precisely. As of December 2025, thanks to Robi Rahman pointing this out in this X post I finally checked it out precisely, and found it was actually also exactly 1.35x/yr for the most relevant/longstanding precision formats. More explanation and graphs in this X reply."
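For readers curious how such a trend factor is computed, here is an illustrative sketch of the calculation the footnote describes (divide chip performance by die area, then fit an exponential trend); the data points below are invented for illustration, not Epoch's actual figures:

```python
# Illustrative trend fit: chip performance per die area vs. year.
# The numbers are made up to show the method; the real footnote uses Epoch's ML hardware data.
import numpy as np

years = np.array([2016, 2018, 2020, 2022, 2024])
perf_per_mm2 = np.array([0.12, 0.22, 0.40, 0.73, 1.33])  # hypothetical TFLOP/s per mm^2

# Fit log(perf/area) = a * year + b; the annual growth factor is exp(a).
a, b = np.polyfit(years, np.log(perf_per_mm2), 1)
print(f"Annual growth factor: {np.exp(a):.2f}x/yr")  # ~1.35x/yr for these illustrative numbers
```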

December 12th

Fix typo in timelines expandable.
In "Why we forecast a superhuman coder in early 2027" expandable, add the missing space in "developed(added Dec 2025..."
Fix time horizon trend graph in timelines expandable.
Change the time horizon trend graph in the "Why we forecast a superhuman coder in early 2027" expandable in accordance with the issues discussed in our response to titotal's critique. We also added an explanation: "(added Dec 2025: we've updated the below graph due to a mistake in how the original curve was generated, to add an actual trajectory from our timelines model. We've also added trajectories for Daniel and Eli's all-things-considered SC medians at the time of publishing (Apr 2025). And we've added some new METR data points to the graph, but haven't updated the model trajectories based on them.)" We also added a note clarifying all-things-considered views: "(added Dec 2025: though as noted in the timelines forecast, adjusting for outside of model factors gave us slightly longer medians, e.g. Eli's was 2030)."

December 4th

Timelines forecast revisions based on titotal's critique.
Make the changes described in our response to titotal's critique.

November 2025

November 22nd

Add clarification about our timelines to the foreword.
Add this paragraph to the foreword: ("Added Nov 22 2025: To prevent misunderstandings: we don't know exactly when AGI will be built. 2027 was our modal (most likely) year at the time of publication, our medians were somewhat longer. For more detail on our views, see here.)" And add footnote 3, attached to the end of the sentence: "Specifically, our medians ranged from 2028 to 2035. When AI 2027 was first published we explained this in Footnote 1 as above, but to make our views more clear we have moved it to the foreword text. We are working on a website to track and explain our all-things-considered views on AI timelines as they update over time; we'll link to it here when it's ready." We removed the footnote to the AI 2027 title given that we added this paragraph and footnote.

September 2025

September 7th

Add commas to timelines forecast May update summary.
Add the commas in "2025 May 7 update: Eli has, based on feedback, made..."

August 2025

August 23rd

Stop accepting bets and bounties submissions.
Add to About page: "Update August 2025: We are no longer accepting bets and bounties submissions."

August 22nd

Fix typo in the question of a Twitter poll.
Change 2x to 10x in: "From an informal Twitter poll re: increased speed from 2x compute..."

August 18th

Update Romeo Dean's bio on the About page.

August 5th

Add a link to a YouTube video depicting AI 2027.
Add "Watch" link above the scenario text, linking to this YouTube video.

July 2025

July 28th

Add footnote to the "AI 2027" title about our timelines.
Add the following footnote at the top right of "AI 2027" with the same text as the original Footnote 1: "We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat longer than what this scenario depicts. This scenario depicts something like our mode. See our timelines forecast for more details."

July 7th

Consistently abbreviate the United States of America as "U.S.".
Previously we were inconsistent between "US" and "U.S."
Fix typo in footnote 56.
Change "...even if some elites wield more much power than other people." to "...even if some elites wield much more power than other people."
Add link to "What you can do about AI 2027" blog post on the About page.
Replace the paragraph on getting involved with: "If you would like to get involved in steering toward a positive AGI future, we've written a blog post on What you can do about AI 2027."

July 2nd

Update timelines expandable graph.
Further change the text at the bottom of the graph to say: "Forecast: Doublings may get faster due to fewer new skills being needed at higher timescales, and automation of AI R&D. The green curve is essentially a simplified version of the full AI 2027 timelines model. Upon AI 2027 release, our full model did not "backcast" previous data points as well as this curve. As of Jul 2025, we're working on updates."

July 1st

Timelines expandable updates.
Changes to the time horizon trend graph in the "Why we forecast a superhuman coder in early 2027" expandable:
- Fix Claude 3.7 Sonnet's time horizon.
- Add the METR trendline.
- Clarify that the time horizon is for 80% success rate at the top of the graph.
- Edit the bottom text to say "Forecast: Doublings may get faster due to fewer new skills being needed at higher timescales, and automation of AI R&D. The trend is essentially a simplified version of the full AI 2027 timelines model. (Added Jul 2025: Upon AI 2027 release, our full model did not "backcast" previous data points as well as this curve. We're working on updates.)"
Add a paragraph about the timelines forecast May update: "Added Jul 2025: We've made some updates to the forecast which push the median back 1.5 years while maintaining SC in 2027 as a serious possibility. We're working on further updates."

June 2025

June 27th

Timelines expandable clarification.
Change: "Such is the capability progression in our scenario:" to: "Such is roughly the capability progression in AI 2027. Here is a capability trajectory generated by a simplified version of our timelines model:"

June 23rd

Fix GPT/Agent-1 dots figure.
Change FLOPS->FLOP, and make the number of dots for Agent-1 more precise.

June 5th

Add limitations of the SC progress multiplier estimate in the takeoff forecast.
Add limitations of the SC progress multiplier analysis, starting with "A few limitations of this analysis:", as follows:
- We don’t take into account that the superhuman coder would also help some with experiment selection, which points toward a higher value.
- An extension of the model used here gives an implausibly high progress multiplier when used for the SAR below, which points toward a lower value.
Also remove the "to be slightly conservative" from: "We’re going to forecast 5x to be slightly conservative. We reiterate that this is just a guess and that it could be substantially faster or slower in reality."

May 2025

May 28th

Fix takeoff SAR progress multiplier estimate.
Improve aggregation of researchers’ perspectives from a survey that we used to inform a SAR's progress multiplier (using log-mean rather than median) in our takeoff forecast. This changes the median estimate for "Method 3" from 35 to 24.
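Here, a log-mean means exponentiating the mean of the log estimates (i.e., a geometric mean), which is less sensitive to extreme high responses than an arithmetic mean and generally differs from the median. A minimal illustration with made-up survey responses (not the actual survey data):

```python
# Illustration of median vs. log-mean (geometric mean) aggregation of multiplier estimates.
# The responses below are invented; they are not the survey data used in the takeoff forecast.
import numpy as np

estimates = np.array([3.0, 8.0, 15.0, 35.0, 60.0, 120.0, 400.0])

median = np.median(estimates)
log_mean = np.exp(np.mean(np.log(estimates)))  # geometric mean

print(f"Median:   {median:.0f}")    # 35 for these illustrative numbers
print(f"Log-mean: {log_mean:.0f}")  # ~32 for these illustrative numbers
```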
Link to Eli's analysis of algorithmic progress rather than Epoch's.
Change the link in: "Here we are only referring to (2), improved algorithms, which makes up about half of current AI progress." to this new blog post.
Timelines forecast revisions (May update).
Add our May timelines update to the timelines forecast appendix.

May 18th

Reduce the number of Superhuman AI Researchers in Apr 2028 in the slowdown ending.
Change 1 million to "almost half a million", correcting an arithmetic error.

May 6th

Add donation info to the About page.
Add: "We accept donations online or through major DAF providers (EIN 99-4320292, AI Futures Project Inc, previously named Artificial Intelligence Forecasting Inc)."

May 3rd

Show all of a timelines footnote which was getting cut off.
The footnote starting with "Recall that the time horizon..."

April 2025

April 27th

Add link to Forethought's report on AI-enabled coups.
Edited to: "More analysis on this risk is available in this report."

April 23rd

Adjust power requirement projections slightly upward.
Round up Dec 2024 projections and increase Dec 2027 projections for global AI power from 39->60 GW, US power from 31.5->50 GW, and leading US company from 5.4->10 GW.
Clarify the definition of Superhuman AI Researcher in the takeoff forecast.
In particular, specify how much diversity of expertise is required.

April 22nd

Slightly increase projected AI company compute costs in the compute forecast.
Increase 2024 to 2027 projections by $4-20B.

April 21st

Add hiring interest form to the About page.

April 18th

Adjust 2024 compute shares of DeepCent and the rest of China in the compute forecast.

April 14th

Add links for getting involved to the About page.
Add: "If you would like to run our tabletop exercise (or have us run it for you), please fill in this form. If you would like to get involved, we recommend this AI Governance Course, the Horizon Fellowship, RAND’s Technology and Security Policy Fellowship, or the MATS Program."

April 10th

Lower some of the coding and hacking ability side panel estimates and raise some of the AI importance estimates.
Lower the coding and hacking abilities by between 0.03 and 0.24 in each of 10 time periods. Raise the % of Americans who say AI is the most important issue to be very high near the end of the scenario.
Remove old FutureSearch estimate from timelines forecast.
Remove "FutureSearch estimate of gap size: 18.3 months [1.7, 58]." as an estimate for "Other task difficulty gaps."
Remove a timelines forecast line about the next section that was no longer true due to last minute revisions.
Remove "The detailed reasoning for each gap estimate can be found in the following sections."
Clarify a timelines forecast footnote and fix display of another.
Changed "A full SC needs to do this faster and cheaper as well, but this will be discussed later." to "A full SC needs to do this faster and cheaper as well, but this will be accounted for later on in the time horizon extension method." and completed the "Recall that the time horizon..." footnote that was cut off.

April 9th

Fix timelines forecast headings.
Fix the display level of timelines forecast section headings.

April 8th

Improve explanation of why leading companies haven’t yet implemented neuralese.
Edit and slightly expand the paragraph starting with "To our knowledge, leading AI companies such as Meta, Google DeepMind, OpenAI, and Anthropic have not yet actually implemented this idea in their frontier models."

April 7th

Slightly raise forecasts for % saying AI is the most important issue.
Raise it by 2-3% at each of 5 time periods.
Refer to Xi as The General Secretary.
For consistency with our convention of not naming politicians.
Correctly display ">" sign in the timelines forecast.
Increase Eli’s all-things-considered timelines forecast 90th percentile.
Change Eli’s all-things-considered 90th percentile from 2050 to >2050.

April 4th

Remove link from takeoff forecast that didn't point anywhere.
Add a link to the scenario at the bottom of the summary.
Minor changes to figures in compute forecast.

April 3rd

Remove various captions from the timelines forecast.
Update GPT dots figure.
Fix the proportion of dots/boxes to correspond correctly to the FLOP amounts.
Fix typo: Change Agent-3 to Agent-4 in footnote 83.
Add "Change our mind" to the bets and bounties summary on the About page.
Add "If you change our mind on an important forecast such that we would have written the scenario substantially differently, we'll pay you at lesat $250."
Change FLOPS to FLOP/s in the R200 label in the compute forecast.
Clarify in the takeoff forecast figure caption that we assume no increases in training compute.
This change is for the first figure in the forecast, summarizing the results.
Delete redundant footnote.
Delete footnote 34, which was: "In fact, 5% of their staff is on the security team, but they are still mostly blocked from implementing policies that could slow down the research progress. See our Security Supplement for details." and was attached to the end of the sentence: "They are working hard to protect their weights and secrets from insider threats and top cybercrime syndicates (SL3),[^33] but defense against nation states (SL4&5) is barely on the horizon."
Make minor timelines forecast clarifications and fix typos.
Also give a more accurate description of the RE-Bench scores, in particular adding: "We focus on a subset of 5 of the 7 RE-Bench tasks due to issues with scoring in the remaining two, and will refer to this subset as “RE-Bench” in the rest of this report. In particular, we exclude Scaling Law Experiment because it’s easy enough for models to succeed at by luck that it’s not appropriate for Best-of-K scaffolding, and we exclude Restricted MLM Architecture because Claude 3.7 Sonnet reliably cheats at this task and METR has not yet been able to prompt the model to attempt the task without cheating."
Fix minor typos in the AI goals forecast.
Remove a “CITE” marker from "Plato (CITE the Republic)", changing it to 'Plato's "The Republic"'. Also finish an unfinished sentence: "Hypothesis #2 is specifically about the more radical possibility that Agent-3 will side with the developer intentions even in cases where they conflict with the Spec."
Make the RE-Bench graph in the timelines forecast more descriptive.
Most importantly, say that we exclude Scaling Law Experiment and Restricted MLM Architecture due to issues with scoring.
Fix paper cited regarding steering vectors in footnote 24.
Change "research with steering vectors" to "research with steering vectors".