On October 22, 2025, over 850 people signed a statement calling for a prohibition on developing superintelligent AI. The signatories include Nobel laureates, AI godfathers Geoffrey Hinton and Yoshua Bengio, tech leaders like Steve Wozniak and Richard Branson, and even Prince Harry and Meghan Markle.
The statement, published by the Future of Life Institute, reads: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in."
It's a noble effort. Thoughtful. Well-intentioned. Signed by some of the smartest people on the planet.
And it's going to accomplish absolutely nothing.
Here's why: While 850 concerned citizens are signing petitions, OpenAI just raised $6.6 billion specifically to build AGI, Google DeepMind is throwing billions at the same goal, and Anthropic is racing to keep pace. Sam Altman isn't reading petitions. He's building superintelligence as fast as humanly possible, because whoever gets there first wins everything.
The people calling for a ban are not the people with the power to stop it. And the people with the power to stop it are the ones spending tens of billions to make it happen.
So yeah, your job's still fucked. Petition or no petition.
What The Statement Actually Says
The Future of Life Institute's statement is surprisingly direct. No corporate hedging, no vague calls for "responsible development." Just straight up: Stop building superintelligence until we know it's safe and the public agrees.
The core argument:
- Superintelligence would be the most powerful technology humanity has ever created
- We currently have no scientific consensus on how to make it safe or controllable
- Building it anyway is reckless and poses a potentially existential risk
- Development should be banned until safety is proven and the public consents
They're not saying "slow down a bit." They're saying "full stop until we figure this out."
The statement cites a recent survey showing only 5% of U.S. adults support "fast, unregulated" superintelligence development. Meanwhile, a majority believe superhuman AI shouldn't be created until proven safe or controllable.
So public opinion is clear: people don't want this. And the people building it don't care.
Who Signed It (And Why It Doesn't Matter)
The signatory list reads like a who's who of people who understand AI deeply and are genuinely concerned about where this is heading:
AI Pioneers: Geoffrey Hinton and Yoshua Bengio - literally the "godfathers" of modern AI. Hinton left Google specifically to speak freely about AI risks. Bengio has been sounding alarms for years. These are not random critics. These are the people who built the foundation of deep learning.
Computer Scientists: UC Berkeley's Stuart Russell and other leading researchers who've spent decades thinking about AI safety.
Tech Leaders: Steve Wozniak (Apple co-founder), Richard Branson (Virgin Group founder), and other executives who understand technology's impact.
Public Figures: Prince Harry and Meghan Markle, former Irish President Mary Robinson, Susan Rice, and bizarrely, Steve Bannon and Glenn Beck. (That last part is wild - when Steve Bannon and AI researchers agree on something, you know it's serious.)
Nobel Laureates and policymakers across multiple fields.
This isn't a fringe group. This is credible expertise across technology, policy, and public influence. And none of it matters because they're not the ones building superintelligence.
The people who ARE building it? Sam Altman, Demis Hassabis, Dario Amodei. None of them signed this petition. And they're not stopping.
The fundamental problem: This petition is asking people to voluntarily stop pursuing potentially trillions of dollars in value, strategic dominance, and the most powerful technology in human history. Out of concern for safety and public opinion. You see the issue here, right?
Why This Won't Stop Anything
Let's be brutally realistic about why this petition - despite its impressive signatory list - will accomplish nothing:
1. It's Not Legally Binding
This is a statement, not legislation. It carries zero enforcement power. OpenAI, Google, Anthropic, and every other AI lab can (and will) completely ignore it. There's no penalty for proceeding. There's no regulatory authority stopping them.
It's the equivalent of a strongly worded letter asking companies to please stop building the thing they've bet their entire existence on.
2. The Race Dynamics Make Stopping Impossible
Even if Sam Altman personally agreed with this petition and wanted to stop, he can't. Because if OpenAI stops, Google doesn't. If Google stops, Anthropic doesn't. If all the U.S. companies stop, Chinese labs don't.
This is classic prisoner's dilemma shit, straight out of game theory. The first-mover advantage of achieving superintelligence is so massive that no rational actor can afford to stop while competitors continue. Whoever builds it first potentially controls everything. That's not hyperbole. That's the actual strategic calculus.
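Here's that dilemma as a toy payoff matrix, a minimal sketch in Python. The payoff values are my own illustrative assumptions - nothing from the petition or the labs - and only their ordering matters. Given any plausible ordering, "keep racing" is each lab's best move no matter what the rival does:

```python
# A toy model of the race dynamic as a two-player game. The payoff
# numbers are illustrative assumptions, not real estimates; only their
# ordering matters, and that ordering makes "race" a dominant strategy.

# payoffs[(my_move, rival_move)] = my payoff
payoffs = {
    ("pause", "pause"): 3,  # coordinated slowdown: shared safety benefit
    ("pause", "race"):  0,  # I stop, the rival wins everything
    ("race",  "pause"): 5,  # the rival stops, I win everything
    ("race",  "race"):  1,  # all-out race: higher risk for everyone
}

for rival_move in ("pause", "race"):
    best = max(("pause", "race"), key=lambda my: payoffs[(my, rival_move)])
    print(f"If the rival plays {rival_move!r}, my best response is {best!r}")

# Prints:
#   If the rival plays 'pause', my best response is 'race'
#   If the rival plays 'race', my best response is 'race'
# "Race" wins either way, so (race, race) is the only equilibrium, even
# though (pause, pause) would leave both players better off. That gap is
# exactly why a petition with no enforcement can't move the equilibrium.
```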
3. The Financial Incentives Are Incomprehensibly Large
OpenAI just raised $6.6 billion at a $157 billion valuation specifically to build AGI. Google has spent tens of billions on DeepMind. Anthropic raised billions. Microsoft, Meta, Amazon - all pouring resources into this race.
We're talking about hundreds of billions of dollars already invested, with the promise of trillions in return. No petition is stopping that momentum. Not even one signed by AI pioneers.
4. Public Opinion Doesn't Control Private Companies
The survey the petition cites found that just 5% of Americans support fast, unregulated superintelligence development - meaning the other 95% either oppose it or aren't sure. Cool. And?
OpenAI is a private company. So is Anthropic. Google's parent company Alphabet answers to shareholders, not public opinion polls. Unless this concern translates into actual regulation with enforcement mechanisms, it's just noise.
And regulation? In the current U.S. political environment? Where tech companies have massive lobbying power and AI is seen as a strategic competition with China? Yeah, good luck with that.
What Would Actually Work (And Why It Won't Happen)
If you actually wanted to slow down or stop superintelligence development, here's what would need to happen:
International Treaty With Enforcement: All major nations would need to agree to ban superintelligence development, with verification mechanisms and severe penalties for violations. Think nuclear non-proliferation, but for AI.
Why it won't happen: The U.S. and China are in an AI arms race. Neither will voluntarily handicap themselves while the other continues. There's zero trust that verification would work. And the economic incentives are too massive.
Domestic Regulation With Teeth: U.S. Congress passes legislation banning superintelligence research, with criminal penalties for violations and aggressive enforcement by federal agencies.
Why it won't happen: Tech companies have spent hundreds of millions on lobbying. AI is framed as critical to national security and economic competitiveness. Any attempt to ban it would be fought with unlimited resources and portrayed as "letting China win."
Industry Self-Regulation: Major AI labs voluntarily agree to binding safety standards and independent oversight before proceeding to superintelligence.
Why it won't happen: We've seen this movie before with social media, crypto, and every other tech sector. "Self-regulation" means doing the minimum PR-friendly gestures while racing ahead anyway. The competitive pressure is too intense.
So what's left? Petitions. Statements. Public appeals to the better nature of tech CEOs who are financially incentivized to ignore them.
What This Means For You
The superintelligence ban petition is a clear signal that even AI pioneers - the people who built this technology - are deeply concerned about where it's heading. That should terrify you more than comfort you.
When Geoffrey Hinton and Yoshua Bengio are saying "we need to stop until we know this is safe," and the response from industry is "lol no we're raising more billions to go faster," that tells you everything about what's coming.
Here's the reality:
- Superintelligence development is not slowing down. It's accelerating. Every major AI lab is in an existential race to achieve it first.
- Safety concerns are secondary to competitive pressure. No company will voluntarily stop while competitors continue.
- Public opinion and expert warnings carry zero weight without regulatory enforcement. And that enforcement isn't coming.
- The timeline to AGI/superintelligence is measured in years, not decades. OpenAI's internal projections suggest 2027-2028. That's potentially 2-3 years away.
For your job prospects, this means:
If you're counting on regulation or industry restraint to slow automation, you're delusional. The people with the power to slow it down are the ones racing to build it faster. Every delay by one company is an advantage to competitors.
The petition is notable because it shows the divide: the people who understand AI deeply are calling for caution. The people building AI commercially are ignoring them.
Which group do you think determines your employment future? The concerned scientists signing petitions, or the CEOs raising billions to automate your job?
The Bottom Line
850+ credible, intelligent, accomplished people signed a statement calling for a ban on superintelligence development until safety is proven and public consent is secured.
It won't matter.
The competitive dynamics, financial incentives, and strategic implications make stopping impossible without enforcement mechanisms that don't exist and won't be created.
So while AI pioneers politely ask companies to please consider safety, those companies are spending tens of billions to race ahead as fast as possible. The petition is a historical document - proof that people tried to sound the alarm before the consequences hit.
It won't stop superintelligence from being built. It won't slow the automation of your job. It won't change the trajectory we're on.
But hey, at least when it all goes sideways, we'll have a nicely signed statement from 850 people who said "we warned you."
That'll be super comforting when you're competing with superintelligent AI for employment.
Original Source:
CNBC: Hundreds of public figures urge AI 'superintelligence' ban