'Scary side': Tech figures are preparing for 'doomsday' after this Trump move

FILE PHOTO: U.S. President Donald Trump displays a signed executive order on gold card visa in the Oval Office at the White House in Washington, D.C., U.S., September 19, 2025. REUTERS/Ken Cedeno/File Photo
When President Donald Trump revoked the Biden administration’s 2023 executive order on AI safety, he removed a formal signal that the U.S. government would demand oversight of advanced AI models.
That move has only deepened anxiety among some technology leaders — including Mark Zuckerberg and OpenAI’s Ilya Sutskever — about the potential existential dangers of artificial intelligence.
In a recent article, BBC technology editor Zoe Kleinman examines how some of the most powerful figures in tech are quietly preparing for worst‑case scenarios, driven by mounting concerns that AI could one day outstrip human control.
Kleinman’s central claim is that, even as many tech leaders are racing to develop more advanced AI, they also harbor serious fears about how that technology could spin out of control. The paradox, she suggests, is that those with the power to push AI forward may also be among its greatest skeptics — or at least its most fearful guardians.
On Mark Zuckerberg’s 1,400‑acre Kauai property, she notes, he is building a “shelter” with its own energy and food supplies; workers on the project were reportedly bound by non‑disclosure agreements barring them from discussing it.
The author added: "Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat 'no'. The underground space spanning some 5,000 square feet is, he explained, 'just like a little shelter, it's like a basement.'"
She also cited reporting that, by mid-2023, OpenAI’s chief scientist Ilya Sutskever believed AI could soon reach “AGI” (artificial general intelligence).
At a meeting, he reportedly suggested building an underground shelter for top scientists before the launch of such a powerful system — a symbolic move to highlight the stakes of AI development.
She also emphasized that many in tech no longer see AI as merely a tool but as a possible threat to human control or dominance — a shift that has pushed warnings of “doomsday” scenarios from the fringe toward the mainstream.
Kleinman also pointed to the risks of Trump’s revocation of Biden’s AI safety order, which had required developers of powerful models to share safety test results with the federal government. By rescinding it, the new administration signaled a lighter regulatory approach.
Without strong guardrails, she argued, the drive for speed and competitive advantage may override caution.
The article also noted that tech billionaire Elon Musk recently endorsed the idea that AI will become so cheap and widespread that virtually anyone will want their “own personal R2-D2 and C-3PO” (a reference to the droids from Star Wars).
"There is a scary side, of course," Kleinman wrote.
"Could the tech be hijacked by terrorists and used as an enormous weapon, or what if it decides for itself that humanity is the cause of the world's problems and destroys us?"