The Attacks: Molotov Cocktail and Gunfire in 72 Hours
At approximately 3:45 AM on Friday, April 10, 2026, someone threw a Molotov cocktail at OpenAI CEO Sam Altman's home in San Francisco's Russian Hill neighborhood. The device struck an exterior gate, and security personnel stationed at the property extinguished the fire. Surveillance cameras captured the incident.
Altman later confirmed the attack in a personal blog post: "It bounced off the house and no one got hurt." The suspect then traveled to OpenAI's headquarters in the Mission Bay district, where he threatened to burn down the building. San Francisco police arrested him at the scene.
Two days later, on Sunday, April 12, gunshots were fired toward Altman's home, the second attack in just 72 hours. San Francisco police arrested two people in connection with the gunfire, a separate incident from the Molotov cocktail attack.
This marked the first time a major tech CEO had been the target of physical violence in the AI era — a grim escalation from heated online debates to real-world danger.
The Suspect: Daniel Alejandro Moreno-Gama
The Molotov cocktail suspect was identified as Daniel Alejandro Moreno-Gama, a 20-year-old from Spring, Texas. He was booked into San Francisco County Jail on Friday afternoon and faces charges of attempted murder (two counts — for Altman and for the security guard on-site), arson, and possession/manufacture of an incendiary device.
Moreno-Gama was carrying a manifesto when arrested and had traveled from Texas to San Francisco specifically to target Altman. He was reportedly driven by AI extinction fears — the belief that artificial intelligence poses an existential threat to humanity.
Investigations revealed he had been active on PauseAI's Discord server, where he posted 34 messages over approximately two years. PauseAI immediately condemned the attack, banned his account, and cooperated with authorities. One of his messages had previously been flagged as "ambiguous" and earned a warning from moderators.
The FBI subsequently raided a home in Spring, Texas linked to the suspect, seizing electronic devices and other evidence.
The New Yorker Exposé: Can He Be Trusted?
Just days before the attacks, The New Yorker published a bombshell investigative piece titled "Sam Altman May Control Our Future — Can He Be Trusted?" by Ronan Farrow and Andrew Marantz.
The article was the product of roughly 18 months of investigation and interviews with more than 200 people who had firsthand knowledge of Altman and OpenAI.
"A relentless will to power" — Most sources described Altman as having "a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart."
"Sociopathic lack of concern" — An anonymous board member was quoted saying Altman combines "a strong desire to please people" with "a sociopathic lack of concern for the consequences that may come from deceiving someone."
"Pathological liar" — Multiple employees reportedly called Altman a "pathological liar."
Dario Amodei's documents — The Anthropic co-founder and former OpenAI VP of Research had compiled 200+ pages of documents detailing problems at OpenAI before leaving. He said: "The problem with OpenAI is Sam himself."
Documented deceptions — High-level employees provided documents detailing an "accumulation of alleged deceptions and manipulations," including offering the same job to two different people.
Safety concerns — One internal message warned that OpenAI's culture "does not create an environment conducive to the creation of a safe Artificial General Intelligence."
Altman's Midnight Blog Post
On Friday evening, hours after the attack and the New Yorker piece's wider circulation, Altman published a personal blog post. He described writing it while "awake in the middle of the night and pissed."
He shared a family photo, writing that he was "hoping it might dissuade the next person from throwing a Molotov cocktail at our house." He initially called the New Yorker piece "incendiary" — later walking it back as a "bad word choice after a tough day."
Key admissions:
- "I am a flawed person in the center of an exceptionally complex situation"
- Admitted being "conflict-averse," which "caused great pain for me and OpenAI"
- "Not proud of handling myself badly in a conflict with our previous board"
- "I am sorry to people I've hurt and wish I had learned more faster"
He compared AI to The Lord of the Rings, citing a "ring of power dynamic that makes people do crazy things." AGI isn't the ring itself, but "the totalizing philosophy of being the one to control AGI" is what corrupts. His solution: "power cannot be concentrated."
He also acknowledged that "the fear and anxiety about AI is justified" and called for "a society-wide response to be resilient to new threats."
The Bigger Picture: Why This Matters
These events unfolded at arguably the most critical moment in AI history:
OpenAI's staggering valuation — Approximately $852 billion, making it one of the most valuable companies in history. The person running it is being called "sociopathic" by former colleagues.
AI capabilities are accelerating — Claude's Mythos and Glasswing models found thousands of zero-day vulnerabilities. OpenAI announced SPUD. AI models beat humans at complex desktop tasks.
Massive capital flows — An estimated $242 billion in AI VC funding in Q1 2026 alone.
AI anxiety is becoming physical — The attacks represent a dangerous escalation from online discourse to real-world violence.
The safety movement faces a reckoning — PauseAI condemned the attack, but the incident raises hard questions about whether existential-risk rhetoric can radicalize its audience. Violence undermines the movement's legitimate policy message.
The trust question — Who should control the most powerful technology ever created? If the person leading the world's most advanced AI company is described as having a "sociopathic lack of concern" by those closest to him, what does that mean for the rest of us?
What Developers Should Take Away
Trust but verify — Whether it's AI tools degrading silently or leadership being questioned, developers need independent verification. Don't trust any single provider blindly; see the canary sketch after this list.
Don't build critical infrastructure on a single provider — Use multiple AI providers; open-source alternatives like GLM-5.1, Gemma 4, and Llama give you options. The failover sketch after this list shows the basic pattern.
The 'ring of power' applies to developers too — Don't let one AI tool or one company own your entire workflow. Maintain the ability to switch providers.
Safety isn't just corporate PR — Real people are getting hurt over AI anxiety. As developers building with AI, we have a responsibility to be transparent.
Stay informed — The AI landscape is changing faster than any technology in history.
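One concrete way to verify independently is a scheduled canary: a fixed prompt with a known-good property, run against your provider on a timer, so a silent regression shows up as a failed check instead of a user complaint. Here's a minimal sketch; the prompt, the output check, and the alert hook are all hypothetical placeholders to adapt to whatever behavior your app actually depends on.

```python
# Minimal canary sketch for catching silent model regressions.
# The prompt, expected check, and alert hook below are hypothetical
# placeholders -- adapt them to the behavior your app depends on.
from typing import Callable

def run_canary(
    complete: Callable[[str], str],  # your provider call: prompt -> text
    prompt: str,
    check: Callable[[str], bool],    # property the output must satisfy
    alert: Callable[[str], None],    # paging/logging hook
) -> bool:
    try:
        output = complete(prompt)
    except Exception as e:
        alert(f"canary errored: {e}")
        return False
    if not check(output):
        alert(f"canary failed: unexpected output {output!r}")
        return False
    return True

# Example: verify the model still handles basic arithmetic.
if __name__ == "__main__":
    fake_model = lambda p: "4"       # stand-in for a real API call
    ok = run_canary(
        fake_model,
        prompt="What is 2 + 2? Answer with a single number.",
        check=lambda out: out.strip() == "4",
        alert=print,
    )
    print("canary passed" if ok else "canary failed")
```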
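For the single-provider risk, the common pattern is a thin failover layer that tries providers in order, so adding or dropping a vendor (including a locally hosted open-weights model) is a configuration change rather than a rewrite. Again, a minimal sketch: the provider names and stub completion functions are stand-ins, not real SDK calls.

```python
# Minimal provider-failover sketch. Provider names and the stub
# completion functions are hypothetical -- wire in real SDK calls
# (or a local inference server) for the services you actually use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion text

def complete_with_failover(providers: list[Provider], prompt: str) -> str:
    """Try each provider in order; raise only if all of them fail."""
    errors: list[str] = []
    for p in providers:
        try:
            return p.complete(prompt)
        except Exception as e:  # timeout, rate limit, outage, etc.
            errors.append(f"{p.name}: {e}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stand-ins for real clients; the local fallback keeps you running
# even if every hosted API is down or you decide to leave a vendor.
def hosted_primary(prompt: str) -> str:
    raise TimeoutError("primary is down")           # simulate an outage

def local_open_model(prompt: str) -> str:
    return f"[local model] response to: {prompt}"   # e.g. a Llama served locally

if __name__ == "__main__":
    pool = [Provider("hosted_primary", hosted_primary),
            Provider("local_open_model", local_open_model)]
    print(complete_with_failover(pool, "Summarize today's deploy log."))
```

The local model at the end of the chain is what makes "the ability to switch" real rather than theoretical.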
The debate about who controls AI is getting louder — and more dangerous. At DevPik, we believe tools should be transparent, open, and in YOUR control. Every one of our 48+ free developer tools runs 100% client-side — no black boxes, no hidden changes, no surprises.