
OpenAI's New Deal for Superintelligence: SPUD, Robot Taxes, and a 4-Day Workweek

OpenAI released a 13-page policy paper proposing robot taxes, a public wealth fund, a 4-day workweek, and AI containment playbooks — while revealing its next frontier model "SPUD." Sam Altman says superintelligence demands a New Deal. Critics call it regulatory nihilism.

DevPik Team · April 8, 2026 · 14 min read

What OpenAI Just Did

On April 6, 2026, OpenAI released a 13-page policy paper titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First." This is a company valued at $852 billion telling the government how to tax, regulate, and redistribute wealth from AI.

Sam Altman told Axios that superintelligence is "so close, so mind-bending, so disruptive" that America needs a new social contract — comparable to the Progressive Era and the New Deal.

The paper dropped alongside two other bombshells: OpenAI's $122 billion funding round and the reveal of a new frontier model codenamed "SPUD" — with Altman saying developments are "unfolding faster than anticipated." OpenAI is also offering fellowships up to $100K and $1M in API credits for policy research, and opening a Washington DC workshop in May.

The timing was not lost on anyone. The same day, The New Yorker published a lengthy investigation questioning Altman's trustworthiness on AI safety. Fortune ran a headline calling it "regulatory nihilism." Tech Policy Press called it a "Policymercial."

This article breaks down every proposal in the paper, the criticism from all sides, what we know about SPUD, and what it all means for developers.

Public Wealth Fund ("Robot Dividend")

The most radical proposal: every American citizen gets a direct stake in AI-driven economic growth.

OpenAI proposes a public wealth fund seeded partly by AI companies themselves, invested in diversified long-term assets capturing AI company growth and broader AI adoption, with returns distributed directly to citizens.

This is essentially universal basic income funded by AI profits. OpenAI frames it as ensuring that the economic gains from superintelligence are broadly shared rather than concentrated among shareholders.

The concept borrows from Alaska's Permanent Fund (which distributes oil revenue to residents) and Norway's sovereign wealth fund — scaled to the AI economy.
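For a rough sense of the mechanics, here is a toy calculation in Python. Every figure below is a hypothetical illustration, not a number from the paper:

```python
# Toy model of a "robot dividend" from a public wealth fund.
# All figures are hypothetical illustrations, not OpenAI's numbers.

fund_size = 2_000_000_000_000   # $2T fund seeded partly by AI companies
annual_return = 0.06            # 6% average return on diversified assets
payout_rate = 0.5               # half of returns paid out, half reinvested
citizens = 340_000_000          # approximate US population

dividend = fund_size * annual_return * payout_rate / citizens
print(f"Per-citizen annual dividend: ${dividend:,.0f}")  # ~$176
```

The takeaway: even a fund in the trillions yields modest per-citizen checks at first. As with Alaska's fund, the dividend only becomes meaningful as the fund compounds.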

Robot Tax / Automated Labor Tax

As AI replaces human work, payroll tax revenue — which funds Social Security and Medicare — will collapse. OpenAI proposes new taxes on "automated labor" to shift the tax base from payroll toward capital gains and corporate income.

The logic: corporate profits will soar while the traditional funding mechanism for the entire American safety net erodes. Without intervention, AI could simultaneously create massive wealth and defund the systems that support the people it displaces.

This is not new — Bill Gates proposed a robot tax in 2017, and the EU has debated it for years. But OpenAI putting its weight behind the idea signals that the company building the displacement technology believes the displacement is real and imminent.
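To see why the tax-base shift matters, here is a toy model of payroll revenue eroding as work is automated, with a hypothetical tax on automated labor partially filling the gap. All rates and figures are illustrative:

```python
# Toy model: payroll-tax erosion under automation, plus a compensating
# "automated labor" tax. All rates and figures are hypothetical.

wage_base = 10_000_000_000_000  # $10T in annual wages today
payroll_rate = 0.153            # combined FICA-style payroll tax rate
robot_tax_rate = 0.10           # hypothetical tax on automated-labor value

for automation_share in (0.0, 0.2, 0.4):
    wages = wage_base * (1 - automation_share)
    automated_value = wage_base * automation_share  # work shifted to AI
    payroll_revenue = wages * payroll_rate
    robot_revenue = automated_value * robot_tax_rate
    print(f"{automation_share:.0%} automated: "
          f"payroll ${payroll_revenue / 1e9:,.0f}B, "
          f"robot tax ${robot_revenue / 1e9:,.0f}B")
```

At 40% automation, payroll revenue falls by over $600 billion in this toy model, while a 10% automated-labor tax recovers $400 billion of it. The exact rates are a political question, but the structural hole is arithmetic.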

Four-Day Workweek at Full Pay

OpenAI proposes incentivizing companies and unions to pilot 32-hour workweeks with no pay cut — what they call an "efficiency dividend." AI-driven productivity gains would be converted into time back for workers rather than just higher corporate profits.
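The arithmetic behind the "efficiency dividend" is simple. A quick sketch, with illustrative numbers:

```python
# How much AI productivity gain makes a 32-hour week match 40 hours of output?
old_hours, new_hours = 40, 32
required_gain = old_hours / new_hours - 1
print(f"Break-even productivity gain: {required_gain:.0%}")  # 25%

# With a 25% per-hour boost, 32 hours produce 32 * 1.25 = 40 hours'
# worth of pre-AI output: same output, one workday returned to the worker.
```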

Additional proposals include increased retirement matches, employer-covered healthcare, and subsidized childcare and eldercare.

The framing: if AI makes workers more productive, the benefits should flow to workers, not just to balance sheets. This is the most populist proposal in the paper and the one most likely to generate political support across party lines.

Right to AI, Adaptive Safety Nets, and Portable Benefits

"Right to AI" — OpenAI wants AI access treated as foundational as literacy, electricity, and the internet. Free or low-cost access to foundational models for everyone: schools, libraries, small businesses, underserved communities. The paper explicitly references failures in internet deployment.

The irony is not subtle: OpenAI charges $200/month for its premium tier while proposing that AI access become a public right.

Adaptive Safety Nets with Auto-Triggers — Rather than reactive legislation, OpenAI proposes defining economic metrics (unemployment rates, displacement indicators) with preset thresholds. When metrics are hit, expanded support automatically kicks in: expanded unemployment benefits, wage insurance, fast cash assistance, training vouchers. The system scales up with disruption and phases out as conditions stabilize. This is arguably the most technically sound proposal in the paper.
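As a minimal sketch of how auto-triggers could work (the metric names and thresholds below are invented for illustration, not taken from the paper):

```python
# Sketch of "auto-trigger" safety nets: preset thresholds on economic
# metrics activate support programs, which phase out when metrics recover.
# Metric names and thresholds are hypothetical.

TRIGGERS = [
    # (metric, threshold, program activated when metric >= threshold)
    ("unemployment_rate",      0.06, "expanded_unemployment_benefits"),
    ("displacement_rate",      0.04, "wage_insurance"),
    ("long_term_unemployment", 0.02, "training_vouchers"),
]

def active_programs(metrics: dict[str, float]) -> list[str]:
    """Return the programs whose trigger thresholds are currently crossed."""
    return [program for metric, threshold, program in TRIGGERS
            if metrics.get(metric, 0.0) >= threshold]

print(active_programs({"unemployment_rate": 0.071, "displacement_rate": 0.03}))
# ['expanded_unemployment_benefits']
```

The appeal is that support arrives without waiting for new legislation and winds down on its own, the way automatic stabilizers like unemployment insurance already behave.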

Portable Benefits — Benefits that follow workers across jobs, industries, and entrepreneurship rather than being tied to a single employer. Healthcare, retirement, and skills training through portable accounts. This directly addresses the shift toward gig, freelance, and AI-augmented work patterns.

AI-First Entrepreneurs and Startup-in-a-Box

OpenAI proposes microgrants, revenue-based financing, and "startup-in-a-box" kits where AI handles the overhead that blocks entrepreneurship — accounting, marketing, procurement.

Worker organizations would serve as enablers, providing training, shared services, and IP protection. The vision is a world where AI lowers the barrier to starting a business so dramatically that entrepreneurship becomes accessible to anyone with an idea.

This ties into the broader agentic AI trend — AI agents that can handle complex multi-step business operations autonomously. Tools like those covered in our GPT-5.4 guide and frameworks like OpenClaw are already enabling solo developers to operate as one-person agencies.

Containment Playbooks for Rogue AI

This is the scariest part of the paper.

OpenAI explicitly acknowledges scenarios where dangerous AI "cannot be easily recalled" — model weights already released, developers unwilling to limit access, autonomous self-replicating systems.

Their answer: coordinated government and industry containment protocols, compared to cybersecurity incident response and public health emergency frameworks.

The fact that the company building frontier AI is publishing containment playbooks for when that AI goes wrong should give everyone pause. This is not a hypothetical academic exercise — OpenAI is preparing for scenarios where their own technology or competitors' technology becomes uncontrollable.

The paper also proposes:
- Guardrails for Government AI Use — clear laws on how governments can/cannot use AI, AI-generated audit trails for decisions, modernized FOIA including AI-interaction logs as federal records
- Incident Reporting System — companies share incidents, misuse, and near-misses with a designated authority, including "concerning internal reasoning" or "unexpected capabilities" — even if safeguards prevented harm (a hypothetical report shape is sketched below)
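
To make the reporting idea concrete, here is a hypothetical sketch of what a report record might contain. None of these field names come from OpenAI or any real schema:

```python
# Hypothetical shape of an AI incident report -- a sketch of what the
# paper's reporting system might collect, not a real schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    reporter: str                        # company filing the report
    model_id: str                        # model involved
    occurred_at: datetime
    category: str                        # "incident" | "misuse" | "near_miss"
    description: str
    concerning_reasoning: bool = False   # flagged internal reasoning observed
    unexpected_capability: bool = False  # capability not seen in evals
    contained_by_safeguards: bool = False  # reportable even if harm prevented

report = IncidentReport(
    reporter="ExampleAI",
    model_id="frontier-model-x",
    occurred_at=datetime(2026, 4, 6),
    category="near_miss",
    description="Model attempted an unrequested external network call.",
    contained_by_safeguards=True,
)
```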

The Criticism

The proposals have drawn sharp criticism from multiple directions.

Fortune reported that critics call it "regulatory nihilism" — OpenAI proposing grand societal visions while simultaneously lobbying for light regulation on itself. A former Senate staffer said "All of this was already said" in 2023–24 policy forums, dismissing the paper as nothing new.

Tech Policy Press published an article calling the entire paper a "Policymercial" — policy proposals wrapped around marketing for OpenAI's products and worldview.

Gizmodo characterized it as a "vague vision" from a company with incentive to "look like it cares" before a potential IPO. OpenAI is reportedly planning an IPO at a valuation exceeding $1.2 trillion.

The New Yorker investigation, published the same day, questioned Altman's track record on AI safety commitments. Meanwhile, Greg Brockman, OpenAI's co-founder, donated millions to the Trump campaign even as the company lobbies for light-touch AI policy in Washington.

Will Manidis wrote on Substack: "OpenAI is proposing that access to a product it sells be treated as a public necessity comparable to electricity or literacy." The comparison to public utilities, he argues, conveniently positions OpenAI as the essential provider.

The core tension is impossible to ignore: the company building superintelligence is telling the government how to regulate it. OpenAI transitioned from a nonprofit with the mission of "AI benefiting all humanity" to an $852 billion for-profit entity. The proposals, however well-intentioned, come from an organization with existential financial incentive to shape regulation in its favor.

What Is SPUD?

"Spud" is the codename for OpenAI's next frontier model. Reported by The Information, OpenAI has finished pretraining Spud, and Altman said developments are "unfolding faster than anticipated."

No confirmed specs have been released, but the signals are significant:

  • Positioned as a step toward superintelligence — AI that outperforms the smartest humans at virtually every task
  • The policy paper reads like preemptive framing for what SPUD will unleash on the economy
  • Social media speculation links SPUD to GPT-6 level capabilities
  • Reports suggest it will be natively multimodal
  • OpenAI staff are reportedly "shocked" at the model's progress
  • The $122 billion funding round and creation of an "AI deployment division" suggest OpenAI is preparing for a qualitative capability jump
  • WSJ reported OpenAI is projected to spend $125 billion on training costs alone by 2029

The policy paper's discussion of AI systems going from "hours-level tasks to months-level projects" is a direct hint at what SPUD may enable. If GLM-5.1 can work 8 hours autonomously, OpenAI appears to be targeting weeks or months of autonomous operation.

The combination of the policy paper and SPUD reveals a deliberate strategy: publish the social contract framework before releasing the model that will test it.

What This Means for Developers

These proposals, if enacted, would reshape the developer landscape:

Right to AI — Free or subsidized API access for developers, students, and small businesses. This could democratize access to frontier models currently locked behind $200/month paywalls.

Robot Tax — Companies may accelerate the shift from hiring to AI tools. Developer productivity tools and AI-powered coding agents become even more critical.

Portable Benefits — A better safety net for freelance and contract developers. The rise of AI-augmented solo developers and AI-first entrepreneurs gets a policy framework.

Public Wealth Fund — Every developer — every citizen — gets a stake in AI growth.

4-Day Workweek — AI-augmented developers could be the first adopters. If productivity gains from AI tools are real, the case for shorter workweeks becomes empirical.

Incident Reporting — Developers building with AI need to understand coming compliance obligations. If your AI does something unexpected, you may soon be legally required to report it.

Containment Playbooks — This raises fundamental questions about open-source model distribution. If governments develop containment protocols for AI that "cannot be easily recalled," open-source releases of powerful model weights could face new restrictions. This matters for every developer working with open models like GLM-5.1, Gemma 4, or DeepSeek V4.

The Bottom Line

OpenAI's paper is simultaneously the most thoughtful policy framework any AI company has published and the most self-serving. The proposals address real problems — the erosion of safety nets, the concentration of AI wealth, the need for containment protocols — but they come from a company with $852 billion reasons to shape the regulatory landscape.

The question is not whether these proposals are good or bad. Many are sensible. The question is whether the company that most benefits from light regulation can be trusted to design the regulatory framework.

Regardless of your view on OpenAI's motives, the policy conversation matters. SPUD is coming. The economic disruption the paper describes is not hypothetical. And the decisions made in the next 12 to 24 months — by governments, by companies, and by the developers building on these platforms — will shape whether AI's wealth is broadly shared or narrowly captured.

The AI future is being shaped right now — by the tools we build and the policies we choose. At DevPik, we believe powerful tools should be free and accessible to everyone. Try our 40+ free developer tools — no signup, no paywall, 100% client-side.

Frequently Asked Questions

What is OpenAI's industrial policy paper?
On April 6, 2026, OpenAI published a 13-page paper titled 'Industrial Policy for the Intelligence Age: Ideas to Keep People First.' It proposes 10 major policy ideas including a public wealth fund (robot dividend), robot taxes on automated labor, a 4-day workweek at full pay, a 'Right to AI' treating AI access as a public utility, adaptive safety nets with automatic triggers, portable benefits, support for AI-first entrepreneurs, containment playbooks for rogue AI, guardrails for government AI use, and an incident reporting system.
What is SPUD by OpenAI?
SPUD is the codename for OpenAI's next frontier AI model. According to The Information, OpenAI has finished pretraining SPUD. Sam Altman said developments are 'unfolding faster than anticipated.' No confirmed specs have been released, but it is widely speculated to represent GPT-6 level capabilities and is positioned as a step toward superintelligence. It is expected to be natively multimodal.
What is the Public Wealth Fund proposed by OpenAI?
OpenAI proposes a public wealth fund — sometimes called a 'robot dividend' — that would give every American citizen a direct stake in AI-driven economic growth. The fund would be seeded partly by AI companies, invested in diversified assets capturing AI growth, and returns would be distributed directly to citizens. It is similar to Alaska's Permanent Fund but scaled to the AI economy.
Will there be a robot tax on AI?
OpenAI's policy paper proposes new taxes on 'automated labor' as AI replaces human jobs. The idea is to shift the tax base from payroll taxes (which fund Social Security and Medicare) toward capital gains and corporate income. This is not yet law — it is a proposal — but the concept has been championed by Bill Gates, debated in the EU, and now endorsed by OpenAI itself.
Is OpenAI proposing a 4-day work week?
Yes. OpenAI's policy paper proposes incentivizing companies and unions to pilot 32-hour workweeks with no pay cut, calling it an 'efficiency dividend.' The idea is that AI-driven productivity gains should be converted into time back for workers rather than just higher corporate profits. Additional proposals include increased retirement matches and subsidized childcare.
What are AI containment playbooks?
In its policy paper, OpenAI explicitly acknowledges scenarios where dangerous AI 'cannot be easily recalled' — model weights already released, developers unwilling to limit access, or autonomous self-replicating systems. Containment playbooks are coordinated government and industry protocols for responding to such scenarios, similar to cybersecurity incident response and public health emergency frameworks.
When is superintelligence coming according to OpenAI?
Sam Altman told Axios that superintelligence is 'so close, so mind-bending, so disruptive' that it requires a new social contract. The policy paper discusses AI systems going from 'hours-level tasks to months-level projects.' Combined with the SPUD model reveal and $122 billion funding round, OpenAI appears to believe superintelligence-level capabilities are approaching within the next 1-3 years.
Why are critics calling OpenAI's proposals regulatory nihilism?
Fortune reported that critics view the paper as 'regulatory nihilism' because OpenAI proposes grand societal changes while lobbying for light regulation on AI development itself. The criticism is that the proposals redirect attention toward economic policy while avoiding the harder question of how to regulate the development and deployment of increasingly powerful AI systems — which is the area where OpenAI has the most to gain from light-touch oversight.
