What OpenAI just told us about the future of work
OpenAI's Industrial Policy for the Intelligence Age paper argues that AI could remake work, production and knowledge at scale. Here is what it says, point by point.
No time? Jump to the point-by-point summary.
A bigger claim than automation
OpenAI’s new paper is presented as an opening bid, not holy writ. Published on April 6th 2026, it argues that as the world moves toward more powerful AI, incremental policy fixes will not do. The company says it wants a broader democratic debate about how to spread the gains, contain the risks and stop power from pooling in too few hands. To keep the argument alive, OpenAI says it is soliciting feedback, funding fellowships and research grants, and convening discussions in Washington, DC. (OpenAI)
That matters because the paper is not merely about better chatbots. It is about political economy. OpenAI says advanced AI could lift productivity, lower the cost of essential goods, accelerate scientific discovery and create new forms of work. But it also warns of job disruption, misuse in areas such as cyber and biology, loss of control over powerful systems, and a concentration of wealth and power if policy lags behind technology. Its three stated aims are to share prosperity broadly, mitigate risks and democratise access and agency. (↗)
The historical analogy is telling. OpenAI explicitly compares this moment to earlier technological upheavals that produced the Progressive Era and the New Deal. The implication is plain enough: if AI really does remake work, production and knowledge, then the social contract may need rewriting too. This is not the language of a firm asking only to be left alone. It is the language of a company preparing the ground for a much larger argument about who benefits from intelligent machines, and on what terms. (↗)
The worker, not just the model
At the centre of the paper is a shrewd political insight. People will not be soothed by higher productivity if they feel poorer, weaker and easier to replace. So OpenAI says workers should have a formal voice in how AI is introduced at work. It wants AI to strip out dangerous, repetitive, administrative and exhausting tasks, while placing limits on uses that intensify workloads, narrow autonomy or undermine scheduling and pay. (↗)
The paper also tries to imagine a more entrepreneurial labour market. It proposes helping workers become “AI-first entrepreneurs”, using AI to handle the overhead that often stops small firms before they start. That would mean microgrants, revenue-based finance and shared back-office tools. Alongside this comes a “right to AI”: cheap, reliable access to foundational models, plus the connectivity, training and infrastructure needed to use them. The intended beneficiaries are not only large firms, but also workers, small businesses, schools, libraries and poorer communities. (↗)
This is an attempt to push back against a familiar fear: that AI will be widely used but narrowly owned. The paper says that without intervention the gains may concentrate in a handful of firms, even while the technology becomes more capable and more widespread. In effect, OpenAI is arguing that access to useful AI may need to be treated less like a luxury good and more like a basic economic input. That is an inference, but it is the logic of the document.
Silicon Valley discovers redistribution
The paper is boldest when it turns to tax and redistribution. If AI shifts income away from wages and towards profits and capital gains, OpenAI argues, then the tax base underpinning social insurance may begin to erode. Its answer is to lean more heavily on capital-based revenues, perhaps including taxes linked to automated labour, while offering firms incentives to retain, retrain and invest in workers. (↗)
That is not all. OpenAI proposes a Public Wealth Fund so that citizens, including those with little exposure to financial markets, share directly in AI-driven growth. It also suggests “efficiency dividends”: if AI cuts routine workload and lowers operating costs, some of those gains should show up in workers’ lives, through better retirement contributions, more help with healthcare and care costs, and even pilots for a 32-hour, four-day week with no cut in pay where output can be maintained. (↗)
That is a notable turn. Much of the tech industry has preferred to promise abundance and move on. OpenAI is conceding something more awkward: abundance on its own may not feel legitimate unless it is distributed. The future of work, in this telling, is not just about smarter software. It is also about who gets the dividend. That final judgment is an inference from the paper’s proposals on taxes, direct sharing and worker benefits. (↗)
A softer landing
The paper also spends time on the plumbing of economic security. It calls for safety nets that are faster, more responsive and easier to use, with real-time measurement of AI’s effect on jobs, wages and disruption. Temporary support, such as expanded unemployment benefits, cash assistance, wage insurance and training vouchers, would switch on automatically when disruption passes pre-set thresholds and fade as conditions stabilise. (↗)
It then argues for portable benefits: healthcare, retirement savings and skills accounts that follow the worker rather than the job. That would make more sense in a world of frequent moves between employers and sectors, and of spells of retraining or self-employment. It also points to the “care and connection economy”, including childcare, eldercare, education, healthcare and community services, as a natural destination for some displaced workers. AI may reduce paperwork in those fields, the paper says, but human connection will remain indispensable. (↗)
Here the paper is more persuasive than in some of its grander passages. It recognises that not everyone displaced by AI will become a founder or flourish in a high-tech sector. Some will need transitional support. Others will move into jobs where trust, patience and physical presence still matter. OpenAI also calls for AI-enabled scientific infrastructure to be spread beyond a few elite institutions, so that the benefits of discovery do not remain locked inside rich laboratories and superstar firms. (↗)
The safety state
This is also a paper about power, misuse and control. OpenAI argues that resilience must extend beyond what happens before deployment. It proposes stronger safety systems for emerging risks, including cyber and biological ones; an “AI trust stack” of provenance, verification and privacy-preserving audit tools; and clearer governance frameworks so responsibility can be assigned when harm occurs. (↗)
It supports stronger auditing regimes for frontier models, especially a narrow set of highly capable systems that could materially advance chemical, biological, radiological, nuclear or cyber risks. Those systems, it says, may eventually require tighter pre- and post-deployment audits. Yet the paper is careful to say that such controls should apply only to a small number of firms and the most advanced models, so as not to smother smaller companies or broader access to less powerful tools. (↗)
The rest of the safety agenda is wide-ranging: playbooks for containing dangerous models once released; mission-aligned governance at frontier firms; legal guardrails for how governments may use AI; structured public input into alignment; incident and near-miss reporting; and international information-sharing among national AI institutes. In other words, OpenAI is arguing that labour policy, tax policy and safety policy can no longer be separated cleanly. If advanced AI becomes part of everything, governance becomes part of everything too. That last point is an inference, but a fair one. (↗)
An opening bid
The weakness of the paper is also its virtue. It is full of large ambitions and only partial machinery. OpenAI admits as much, calling the proposals early and exploratory rather than final recommendations. Still, the document is worth taking seriously, because it reveals where at least one leading AI company thinks the argument is heading. Not towards a narrow debate over model safety alone, but towards a broader fight over bargaining power, social insurance, public investment and the ownership of the gains from machine intelligence. (OpenAI)
Silicon Valley has often liked to present technology as destiny. This paper suggests something rather different. The future of work, OpenAI now says, will be shaped not just by what the models can do, but by taxes, institutions, labour rights, public goods and democratic choices. That is a larger claim than automation. It is the beginning of a politics. (↗)
Point-by-point summary
- OpenAI says AI is moving beyond narrow tasks and could soon reshape work, production and knowledge at much greater scale.
- It frames the paper as an early discussion document, not a finished blueprint, and says incremental policy updates will not be enough.
- Its three main aims are to share prosperity broadly, mitigate major risks and democratise access to useful AI.
- It argues for a new industrial policy akin in ambition to the reforms that followed earlier industrial upheavals.
- Workers should have a formal voice in AI deployment, with AI used to remove drudgery rather than worsen schedules, autonomy or pay.
- Workers should also be helped to become “AI-first entrepreneurs” through microgrants, revenue-based finance and shared business infrastructure.
- The paper proposes a “right to AI”: affordable access to foundational models plus the training and connectivity needed to use them.
- It says the tax base may need to shift from labour towards profits, capital gains and perhaps taxes tied to automated labour.
- It proposes a Public Wealth Fund so citizens share directly in AI-driven growth.
- It suggests “efficiency dividends”, including better benefits and pilots for a 32-hour, four-day week without pay cuts where productivity allows.
- It calls for faster, more adaptive safety nets that trigger automatically when AI disruption crosses pre-set thresholds.
- It backs portable benefits that follow people across jobs, sectors and periods of retraining or self-employment.
- It sees care, education, healthcare and other human-centred sectors as important destinations for some displaced workers.
- It wants AI-enabled science infrastructure spread beyond elite institutions so breakthroughs are more widely shared.
- On safety, it calls for stronger cyber and bio safeguards, provenance tools, privacy-preserving audit logs and clearer accountability.
- It supports tighter audits for the most dangerous frontier models, especially those that could materially increase high-consequence security risks.
- It also proposes containment playbooks, guardrails for government AI use, public input into alignment, incident reporting and international information-sharing.
- OpenAI says it will gather feedback, fund fellowships and grants, and hold discussions in Washington, DC.