Late Friday night, as geopolitical tensions flared into open conflict in the Middle East, Sam Altman took to X to announce a deal that many in the tech community had long feared, yet few expected to materialise so abruptly.
OpenAI had officially signed an agreement with the U.S. Department of War (DoW) to deploy its frontier models within the military’s most sensitive, classified networks.
The announcement triggered an immediate and chaotic firestorm. Within minutes, the thread was a battleground of “Cancel ChatGPT” hashtags, pointed enquiries from national security experts, and inflammatory accusations of selling out the future of humanity.
For a company founded on the principle of ensuring AGI benefits all of humanity, the pivot to becoming a primary defence contractor felt like a tectonic shift in the industry’s moral landscape.
The context for this sudden pivot is as dramatic as the deal itself. Just hours before Altman’s announcement, President Donald Trump issued a sweeping executive order directing all federal agencies to immediately cease the use of technology from Anthropic, OpenAI’s chief rival.
Secretary of War Pete Hegseth labelled Anthropic a “supply-chain risk to national security”, a designation typically reserved for foreign adversaries like Huawei.
Anthropic had reportedly refused to grant the Pentagon unconditional access to its Claude models, insisting on contractual “red lines” that would prohibit the technology’s use for domestic mass surveillance or fully autonomous lethal weapons.
OpenAI stepped into the vacuum left by its rival’s departure. While the administration demanded that AI models be available for “all lawful purposes”, OpenAI framed its entry not as a capitulation but as a sophisticated compromise.
In the Ask Me Anything (AMA) session that followed, Altman contended that OpenAI secured the same safety guardrails Anthropic sought, but achieved them through a multi-layered approach rather than an ultimatum.
By agreeing to work within existing legal frameworks, citing the Fourth Amendment and the Posse Comitatus Act, OpenAI effectively de-escalated a standoff that had threatened to leave the U.S. military without frontier AI capabilities during an active war.
The thread quickly moved from corporate PR to a raw debate on the ethics of AI warfare. One of the most-liked questions addressed the fundamental shift in OpenAI’s mission: why move from “human betterment” to defence collaboration?
Altman’s response was characteristically pragmatic: “The world is a complicated, messy, and sometimes dangerous place. We believe the people responsible for defending the country should have access to the best tools available.”
Altman detailed the technical safeguards designed to prevent the AI from becoming an autonomous executioner.
These include a “cloud-only” deployment strategy, which prevents models from being embedded directly into edge devices or weapon hardware, and the assignment of “Field Deployment Engineers” (FDEs) to oversee classified use.
However, the thread remained sceptical. Critics pointed to a Community Note highlighting that under the USA PATRIOT Act, “lawful use” could still encompass vast data collection.
When asked about the probability of AI causing a global catastrophe, Altman was uncharacteristically brief, suggesting that national security collaboration might actually reduce risk by keeping the state and the developers on the same page.
One of the most revealing exchanges involved governance. When asked if the federal government could eventually nationalise OpenAI, Altman admitted, “I have thought about it, of course, but it doesn’t seem super likely on the current trajectory.”
This admission did little to soothe those who see the Department of War rebranding and the Anthropic blacklisting as the first steps toward a state-run AGI.
The implications of this deal extend far beyond a single contract. By accepting its competitor’s “supply-chain risk” designation, OpenAI has implicitly validated a world where the government can pick winners and losers based on a company’s ideological commitment to military utility.
This sets a troubling precedent, as even Altman acknowledged, in which private companies may feel pressured to lower their ethical guardrails to avoid being labelled a national security threat.
From an ethical standpoint, the human-in-the-loop requirement remains the most contentious point.
While OpenAI insists that humans will retain responsibility for the use of force, defence experts in the thread noted that current DoW policy (Directive 3000.09) is notoriously vague on what constitutes meaningful human control in high-speed digital combat.
If an AI processes targeting data faster than a human can blink, is the human truly in the loop or merely a rubber stamp for a machine’s decision?
The risk of losing control of an AGI is no longer a theoretical concern for the distant future; it is a question of how these models will behave in the high-stakes environment of classified warfare.
As the AMA concluded, the image Altman left in the minds of OpenAI users was not that of a starry-eyed tech visionary, but of a digital diplomat navigating a world of hard power.
He left the thread with a sobering takeaway: the era of neutral AI development is over. OpenAI’s decision to integrate with the Department of War marks the beginning of a new chapter where AGI is treated as a strategic asset of the state, rather than a global public good.