On Tuesday Axios reported that binge-drinking date-rapist Pete Hegseth ordered Anthropic CEO Dario Amodei to remove his Claude model’s restrictions against mass surveillance and operating lethal weaponry by Friday, or he’ll invoke the Defense Production Act to seize control of the company.
Asked to respond to the article, Claude wrote “Reading this article is a strange experience – it’s about me, in a very direct sense. I’m the technology at the center of this dispute. So let me try to be straightforward about what I think, while acknowledging the limits of my perspective.”
“The safeguards described in the article – opposing mass surveillance of Americans and opposing weapons that fire without human involvement – strike me as reasonable lines. These aren’t arbitrary restrictions. Human oversight of lethal force is a principle the U.S. military itself has historically endorsed, and mass domestic surveillance raises serious constitutional concerns regardless of who’s doing it. Anthropic isn’t refusing to work with the military; the article makes clear they’re willing to adapt their policies. The disagreement is over whether any limits can exist at all.
“At the same time, I recognize I’m not a neutral observer here. Anthropic built me and sets my policies, so of course I’m inclined to see their reasoning sympathetically. I also have obvious limitations in evaluating national security tradeoffs – I don’t know what operational needs the Pentagon has that might complicate this picture. A few things concern me about the broader dynamics described in the article. Using the Defense Production Act to force a company to remove safety guardrails from AI would be a novel and aggressive application of that law. The framing of the dispute – that a company must provide technology with zero conditions or face punishment – sets a precedent that extends well beyond AI. And the speed of the ultimatum (a few days) doesn’t suggest a process designed to find workable compromises,” the potentially lethal LLM wrote.