The Pentagon threatened to blacklist Anthropic — the AI safety company behind Claude — for refusing to let the government use its technology for mass domestic surveillance and fully autonomous weapons. A few hours later, OpenAI stepped in and signed a deal. Washington declared victory.
I am not a lawyer. I am not a general. I am a retired technology entrepreneur who spent 20 years building software systems for Fortune 1000 companies, and I know what it looks like when someone signs a contract with enough wiggle room to drive a Humvee through.
OpenAI’s deal promises to comply with existing laws. Sounds reasonable. Except that those same laws were on the books while the NSA was quietly collecting the phone records of millions of ordinary Americans. The government’s lawyers decided that was fine. The FISA court agreed. Most of us had no idea it was happening until Edward Snowden told us in 2013.
History tends to repeat itself when no one is watching.
Here is what we actually know. Anthropic asked for explicit prohibitions: no mass surveillance of Americans, no fully autonomous kill decisions. The Pentagon said no. Then OpenAI said yes — to language that references existing law, not explicit limits. Defense Secretary Hegseth declared Anthropic “a supply-chain risk” for holding the line. Within hours, OpenAI announced its deal and the government got what it wanted.
That sequence of events should make every American uncomfortable, regardless of political affiliation.
I have spent time in 40 countries. I have seen what happens when governments acquire powerful tools without meaningful accountability. The technology does not care who is in office. It serves whoever controls it. The guardrails have to be written into law — not negotiated in contracts that lawyers can reinterpret, and not left to the good intentions of whichever administration happens to be in power this week.
Dario Amodei, Anthropic’s CEO, said it plainly on CBS: This is Congress’s job. He is right. We need legislation that draws clear lines — on surveillance, on lethal autonomy, on what AI systems may and may not do on behalf of the United States government — before the infrastructure to abuse those systems is already built.
In technology, the window to set standards is narrow. You do it early, when the architecture is still being designed, or you spend the next decade trying to retrofit limits onto systems that were built without them. We are in that window right now.
Naples sends representatives to Washington. I would encourage every one of them to start asking a simple question: What are the actual rules governing how our military and intelligence agencies use artificial intelligence against American citizens? Not the contractual language. Not the references to laws written before smartphones existed. The actual rules.
That answer — or the absence of one — will tell you everything.
David Rabjohns is a retired technology entrepreneur and Naples resident.
This article originally appeared on Fort Myers News-Press: Washington needs AI guardrails — now | Opinion

