The dispute with the Pentagon is not about Anthropic being woke, or obstinate. The company appears primarily concerned about national security, based on its own experience with China, which has created thousands of accounts under fake names to siphon Anthropic's data and systems to bolster its own, less advanced AI. The company also, as any sensible business would, believes it should retain control of its product and the intellectual property that drives it, not hand it off to some bureaucrat with a political agenda.
Anthropic's model is evidently superior to its competitors' for the Pentagon's needs, since it is currently the only one with a top security clearance. In that light, Anthropic has been critical of the Trump administration's eagerness to sell AI chips to the United Arab Emirates for money that went not to the US government but to private Trump business accounts. The company has understandable concerns about how those chips will be used, especially if, as feared, the UAE then re-sells some of them to China. In short, Anthropic does not trust this administration to protect the country's interests or the company's own commercial investment. And in defying Defense Secretary Hegseth, it is taking a prudent financial position that could benefit the entire US AI industry. JL
Robert McMillan and Raffaele Huang report in the Wall Street Journal, Ian Duncan and colleagues report in the Washington Post, and Rebecca Bellan reports in TechCrunch:
Anthropic CEO Dario Amodei said the company was ready to continue working with the Pentagon but would not change its stance (regarding use of its AI for) robotic weaponry and domestic surveillance. Amodei previously criticized the Trump administration’s drive to allow exports of American AI chips to China, comparing the policy to “selling nuclear weapons to North Korea.” Anthropic said three Chinese AI companies set up 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. The three companies, DeepSeek, Moonshot AI and AI, prompted Claude 16 million times to siphon information from Anthropic to train their own products. Anthropic is the only AI lab with classified DOD access, and the DOD currently has no backup option. "This is a single vendor situation. If Anthropic cancels, it will be a serious situation for DOD."