Debate intensifies over the military’s use of AI

S&T – IT

25 FEBRUARY 2026

  • Anthropic, the artificial intelligence company, is the only one of its peers not supplying its technology to a new U.S. military internal network.
  • Anthropic is the maker of the chatbot Claude.
  • Anthropic CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.
  • “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Mr. Amodei wrote in an earlier essay.
  • The Pentagon announced in 2025 that it was awarding defence contracts to four AI companies — Anthropic, Google, OpenAI, and Elon Musk’s xAI. Each contract is worth up to $200 million.
  • Anthropic was the first AI company to be approved for classified military networks, where it works with partners such as Palantir. The other three companies, for now, operate only in unclassified environments.
  • Elon Musk’s AI chatbot Grok is set to join the Pentagon network, called GenAI.mil.
  • The announcement came days after Grok — which is embedded into X, the social media network owned by Mr. Musk — drew global scrutiny for generating highly sexualised deepfake images of people without their consent.
  • OpenAI announced in February 2026 that it, too, would join the military’s secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks.
  • Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.
