The debate over the use of artificial intelligence in military operations is intensifying in the United States, with Defence Secretary Pete Hegseth set to meet Anthropic CEO Dario Amodei to discuss the company’s stance on defence-related AI applications.
The meeting comes at a time when Anthropic stands apart from other major AI firms, as it has not agreed to supply its technology to the US military’s new internal AI network. This has drawn attention to growing concerns over the ethical risks of deploying advanced AI systems in national security and combat settings.
Ethical Concerns Over Military AI
Anthropic, known for developing the chatbot Claude, has consistently positioned itself as a safety-focused AI company. CEO Dario Amodei has previously warned about the dangers of unchecked government use of artificial intelligence, particularly in areas such as autonomous weapons and large-scale surveillance systems.
In a recent essay, Amodei highlighted the risks posed by highly advanced AI capable of analysing massive amounts of public data, stating that such systems could potentially monitor public sentiment and detect dissent, raising serious civil liberties concerns.
Pentagon’s AI Push
The US Department of Defense has been rapidly integrating artificial intelligence into its operations. Last year, the Pentagon awarded defence contracts worth up to $200 million each to four leading AI firms, Anthropic, Google, OpenAI, and Elon Musk’s xAI, to develop AI capabilities for military use.
While Anthropic was the first AI company approved to operate within classified military networks, its peers currently focus largely on unclassified applications. Recently, the Pentagon has highlighted partnerships with companies such as Google and xAI, signalling a growing emphasis on expanding AI integration in defence infrastructure.
Defence Secretary Hegseth has also made clear that the US military intends to use AI without what he describes as “ideological constraints,” stressing the importance of developing systems capable of supporting combat operations.
Tensions Over Regulation and Safety
Anthropic’s cautious approach has occasionally put it at odds with policymakers, most recently in ongoing debates over AI regulation and export controls. The company has advocated for stronger safeguards and third-party oversight to reduce national security risks associated with advanced AI systems.
Experts note that while Anthropic’s safety-driven stance may limit its influence in defence contracts, it also reflects a broader industry divide between rapid AI deployment and ethical risk management.
AI’s Expanding Role in Warfare
The ongoing discussions highlight a larger global shift as artificial intelligence becomes increasingly embedded in military operations, ranging from administrative functions to battlefield decision-making systems. Analysts say the use of AI in defence is already well established and is likely to continue expanding despite concerns over potential risks, including lethal autonomous weapons and surveillance capabilities.