Google Drops Out of Pentagon Drone Swarm Contest

Hundreds of Google’s AI researchers have raised broad objections to the company’s cutting-edge technology being used for classified military work

Google abruptly dropped out of a $100 million Pentagon prize challenge to create technology for voice-controlled, autonomous drone swarms after it was among the successful submissions, according to people briefed on the matter. 

The company notified the government it wouldn’t participate further in the initiative, which seeks to create the technology needed to control drone swarms, on Feb. 11 — a few weeks after the proposal was submitted, according to another person briefed on the matter.

The decision followed an internal ethics review, according to records referencing it that were reviewed by Bloomberg News. Alphabet Inc.’s Google officially cited a lack of “resourcing” when it pulled out of the contest, according to the records.

Several Google workers involved in the effort expressed disappointment at the decision to withdraw from the contest, the records show. It’s not clear how widely Google’s initial entry in the autonomous drone tech challenge was known throughout the rest of the company. But hundreds of Google’s AI researchers have raised broad objections to the company’s cutting-edge technology being used for classified military work.

The people briefed on the matter asked not to be named because of the sensitivity.

Google’s participation in the competition and its subsequent withdrawal haven’t been previously reported. 

“After reviewing this project, we decided not to pursue a bid so we can stay focused on the initiatives where our models are most effective,” a spokesperson for Google Public Sector told Bloomberg in a statement. The company evaluates hundreds of government opportunities every year and prioritizes bidding on projects that best align with current resources and technical strengths, the spokesperson said. The spokesperson didn’t address questions about an internal ethics review.

The Pentagon initiative, jointly led by Special Operations Command’s Defense Autonomous Warfare Group and the Defense Innovation Unit, foresees commanders being able to direct swarms of drones by converting voice commands such as “left” into digital instructions.

US Special Operations Command referred questions to the Pentagon, which declined to comment. The Defense Innovation Unit didn’t respond to a request for comment. 

OpenAI, Palantir and xAI are among the companies that have been picked to compete in the contest, which is set to unfold in multiple stages over six months. Later stages of the competition call for developing “target-related awareness and sharing” and “launch to termination.”

Anthropic PBC also applied for the contest but wasn’t selected. Despite Chief Executive Officer Dario Amodei’s reservations about building large language models into fully autonomous weapons, the company assessed that its submission didn’t cross his red lines and that participating would have aided the development and testing of the controversial new technology.

Google’s decision to withdraw from participating in the six-month challenge comes as leading AI companies and their workforces wrestle with the implications of helping the US to develop autonomous lethal weapons systems. Disagreements over the role of technology in powering potentially risky new weapons represent a recurrent fault line between the Pentagon and Silicon Valley. 

On Monday, a letter signed by hundreds of Google AI researchers was sent to Sundar Pichai, Alphabet’s CEO, urging him to refuse to make the company’s AI systems available for classified workloads for US defense missions, according to organizers of the effort. The Information subsequently reported that Google and the Pentagon have signed a new AI deal for “any lawful government purpose” and that Google would have no veto over lawful government operational decision-making, including for classified work.

A Google spokesperson told Bloomberg that the company had amended its existing contract with the Pentagon and that providing access to the company’s AI models stops short of developing bespoke models for the Pentagon and represented what the spokesperson called “a responsible approach to supporting national security.”

“We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight,” the spokesperson added.

The Pentagon declined to comment on whether it had struck an agreement to amend its existing deal with Google, or on the terms of any such agreement.

Google workers in 2018 also protested the company’s use of cutting-edge technology for Project Maven, a Pentagon initiative to use AI to analyze drone footage and, ultimately, to put the technology at the heart of how America wages war. The unrest eventually prompted the company to pledge not to build weapons and other potentially harmful technologies. Google said at the time that its work on Project Maven was intended for “non-offensive purposes,” but in the face of protests and concerns that such technology could lead to lethal outcomes, the company decided not to renew its Maven contract.

Since the Project Maven controversy, Google’s leaders have slowly changed their position toward working with the Pentagon. In 2022, for instance, the company launched a new subsidiary specifically aimed at providing cloud, AI and machine learning tools for US public sector customers, including the Pentagon. At the time, the company indicated its work with the Defense Department would fall far short of lethal applications.

In 2025, Google dropped its seven-year objection to working on weapons technology altogether and has hired a number of US military veterans who specialized in special operations work. The company has more recently started to increase its involvement in AI for defense, making its AI agents and chatbot available to all Pentagon workers on unclassified networks. The Pentagon is also seeking to bring Google’s Gemini agent onto classified and top secret networks.

Google’s initial participation in a contest submission for drone swarming technology indicated a further step toward embracing work on potentially deadly weaponry. A senior Pentagon official, describing autonomous vehicles as the future of war fighting, said in a January news release that the contest “will deliver a human-machine interaction layer that will directly impact the lethality and effectiveness of these systems.”

Scott Frohman, a Google federal sales executive who works on defense and intelligence programs and whose leaked emails were at the heart of the 2018 dispute over Project Maven, initially helped develop the company’s part in the Pentagon’s drone swarm contest ahead of the Jan. 25 submission deadline.

Frohman didn’t reply to a message seeking comment.

Source: MSN