AI as the weapon, soldier, and commander of war.
Google AI Partners with the DoD in 2018
In March 2018, Gizmodo revealed that Google had contracted with the US Department of Defense to provide AI for military applications as part of Project Maven, which uses machine learning to analyze drone footage. Because humans can’t review footage as fast as machines can, the AI identifies objects: vehicles, people, or anything else falling within its 38 unique object classifications. Google employees were upset about the contract, and while former Google CEO Eric Schmidt insisted the joint Google AI + DoD project was “not related to combat uses,” Google ultimately said it planned to discontinue its AI military contract.
Project Maven
Project Maven is also known as the Algorithmic Warfare Cross-Functional Team (AWCFT). According to a DoD article from July 2017, using AI in warfare was discussed during the Defense One Tech Summit. The idea behind Project Maven is for humans and computers to work together on drone surveillance.
“People and computers will work symbiotically to increase the ability of weapon systems to detect objects.” – USMC Col. Drew Cukor, then head of Project Maven
https://archive.ph/3Q17M
Cukor claimed AI was needed for object classification in the fight against Daesh because drones produce an immense volume of data, and described Project Maven as an initial step towards broader AI military applications.
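Maven’s actual models are classified, but the underlying technique is standard frame-by-frame object detection. Below is a minimal sketch of that general idea, using an off-the-shelf pretrained detector as a stand-in; the model, labels, and threshold are illustrative assumptions, not Maven’s system.

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# A pretrained COCO detector stands in for a drone-footage model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(frame: Image.Image, score_threshold: float = 0.8):
    """Return (class_id, score, box) tuples for one video frame."""
    with torch.no_grad():
        out = model([to_tensor(frame)])[0]
    return [
        (int(c), float(s), b.tolist())
        for c, s, b in zip(out["labels"], out["scores"], out["boxes"])
        if s >= score_threshold
    ]

# Usage: loop detect_objects over video frames, flagging detections
# (vehicles, people, etc.) for a human analyst to review.
```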
OpenAI and Meta’s Llama Policies Rescind Military Restrictions
On January 10, 2024, The Intercept reported that OpenAI (maker of ChatGPT) had changed its policies to allow the product to be used by militaries: the ban on “military and warfare” use is gone. OpenAI spokesman Niko Felix told The Intercept the guiding principle is now “Don’t harm others,” which is ambiguous language. Meta followed suit, removing a Llama (Large Language Model Meta AI) policy that had prohibited military use. Meta is partnering with Scale AI, a machine learning startup and defense contractor, to build a chat tool for military and intelligence operations.
This Meta/Scale AI project is known as Defense Llama. While Llama is an open-source tool, Defense Llama is only for government use. It has been trained on military doctrine, international humanitarian law, and DoD-aligned guidelines. Even so, experts say Defense Llama gives “terrible advice,” and even asks unusual, ignorant questions a knowledgeable human wouldn’t consider. While Defense Llama is problematic, Scale AI has been working with the US Air Force and Army since 2020, and the Pentagon continues to explore which LLMs can be used despite their issues.
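Scale AI’s pipeline isn’t public, but “trained on military doctrine” most plausibly means supervised fine-tuning of a base Llama model on a domain corpus. A minimal sketch of that general technique with Hugging Face transformers follows; the model name, data file, and hyperparameters are placeholder assumptions, not Scale AI’s actual setup.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"          # illustrative base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# domain_corpus.txt is a hypothetical file of doctrine/guideline text.
data = load_dataset("text", data_files="domain_corpus.txt")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    # mlm=False gives standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```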
AI Targeting
A February 2024 article by Engadget reveals that Project Maven’s ML algorithms identified more than 85 air strike targets in the Middle East that month, located in Iraq and Syria and including Yemeni vessels in the Red Sea. An April 2024 piece by The Conversation discusses the IOF’s AI targeting programs, named Project Lavender and Where’s Daddy. Lavender uses AI to identify anyone meeting the IOF’s definition of a Hamas operative; Where’s Daddy geographically tracks a target to his home, where the target is then attacked. Both Lavender and Where’s Daddy are in use for the Palestinian Genocide.
In a Bloomberg article from February 2024, retired USAF General Jack Shanahan anticipated the US military needing 5 years to feel comfortable using AI in warfare, out of fear that something will go wrong and the technology will then be shelved for a long time because of the failure. So far, AI targeting has been used in drones, fighter jets, and bombers, and the military has even modeled a way to use it from submarines. Defense contractor Sentient Digital Inc. (SDI) has talked of AI drone swarms that operate like insects, acting as one unit to gather data and report back to the hive.
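SDI hasn’t published its algorithms, but insect-like swarming is usually modeled with decentralized flocking rules: each drone reacts only to nearby units, and there is no central controller. A minimal boids-style sketch under that assumption (all radii and weights are illustrative):

```python
import numpy as np

def step(pos: np.ndarray, vel: np.ndarray, dt: float = 0.1) -> None:
    """Advance an (N, 2) swarm one tick, in place."""
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists > 0) & (dists < 50.0)   # local sensing radius
        if not neighbors.any():
            continue
        cohesion = offsets[neighbors].mean(axis=0)        # move toward the group
        alignment = vel[neighbors].mean(axis=0) - vel[i]  # match neighbors' heading
        too_close = neighbors & (dists < 10.0)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    pos += vel * dt

# 20 drones with random starting positions and headings converge
# into a single coordinated flock after repeated local updates.
rng = np.random.default_rng(1)
pos, vel = rng.uniform(0, 100, (20, 2)), rng.normal(0, 1, (20, 2))
for _ in range(100):
    step(pos, vel)
```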
AI Targeting Success Rates
AI targeting algorithms can be accurate only about 25% of the time while reporting roughly 90% confidence in their answers; the AI is “confidently wrong,” as USAF Major General Daniel Simpson told Defense One in a December 2021 article. The AI needs more training data, but it’s difficult to consistently acquire the right data from a war zone, so synthetic input is used instead: artificially generated images and video derived from real data.
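“Confidently wrong” describes a calibration gap: the model’s self-reported confidence far exceeds its real accuracy. A small sketch of how that gap is measured, using the article’s 25%/90% figures as illustrative numbers:

```python
import numpy as np

def confidence_vs_accuracy(confidences: np.ndarray, correct: np.ndarray):
    """Mean self-reported confidence vs. actual accuracy over a test set."""
    return float(confidences.mean()), float(correct.mean())

rng = np.random.default_rng(0)
n = 1_000
confidences = rng.normal(0.90, 0.03, n).clip(0.0, 1.0)  # model claims ~90% certainty
correct = rng.random(n) < 0.25                          # but is right only ~25% of the time

mean_conf, accuracy = confidence_vs_accuracy(confidences, correct)
print(f"mean confidence {mean_conf:.0%}, actual accuracy {accuracy:.0%}")
# The ~65-point gap is the miscalibration Simpson describes.
```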
Autonomous Warfare With AI-Powered Drones, Fighter Jets, and Tanks
While drones typically require a human operator, a July 2021 article from The Washington Post states that AI-controlled drones are already being used in combat. Autonomous drones have killed people in Libya, and AI suicide drones, with munitions supplied by Turkiye and Isr—l, are being used by Azerbaijani forces in Armenia. One of the major flaws with AI drones is the data sets they’re trained on: bad training data means the wrong targets, such as children, can be killed.
The USAF is also planning to deploy fully automated aircraft by 2030. After a ride-along in an X-62A Vista (an F-16 modified for AI testing and training), Air Force Secretary Frank Kendall is hopeful about fully AI-powered fighter jets. The AI-flown jet performed about evenly against a highly experienced human pilot, and Kendall believes it can defeat a less experienced one.
“Computers don’t get tired. They don’t get scared. They’re relentless. It’s easy to see a situation where they’re going to be able to do this job, generally speaking, better than humans can do. They also can handle large amounts of data.” – Kendall, speaking at an AI and national security conference hosted by the Special Competitive Studies Project
https://archive.ph/MClbE
The M1 Abrams tank is testing an AI targeting system that automates tasks and speeds up object detection, so human tank crews can engage up to three targets at a time instead of one. As of a February 2023 article from The War Zone, the AI on an Abrams still requires a tank commander to pick targets, but this could be the first step towards autonomous, crewless ground vehicles.
Robot Attack Dogs
A military.com article from August 2023 reveals that the US Army wants to test the Sig Sauer XM7 rifle on unmanned robot dogs. The M4A1 rifle has already been tested on the Q-UGV dogs (the Vision 60 Quadruped Unmanned Ground Vehicle manufactured by Ghost Robotics). Robot dogs are already used by the military for perimeter security, surveillance, reconnaissance, and target acquisition. Some robotics companies oppose these modifications and have asked militaries across the world not to weaponize their machines, but defense contractors have ignored their pleas.
Responsible AI Warfare
During a February 2023 international summit in The Hague, more than 60 countries, including the US and China, signed a legally non-binding “call to action” for responsible military use of AI. Russia wasn’t invited to the event, Ukraine didn’t attend, and Isr—l didn’t sign the statement. Human rights experts, however, said concerns about autonomous drones killing without human intervention weren’t addressed.
“We want to emphasize that we are open to engagement with any country that is interested in joining us.” – U.S. Under Secretary of State for Arms Control Bonnie Jenkins
https://archive.ph/oPEG6
Jenkins, however, failed to give a clear definition of the “appropriate” human judgment the declaration calls for.