A former Google employee has accused the company of helping Israeli forces use artificial intelligence in Gaza, filing a whistleblower complaint that adds to mounting pressure over the tech giant’s military contracts.
The complaint, submitted to the Securities and Exchange Commission, describes a support request sent from an Israel Defense Forces (IDF) email address to Google’s cloud division. The request came from someone working at CloudEx, a contractor linked to the Israeli military. Internal documents show the customer needed help fixing a bug in Google’s Gemini AI system, which was being used to scan aerial footage: the software kept failing to detect drones, soldiers, and other objects in the images.
Google’s support team responded with troubleshooting suggestions, ran internal tests, and eventually resolved the request. A second Google employee who worked on the IDF’s cloud account was copied on the exchange, according to the filing. The whistleblower claims the footage related to Israeli operations in Gaza during the current war, though the complaint offers no direct proof of this.
Google rejected the allegations. A company spokesperson told The Washington Post that the interaction broke no ethical rules because the account spent only a few hundred dollars a month on AI products, too little, the spokesperson said, to support any serious use of AI. Support staff simply answered a routine question with standard help-desk information and provided no further technical assistance, Google added.
The former employee, speaking anonymously for fear of retaliation, accused Google of applying a double standard: internal AI ethics reviews are normally strict, they said, but when it came to Israel and Gaza, the opposite was true. The complaint suggests Google may have misled regulators and investors by contradicting its own publicly filed policies. Anyone can file an SEC complaint, however, and doing so does not automatically trigger an investigation.
The filing arrives as Google faces growing scrutiny over Project Nimbus, its $1.2 billion cloud computing deal with the Israeli government. In February 2025, Google revised its AI principles, removing earlier pledges not to build AI for weapons or for surveillance that violated internationally accepted norms. The company said it wanted to help democratically elected governments lead in global AI development.
Nearly 200 workers at Google DeepMind, the company’s AI lab, signed a letter in May 2024 urging Google to drop its military contracts. The letter, reviewed by TIME magazine, expressed concern that the lab’s technology was being sold to militaries engaged in active wars, in violation of Google’s own AI rules. The signatures represented about five percent of DeepMind’s staff, a small share, but one that signals notable unrest in an industry where top machine learning talent is highly sought after.
The DeepMind letter, dated May 16, said workers were troubled by recent reports of Google’s contracts with military organizations. It did not name specific militaries, noting that it was not about the politics of any particular conflict. But it linked to an April 2024 TIME report revealing that Google holds a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense under Project Nimbus. The letter also pointed to reports claiming that the Israeli military uses AI for mass surveillance and target selection in its Gaza bombing campaign, and that Israeli weapons firms are required to buy cloud services from Google and Amazon.
The letter stated that any involvement with military and weapons manufacturing damages Google’s standing as a leader in ethical and responsible AI and contradicts the company’s mission and AI Principles. Those principles say Google will not pursue AI applications likely to cause overall harm, contribute to weapons whose principal purpose is to cause injury, or build technologies that violate widely accepted principles of international law and human rights. The letter asked DeepMind’s leaders to investigate claims that militaries and weapons makers use Google Cloud, to terminate that access, and to create a new governance body to prevent military clients from using DeepMind technology in the future.
Three months after the letter circulated, Google had done none of these things, according to four people familiar with the matter. One said they had received no meaningful response from leadership and were growing increasingly frustrated. When DeepMind held a town hall in June, executives were asked about the letter, and the lab’s chief operating officer, Lila Ibrahim, responded. She told employees that DeepMind would not design or deploy AI applications for weapons or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy. Ibrahim added that she was proud of Google’s track record on safe and responsible AI, which was why she had joined and stayed at the company.
When Google bought DeepMind in 2014, the lab’s leaders secured a major promise from the search giant: their AI technology would never be used for military or surveillance purposes. For years, the London lab operated with a high degree of independence from Google’s California headquarters. But as the AI race heated up, DeepMind was pulled more tightly into Google. A 2021 bid by lab leaders to win greater autonomy failed, and in 2023 the lab merged with Google Brain, the company’s other AI team, bringing it closer to the tech giant’s core. An independent ethics board that DeepMind leaders had hoped would govern the lab’s technology met only once and was replaced by Google’s umbrella ethics policy, the AI Principles. While those principles promise that Google will not develop AI likely to cause overall harm, they explicitly allow the company to develop technologies that may cause harm if it decides the benefits substantially outweigh the risks. They also do not rule out selling Google’s AI to military clients.
This has seen DeepMind technology integrated into Google Cloud software and sold to militaries and governments, including Israel and its Ministry of Defense. In April 2024, one DeepMind employee told TIME that while the lab might once have been reluctant to work on military AI or defense contracts, the choice was no longer its to make. Several Google employees also told TIME that, because of customer privacy restrictions, the company has little visibility into how government clients use its infrastructure. That could make it difficult or impossible to determine whether its acceptable use policy, which prohibits using Google products to engage in violence that can cause death, serious harm, or damage, is being violated.
Google maintains that Project Nimbus covers workloads run on its commercial cloud by Israeli government ministries that agree to comply with its terms of service and acceptable use policy, and that the work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services. But this response does not deny that Google’s technology could enable violence or surveillance in violation of internationally accepted norms, according to the May letter that circulated within DeepMind. Google’s statement on Project Nimbus is so specifically unspecific that employees are none the wiser about what it actually means, one letter signatory told TIME.
The dispute highlights a growing tension inside tech companies between commercial interests and ethical commitments. As AI systems grow more powerful and more pervasive, the gap between proclaimed values and actual practice risks widening unless oversight extends beyond what internal company review alone can provide. The question facing Google and other tech giants is whether self-regulation can hold when government contracts worth billions of dollars are at stake, and whether workers inside these companies have any real power to shape how their creations are used in conflict zones around the world.
The views and opinions expressed in this article/paper are the author’s own and do not necessarily reflect the editorial position of Paradigm Shift.
Syed Salman Mehdi is a seasoned freelance writer and investigative journalist with a strong foundation in IT and software technology. Renowned for his in-depth explorations of governance, regional conflicts, and socio-political transformations, he focuses on South Asia and the Middle East. Salman’s rigorous research and unflinching analysis have earned him bylines in esteemed international platforms such as Global Voices, CounterPunch, Dissident Voice, Tolerance Canada, and Paradigm Shift. Blending technical expertise with a relentless pursuit of truth, he brings a sharp, critical perspective to today’s most pressing geopolitical narratives.