How Is Google Complicit in the Genocide of Gaza's Population?

Nuha Yousef | a month ago
In the Gaza Strip, a facial recognition system has been deployed by the Israeli military, casting a digital net over Palestinians as they navigate the ravaged landscape.

This technological web, designed to identify individuals amidst the chaos, taps into two distinct facial recognition technologies.

One is developed by the Israeli firm Corsight, while the other is a feature of the ubiquitous Google Photos service.

Facial Recognition

A source within the Israeli establishment, speaking anonymously, told The New York Times that Google Photos proved more effective than the dedicated Corsight system at matching faces, and was used to build a database of suspected Hamas militants implicated in Operation al-Aqsa Flood.

The widespread application of this surveillance has precipitated the detention of numerous Gazans post-October 7, with reports emerging of harsh interrogation methods and instances of torture, often based on scant evidence.

Highlighting the human cost of such technology, The Times recounts the ordeal of Mosab Abu Toha, a Palestinian poet who was flagged by the facial recognition system, detained, and beaten by the Israeli military.

Despite the subsequent acknowledgment of an error and his release without charges, Abu Toha's experience underscores the fallibility and potential for misuse inherent in such systems.

The accuracy of facial recognition technology, particularly its diminished reliability when applied to non-white individuals, is a contentious issue.

The deployment of Google's machine-learning algorithms for military surveillance, or more alarming purposes, contravenes the tech giant's explicit policies against facilitating harm through its services.

Inquiries directed at Google regarding the alignment of its user policies with the Israeli military's use of Google Photos to compile a "hit list" were met with a noncommittal response from Google spokesperson Joshua Cruz.

Cruz emphasized the general availability and organizational utility of Google Photos, sidestepping the issue of its misuse for identifying individuals without consent. Further attempts to elicit a clear stance from Google on this matter remained unanswered.

The implications of such usage on a corporation's human rights commitments remain ambiguous.

Anna Bacciarelli of Human Rights Watch articulates the tension between corporate policy interpretations and the broader implications under international human rights law.

The surveillance practices in question, she argues, infringe upon a spectrum of human rights, including privacy, equality, freedom of expression, and assembly.

Amidst the ongoing and systematic violation of Gazan human rights, Bacciarelli advocates for a decisive response from Google, in line with the gravity of the situation and the company's stated principles.

Human Rights Violations

Google has long positioned itself as a champion of human rights, with its terms of service explicitly forbidding the use of its image platform to inflict harm.

The tech giant has repeatedly professed its commitment to global human rights standards, a stance articulated by Alexandria Walden, Google's global head of human rights, in a blog post from 2022.

Walden emphasized the company's belief in leveraging technology to promote human rights, stating that Google's products, operations, and policies are all shaped by a human rights framework aimed at expanding access to information and fostering new opportunities worldwide.

The company's stated commitments include endorsing the Universal Declaration of Human Rights, which prohibits torture, and the UN Guiding Principles on Business and Human Rights, both of which address the severe abuses that can accompany armed conflict and occupation.

However, the Israeli military's use of Google Photos, a freely accessible service, has sparked debate over the extent of corporate human rights responsibilities and Google's readiness to enforce its proclaimed standards.

Google professes adherence to the UN Guiding Principles on Business and Human Rights, which urge businesses to prevent or mitigate adverse human rights impacts linked to their operations, products, or services, even when not directly contributing to those impacts.

Furthering this stance, Walden noted Google's support for conflict-sensitive human rights due diligence among ICT firms.

This voluntary initiative guides technology companies in preventing the misuse of their offerings in conflict zones, with recommendations that include avoiding complicity in government surveillance that breaches international human rights norms.

The responsibility to prevent human rights abuses lies with both Google and Corsight, according to Bacciarelli.

In light of recent reports, there is an expectation for Google to halt the use of Google Photos in systems that may infringe on human rights.

Amidst these concerns, Google employees engaged in the "No Technology for Apartheid" campaign — a movement protesting the Nimbus project — have urged the company to block the Israeli military from employing facial recognition technology within Google Photos, which they argue contributes to the conflict in Gaza.

The group contends that the Israeli Occupation Forces' exploitation of consumer technologies like Google Photos, equipped with facial recognition capabilities, for surveillance purposes demonstrates a willingness to leverage any available tech.

They assert that unless Google intervenes to prevent its products from aiding in acts deemed as ethnic cleansing, occupation, and genocide, the company is complicit.

These Google workers demand an immediate cessation of Project Nimbus and all support for the Israeli government and military's actions in Gaza.

Project Nimbus

Google's engagement in Israel with its "Project Nimbus" contract, which provides advanced cloud computing and machine learning tools to the Israeli military, has raised questions about the alignment of the company's business practices with its stated human rights principles.

The project, unlike Google Photos, is a bespoke software initiative tailored to the needs of the Israeli government.

The capabilities of both Nimbus and Google Photos in facial recognition are a testament to Google's extensive investment in machine learning technology.

The sale of these advanced tools to a government that faces ongoing allegations of human rights violations and war crimes appears to be at odds with Google's own AI guidelines.

These guidelines are designed to prevent harm and prohibit AI applications that conflict with established international law and human rights norms.

However, Google has indicated that its principles are more limited in scope than they might initially seem, applying only to custom AI projects rather than the broader use of its products by external entities.

In 2022, a spokesperson for the company clarified to Defense One that the military could use Google's technology quite extensively.

The real-world implications of Google's public commitments to ethical AI use remain uncertain. Ariel Koren, a former Google employee who resigned in 2022 after raising concerns about Project Nimbus, has continued to press the company on the issue, while Google itself has remained silent. That silence is part of a larger pattern in which Google sidesteps accountability for the end uses of its technologies.

Koren, an activist now involved with the No Tech for Apartheid campaign, argues that complicity in human rights abuses of this kind would violate Google's own AI Principles and Terms of Service. Despite the lack of official statements, Google's actions suggest that the company's broad AI ethics principles carry little weight in the business decisions of its cloud services division. According to Koren, even the possibility of being implicated in severe human rights violations does not deter Google's pursuit of profit.