Why Are U.S. Special Forces Prioritizing the Use of Deepfake Technology?

Nuha Yousef | a year ago


This week, The Intercept reported that the U.S. Special Operations Command, which is responsible for covert military operations, is planning to launch online propaganda campaigns using deepfakes.

The federal plans revealed by the outlet also involve hacking internet-connected devices to eavesdrop on foreign populations and gauge their responses to propaganda.

The move comes at a time when there is intense global debate about the ethics and effectiveness of technologically sophisticated disinformation campaigns.

While the U.S. government warns of the dangers of deepfake technology and builds tools to counter it, the document issued by the Special Operations Command (SOCOM) provides an unprecedented example of the U.S. government’s willingness to use controversial technology in an aggressive manner.

 

Russia’s Deployment

This interest in deepfakes by Special Operations Forces follows years of international concern about deepfake videos and digital deception campaigns by foreign adversaries.

Previously, despite scant evidence that Russia's digital efforts to influence the 2016 election had any real impact, thirteen Russians and three companies were charged in 2018 with attempting to meddle in the election by using Facebook groups, social media advertisements, and provocative images to bolster support for Donald Trump and sow division across the United States.

Although deepfakes have been primarily used for amusement and pornography, there is a potential for more serious uses.

For example, a sloppy deepfake of Ukrainian President Volodymyr Zelenskyy ordering troops to surrender began circulating on social media platforms at the start of Russia’s invasion of Ukraine.

Apart from ethical issues, the SOCOM document does not address the legality of using deepfakes in wartime, a question that remains unresolved.

Russia is increasingly deploying deepfakes for domestic propaganda about the war in Ukraine, a strategy that has at times been revealed quite openly.

According to a November 2022 report in the Süddeutsche Zeitung, Vladimir Putin was shown a deepfake video of German Chancellor Olaf Scholz at a technology exhibition.

The video featured a lifelike Olaf Scholz delivering a speech he never actually gave; rather than capturing his own words, a programmer or an artificial intelligence put them in his mouth. Such forgeries are intriguing on multiple levels.

Deepfake technology is a double-edged sword: it can be used to boost morale within a country, but it can also be misused to spread fraudulent video messages.

For instance, in the summer of 2022, Berlin's then-mayor, Franziska Giffey (SPD), fell victim to a fake video call purporting to come from Kyiv Mayor Vitali Klitschko, created using deepfake technology. The incident, later claimed by a Russian comedy duo, demonstrates the potential for misuse of the technology.

Despite concerns over the potential misuse of deepfake technology, the United States Special Operations Command (SOCOM) views it as a promising weapon in the information war.

 

U.S. Exploitation

In fact, SOCOM is actively seeking partners to develop advanced capabilities for covert operations that use deepfakes and similar technologies to generate and disseminate messages through non-traditional channels, according to The Intercept.

SOCOM has outlined its aspirations for next-generation special operations technology in a procurement document that lists the capabilities it hopes to acquire in the near future. These include lasers, holograms, and the hacking of internet-connected devices, among other advanced technologies.

The document obtained by The Intercept, first published by SOCOM's Directorate of Science and Technology in 2020, amounts to a wish list of next-generation special operations tools for the 21st century, a series of futuristic technologies meant to help the country's elite soldiers operate more effectively.

According to The Intercept, last October the Special Operations Command quietly released an updated version of its wish list with a new section on advanced technologies for use in Military Information Support Operations (MISO), a Pentagon euphemism for its global propaganda and deception efforts.

The added section illustrates SOCOM's desire for new and improved means of carrying out influence operations, digital deception, disruption of communications, and disinformation campaigns at the tactical and operational levels.

The Special Operations Command also seeks next-generation capabilities to collect disparate data from public and open-source information streams, such as social media and local media, to enable military information support operations to shape and directly influence operations.

 

The U.S. Isn’t Ready

A study by the DeepTrust Alliance titled “Deepfake, Cheapfake: Is the Internet’s Next Earthquake?” outlines the “serious repercussions” of deepfakes for society, emphasizing the social, political, and emotional toll they impose on individuals, organizations, and governments.

Given the far-reaching effects of deepfakes, politicians and policymakers have long struggled to find solutions and safeguards.

To address deepfake disinformation effectively, the study suggested that parties need to adopt a coordinated and comprehensive strategy that involves a combination of technological tools and procedures, legislative policies, and consumer education campaigns.

However, given the extent of the problem, it may be too late for lawmakers in the U.S. or EU to prevent the development and marketing of deepfake technology.

Some U.S. states have introduced deepfake safeguards and outright bans, as reported in a University of Illinois Law Review article. In 2019, Texas became the first state to prohibit deepfakes intended to influence an election, while Virginia outlawed deepfake pornography the same year.

In California, within 60 days before an election, it is forbidden to create “videos, photos, or audio of politicians doctored to mimic authentic material.”

However, injunctions against deepfakes are likely to face First Amendment arguments in the U.S., and their effectiveness may be limited due to a lack of jurisdiction over developers outside of the country.

A document from the global law firm Hogan Lovells states that Europe has not directly addressed the legal landscape around deepfakes: there are presently no European regulations or national laws in the U.K., France, or Germany specifically dedicated to deepfakes.

The European Commission intends to combat online misinformation in Europe, including deepfakes, through a variety of measures, among them a self-regulatory Code of Practice on Disinformation for online platforms, according to the University of Illinois article.