This Is How Extortion of Children Was Detected by AI Tools

Nuha Yousef

In a landmark move, leading AI firms have committed to a framework designed to thwart the generation and proliferation of AI-created child exploitation content.

Notable industry players including Google, Meta, OpenAI, Microsoft, Amazon, Anthropic, Civitai, Metaphysic, Mistral AI, and Stability AI have vowed to adhere to Safety by Design standards.

The initiative is anchored by a document titled Safety by Design for Generative AI: Preventing Child Sexual Abuse, authored by Thorn, an entity committed to protecting children from sexual abuse, in collaboration with All Tech Is Human, an organization striving to address the societal impacts of technology.

This document offers practical measures for AI creators, service providers, data hosts, social networks, and search engines to mitigate the dangers generative AI may pose to minors.

Child Abuse Risk

The use of generative AI to fabricate and disseminate child exploitation material poses a significant and growing risk to children.

A Thorn blog post outlines the pledge's goals, highlighting how generative AI advancements furnish malicious actors with novel means to create and circulate such material while evading detection.

It is alarming that AI can be used to alter existing images and videos into new forms of abuse material, to transform innocuous images of children into sexualized depictions, or to generate entirely synthetic exploitative content.

Moreover, generative AI exacerbates the difficulties law enforcement faces in identifying victims.

Alaa Sabry, a Norway-based tech engineer, says that the task of pinpointing victims, already daunting, is further complicated by AI, which vastly expands the volume and variety of content that must be examined.

"AI also presents new avenues for victimizing and re-victimizing children, as malefactors can manipulate innocent images of children to craft illicit material," Sabry told Al-Estiklal.

"Additionally, AI can amplify grooming and extortion schemes by providing predators with sensitive information. Such information can be disseminated broadly among nefarious individuals. Generative AI models can furnish predators with details like instructions for direct abuse, coercion tactics, methods for erasing evidence and altering abuse artifacts, or guidance on preventing victim disclosure," Sabry added.

"AI can fuel an increased demand for child exploitation material. The rising presence of AI-generated child exploitation content numbs society to the sexualization of children and heightens the demand for such material," he concluded.

Significant Rise

Studies indicate a correlation between consumption of this material and physical offenses, and the normalization of such content inflicts additional harm on children.

In a recent report, The Guardian's technology editor, Dan Milmo, highlighted a disturbing trend of child exploitation through artificial intelligence, brought to light by a child protection charity.

The report cites the Internet Watch Foundation (IWF), which found a dark web directory promoting AI tools designed to digitally remove clothing from photos of children.

These images are then weaponized to coerce the children into providing more explicit material.

The IWF emphasized that this is the first known instance of criminals being advised to employ AI for such malicious purposes.

The foundation also noted a surge in blackmail cases where victims are compelled to send explicit images under the threat of public exposure.

An anonymous manual circulating online, roughly 200 pages long, details how its author extorted nude photos from minors.

Last month, The Guardian reported that the Labour Party is contemplating a prohibition on AI software capable of generating nude images of people without their consent.

The IWF's annual report declared 2023 the worst year on record, with more than 275,000 web pages hosting child sexual abuse imagery discovered, including a significant increase in the most severe "Category A" content.

Over 62,000 pages featured this extreme content, a rise from the previous year's 51,000.

Additionally, the IWF identified 2,401 instances of self-generated abuse material produced by children who were coerced into recording themselves, with victims as young as three years old.

Analysts have observed such abuse occurring within domestic settings. Susie Hargreaves, the IWF's CEO, stressed that these predators pose an imminent danger, especially to very young children, underscoring the urgency of preventative discussions.

Recent Ofcom research revealed that a significant number of very young children own mobile phones and engage with social media.

In response, the government plans to consult on measures such as banning smartphone sales to under-16s and raising the minimum age for social media use to 16.

Indistinguishable

The proliferation of online child sexual abuse material is becoming an increasingly alarming issue for authorities, and the advent of AI-generated content is poised to exacerbate the situation.

A recent study from Stanford delves into the online child protection network's capabilities and limitations.

Researcher Shelby Grossman of Stanford's Internet Observatory notes that the nine-month investigation assessed the reporting platforms, the National Center for Missing and Exploited Children's processing of those reports, and law enforcement's subsequent investigations.

Grossman points out that the challenge for law enforcement is not merely the sheer number of reports but the difficulty of discerning which should take precedence: investigators struggle to identify the cases most likely to lead to the rescue of a child in immediate danger.

The National Center for Missing and Exploited Children operates the CyberTipline, a mechanism for the public and corporations to report suspected online child sexual exploitation. However, many reports submitted are either incomplete or erroneous.

Grossman explains that a significant number of CyberTipline reports are triggered by memes that technically fall under the definition of child sexual abuse material but are often shared without malicious intent; platforms are nonetheless legally obligated to report them.

Data from the center indicates that nearly half of the CyberTipline reports in 2022 were deemed "actionable."

Grossman adds, "Determining whether an image is AI-generated or depicts an unidentified child in need of rescue is a daunting task for law enforcement, further straining an already burdened system."