A New Kind of Risk: How Is AI Used to Generate Nude Images of Children?

Nuha Yousef


Federal authorities have filed charges against a U.S. man accused of producing over 10,000 sexually explicit and abusive images of children using a popular artificial intelligence (AI) tool.

Steven Anderegg, a 42-year-old Wisconsin resident, allegedly distributed these AI-generated images to a 15-year-old boy via Instagram.

Prosecutors revealed that Anderegg meticulously crafted approximately 13,000 “hyper-realistic” depictions of nude and partially clothed prepubescent children.

These images often portrayed children engaging in explicit acts or being victimized by adult men.

Prosecutors say Anderegg produced the images using Stable Diffusion, a widely used AI model that translates text descriptions into visual content.

Anderegg now faces four charges related to creating, distributing, and possessing child sexual abuse material.

If convicted, he could be sentenced to up to 70 years in prison.

Potential Misuses

Notably, this case marks one of the first instances where the FBI has pursued charges specifically related to AI-generated child sexual abuse material.

Child safety advocates and AI researchers have long warned about the potential misuse of generative AI, which could exacerbate the proliferation of child sexual abuse material.

Reports to the National Center for Missing & Exploited Children (NCMEC) regarding online child abuse increased by 12% in 2023 compared to the previous year, partly due to the surge in AI-generated content.

The organization’s tip line has been strained by the volume of potential child sexual abuse material (CSAM).

The NCMEC says it is deeply concerned by this trend: malicious actors can exploit artificial intelligence to create sexually explicit deepfake images or videos from any photograph of a real child, or to generate CSAM depicting computer-generated children in graphic sexual acts.

The rise of generative AI has also led to the widespread creation of nonconsensual deepfake pornography, affecting both A-list celebrities and ordinary citizens.

Minors have not been spared; instances of AI-generated explicit content have circulated within schools.

Some states have enacted laws to combat the non-consensual creation of explicit images, while the Department of Justice emphasizes that generating sexual AI images of children is illegal.

Deputy Attorney General Lisa Monaco affirmed the government’s commitment to pursuing offenders: “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

A Model Under Scrutiny

Stable Diffusion, an open-source AI model originally designed for benign image generation, has previously been associated with the production of sexually abusive material: users have repurposed it to generate explicit and abusive imagery.

Last year, the Stanford Internet Observatory also revealed that the model’s training data contained instances of child sexual abuse material.

Stability AI, the UK-based company behind Stable Diffusion, acknowledges the issue.

According to its statement, the model in question appears to be an earlier version originally developed by the startup RunwayML.

Since taking over the model’s development in 2022, Stability AI claims to have implemented additional safeguards.

The Stanford Internet Observatory’s report has exposed a disturbing flaw behind popular artificial intelligence (AI) image-generators: the dataset used to train them harbors thousands of images depicting child sexual abuse.

The findings have prompted calls for companies to take immediate action to address this alarming issue.

These same AI systems, designed for image synthesis, have inadvertently facilitated the creation of realistic and explicit depictions of fake children.

Additionally, they can transform innocuous social media photos of fully clothed teenagers into illicit nude images. The implications have raised serious concerns among educators and law enforcement agencies worldwide.

A Tainted Database

Until recently, experts believed that unchecked AI tools produced abusive imagery of children by combining two distinct categories of online images: adult pornography and benign pictures of kids.

However, the Stanford Internet Observatory’s investigation revealed over 3,200 suspected child sexual abuse images within the massive AI database known as LAION.

The database has been used to train leading AI image-makers such as Stable Diffusion. The Stanford watchdog group worked with anti-abuse charities, including the Canadian Centre for Child Protection, to verify the findings, and approximately 1,000 of the identified images were externally validated.

In response, LAION swiftly announced the temporary removal of its datasets just ahead of the report’s release on Wednesday.

The nonprofit Large-scale Artificial Intelligence Open Network emphasized its “zero tolerance policy for illegal content” and pledged to ensure the safety of its datasets before republishing them.

While these abusive images constitute only a fraction of LAION’s index of 5.8 billion images, researchers say they contribute to AI tools’ ability to generate harmful outputs.

Moreover, they perpetuate the exploitation of real victims who appear repeatedly in the illicit content.

The root of this problem lies in the rush to market of many generative AI projects, driven by intense competition.

Stanford Internet Observatory’s chief technologist David Thiel, who authored the report, emphasized that creating datasets from an entire internet-wide scrape should have been treated as a research operation, not open-sourced without rigorous scrutiny.

One of LAION’s prominent users, London-based startup Stability AI, played a pivotal role in shaping the dataset’s development.

Although newer versions of their Stable Diffusion text-to-image models mitigate harmful content creation, an older version—released last year but still embedded in various applications—remains the most popular model for generating explicit imagery, according to the Stanford report.

Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection, expressed concern about the widespread availability of this problematic model.

Stability AI responded by hosting filtered versions of Stable Diffusion and emphasizing its commitment to preventing misuse.

In summary, the intersection of AI, ethics, and safety demands urgent attention, as flawed models inadvertently perpetuate harm.

The challenge lies in balancing innovation with responsible deployment, especially in a competitive landscape where speed often takes precedence over due diligence.