Instagram Accused of Promoting Child Sexual Exploitation

While Meta claims a zero-tolerance approach to child exploitation, recent reports have accused the company, which owns Facebook and Instagram, of tolerating sexually explicit images of children and of failing to remove child abuse posts despite complaints.
Meanwhile, British MPs have warned that a new government bill aimed at making the internet safer, which recently began its passage through Parliament, will not prevent the sharing of child abuse images.
Child Exploitation
According to a report published by The Guardian on April 17, 2022, Instagram failed to remove accounts that attracted hundreds of sexualized comments for posting pictures of children in swimwear or partial clothing.
Meta claims a zero-tolerance approach to child exploitation, yet accounts flagged as suspicious through the in-app reporting tool remained active without being deleted.
The newspaper cited a case in which a researcher used Instagram's reporting tool to flag an account that published sexualized pictures of children. The app's automated moderation system responded that the report had been received but, owing to the high volume of reports, could not be reviewed; its technology had nonetheless determined that the account probably did not go against Instagram's community guidelines.
Instead of removing the reported account, Instagram advised the user to block it, unfollow it, or report it again, and the account remained active as of Saturday, April 16, with more than 33,000 followers.
According to the newspaper, such accounts are often a vehicle for grooming: those involved post pictures that are technically legal, while arranging to meet online through private message groups.
The responses from the platform's in-app reporting tool raise concerns, with critics saying the content appears to have been allowed to stay up despite being linked to illegal activity.
A spokesperson for Meta, Instagram's parent company, said the company has strict rules against content that sexually exploits or endangers children, and that it removes such accounts as soon as it becomes aware of them.
“We are also focused on preventing harm by banning suspicious accounts, restricting adults from messaging children they are not connected to, and defaulting people under 18 to private accounts,” he told the newspaper.
Lyn Swanson Kennedy of Collective Shout, an Australia-based charity that monitors exploitative content globally, said: “Social media platforms rely on third-party organizations to moderate their content.”
She called on platforms to address these deeply disturbing activities, which put underage girls in particular at risk of harassment and sexual exploitation, according to The Guardian.
Imran Ahmed, chief executive of the Center for Countering Digital Hate, a non-profit think tank, said that “relying on automated detection, which we know can't keep up with simple hate speech, let alone cunning and professionalized child sexual exploitation rings, is an abrogation of the primary duty to protect children.”
Malicious Content
Similarly, The Sun reported on April 16 that campaigners had warned that child abusers use social media to lure children for sexual exploitation.
“Eight out of 10 crimes registered with police forces in England and Wales were carried out through the use of social media platforms,” the newspaper noted.
British police had recorded 8,505 offenses related to taking, making, or distributing indecent images of children.
Snapchat was used in 35 percent of those cases, while Instagram, Facebook, and WhatsApp—all owned by Meta—were used in 20, 17, and 5 percent, respectively.
Taken together, this means Meta-owned apps were used in roughly 42% of those cases.
Andy Burrows, head of online safety policy at the UK's National Society for the Prevention of Cruelty to Children (NSPCC), described such accounts as a shop window for abusers seeking out children.
“Companies should proactively identify this content and then remove it themselves,” he told The Guardian.
He criticized companies that, even when informed of this content, judge that it poses no threat to children and allow it to remain on the site.
Burrows called on British MPs to close loopholes in the proposed Online Safety Bill, which was debated in Parliament on April 19, and to craft the strongest possible legislation to protect children from preventable harm, including the sharing of child sexual abuse images.
He said MPs should force companies to deal not only with content that is illegal, but also with content that is clearly harmful yet may not meet the criminal threshold.
The Online Safety Bill is not clear or robust enough to address some forms of illegal and harmful content, according to a report by the UK Parliament's Digital, Culture, Media and Sport (DCMS) Committee.
The report also urged the government to address types of content that are technically legal but facilitate the most common forms of child sexual abuse, which it says are not currently covered by the bill.
Previously, social media platforms were required to remove harmful content only after users reported it; under the bill, companies such as Facebook, Twitter, and TikTok could face fines of up to 10% of their global revenue if they fail to remove harmful content proactively.
The British government says the bill aims to make the UK a safer place to use social media, stipulating prison sentences of up to several years for anyone who publishes threats, hate speech, or harmful misleading content.
Culture Secretary Nadine Dorries said: “The bill, the first of its kind in the world, will protect children from abuse and harm on social media, shield the most vulnerable from accessing harmful content, and ensure that terrorists have no safe space to hide online.”
Risks and Disadvantages
Last March, the National Center for Missing and Exploited Children (NCMEC), a US non-profit organization, announced that 29.3 million child abuse images were found and removed online in 2021, 35% more than it recorded in 2020.
The center said the increase in reports was not necessarily a cause for concern and could reflect improved detection by the platforms or growth in user numbers.
The vast majority of reports sent to NCMEC came from Meta: 22 million reports of child abuse images from Facebook alone, 3.3 million from Instagram, and 1.3 million from WhatsApp.
It is worth noting that Meta went through one of the most difficult periods in its history in October of last year, after former Facebook employee Frances Haugen leaked a trove of the social media giant's internal documents and testified before Congress that Facebook had pursued a policy of disinformation.
Haugen added: “Through my work, I discovered that Facebook operates in a way that harms young people and harms democracy,” stressing that the company conceals information from the public and governments and misleads the public on many topics.
She said the company, whose profits run into the billions, operates in the shadows and frequently changes its rules, adding that no one outside Facebook understands how dangerous it really is as well as those who work inside it.
In addition, Haugen stressed that the social media giant walls itself off, along with the rest of its applications, and works to ensure that no one understands how its systems really operate.
Haugen also pointed to Mark Zuckerberg, saying that Facebook's founder understands how the site's algorithms work, so such things cannot happen without his knowledge, and adding that Facebook was aware of the danger its mechanisms pose to children.
Prior to Haugen's testimony, Democratic Senator Richard Blumenthal launched a scathing attack on the company, accusing it of greed and of being blind to the pain it causes.
He also accused Facebook of putting profits ahead of the interests and health of ordinary people, especially teenagers, noting that the company allowed minors to create secret accounts on its platforms without their parents' knowledge.
Blumenthal also argued that Facebook had doubled its investments in efforts to draw minors to its platforms.
For her part, Senator Marsha Blackburn spoke about Instagram's effects on teenagers, especially girls, arguing that Facebook attached little importance to young people's safety.
Notably, internal documents cited in a New York Times report from November 2021 showed that using the photo-sharing app Instagram worsened body image issues for one in three teenage girls.
Sources
- Instagram under fire over sexualised child images
- Child abusers ‘using social media sites as a conveyor belt to groom kids’
- Sites reported record 29.3m child abuse images in 2021
- Online safety bill ‘a missed opportunity’ to prevent child abuse, MPs warn
- Facebook whistleblower Frances Haugen tells lawmakers that meaningful reform is necessary ‘for our common good’
- State attorneys general open an inquiry into Instagram’s impact on teens