An Investigation Reveals How Companies Are Using Deceptive Tactics on LinkedIn to Increase Sales

Deepfake technology has swept the internet over the past two years, so it is no surprise that new AI-created deepfakes keep emerging.
An investigation by National Public Radio (NPR), together with research by the Stanford Internet Observatory, uncovered more than 1,000 deepfake LinkedIn profiles listing more than 70 companies as employers. The profiles were being used as a marketing tool to expand those companies' sales reach.
The NPR investigation indicated that fake profiles and misleading information like this can also serve a variety of unethical and harmful purposes, such as mobilizing support for autocratic rulers, fomenting racial hatred toward minorities, and promoting white supremacy.
Deepfake AI
Two researchers from Stanford University have discovered widespread use of fake LinkedIn accounts created with profile pictures generated by artificial intelligence (AI).
These profiles target real users to drum up interest in specific companies without hitting LinkedIn's messaging limits, passing promising leads on to a real salesperson.
The story began when researcher Renee DiResta received a LinkedIn message from a person named Keenan Ramsey claiming to be a RingCentral employee. DiResta noticed oddities in Ramsey's photo, such as a single missing earring and choppy strands of hair.
The most telling sign, according to DiResta, was that the eyes were aligned dead center in the photo, a hallmark of AI-generated faces.
DiResta initially suspected that Ramsey's message was an attempt to trick her into revealing sensitive information. Then she received an email from a RingCentral employee referring to Ramsey's LinkedIn message, and when she searched for that employee's name, she found it belonged to a real person working at the company.
All of this prompted DiResta and her colleague Josh Goldstein at the Stanford Internet Observatory to launch an investigation that exposed more than 1,000 LinkedIn profiles using AI-generated faces.
Companies use profiles like these to cast a wide net for potential customers without hiring additional salespeople and without hitting LinkedIn's messaging limits.
NPR looked into the findings of DiResta and Goldstein and identified more than 70 companies listed as employers on the fake profiles.
Some companies told NPR that they had hired outside marketers to help with sales but had not authorized the use of AI-generated images, and they were surprised by the findings.
The chain of events goes like this: a bot with an AI-generated profile picture, like Keenan Ramsey, contacts a LinkedIn user, and if the target shows interest, the conversation is handed off to a real salesperson.
Really excellent piece by NPR investigating some of the behind-the-scenes dynamic of a network of 1000+ fake accounts on LNKD that @JoshAGoldstein & I found a month or so ago. The find started by accident - this random account messaged me... https://t.co/uJmLxKvLRA
— Renee DiResta (@noUpside) March 28, 2022
By using fake profiles, companies can cast a vast net across the internet without hiring more sales staff and without running up against LinkedIn's messaging restrictions, according to the investigation.
The demand for online sales leads grew during the pandemic, as it became difficult for sales teams to pitch their products in person.
NPR noted that, rather than trying to make a sale outright, bot networks, such as the Twitter bots that defended Amazon, more often spread disinformation and propaganda on behalf of companies and governments.
Heather Hinton, RingCentral's chief information security officer, said she was unaware of anyone creating fake LinkedIn profiles on RingCentral's behalf and did not approve of this practice.
In an interview with NPR, Hinton said: “This is not the way we do business. For us it was a reminder that technology is changing faster. We just have to be more vigilant about what we do and what our sellers will do on our behalf.”
As of March 24, 2022, LinkedIn had 810 million members across 200 countries.
Worrying Effects
In a related context, NPR noted that social media accounts using AI-generated faces have spread Chinese disinformation, posed as supporters of former US President Donald Trump, and masqueraded as independent media outlets pushing pro-Kremlin propaganda.
“The entry of deepfake technology into the corporate world has worrying effects. For example, deepfakes are created using AI, but AI has been shown to carry a Eurocentric bias and to reinforce Eurocentric beauty standards,” the investigation added.
“Now, in the world of work, a deluge of these traits can reinforce racial stereotypes about professionalism as well. Researchers have also noted in the past that the values embedded in mainstream ideas about professionalism are closely related to white supremacy,” the investigation says.
A study published in the Proceedings of the National Academy of Sciences of the United States of America on February 14, 2022 found that “faces created by artificial intelligence are now indistinguishable from real faces; moreover, people tend to rate artificially created faces as more trustworthy than real people's faces.”
This Eurocentric bias, combined with synthetic face generation, deepfakes on LinkedIn, and the trustworthiness effect, could widen the racial divide in hiring processes and in perceptions of trustworthiness.
Thus, while the issue of fake profiles being used to make sales pitches or recruit people may not in and of itself be a serious matter, the implications of AI's deeper involvement in human communications are troubling to say the least.
After the Stanford researchers alerted LinkedIn to the fake profiles, LinkedIn said it investigated and removed those that violated its policies, including its rules against creating fake profiles and falsifying information.
LinkedIn has also removed the pages of two leading companies listed on many of these profiles: LIA, based in Delhi, India, and Vendisys, based in San Francisco.
According to the LIA website, from which all information was recently removed, the company offered hundreds of “ready-to-use” AI-generated avatars for $300 per month each, NPR reported.
LinkedIn says that any non-genuine profile, including one that uses an image that does not represent a real user, is against its rules. Its Professional Community Policies page states that you may not use someone else's picture, or any other image that does not look like you, as your profile picture.
Tactic on LinkedIn
Fake profiles aren't a new phenomenon on LinkedIn: the platform removed more than 15 million fake accounts in the first six months of 2021, according to its latest transparency report.
These days, many companies are looking for ways to find customers online.
According to NPR, Renova Digital, a company that provides LinkedIn marketing services to other companies, advertised a “ProHunter” package on its website that included two bots with fully branded profiles and unlimited messaging for customers willing to pay $1,300 per month.
But the company removed its service description and pricing from its website after NPR inquired about it.
Philip Foti, founder of Renova Digital, told NPR in an email that he has tested AI-generated images in the past but has stopped doing so.
To help people distinguish real profiles from fake ones, V7 Labs has created an AI-powered Google Chrome extension that can detect profiles belonging to bots, with a claimed accuracy of 99.28%.
V7 Labs' Fake Profile Detector extension is intended to help authorities and ordinary Internet users detect and report profiles that spread fake news or create misleading content.
Deepfakes first appeared on the internet in late 2017, powered by Generative Adversarial Networks (GANs), a technique that pits two algorithms against each other: a generator that produces synthetic images after training on millions of real faces copied from the web, and a discriminator that searches for flaws in them.
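The adversarial loop behind a GAN can be sketched in miniature. The following is an illustrative Python toy, not a real face generator: it stands in single numbers for images, and all names and constants (`wg`, `bg`, `wd`, `bd`, the learning rates) are invented for this example. A real system like the ones behind deepfake faces uses deep convolutional networks, but the same back-and-forth applies: the discriminator learns to flag fakes, and the generator adjusts until its output fools it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

real_mean, real_std = 4.0, 0.5   # the "real faces": samples from N(4, 0.5)
wg, bg = 1.0, 0.0                # generator: x_fake = wg * z + bg
wd, bd = 0.1, 0.0                # discriminator: D(x) = sigmoid(wd*x + bd)
lr_d, lr_g, batch = 0.1, 0.02, 64

for step in range(1000):
    # Train the discriminator (the "flaw finder") a few steps:
    # push D(real) toward 1 and D(fake) toward 0.
    for _ in range(5):
        x_real = rng.normal(real_mean, real_std, batch)
        x_fake = wg * rng.normal(size=batch) + bg
        d_real = sigmoid(wd * x_real + bd)
        d_fake = sigmoid(wd * x_fake + bd)
        wd += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        bd += lr_d * np.mean((1 - d_real) - d_fake)

    # Train the generator: shift its samples so D(fake) rises toward 1,
    # i.e. so the fakes become harder to tell from the real thing.
    z = rng.normal(size=batch)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    grad_x = (1 - d_fake) * wd   # gradient of log D(x_fake) w.r.t. x_fake
    wg += lr_g * np.mean(grad_x * z)
    bg += lr_g * np.mean(grad_x)

samples = wg * rng.normal(size=1000) + bg
print("generated mean:", round(samples.mean(), 2), "real mean:", real_mean)
```

After training, the generator's samples cluster near the real data, which is the 1-D analogue of a deepfake face that the discriminator can no longer reliably reject.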
With the technology advancing at breakneck speed, deepfakes have become indistinguishable to the human eye from real photos.
Recognizing the repercussions of this phenomenon, several countries have begun taking measures to stop deepfake-driven misinformation and to regulate how companies generate such content.
Recently, the Cyberspace Administration of China proposed draft rules to regulate technologies that generate or manipulate text, images, audio, or video using deep learning.
The US state of Texas has banned deepfake videos designed to influence political elections.
Although these are only initial actions against AI-created fake media, governing bodies are still required to put in place plans to tackle new forms of deepfake content, especially those in which humans fail to distinguish between the real and the fake.
Sources
- Stanford Researchers Discover over 1000 AI-generated Deepfake profiles on LinkedIn
- That smiling LinkedIn profile face might be a computer-generated fake
- AI-synthesized faces are indistinguishable from real faces and more trustworthy [Study]
- LinkedIn - Community Report
- The Bias of ‘Professionalism’ Standards
- Chrome Extension Can Detect Fake Profile Pictures with 99.29% Accuracy