Until a few months ago, most social media scams were fairly predictable and well known.
Bots have run rampant on Twitter for years, posing as real account holders and occasionally tricking people into thinking they're human. For the most part, though, these “bots” are simple automations; they don't convincingly pass as real people.
Over the last year, generative AI has helped social media managers create posts that seem like they were written by a copywriter, not a bot. And we all know many of the photos and videos floating around social media platforms were created by artificial intelligence. It's become almost routine.
In the near future, however, new scams will emerge on platforms like Facebook and Twitter, and some of them will trick even the more technologically astute among us.
One reason is that AI is advancing faster than anyone could have predicted. While none of these scams have surfaced at scale yet, it's wise to stay vigilant about potential abuses.
One example: It won't be long before you start seeing incredibly lifelike “talking head” videos posted by an “influencer” who is actually an AI bot. I've already seen experiments with this type of content, though not yet an actual scam in which an AI posed as a real person without revealing the truth. None of these experiments look realistic at the moment, but it won't be long before they do.
What concerns me about AI posing as a real person on social media is that the bots have an unusual advantage over real users: they never get tired.
“Influencer bots” can create content all day long, posting on multiple accounts and liking and commenting constantly. Since there's no real governance over this type of content and AI bots could fool the gatekeepers quite easily, there won't be a way to tell a real post from one that is AI-powered.
That means AI bots could influence our views on products, services, and politics, spreading misinformation and even creating panic and market disruption. There are already plenty of human influencers spreading misinformation and conspiracy theories as it is.
Imagine an AI bot created by one company that starts spreading misinformation about a competitor. We won’t really know whether the account is legit or how to verify any of the claims with a real person.
We naturally believe what we see online; it's human nature. And when a video looks incredibly realistic, we won't know whether it's just a marketing ploy or a scam.
That's just the beginning. AI bots could also start chatting with us through these fake accounts, posing as real people, and could even call us with an authentic-sounding voice.
Of course, there are already scams like this on Facebook, but what's likely to happen next involves fake accounts run by bots that look entirely real and fool us into thinking we're talking to a person, not a bot. Once the AI bots have built up trust, they could ask us to reveal key details about our lives and eventually perpetrate other identity scams.
The scary part is that it might already be happening and we don't even know it. There may be AI-powered social media accounts running right now that are building up followers, interacting with us, and pretending to be real humans.
The question is how to prevent this from happening.
I'm not seeing any great solutions yet. It's an opportunity for security professionals to get involved and make suggestions. Watermarks? Some form of AI legislation? Today, it's remarkably easy to create a social media account without any verification of who you are, where you live, or whether you're even a real person.
What's more likely to happen? The first AI-powered scams will emerge on social media and cause some real damage. Only then will we pay attention to the dangers and scramble to enact new laws.