Social media sites like Instagram and Twitter seem increasingly plagued by “bots” — fake social media accounts that are automated, run by software instead of real people. These bots have generated a growing amount of misinformation and spam in recent years, and we at The Alestle are asking for more to be done about it.
You’ve probably seen some examples of bots, whether you’ve realized it or not. While these accounts vary in sophistication, they can usually be identified by a missing profile picture (or one depicting an attractive woman), a suspicious username containing several numbers and a low number of posts and/or followers. On Instagram, unfiltered comment sections can feature bots saying things like, “DON’T view my story!” or “DM me with your CashApp,” likely trying to get unsuspecting users to interact with a malicious link.
The bot problem takes a different form on Twitter, where instead of posing as women with suspicious links, accounts pose as people with strong opinions about music or politics. For example, the reply threads of artists like Billie Eilish and Lil Nas X are usually home to spam accounts copying and pasting replies such as, “You fell off” and “Make better music or retire,” typically within seconds of each other. While these mostly amount to annoying spam, Twitter bots have historically been blamed for more serious problems like election interference and pandemic misinformation.
Most of these bots exist to boost the analytics of certain posts and generate fake “engagement.” Their prevalence wouldn’t be so bad if everyone on the internet could distinguish what is real from what isn’t — but research suggests this is increasingly not the case. In fact, social media bots are actually getting harder to detect, thanks in part to artificial intelligence that lets them generate more human-like language. As people grow less confident in their ability to distinguish fact from fiction, social media bots appear more convincing than ever before.
While there isn’t much data on how social media bots have impacted minors specifically, studies have examined minors’ ability to distinguish real online content from fake. In 2016, 80 percent of middle schoolers surveyed couldn’t distinguish between ads and articles, and most high schoolers took photos and captions at face value. If a middle or high schooler interacted with a bot account on Instagram believing it was an attractive woman, this data suggests they would be much more susceptible to predatory behavior from whoever is behind that account.
While social media bots clearly present a problem, they don’t present so easy a solution. However, researchers have also found that students can improve their ability to identify fake online content with as little as one article or short video. If you haven’t already, we at The Alestle strongly encourage you to do some of your own research into how to protect yourself from both fake news and fake accounts.
We’re also calling on social media companies to do more to eradicate bots from their sites, including updating their detection methods to keep pace with how these bots are evolving. While Facebook claims to regularly remove billions of fake accounts from its platforms, it’s clear that sites like Facebook and Instagram still suffer from bot and misinformation problems. This process can be partially expedited by users pointing out suspicious accounts — so if you see a bot online, report it or flag it. Hopefully, we can combine our efforts to create a more authentic internet.