The Top Ways Apps are Stopping Bots and Fake Accounts

Every year, we spend more and more time on the Internet. With the arrival of COVID-19, millions of people are using it from home more than ever before. From work to online shopping, social media, and dating, apps integrate with our lives in new ways all the time. But as more users join these apps and websites, the places where we go to consume content and interact with people become hubs for online scams, bots, and fake accounts.

Campaigns that prey on people's trust are rampant online. It might be an advertising campaign that astroturfs social media websites with bots or fake accounts to make a specific product look extremely popular. It could be a fake email from your bank asking you to reset your password. Unfortunately, these campaigns are quite lucrative for the scammers running them. In fact, people reported more than 250,000 cases of online scams last year in the U.S. alone, with more than $300 million in damages.

Of course, apps don’t want fake users and scammers on their platforms either. There’s an ever-evolving battle between malicious actors and the developers behind these apps. Below are some of the unique approaches developers use to detect fake users and get them off their platforms quickly and effectively.

Multi-Step Authentication

People who create spam bot accounts generally try to make hundreds of accounts at once, because individual fake accounts are easy for app users to spot and report. That means scammers want a simple account creation process they can automate. Similarly, they want easy verification steps so they can speed things up or brute-force their way into existing accounts. One way apps are stopping rapid account creation is with captchas and multiple account verification steps.
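As a rough illustration of the throttling side of this, the sketch below (in Python, with made-up thresholds) rejects repeated sign-up attempts coming from the same IP address within a short window. Real platforms layer this with captchas, device fingerprinting, and other signals rather than relying on IP addresses alone.

import time
from collections import defaultdict, deque

# Hypothetical sliding-window limiter: block sign-ups from the same IP
# once it exceeds MAX_SIGNUPS within WINDOW_SECONDS.
MAX_SIGNUPS = 3
WINDOW_SECONDS = 3600

_signup_log = defaultdict(deque)  # ip -> timestamps of recent sign-up attempts

def allow_signup(ip: str) -> bool:
    now = time.time()
    attempts = _signup_log[ip]
    # Drop attempts that fell outside the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    if len(attempts) >= MAX_SIGNUPS:
        return False  # too many accounts from this IP; require a captcha or block
    attempts.append(now)
    return True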

By asking for a password and then sending a security code to a device the account owner controls, apps let users verify their identity quickly and securely before gaining access. Wells Fargo was among the first major companies to adopt two-factor authentication. Now, most software companies and social media apps use it to make sure that your account is truly yours.
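A minimal sketch of how the second step might work on the server side is shown below; the names and time limits are illustrative rather than any specific company's implementation. After the password check passes, the server issues a short-lived one-time code, delivers it out of band (by SMS or email, not shown), and verifies what the user types back.

import secrets
import time
import hashlib
import hmac

CODE_TTL_SECONDS = 300
_pending_codes = {}  # user_id -> (sha256 of the code, expiry timestamp)

def issue_code(user_id: str) -> str:
    # Generate a random six-digit code and store only its hash.
    code = f"{secrets.randbelow(1_000_000):06d}"
    digest = hashlib.sha256(code.encode()).hexdigest()
    _pending_codes[user_id] = (digest, time.time() + CODE_TTL_SECONDS)
    return code  # handed to an SMS/email service in a real system

def verify_code(user_id: str, submitted: str) -> bool:
    digest, expires = _pending_codes.get(user_id, (None, 0))
    if digest is None or time.time() > expires:
        return False
    ok = hmac.compare_digest(digest, hashlib.sha256(submitted.encode()).hexdigest())
    if ok:
        del _pending_codes[user_id]  # codes are single-use
    return ok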

To slow down automated threats, platforms like Twitter and massively multiplayer online games like League of Legends ask users to link new accounts to pre-existing ones. When you create an account, the service uses those existing accounts to verify your online presence and to observe human actions like clicking and typing throughout the sign-up steps.

Manual Review

It’s clear that tech-based solutions are useful for stopping the majority of bad bots. But sometimes stopping human-like bots takes a simpler solution: manual review of accounts. For example, Twitter and Facebook use automated systems as their first line of defense, but flag certain accounts for actual humans to examine. Using metrics collected by the account creation and posting systems, reviewers can look through an account’s history to determine whether it’s pushing a particular agenda tied to other bot activity on the platform.

For example, a set of accounts may be created around the same date and post regularly about innocuous content, then suddenly shift to posts that push a particular agenda. These accounts won’t be flagged by automated systems early on because they don’t match the usual flagging metrics, but their later posts might be reported by other platform users. Manual reviewers can then determine whether it’s a group of real people or a set of bot accounts.
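A hedged sketch of how a platform might triage such accounts for human review follows. The fields and thresholds are hypothetical: it simply groups accounts created within a few days of each other and surfaces clusters that have drawn user reports, leaving the final judgment to a reviewer.

from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    account_id: str
    created: date
    report_count: int = 0  # user reports received by this account

def flag_for_review(accounts: list[Account],
                    cluster_days: int = 3,
                    min_cluster_size: int = 5,
                    min_reports: int = 2) -> list[list[Account]]:
    """Return clusters of suspicious accounts for manual review, not a verdict."""
    accounts = sorted(accounts, key=lambda a: a.created)
    clusters, current = [], []
    for acct in accounts:
        # Start a new cluster when the creation-date gap grows too large.
        if current and (acct.created - current[-1].created).days > cluster_days:
            clusters.append(current)
            current = []
        current.append(acct)
    if current:
        clusters.append(current)
    # Keep only clusters that are both large and attracting user reports.
    return [c for c in clusters
            if len(c) >= min_cluster_size
            and sum(a.report_count for a in c) >= min_reports]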

Real-Time Verification

Some apps try to create real-life situations where people can meet up and interact in person. For example, Pokémon Go requires users to band together in a physical location so they can take down powerful, legendary Pokémon. Other dating and friend-making apps rely entirely on users interacting in person to establish relationships. These types of apps have a unique problem: they must ensure their users are entirely real, so others can trust the app to find them safe social situations.

Many of these apps use a significant amount of manual review to find fake users. But others use extensive verification and new, unique technology to do the same. Dating apps like Hily (Hey, I Like You) capture new user information by linking to pre-existing Facebook accounts and comparing uploaded photos to Facebook photos to ensure that the same person is using both accounts.

Other social apps are now introducing real-time photo requirements, which involve snapping a current picture to compare against previously uploaded images. The goal is to use built-in technology to make the account creation process harder to automate. By requiring multiple verification steps, apps can curb scam accounts and increase user safety.
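The comparison step could look roughly like the sketch below. The face-embedding function is a stand-in for whatever recognition model an app actually uses (it is not a real library call), and the similarity threshold is purely illustrative; the point is only that both photos are reduced to vectors and compared.

import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative cutoff, tuned per model in practice

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(live_photo: bytes, profile_photo: bytes, get_face_embedding) -> bool:
    # get_face_embedding is a hypothetical model that maps a photo to a
    # fixed-length vector; any face-recognition backend could fill this role.
    live = get_face_embedding(live_photo)
    profile = get_face_embedding(profile_photo)
    return cosine_similarity(live, profile) >= SIMILARITY_THRESHOLD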

A Problem that Isn’t Going Away

Online scams and malicious bots will likely continue to increase in number as we devote more time and trust to Internet-based technology. But app creators and developers realize this and are taking necessary steps to develop better security systems.

Entrepreneurs are testing many approaches, from detecting fake accounts through machine learning and facial recognition to manually reviewing user-flagged accounts. And, as nefarious tactics evolve, we’re likely to see even better methods to stop these spammers from succeeding.

Image Credit: Polina Zimmerman; Pexels


John Boitnott

CEO, Boitnott Consulting LLC

A journalist and digital consultant, John Boitnott has worked at TV, print, radio, and Internet companies for 25 years. He’s an advisor at StartupGrind and has written for Business Insider, Fortune, NBC, Fast Company, Inc., Entrepreneur, and VentureBeat. You can see his latest work on his blog, jboitnott.com.
