Privacy vs. security: is your bot mitigation solution effective in the wake of privacy trends on the web?

Bad bots disguise themselves as humans to bypass detection

Bot mitigation providers focus on stopping bots with the highest degree of precision. After all, it only takes a small number of malicious bots to break through your defenses and wreak havoc on your online activities. One of the challenges in stopping bad bots is minimizing false positives (where a human is mistakenly categorized as a bot).

The more aggressively rules are set in a bot mitigation solution, the more prone the solution becomes to false positives, as it must decide whether or not to accept requests with indeterminate risk scores. As a result, real users are inadvertently blocked from websites and/or given CAPTCHAs to validate that they are indeed human. This inevitably creates a bad user experience and reduces online conversions.
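The threshold tradeoff described above can be sketched with a toy example. The scores, thresholds, and "challenge" band below are all invented for illustration; real engines are far more elaborate, but the tension is the same: tighten the threshold and you catch more bots while challenging more humans.

```python
# Hypothetical illustration of aggressive vs. lenient risk-score tuning.
# All scores and thresholds are invented for this sketch.

def classify(risk_score: float, block_threshold: float) -> str:
    """Return the action a rules-based engine might take for a request."""
    if risk_score >= block_threshold:
        return "block"          # confident enough to reject outright
    if risk_score >= block_threshold - 0.2:
        return "challenge"      # indeterminate zone: serve a CAPTCHA
    return "allow"

# Simulated traffic: (true label, risk score the engine assigned)
requests = [
    ("human", 0.15), ("human", 0.55), ("bot", 0.92), ("bot", 0.60),
]

for threshold in (0.9, 0.5):    # lenient vs. aggressive tuning
    bothered_humans = sum(
        1 for label, score in requests
        if label == "human" and classify(score, threshold) != "allow"
    )
    print(f"threshold={threshold}: humans challenged or blocked = {bothered_humans}")
```

At the lenient threshold no humans are challenged (but a bot slips through); at the aggressive one, the human who drew an ambiguous score gets blocked, which is exactly the false-positive cost the text describes.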

Much of the ongoing innovation in modern bot mitigation solutions has been a reaction to the increasing sophistication of the adversary. The fact that malicious bots increasingly look and act like humans in an attempt to evade detection makes it harder to rely on rules, behaviors, and risk scores to make decisions, which makes false positives more pronounced.

Humans now disguise themselves for privacy

A more recent trend is exacerbating false positives, and without proper innovation it renders bot mitigation solutions that depend on legacy rules and risk scores inadequate. It stems from the acceleration of trends toward greater human privacy on the Internet. Ironically, the move toward more privacy on the web can actually compromise security by making it even more difficult to distinguish between humans and bots.

To understand why, it is essential to know how the majority of bot detection techniques work. They rely heavily on device fingerprinting to analyze device attributes and behavior. Device fingerprinting is performed on the client side and collects information such as the IP address, user agent header, advanced device attributes (e.g. hardware imperfections), and cookie identifiers. Over the years, the information collected through device fingerprinting has become a major input to the detection engines used to determine whether a request is bot or human.
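The collection-and-correlation idea above can be sketched in a few lines. The attribute names below are illustrative, not any vendor's actual schema; the point is that the collected signals are reduced to a stable identifier that lets an engine recognize the same device across requests.

```python
# Minimal sketch (attribute names invented) of deriving a device
# fingerprint from client-collected signals.
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash the collected attributes into a single stable identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

client = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "canvas_hash": "af31c2",     # stands in for hardware/rendering quirks
    "cookie_id": "9f8e7d6c",
}
fp = device_fingerprint(client)
# The same device presenting the same attributes yields the same value,
# which is what lets a detection engine correlate requests over time.
```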

Device fingerprints, in theory, are meant to be like real fingerprints: each user can be uniquely identified by theirs. Fingerprinting technology has evolved toward this goal – the so-called high-definition device fingerprint – by collecting more and more information on the client side. But what happens when a device fingerprint can no longer serve as a reliable unique identifier – or, worse yet, starts to look like the ones presented by bad bots?

The disappearing device fingerprint

Previously, we published articles on how bot operators evade fingerprint-based detection. They harvest fingerprints and use them in combination with anti-detection browsers to trick systems into believing a request is legitimate. This was one of the first drivers that pushed Kasada, years ago, to move away from device fingerprints as a way to distinguish between humans and bots.

In addition, several recent trends in web privacy make the “evidence” obtained through device fingerprinting methods even more suspect.

Trend #1 – Use of Residential Proxy Networks

Of course, residential proxy networks are exploited by bot operators to hide their fraudulent activity behind seemingly harmless IP addresses. But there is also a growing trend of legitimate users moving beyond traditional data center proxies to hide behind residential proxy networks such as BrightData. Residential proxy networks have become increasingly inexpensive – in some cases free – and they provide a seemingly endless combination of IP addresses and user agents to hide your activity.

While some of these users hide behind residential proxies for questionable reasons, such as bypassing content restrictions (e.g. geo-blocking), many use them to genuinely protect their online privacy and guard their personal data against theft. The fingerprinting techniques used to detect those behind proxy networks have become ineffective now that modern residential proxy networks mask your identity.
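The defender's dilemma here can be sketched simply. The IPs and user agents below are illustrative placeholders; the point is that a bot rotating through a residential proxy pool emits requests that look identical to those of privacy-conscious humans using the same pool.

```python
# Hypothetical sketch: a client rotating through residential IPs and
# common user agents. Whether it is a bot or a privacy-minded human,
# the traffic looks the same to IP/UA-based detection.
import random

residential_ips = ["198.51.100.23", "203.0.113.88", "192.0.2.41"]  # example pool
common_uas = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_5) Safari/605.1",
]

def next_identity() -> tuple:
    """Each request borrows a fresh, legitimate-looking IP/UA pair."""
    return random.choice(residential_ips), random.choice(common_uas)

# From the defender's side, every request below could equally well be a
# real home user behind the same proxy network.
for _ in range(3):
    ip, ua = next_identity()
    print(ip, ua)
```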

Bottom line: You cannot rely on IP addresses and user agents to distinguish between humans and malicious bots because they look the same when hidden behind residential proxies.

Trend #2 – Use of Private Browsing Modes and Privacy Browsers

The recent increase in the availability and adoption of private browsing modes and new privacy-focused browsers also undermines device fingerprinting.

Private browsing modes, such as Chrome's Incognito mode and Edge's InPrivate browsing, reduce the amount of information stored about you. These modes take moderate measures to protect your privacy. For example, when you use private browsing, your browser will no longer store your browsing history, accepted cookies, or completed forms once your session is over. It is estimated that over 46% of Americans have used a private browsing mode in their browser of choice.

Plus, privacy browsers like Brave, Tor, Yandex, Opera, and customized Firefox builds take web privacy to the next level. They add extra layers of privacy, such as blocking or randomizing device fingerprints, offering tracking protection (coupled with privacy search engines like DuckDuckGo to avoid recording your search history), and deleting cookies, making ad trackers ineffective.
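The fingerprint-randomization tactic mentioned above is worth a small sketch. The attributes and per-session noise below are invented for illustration: when a privacy browser perturbs fingerprintable values each session, the same user yields unrelated fingerprints, so the fingerprint stops working as an identifier.

```python
# Illustrative sketch (attributes invented): a privacy browser that
# randomizes fingerprintable values per session defeats
# fingerprint-as-identifier schemes.
import hashlib
import random

def randomized_fingerprint(session_seed: int) -> str:
    rng = random.Random(session_seed)    # fresh randomness each session
    attrs = {
        "canvas_noise": rng.random(),    # noise injected into canvas reads
        "reported_cores": rng.choice([2, 4, 8]),
    }
    canonical = repr(sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same user across two sessions produces two unrelated fingerprints,
# so a tracker (or a bot detector) cannot link the visits.
print(randomized_fingerprint(1))
print(randomized_fingerprint(2))
```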

These privacy browsers account for about 10% of total market share today, and they are growing in popularity. That is enough market share to present major challenges for bot detection solutions based on device fingerprints.

Bottom line: You can’t trust advanced device identifiers or first-party cookies, given the growing percentage of users relying on privacy modes and browsers.

Trend #3 – Elimination of Third-Party Cookie Tracking

There will always be a substantial percentage of Internet users who will not use privacy modes or browsers; Google and Microsoft simply have too much market share. But even for these users, device fingerprinting will become increasingly difficult. One example is Google’s widely publicized effort to eliminate third-party cookie tracking. And while the deadline was recently postponed to 2023, this will inevitably make it harder to identify suspicious behavior.

Third-party cookies collected during the device fingerprinting process are often used as a tell-tale sign of bot-driven automation. For example, if a particular session with an identified set of third-party cookies attempted 100 logins, that is an indicator that you will want to force it to revalidate and establish a new session.

Bottom line: Soon you will no longer be able to use third-party cookie IDs in the browser to help identify bot-driven automation.

Going beyond device fingerprints

Kasada moved away from device fingerprints years ago, recognizing the growing obstacles to obtaining accurate fingerprints for both humans and bots. The team developed a new method that does not look for a unique identifier that can be associated with a human, but rather for the immutable, indisputable evidence of automation that presents itself every time a bot interacts with websites, mobile apps, and APIs.

We call this our client interrogation process. Attributes are collected invisibly from the client in search of indicators of automation, including the use of headless browsers and automation frameworks like Puppeteer and Playwright. So instead of asking whether a request can be uniquely identified as a human, Kasada asks whether the request is occurring within the context of a legitimate browser.
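The shift in question can be sketched abstractly. The signal names below are illustrative examples of automation evidence (they are not Kasada's actual checks): rather than trying to identify the user, the check looks for positive proof that an automation framework is present.

```python
# Hypothetical sketch of the idea behind client interrogation: look for
# positive evidence of automation rather than a unique human identifier.
# Signal names are illustrative, not any vendor's actual checks.

AUTOMATION_SIGNALS = {
    "navigator.webdriver",     # set by WebDriver-based automation tools
    "headless_user_agent",     # e.g. "HeadlessChrome" in the UA string
    "missing_browser_api",     # APIs a real browser would normally expose
}

def is_legitimate_browser(client_report: set) -> bool:
    """Allow only when no automation evidence is present in the report."""
    return not (client_report & AUTOMATION_SIGNALS)

print(is_legitimate_browser({"navigator.webdriver"}))  # automation detected
print(is_legitimate_browser(set()))                    # clean client
```

Note the asymmetry with fingerprinting: a privacy-conscious human who randomizes their fingerprint still reports no automation evidence, so this style of check does not punish privacy.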

With this approach, there is no need to collect device fingerprints, and no ambiguity from assigning rules and risk scores to decide between bot and human. Decisions are made on the first request – never letting malicious requests enter your infrastructure, including those from new bots never seen before – without ever needing a CAPTCHA for validation.

If you’re already using a bot mitigation solution, ask your provider whether it relies on outdated device fingerprinting methods, and what steps are being taken to handle the inevitable increase in false positives and undetected bots resulting from the move to a more private web.

Want to see Kasada in action? Request a demo and see the industry’s most accurate bot detection with the industry’s lowest false positive rate. You can also run an instant test to see if your website can detect modern bots, including those that leverage the open-source Puppeteer Stealth and Playwright automation frameworks.

*** This is a Security Bloggers Network syndicated blog from Kasada, written by Neil Cohen. Read the original post at: https://www.kasada.io/privacy-vs-security/
