Dark Patterns 2.0
How Websites in 2025 Disguise Anti-Fraud as UX
Dark patterns are 🪄 design tricks that mislead users and push them into taking actions that benefit the website’s creators.
Common Dark Patterns
You’ve probably noticed these tactics more than once:
- Subscriptions renew automatically;
- Extra charges appear unexpectedly in the payment total;
- It’s impossible to refuse software updates.
For example, you can’t cancel Windows updates — only postpone them.
All of this falls under the umbrella of ⬛ dark patterns. But there are many more subtle tricks:
- Disguised advertising: You click a side panel thinking it will collapse an image — instead, it opens a new page.
- Exit traps: Some mobile apps don’t respond to the “back” or “exit” buttons; you can only leave by clicking a notification from another app.
- Countdown timers: They create a sense of urgency (like an expiring discount), but if you refresh the page or wait, the deal is still available (see the sketch below).
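To illustrate how fake such a countdown can be, here is a minimal browser-side sketch: the deadline is simply recomputed on every page load, so the offer never actually expires. The element id is a made-up placeholder, not taken from any real site.

```ts
// Minimal sketch of a "fake" countdown: the deadline is recomputed on every
// page load, so refreshing the page always restarts the offer timer.
// The element id "offer-timer" is hypothetical.
const OFFER_WINDOW_MS = 15 * 60 * 1000;          // always "15 minutes left"
const deadline = Date.now() + OFFER_WINDOW_MS;   // never stored anywhere persistent

function renderCountdown(): void {
  const remainingMs = Math.max(0, deadline - Date.now());
  const minutes = Math.floor(remainingMs / 60_000);
  const seconds = Math.floor((remainingMs % 60_000) / 1000);
  const el = document.getElementById("offer-timer");
  if (el) {
    el.textContent = `${minutes}:${seconds.toString().padStart(2, "0")}`;
  }
}

setInterval(renderCountdown, 1000);
```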
Dark Patterns as Anti-Fraud Mechanisms
These tactics are increasingly being used as part of 👨‍⚖️ anti-fraud systems. Developers of many platforms are well aware of anti-detect browsers and the fact that users can spoof their digital fingerprints. That’s why they resort to dark patterns 😐 to make life harder for fraudsters and fake account farmers. Let’s examine the most common UI/UX tricks used as hidden anti-fraud tools.
“Please confirm it’s really you” — constant reauthentication
Many websites now require frequent user verification, sometimes every few days or weeks. Most often, it comes in the form of:
- SMS codes;
- email confirmations;
- captchas to prove you’re not a bot.
On the surface, it looks like “We care about your security” or “We don’t want anyone else accessing your account.”
For example, PropellerAds regularly asks users to enter a code sent via email.
In reality, these persistent checks are used to:
- Verify access to 📩 email and 📱 phone numbers (especially to detect if they’re temporary or disposable);
- Scan device fingerprints;
- Analyze behavioral patterns (behavioral biometrics).
These checks are especially inconvenient for multi-account users, which deters fraudsters by raising the time and operational cost of every account they run.
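As a rough illustration of what such a check might look like behind the scenes, here is a small server-side sketch that decides whether to re-verify a user based on a disposable-email domain list, fingerprint drift, and time since the last confirmation. The domain list, field names, and thresholds are assumptions, not any specific platform’s rules.

```ts
// Rough sketch of server-side logic that decides whether to trigger a
// "please confirm it's really you" step. The disposable-domain list and the
// session fields are illustrative assumptions, not any platform's actual rules.
const DISPOSABLE_DOMAINS = new Set(["mailinator.com", "10minutemail.com", "tempmail.dev"]);

interface SessionInfo {
  email: string;
  daysSinceLastVerification: number;
  fingerprintChangedSinceLastLogin: boolean;
}

function needsReauthentication(s: SessionInfo): boolean {
  const domain = s.email.split("@")[1]?.toLowerCase() ?? "";
  const disposableEmail = DISPOSABLE_DOMAINS.has(domain);

  // Re-verify if the mailbox looks temporary, the device fingerprint drifted,
  // or the last confirmation was simply too long ago.
  return disposableEmail || s.fingerprintChangedSinceLastLogin || s.daysSinceLastVerification > 14;
}

console.log(needsReauthentication({
  email: "user@tempmail.dev",
  daysSinceLastVerification: 2,
  fingerprintChangedSinceLastLogin: false,
})); // true
```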
“Customize your interface” — onboarding as anti-fraud and surveillance tool
Some websites and apps offer 🛠 interface customization right after signup:
- selecting interests;
- marking favorite categories;
- setting preferences for the newsfeed;
- choosing newsletter topics;
- personalizing the layout and features.
For example, the KIWI flight search site immediately asks the user to choose a language and currency.
At first glance, it seems like a 👩‍❤️‍👨 user-friendly feature. But in reality, the platform collects extra behavioral data to enrich fingerprinting. With enough aggregate data, the system learns:
- what real users tend to pick;
- what fraudsters usually select;
- and what bots typically choose.
When a user customizes the interface, the system evaluates:
- how “human” their behavior appears;
- how quickly they make decisions;
- whether clicks are random or deliberate;
- whether the selected interest combinations match typical patterns of real users.
If someone behaves 🚀 too quickly, too mechanically, or picks overly generic combinations, this may trigger suspicion of a bot or multi-account activity. Platforms may also compare this data to previously banned accounts, analyzing:
- preference structures,
- onboarding completion speed,
- selected interests.
If the new user resembles past fraudulent profiles, the system may lower the account’s trust score, trigger additional verification steps, or even limit functionality.
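Here is a simplified sketch of how onboarding behavior could be folded into a risk score. The weights, thresholds, and “suspicious combination” list are invented purely to show the idea.

```ts
// Sketch of how onboarding behaviour might be turned into a simple risk score.
// The thresholds and weights are invented for illustration only.
interface OnboardingSignals {
  completionSeconds: number;        // time to finish the whole onboarding flow
  avgSecondsBetweenClicks: number;  // pacing of individual choices
  selectedInterests: string[];      // what the user actually picked
}

// Combinations that previously banned accounts tended to choose (hypothetical).
const SUSPICIOUS_COMBOS = [["crypto", "gift cards", "sweepstakes"]];

function onboardingRiskScore(s: OnboardingSignals): number {
  let score = 0;
  if (s.completionSeconds < 5) score += 40;          // finished implausibly fast
  if (s.avgSecondsBetweenClicks < 0.3) score += 30;  // machine-like pacing
  const picked = new Set(s.selectedInterests);
  if (SUSPICIOUS_COMBOS.some(combo => combo.every(i => picked.has(i)))) {
    score += 30;                                     // matches known bad profiles
  }
  return score; // e.g. >= 60 could lower trust or trigger extra verification
}

console.log(onboardingRiskScore({
  completionSeconds: 3,
  avgSecondsBetweenClicks: 0.2,
  selectedInterests: ["crypto", "gift cards", "sweepstakes"],
})); // 100
```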
UX Elements for Identity Verification
In recent years, many services have started requiring users 📱 to link a phone number for authentication. This isn’t just for quick account recovery — the platform uses the number to check for reuse across multiple accounts, analyze geolocation, and detect a history of bans.
Even if a user tries to spoof this with virtual or disposable numbers, the system may issue HLR (Home Location Register) lookups or analyze SMS metadata for anomalies.
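A rough sketch of how such a phone check might be wired up is shown below. The lookupHlr function is a stand-in stub for a carrier-lookup service, and the reuse threshold is an assumption.

```ts
// Illustrative sketch only: combining an HLR-style lookup with reuse checks
// before trusting a phone number. lookupHlr is a stub, not a real API.
interface HlrResult {
  numberType: "mobile" | "voip" | "virtual" | "landline";
  reachable: boolean;
}

// Stub standing in for a real carrier-lookup call.
async function lookupHlr(phone: string): Promise<HlrResult> {
  return { numberType: phone.startsWith("+1800") ? "voip" : "mobile", reachable: true };
}

// Hypothetical store mapping phone numbers to accounts already using them.
const accountsByPhone = new Map<string, string[]>();

async function phoneLooksRisky(phone: string): Promise<boolean> {
  const hlr = await lookupHlr(phone);
  const reuseCount = accountsByPhone.get(phone)?.length ?? 0;

  // Virtual/VoIP numbers, unreachable SIMs, or heavy reuse all raise suspicion.
  return hlr.numberType === "voip" || hlr.numberType === "virtual"
      || !hlr.reachable
      || reuseCount >= 3;
}

phoneLooksRisky("+18005550123").then(risky => console.log(risky)); // true
```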
Another tactic is to 🌏 ask for a city or even full physical address (e.g., for shipping purposes). This allows the platform to cross-check the address against geolocation data, IP addresses and device fingerprints commonly seen in that geographic area.
Yet another trick is to 🙋‍♂️ offer a reward in exchange for personal details — such as a discount for students or seniors. To claim it, the user must upload a valid ID or official document.
These documents and the information they contain are then:
- Parsed by the platform;
- Compared against data from other users;
- Matched to the device’s fingerprint to spot patterns and link accounts (see the sketch below).
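Below is a hypothetical sketch of that linking step: parsed document numbers are normalized and compared, and the device fingerprint is checked against accounts already on record. The field names and in-memory “database” are invented for illustration.

```ts
// Hypothetical sketch of how parsed ID data and device fingerprints could be
// used to link accounts together. Field names and the in-memory store are invented.
interface KnownAccount {
  accountId: string;
  documentNumber: string;   // normalized number parsed from the uploaded ID
  deviceFingerprint: string;
}

const knownAccounts: KnownAccount[] = [
  { accountId: "acc-1", documentNumber: "AB1234567", deviceFingerprint: "fp-abc" },
];

const normalize = (doc: string) => doc.replace(/\s+/g, "").toUpperCase();

function findLinkedAccounts(documentNumber: string, fingerprint: string): string[] {
  const doc = normalize(documentNumber);
  return knownAccounts
    .filter(a => normalize(a.documentNumber) === doc || a.deviceFingerprint === fingerprint)
    .map(a => a.accountId);
}

console.log(findLinkedAccounts("ab 1234567", "fp-new")); // ["acc-1"]
```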
Intentional Frustration as a Test
Many bot operators, click farms, and fake account farmers act according to rigid, automated playbooks — often using tools like 🔥 Dolphin Anty to script entire user sessions.
Anti-fraud systems analyze behavioral biometrics, including typing speed, click timing, and navigation patterns. To expose automation, websites may intentionally break parts of their interface. Since bots don’t get “frustrated,” they will continue performing actions mechanically.
But real users react emotionally:
- They click repeatedly;
- Move the mouse around anxiously;
- Reload the page;
- Start mashing the keyboard.
Others may just wait for the site to “recover,” which also reveals their human behavior.
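A browser-side sketch of one such frustration signal — repeated “rage clicks” within a short window — might look like the following. The window size and click threshold are arbitrary illustrative values.

```ts
// Browser-side sketch of a "frustration" signal: several clicks in a short
// window reads as a human reaction, while a scripted session usually keeps
// executing its steps without retrying. Thresholds are illustrative.
const clickTimes: number[] = [];
const WINDOW_MS = 2000;
const RAGE_THRESHOLD = 4;

document.addEventListener("click", () => {
  const now = Date.now();
  clickTimes.push(now);
  // Keep only clicks inside the sliding window.
  while (clickTimes.length && now - clickTimes[0] > WINDOW_MS) {
    clickTimes.shift();
  }
  if (clickTimes.length >= RAGE_THRESHOLD) {
    // A real system would record this as a positive "human" signal.
    console.log("rage-click burst detected: likely a frustrated human");
  }
});
```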
Examples of Artificial Frustration Tactics
Slowed-down interface. If anti-fraud logic suspects you, it may delay button responses, scrolling, or page loading. Bots don’t notice — but humans get annoyed and act erratically. This technique is often used in online banking dashboards, e-commerce checkout flows, and payment portals.
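A minimal sketch of how such a slowdown could be applied only to low-trust sessions is shown below; the trust-score field and delay values are assumptions, not any platform’s real settings.

```ts
// Minimal sketch of a selective slowdown: suspicious sessions get an artificial
// delay before every response, while trusted ones are served normally.
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

interface Session {
  trustScore: number; // 0..100, produced elsewhere by the anti-fraud system
}

async function respond(session: Session, handler: () => Promise<string>): Promise<string> {
  if (session.trustScore < 40) {
    // Bots rarely notice an extra second or two; humans do, and react to it.
    await sleep(1000 + Math.random() * 1500);
  }
  return handler();
}

// Usage: a low-trust session gets its page noticeably later.
respond({ trustScore: 25 }, async () => "page content").then(console.log);
```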
False errors. Messages like “Something went wrong, try again later”, “Unknown error”, or “Failed to save data”. These errors are often not real — they’re selectively triggered for suspicious users. If you try again and it works the second time, you’ve probably encountered one. The platform is watching how you react.
Hidden timeouts. The system artificially delays API responses or “freezes” forms to simulate server lag. The goal is to prompt the user to exit the session or trigger alternative behavior that can be analyzed for bot detection. This is especially useful for quietly filtering out suspicious users without revealing that anti-fraud has been triggered.
Multi-step forms. Users are asked to fill out long forms with multiple required fields, drop-downs with hundreds of options (like a full list of countries), and repeated entries (e.g., email twice). Real humans will usually complete the form — even if annoyed. Bots, on the other hand, may fail to complete the form or do so too quickly, triggering suspicion. These forms are common on crypto exchanges, banks, CPA networks, payment services, etc.
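As a sketch, a simple completion-time check for such a form might look like this; the timing thresholds are invented for illustration.

```ts
// Sketch of a completion-time check for a long multi-step form: both abandoning
// the form and finishing it implausibly fast raise suspicion. Thresholds are illustrative.
interface FormAttempt {
  startedAt: number;           // ms timestamps
  submittedAt: number | null;  // null if the form was never submitted
  requiredFieldsFilled: number;
  requiredFieldsTotal: number;
}

function formLooksAutomated(a: FormAttempt): boolean {
  if (a.submittedAt === null) {
    // Abandonment alone isn't proof, but combined with other signals it counts.
    return a.requiredFieldsFilled < a.requiredFieldsTotal;
  }
  const seconds = (a.submittedAt - a.startedAt) / 1000;
  // A human needs time to read labels, scroll long dropdowns and retype their email.
  return seconds < 20;
}

console.log(formLooksAutomated({
  startedAt: 0,
  submittedAt: 8_000,
  requiredFieldsFilled: 12,
  requiredFieldsTotal: 12,
})); // true — submitted in 8 seconds
```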
“Random” logouts or session drops. The user is unexpectedly logged out after performing certain actions. It looks like a session timeout or bug — but it’s intentional. Bots often don’t log back in, while human users will try to regain access. In some cases, fraudsters will just create a new account, revealing themselves again.
Pop-up ads. Their purpose isn’t just advertising — it’s a behavioral trap. If the user closes or minimizes the ad, that mimics natural human interaction. Bots typically ignore pop-ups altogether, exposing their automation.
Example: a pop-up reading “Join Fast Company today” isn’t just an ad — it’s a test.
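Here is a browser-side sketch of that idea: the platform only cares whether the visitor interacts with the pop-up at all. The element ids and the timeout are hypothetical.

```ts
// Browser-side sketch of a pop-up used as a behavioural trap: what matters is
// whether the visitor interacts with it at all. Element ids are hypothetical.
const popup = document.getElementById("promo-popup");
const closeButton = document.getElementById("promo-popup-close");

let interacted = false;

closeButton?.addEventListener("click", () => {
  interacted = true;   // closing the ad looks like a normal human reaction
  popup?.remove();
});

// If the pop-up is still untouched after a while, that is a weak bot signal.
setTimeout(() => {
  if (!interacted) {
    console.log("pop-up ignored: add a small amount to the automation score");
  }
}, 15_000);
```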
Real-World Examples of UX-Based Anti-Fraud
Facebook — Artificial Interface Lag. When Facebook suspects bot activity or fake engagement, it deliberately slows down the interface. Button responses are delayed, pages load more slowly, and interactions feel sluggish.
If the user behaves in a way that’s typical for real people (e.g. moving the mouse, clicking with hesitation, waiting), the account earns trust. If not — the system may restrict access or escalate to the next verification level.
Real-Time KYC Identity Verification. Facebook sometimes uses real-time KYC (Know Your Customer) when multi-accounting is suspected. But this method is even more common on platforms like Dojah, a fintech service in Africa. At registration, Dojah requires users to:
- Upload a photo or ID document;
- Complete a liveness check (e.g., take a selfie in real-time).
This slows down real users just a bit — they search for documents, take photos, and pass the check manually. Bots and fraudsters are often filtered out, or they upload pre-made images — a behavior that can flag the account as suspicious.
Incognia — Smart Risk-Based Verification. Incognia is a platform that enables adaptive fraud detection on websites and apps. It implements a tiered verification system:
- Low risk — minimal checks (almost invisible);
- High risk — additional verifications (SMS, email, CAPTCHA, etc.).
This ensures that genuine users experience a smooth flow, while suspicious actors face delays, extra steps, and behavioral triggers.
Anti-fraud verification levels configured in Incognia.
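In the same spirit, a tiered verification policy can be expressed as a simple mapping from risk score to required steps. The sketch below is illustrative and does not reflect Incognia’s actual configuration.

```ts
// Sketch of tiered, risk-based verification: low-risk sessions pass almost
// silently, higher-risk ones get extra steps. Tiers and thresholds are invented.
type VerificationStep = "none" | "captcha" | "email_code" | "sms_code" | "document_check";

function stepsForRisk(riskScore: number): VerificationStep[] {
  if (riskScore < 30) return ["none"];                          // invisible to the user
  if (riskScore < 60) return ["captcha"];
  if (riskScore < 85) return ["captcha", "email_code", "sms_code"];
  return ["captcha", "sms_code", "document_check"];             // highest friction
}

console.log(stepsForRisk(72)); // ["captcha", "email_code", "sms_code"]
```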
How to fight back against UX-Based Anti-Fraud and Dark Patterns
Ultimately, your best weapon is a high-quality anti-detect browser and behavior that mimics a typical user as closely as possible. Here’s how to stay under the radar:
- Use real email logins within the same browser environment you use for the target site.
- When buying virtual phone numbers or eSIMs, ensure their area codes match the region or city of your proxy.
- Minimize automation — fill in forms manually whenever possible, and reduce the use of scripts or pre-filled data.
Most importantly — use a reputable, regularly updated anti-detect browser (🚀 Dolphin Anty for example). Its developers monitor anti-fraud updates across platforms and adjust their fingerprinting logic accordingly.