May 1, 2026

Deepfakes, Fake Reviews, Fake Users — The Trust Crisis Is Getting Worse in 2026

In 2025, deepfakes and bot networks made the internet untrustworthy — the solution isn't better detection, it's verified proof of presence tied to real identity and location.

This article is part of daGama's weekly blog series exploring the intersection of physical-world experience, on-chain infrastructure, and the future of how people discover and interact with the places around them.

Something fundamental changed in 2025, and most people haven't fully processed it yet.

For the first time in the history of the internet, you can no longer trust what you see. Not a photo. Not a video. Not a voice on a call. Not a review from someone who appears to have a real profile, a real history, and a real face. The tools to fabricate all of it are cheap, accessible, and improving faster than any detection system can keep up with.

This isn't a prediction about where things are heading. It's a description of where things already are. And the consequences for how we discover, evaluate, and trust the physical world around us are only beginning to be felt.

The Numbers Are Worse Than You Think

Deepfake activity has increased 500% since 2024. The number of deepfake files online grew from approximately 500,000 in 2023 to a projected 8 million in 2025, a 1,500% increase in two years. Deepfake-enabled fraud exceeded $25 billion in losses in 2025 alone, and fraud attempts using deepfakes increased by 2,137% over the last three years.

Those are large numbers. But here's the one that should stop you: in a 2025 iProov study, only 0.1% of participants correctly identified all of the fake and real media shown to them. For high-quality deepfakes specifically, human detection rates drop to just 24.5%. Humans claim about 73% accuracy on audio authenticity, but in practice they are easily fooled, particularly by clips under 20 seconds.

In other words: we are not equipped, as humans, to detect the fakes that are currently being produced. This is not a matter of paying closer attention. The gap between what AI can generate and what the human eye and ear can detect has already closed.

The fake review problem we wrote about previously — the manufactured ratings, the coordinated campaigns, the anonymous accounts with no verifiable connection to any real experience — was bad enough when it was driven by humans gaming a system. What happens when every piece of that system can be fabricated end-to-end by AI?

The Web3 User Problem Is the Same Problem

The trust crisis isn't limited to review platforms or social media. It runs directly through the heart of the Web3 ecosystem — and the numbers there are equally stark.

When Web3Quest analyzed verification data across major crypto projects in 2025, the finding contradicted almost every bullish narrative in the space: a project that reports one million users has actually acquired roughly 350,000 genuine humans. The other 650,000 are bots, duplicate wallets, and automated engagement systems.

In approximately 80% of airdrops analyzed in 2025, a majority of tokens went to non-organic participants. Projects weren't building communities. They were subsidizing bot infrastructure and funding arbitrage networks, and paying for the privilege. Projects without real-time verification waste 65–70% of their acquisition budgets on bot activity and Sybil farms.

The founders running these projects aren't lying. Most of them genuinely believe their metrics because nobody is measuring real users. Everyone is measuring reported users. As one analysis put it plainly: this isn't fraud. It's a systematic delusion at scale.

The result is an internet where over 64% of all web traffic is now non-human: bots, scrapers, and automated agents that create fake accounts, post fake reviews, manipulate engagement metrics, and impersonate real people. The internet was built without a way to prove that a human being is on the other end of a connection. That architectural gap is now being exploited at a scale that was unimaginable five years ago.

Three Converging Crises

It helps to understand the trust crisis not as one problem but as three related crises converging at the same time.

The identity crisis. Traditional verification methods have failed to keep pace with the tools now available to bad actors. CAPTCHAs are solved by AI with 99.8% accuracy. Phone verification is bypassed by SIM farms selling numbers for cents. Only 13% of companies have anti-deepfake protocols, 22% of people have never heard of deepfakes, and 25% of executives have little or no familiarity with them. The systems we built to verify that a person is real were designed for a different threat environment, and they no longer work.

The content crisis. Deepfake content jumped from 500,000 files in 2023 to 8 million in 2025. Deepfake-related financial fraud in crypto increased 340% across 2025–2026, with cryptocurrency scams representing the largest category. Voice deepfakes are projected to account for 60% of fraud by 2026, and political deepfakes are projected to impact 50% of elections by 2028. The content layer of the internet — the reviews, the videos, the testimonials, the social proof — is being systematically poisoned at a rate that outpaces any cleanup effort.

The metrics crisis. The numbers that projects, platforms, and businesses report about their users no longer reflect reality. Wallet counts are inflated. Engagement metrics are manipulated. Community sizes are gamed. Review ratings are manufactured. The data that institutions use to make investment decisions, the data that consumers use to make purchasing decisions, the data that travelers use to make discovery decisions — all of it is operating in an environment where the baseline assumption of authenticity no longer holds.

These three crises are connected. They share a common root: the internet was built on the assumption that the people interacting with it were real. That assumption no longer applies.

Why This Gets Worse Before It Gets Better

The deepfake detection market is growing — but it's running against a fundamental asymmetry.

Generating convincing synthetic content is getting cheaper and faster. Detecting it is getting harder and slower. A benchmark analysis found that audio detectors lost 43% of their performance on more realistic fakes. Detection tools claiming 99% accuracy often fail on new, zero-shot deepfakes. The detection tools are trained on the previous generation of fakes; the fakes are already in the next generation.

Deepfake phishing will intensify through 2026 as attackers combine real-time media manipulation, automation, and scalable fraud services to bypass traditional controls. Real-time deepfake manipulation — live video and voice simultaneously — is already emerging. Resemble AI reported 980 corporate infiltration cases in Q3 2025 in which live video deepfakes were used during meetings to impersonate executives and authorize fraudulent transactions.

For the physical world specifically — for the places people visit, the businesses they evaluate, the experiences they share — this means the review and discovery layer is entering a period of accelerating unreliability. The fake review was already a problem when it required a human to write it. When a single bad actor can generate thousands of convincing, contextually appropriate reviews for any location, any product, any experience — at negligible cost — the entire premise of anonymous user-generated review content collapses.

What Verification Actually Requires

The response to this crisis is not better detection. Detection is, at best, a temporary countermeasure in an arms race where the attacker has structural advantages.

The actual solution requires changing the architecture — building systems where the question isn't "is this content real?" but "can this person prove they were there?"

Proof of presence. Cryptographically verifiable attestation that a specific person, with a verified identity, was at a specific place at a specific time. Not a claim. Not a review submitted from an anonymous account. An on-chain record that cannot be manufactured, cannot be replicated at scale by a bot farm, and cannot be retroactively altered.
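To make this concrete, here is a minimal sketch of what such an attestation record could look like, using Node's built-in crypto module. The field names and the Ed25519 signing flow are illustrative assumptions, not daGama's actual schema; the point is that the record is signed on the user's device by a verified identity key, so anyone can check it later without trusting the platform that stored it.

```typescript
// Hypothetical shape of a presence attestation; not daGama's real schema.
import { createHash, generateKeyPairSync, sign, verify } from "crypto";

interface PresenceAttestation {
  identityId: string; // hash tied to a verified on-chain identity
  placeId: string;    // identifier of the venue being attested
  geohash: string;    // coarse location cell, never raw coordinates
  timestamp: number;  // unix time when presence was verified
}

// Stable serialization so signer and verifier hash identical bytes.
function encode(att: PresenceAttestation): Buffer {
  return Buffer.from(JSON.stringify(att, Object.keys(att).sort()));
}

// The user's verified identity key signs the record on their device...
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const attestation: PresenceAttestation = {
  identityId: createHash("sha256").update("user-identity").digest("hex"),
  placeId: "cafe-centrale-lisbon", // made-up venue id
  geohash: "eycs0",                // made-up location cell
  timestamp: Math.floor(Date.now() / 1000),
};
const signature = sign(null, encode(attestation), privateKey);

// ...and anyone can verify it later against the public identity key.
console.log(verify(null, encode(attestation), publicKey, signature)); // true
```

The signature alone doesn't prove presence; it binds the claim to exactly one verified identity, which is what makes mass fabrication by bot farms expensive. The location check, sketched after the next paragraph, is what makes the claim itself hard to fake.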

This is technically achievable now in a way it wasn't five years ago. The combination of on-chain identity infrastructure, zero-knowledge proofs that can verify attributes without revealing private data, and mobile-first location verification creates the foundation for a trust layer that doesn't require trusting the content itself — only the proof of presence that generated it.
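In a production system, the location step would be an actual zero-knowledge proof of the statement "this device was within range of venue X," revealing nothing else. As a simplified stand-in for that step, the sketch below shows only the coarsening idea: the device discloses a short geohash cell instead of raw GPS coordinates, so the verifier learns the neighborhood without learning the exact position. The encoder is the standard geohash algorithm; the coordinates are made up.

```typescript
// Geohash coarsening as a stand-in for a real zero-knowledge location proof.
// Standard geohash base32 alphabet (excludes a, i, l, o).
const BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

// Encode latitude/longitude into a geohash cell of the given precision.
// Five characters cover roughly a 4.9 km x 4.9 km cell.
function geohash(lat: number, lon: number, precision = 5): string {
  const latRange = [-90, 90];
  const lonRange = [-180, 180];
  let hash = "";
  let bits = 0;
  let ch = 0;
  let evenBit = true; // geohash interleaves bits, starting with longitude
  while (hash.length < precision) {
    const range = evenBit ? lonRange : latRange;
    const value = evenBit ? lon : lat;
    const mid = (range[0] + range[1]) / 2;
    if (value >= mid) {
      ch = (ch << 1) | 1;
      range[0] = mid;
    } else {
      ch = ch << 1;
      range[1] = mid;
    }
    evenBit = !evenBit;
    if (++bits === 5) {
      hash += BASE32[ch];
      bits = 0;
      ch = 0;
    }
  }
  return hash;
}

// The venue publishes its cell; the device discloses only its own cell.
const venueCell = geohash(38.7223, -9.1393);  // hypothetical venue in Lisbon
const deviceCell = geohash(38.7230, -9.1401); // user's coarsened position
console.log(deviceCell === venueCell); // true when both fall in the same cell
```

The trade-off is explicit: a five-character cell leaks a few-kilometer neighborhood, while a real zero-knowledge proof would leak nothing beyond proximity to the venue. In either design, the signed attestation from the earlier sketch is only issued after this check passes.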

The deepfake crisis and the fake user crisis share the same solution: verified identity tied to verified behavior in the real world. Not a better algorithm for spotting fakes. Not a larger moderation team. A different starting point — one where the burden of proof is on establishing that something is real, not on detecting that it's fake.

The Trust Layer Is the Infrastructure Problem of 2026

The crisis of trust in 2026 is not a content moderation problem or a fraud prevention problem. It is an infrastructure problem.

The internet needs a layer that it was never built to have: a way to prove that a human being, with a verified identity, actually experienced something in the physical world and is reporting on it honestly.

Every system that depends on user-generated information about the real world — review platforms, discovery apps, travel guides, local business directories, community mapping tools — is vulnerable to the trust crisis in its current form. And the vulnerability is growing, not shrinking, as generative AI tools become more capable and more accessible.

The projects building the verification layer now — the infrastructure for proving presence, establishing verified identity, and rewarding genuine contribution — are building something the entire internet eventually needs.

The crisis is already here. The infrastructure to solve it is just beginning to be built.

daGama is building the verified discovery layer for the physical world — where proof of presence, on-chain identity, and real community knowledge replace anonymous reviews. Learn more at dagama.world
