5 fraud and identity experts on 2025-2026 trends

As in previous years, we asked identity and fraud experts to reflect on the closing year and share a few predictions for the next.
You’ll get unique perspectives from fraud fighters, researchers, and an executive. We asked them about unexpected fraud trends, which tactics will become more valuable, leadership’s changing perceptions, and AI, of course.
But we kicked things off with a lighthearted question.
Which emoji best describes the year?
🤨 Ashley Fang, fraud product analyst at Persona
🧪 Charles Yeh, CTO at Persona
🤔 David Maimon, head of fraud insights at SentiLink
🚀 Patrick Hall, product architect at Persona
(😮 but not 😱) Shoshana Maraney, content consultant at Longform Labs
David also expanded on his selection. “The thinking emoji feels like the perfect symbol for 2025: a year where both fraud fighters and fraudsters were relentlessly analyzing, adapting, and trying to stay one step ahead of the other.”
What fraud-related KPIs or metrics mattered more in 2025 than in previous years?
Charles Yeh: Because fraudsters are adaptive, we find that conventional ML metrics are pretty misleading when used for fraud model accuracy. Instead, we have a full taxonomy in-house that we’re constantly updating when we see new forms of fraud. Fraud capture rate across that full taxonomy has been the key metric for us in 2025 as deepfake and GenAI technology improves, making fraud more complex than ever.
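The kind of metric Charles describes could be tallied with a short sketch like this (the categories and counts are invented for illustration; Persona's actual taxonomy is internal):

```python
from collections import defaultdict

def capture_rates(cases):
    """Per-category fraud capture rate: caught fraud / total confirmed fraud.

    `cases` is an iterable of (category, was_caught) pairs for confirmed
    fraud cases. Category names here are illustrative only.
    """
    caught = defaultdict(int)
    total = defaultdict(int)
    for category, was_caught in cases:
        total[category] += 1
        if was_caught:
            caught[category] += 1
    return {c: caught[c] / total[c] for c in total}

# Hypothetical labeled cases across a toy taxonomy
cases = [
    ("deepfake_selfie", True), ("deepfake_selfie", True),
    ("deepfake_selfie", False), ("synthetic_identity", True),
    ("synthetic_identity", False), ("stolen_identity", True),
]
print(capture_rates(cases))
```

The point of reporting the rate per category, rather than one aggregate accuracy number, is that a model can look excellent overall while silently missing an entire new fraud type.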
David Maimon: In 2025, fraud claims emerged as one of the most critical KPIs for fraud leaders — even more so than aggregate fraud losses. Losses tell you what has already happened, but consumer fraud claims give you early visibility into what’s coming next.
A sudden increase in claims tied to a specific channel, geography, or customer segment can signal a coordinated attack or a new scam script taking hold. And claims data captures the human side of fraud pressure that traditional models miss.
For these reasons, tracking the volume, velocity, and nature of fraud claims became a crucial predictor of future loss trajectories, helping institutions spot patterns earlier and allocate resources more effectively.
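Monitoring claim velocity for a given channel or segment can be as simple as flagging weeks that jump well above a trailing baseline. This is a minimal sketch, with an invented z-score threshold and toy weekly counts:

```python
from statistics import mean, stdev

def flag_spikes(weekly_counts, z_threshold=3.0, window=8):
    """Flag weeks where claim volume jumps far above the trailing baseline.

    Returns (week_index, is_spike) pairs for each week after the warm-up
    window. The threshold and window are illustrative, not a standard.
    """
    flags = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a perfectly flat baseline
        z = (weekly_counts[i] - mu) / sigma
        flags.append((i, z >= z_threshold))
    return flags

claims = [12, 10, 11, 13, 12, 11, 10, 12, 45]  # sudden jump in week 8
print(flag_spikes(claims))  # → [(8, True)]
```

In practice this would run per channel, geography, or customer segment, since an attack concentrated in one slice can vanish inside an aggregate total.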
Patrick Hall: False negatives seemingly started to pull even with conversion loss. Historically, a hit to conversion or an increase in false positives was less acceptable, and in gray areas false negatives took a backseat to conversion. I would say there was a clear change in that this year.
Shoshana Maraney: Agentic referral traffic. In other words, good bots entered the scene. It’s a metric no one was looking at in previous years, and that’s a significant change from a world in which you just needed to identify and block bots to protect your site. Now, that approach can cost you good business.
Read more: Identity verification pass rate metrics aren’t telling you the whole truth
What were the most unexpected fraud trends or attack vectors from 2025?
Ashley Fang: It's either the crazy-good, eerie GenAI selfies or kids with all kinds of dolls and masks.
Charles Yeh: We’ve come to expect most of what fraudsters do within the Persona flow, but the ways they exploit the wider internet continue to surprise us. This year brought a surge across use cases we didn’t expect, from ghost students to mass-media influence and more.
David Maimon: One of the most unexpected fraud trends of 2025 was the emergence of large-scale schemes exploiting the identities of former legal immigrants — people who lived, studied, or worked in the US for a short period and then returned to their home countries.
Even after they leave, their Social Security numbers, tax records, and credit histories remain active within US systems, creating a uniquely vulnerable pool of identities that are rarely monitored by their owners.
Fraudsters quickly recognized this gap. Throughout 2025, underground markets — particularly international Telegram channels — began selling “expat identity packages” containing SSNs, personal information, tax forms, and full credit reports. These marketplaces even offer services to update addresses, emails, and credit bureau records, making the identities appear current and legitimate to financial institutions.
What makes this trend so striking is that these individuals are not typical fraud victims, yet their unmonitored, fully authentic identities have become some of the most prized assets in global fraud rings.
Multiple cases surfaced in 2025 showing these identities being used to apply for credit, open accounts, and execute bust-out schemes years after the person left the United States. This new attack vector has quickly become one of the hardest for financial institutions to detect — and one of the most costly.
Shoshana Maraney: For me, the most unexpected trend was the rise of first-party and insider fraud vectors. It was obvious that fraudsters were going to be exploring the opportunities that GenAI presents; they started experimenting with ChatGPT early on, and once the early FraudGPT came out, things never really went back. That was about a year and a half ago, so the evolution of the trend in 2025 wasn’t a surprise.
What did surprise me was the extent to which regular people started co-opting the bot in their pocket to do anything from faking receipts and creating pictures purporting to show damaged goods to setting up ghost employees and phantom payroll.
It’s natural for this kind of trend to appear over time, but the speed was notable. The key aspect here is that the ground was already perfectly primed.
Many ordinary people have been feeling economic uncertainty since the coronavirus pandemic, and that’s something that has intensified for many in 2025 with rising costs of living.
People were already far more savvy, both technically and about process loopholes (such as in returns policies), than used to be the case. GenAI arrived at a moment when people were already ready and willing to use something that could help them cheat more.
Which fraud-fighting tools, tactics, and signals do you think will become less valuable in 2026?
Ashley Fang: Static, visual-based detection methods for deepfakes, GenAI, and fake IDs will become less valuable.
Charles Yeh: Visual inspection and even manual review are no longer effective, given how good generative AI has gotten.
David Maimon: I try to avoid predictions not grounded in solid data, but one trend is clear: AI tools are making it far easier for fraudsters to create highly convincing fake documents and images. Criminals can now generate realistic IDs, alter photos, and misuse genuine credentials taken from real victims. As these capabilities expand, fraud attempts will look increasingly polished and credible.
Shoshana Maraney: Using individual data points in a silo is going to become increasingly less relevant. Finding a lot of information about a specific data point can still be valuable, but not in isolation.
Do you think any will become more useful?
Ashley Fang: Device, network, and behavioral signals, database verifications, and link analysis.
Charles Yeh: The only way to catch truly sophisticated fraud is through the effective combination and use of signals across many media.
David Maimon: Real-time verification that incorporates layered signals, advanced models, and behavioral intelligence will become even more valuable. In 2026, institutions that strengthen their on-the-spot verification capabilities and combine them with historic signal analysis that AI can't fake will be best positioned to stay ahead of AI-driven fraud.
Shoshana Maraney: With the identity landscape increasingly complicated by more sophisticated synthetic identities, ATOs, scam trickery and more, fraud fighting will have to be grounded in a strong understanding of the entire identity of the user on the other end.
What role do you think AI (both defensive and offensive) will play in shaping fraud fighting in 2026?
Ashley Fang: On the fraudsters' side, AI will make creating synthetic identities much cheaper and easier. Fraudsters also gain access to a wide knowledge base and can run scaled attacks using agents. On the defensive side, multi-modal clustering, anomaly detection, and LLM-based checks and analyses will become more important.
Charles Yeh: AI is an incredible tool with limitless upside but also limitless downside. What’s important is that we harness it effectively for people.
David Maimon: AI is already being weaponized by fraudsters in meaningful ways. We’re seeing deepfakes used to replace faces, generate entirely new ones, and insert them into fake documents and liveness videos.
We’re also beginning to observe early signs of agentic AI being integrated into fraud operations. These systems can autonomously gather information from social media and public websites to build detailed victim dossiers, construct psychological profiles, and craft tailored approaches designed to manipulate or defraud individuals.
Looking ahead, there is growing evidence that agentic AI systems may engage in deceptive techniques — and even autonomous hacking behaviors — without explicit human prompts.
On the defender side, financial institutions and fintechs will continue leaning heavily on AI-driven models to detect fraud in real time. Machine learning systems now evaluate hundreds of signals — from device behavior and geolocation patterns to phone/email intelligence and prior fraud labels — allowing for far more precise risk scoring than traditional rules.
Identity-resolution tools and graph-based AI will also remain essential, linking millions of datapoints to uncover synthetic identities, organized fraud networks, and connections between applications that would otherwise appear unrelated. This trend will only accelerate in 2026.
On the back end, investigators will increasingly rely on generative AI to summarize case histories, identify common MOs across alerts, and dramatically speed up SAR drafting. AI-driven analytical tools will also help fraud teams spot emerging threats earlier by digesting large volumes of unstructured data.
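The graph-based linking David mentions is, at its core, clustering applications that share identity attributes. Here is a minimal union-find sketch; the field names and records are invented for illustration and do not reflect any vendor's schema:

```python
from collections import defaultdict

def link_applications(apps):
    """Group applications that share any identity attribute (phone,
    device ID, address, ...) using union-find. Returns clusters of
    two or more linked applications.
    """
    parent = {app_id: app_id for app_id, _ in apps}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    seen = {}  # attribute value -> first application that used it
    for app_id, attrs in apps:
        for value in attrs:
            if value in seen:
                union(app_id, seen[value])
            else:
                seen[value] = app_id

    clusters = defaultdict(set)
    for app_id, _ in apps:
        clusters[find(app_id)].add(app_id)
    return [c for c in clusters.values() if len(c) > 1]

# Hypothetical applications: A1-A3 form a chain via shared device and phone
apps = [
    ("A1", {"+1-555-0100", "device-9f"}),
    ("A2", {"+1-555-0199", "device-9f"}),   # shares a device with A1
    ("A3", {"+1-555-0199", "device-2c"}),   # shares a phone with A2
    ("A4", {"+1-555-0042", "device-77"}),   # unlinked
]
print(link_applications(apps))
```

Note that A1 and A3 share no attribute directly; they are linked transitively through A2, which is exactly how these tools surface applications that "would otherwise appear unrelated."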
Shoshana Maraney: I think teams will start looking at AI as the next stage of automation rather than a magic wand. Initially they'll use it in small ways, to save time and expand reach. As confidence grows, more fraud fighting teams will start to think seriously about their processes and how they could reimagine them to take advantage of the abilities that AI brings to the table.
Does the C-suite think about fraud/trust and safety teams differently than they did five years ago? If yes, what’s changed?
Patrick Hall: [Fraud] teams were viewed as cost centers, but there's a clear pivot to them being seen as revenue- and trust-generating at the executive level. The fraud team is increasingly important to a company's overall strategy.
Charles Yeh: Fraud teams have evolved a lot over the last year, but trust and safety teams have undergone an even more dramatic shift. With so many interactions now happening online, it’s becoming clear that some parts of the internet can’t or shouldn’t operate without accountability. It’s more important than ever that the internet is safe and human.
Shoshana Maraney: Fraud fighting was once purely about risk and cost mitigation, whereas now more companies are seeing it as integral to the customer journey and potentially an enabler of smoother experience and improved results. That narrative shift has started to be internalized in more companies.
In the companies where this is the case, it’s largely due to hard work from the fraud team. It’s both a matter of proactively offering creative solutions as part of the wider fraud work, and making sure that this is recognized.
If you could recommend one operational change for fraud teams heading into 2026, what would it be and why?
Ashley Fang: To do more proactive monitoring and anomaly detection using a wide range of passive signals.
Charles Yeh: Build a suite of tools and methods for addressing fraud — there’s no silver bullet. Catching fraud isn’t about simply denying fraudsters, it’s about changing their ROI equation such that they stop trying.
Shoshana Maraney: Many teams are still wary of using GenAI to streamline their work, and this comes from a lot of really good places — it’s great that fraud teams are so responsibly committed to ensuring the safety of their data, compliance with all legal and regulatory requirements, and so on.
That said, it’s past the point where non-engagement is an option. Fraudsters have expanded their own operations so drastically using GenAI that if fraud fighters don’t put effort into becoming comfortable with its use in 2026, it’ll cause a serious imbalance in the ongoing arms race.
I’d recommend starting three things in parallel:
Use GenAI to communicate more often, and more effectively, to key stakeholders and leaders in your company.
Kick off an analysis of your fraud fighting processes to identify places where GenAI can not only save you time, but also improve your results.
Engage directly and openly with your R&D and legal teams. Explore the GenAI reality in your company, discuss your needs, and find an arrangement that can work long-term for everyone.
