At the 2026 World AI Cannes Festival (WAICF), identity verification company Microblink received the Grand Jury Excellence Award for its Fraud Lab research initiative. The project focuses on a growing problem many businesses are only beginning to recognize: companies can no longer assume the request on the other side of a digital transaction comes from a real person.
Generative AI has changed the economics of fraud. A convincing face, a believable identity document, and a supporting digital history can now be produced in minutes using widely available tools. What once required technical expertise and time can now be done quickly and at scale.
Businesses are already encountering the impact. Lenders approve loans tied to identities that exist only long enough to receive funds. Retailers issue refunds to customers created solely to request them. Platforms pay referral and sign-up incentives to coordinated fraud networks. Customer support teams reset accounts for individuals successfully impersonating legitimate users. Some employers have even discovered that a convincing interview and valid-looking identification do not always correspond to a real employee.
Security historically focused on protecting accounts. Increasingly, it depends on confirming the participant.
From Know Your Customer to Know Your Actor
Most digital verification systems were built on a simple assumption: a real person presents legitimate credentials during onboarding, and once verified, that person remains the trusted decision maker.
Microblink describes the next stage as Know Your Actor (KYA). The company coined the term to distinguish it from traditional Know Your Customer (KYC) processes.
KYC verifies identity once, typically through document checks and facial recognition at account creation. KYA treats identity as continuous. It evaluates whether the same human participant remains present throughout a digital interaction.
AI-generated imitations improve quickly. A synthetic identity can pass document checks and selfie verification, and can even respond naturally to a support agent without raising concern. The controls that businesses trusted for years no longer guarantee a real person. A business may believe it has verified a customer when it has actually admitted a synthetic identity.
Why Fraud Detection Is Changing
Traditional detection systems learn from past fraud cases. However, generative AI produces new variations constantly. When a synthetic identity does not match known fraud patterns, it may pass automated checks.
Microblink’s Fraud Lab was designed to address this challenge by reversing the model. Instead of waiting for real-world attacks, researchers create synthetic IDs, face swaps, and manipulated documents in controlled environments. Detection systems are trained to recognize manipulation behavior rather than specific examples.
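The reversal can be illustrated with a minimal sketch. Everything here is hypothetical: the "edge inconsistency" feature, the value ranges, and the thresholding are invented for illustration and are not Microblink's actual pipeline. The point is only the shape of the idea: the detector is trained against manipulations the researchers generate themselves, so it keys on manipulation behavior rather than a catalog of past fraud cases.

```python
import random

random.seed(0)

# Hypothetical feature: genuine documents show low "edge inconsistency";
# any manipulation (face swap, field edit) raises it. The ranges below
# are invented for illustration, not taken from any real system.
def genuine_sample() -> float:
    return random.uniform(0.0, 0.3)

def manipulated_sample() -> float:
    # Fraud-Lab-style: generate our own attacks instead of waiting for
    # real ones. Each call simulates a new manipulation variant.
    return random.uniform(0.4, 1.0)

# "Training": learn a threshold from generated attacks,
# not from a history of known fraud cases.
genuine = [genuine_sample() for _ in range(200)]
attacks = [manipulated_sample() for _ in range(200)]
threshold = (max(genuine) + min(attacks)) / 2

def is_manipulated(edge_inconsistency: float) -> bool:
    return edge_inconsistency > threshold

# A never-before-seen variant is still flagged, because the detector
# responds to the manipulation artifact, not to a specific known example.
print(is_manipulated(0.75))  # True
```

In practice the single hand-picked feature would be replaced by a learned model, but the training data would still come from generated manipulations rather than historical fraud alone.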
The approach resembles preventative security more than reactive fraud investigation. The WAICF recognition reflects a broader industry shift toward anticipating AI-driven threats rather than responding after losses occur.
Privacy and Synthetic Data
As companies improve detection, they encounter a privacy challenge. Regulations set limits and restrictions on the use of personal data for training AI systems, even though larger datasets improve accuracy. Synthetic data provides a practical solution, allowing organizations to train models using generated identities instead of real customer information.
As privacy expectations and regulations increase globally, companies are finding security and privacy must operate together rather than in opposition.
Identity Is Becoming Continuous
Identity verification is no longer a single event; it increasingly functions as an ongoing condition, because AI systems can mimic human behavior mid-session. A legitimate user logs in, then malware takes control of the session and allows automated agents to complete transactions.
Organizations are responding by monitoring behavioral signals, biometrics, device changes, and transaction patterns over time. The operational question is shifting from “Was this user verified?” to “Is the same person still participating?” When treated this way, identity becomes an operational control rather than a compliance requirement.
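A minimal sketch of what "is the same person still participating?" can look like in code. The signal names, weights, and threshold below are all assumptions chosen for illustration; a production system would derive them from observed fraud data and far richer telemetry.

```python
from dataclasses import dataclass

# Hypothetical continuous-identity signals, normalized for illustration.
@dataclass
class SessionSignals:
    typing_rhythm_drift: float   # 0.0 (matches baseline) .. 1.0 (very different)
    device_changed: bool         # new device fingerprint mid-session
    geo_velocity_anomaly: bool   # location jump implausible for a human
    txn_pattern_deviation: float # 0.0 .. 1.0 versus the user's history

def session_risk(s: SessionSignals) -> float:
    """Combine signals into a 0..1 risk score (weights are illustrative)."""
    score = 0.4 * s.typing_rhythm_drift + 0.3 * s.txn_pattern_deviation
    if s.device_changed:
        score += 0.2
    if s.geo_velocity_anomaly:
        score += 0.3
    return min(score, 1.0)

def still_same_person(s: SessionSignals, threshold: float = 0.5) -> bool:
    """The operational question, re-asked throughout the session."""
    return session_risk(s) < threshold

# A session that starts clean versus one hijacked after a device change.
baseline = SessionSignals(0.05, False, False, 0.1)
hijacked = SessionSignals(0.8, True, True, 0.7)
print(still_same_person(baseline))  # True
print(still_same_person(hijacked))  # False
```

The design point is that the check runs repeatedly during the session, so verification becomes an operational control rather than a one-time onboarding gate.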
What Businesses Should Do Now
Business leaders evaluating digital risk are beginning to adjust processes accordingly. Verification at onboarding is no longer sufficient for higher-risk transactions. Documents should not be treated as proof of a person without behavioral confirmation. Identity verification performance increasingly shapes fraud losses, customer friction, and approval rates.
Organizations adopting continuous identity monitoring often find that stronger verification also improves customer experience by reducing unnecessary friction for legitimate users.
Why the WAICF Award Matters
The WAICF award recognized Microblink’s Fraud Lab, but its significance extends beyond one company. The recognition highlights a broader change occurring across digital commerce.
Online systems now interact with software that can initiate payments, open accounts, or carry out transactions with minimal human involvement. As automation expands, companies face a more fundamental challenge of determining whether they are engaging with a person or a convincingly simulated participant. Trust has to be reinforced throughout an interaction rather than assumed at the start.
Organizations that adjust their verification models tend to operate with more certainty and fewer downstream disruptions. Those that continue to rely on one-time checks often find themselves devoting more time and resources to fraud recovery, dispute resolution, operational cleanup, and customer support remediation. Digital commerce will always be about bringing customers in. The difference now is that businesses must also confirm the person completing the transaction is a real participant with genuine intent.
About Microblink
Microblink is an identity verification technology company focused on digital identity, fraud prevention, and continuous authentication solutions.
Photo credit: BDAIP/WAICF
