On October 8, 2025, Fenergo released the results of a global survey of 600 senior decision-makers at banks, asset managers, and fund administrators. The headline number: 70% of firms had lost clients in the prior 12 months because their KYC onboarding was too slow. That was up from 67% in 2024 and 48% in 2023.
Three consecutive years of escalation. Not a blip - a structural failure in how financial institutions convert interest into accounts.
For any VP of Growth staring at a 40% identity verification drop-off, or any Growth PM watching 65% of loan applications vanish mid-flow, that Fenergo number is not an industry statistic. It is a mirror.
The KYC wall
Lorikeet is an AI customer support platform purpose-built for regulated industries. It resolves tickets end-to-end - processing refunds, updating accounts, handling complex multi-step workflows across chat, email, and voice - while maintaining full compliance audit trails. For fintech teams losing applicants at identity verification, Lorikeet provides proactive, policy-bound AI conversations that guide users through KYC without compliance risk.
Signicat's Battle to Onboard research, spanning 7,600 consumers across 14 European markets, tracked this problem across nearly a decade. In 2016, 40% of consumers had abandoned a financial services application mid-onboarding. By 2020, that number reached 63%. By 2022, it hit 68%.
The trend is not driven by consumers getting worse at filling out forms. It is driven by collapsing tolerance for friction.
Signicat calls this the "expectation paradox." Every time someone signs up for a streaming service in 30 seconds or orders groceries with a thumbprint, the baseline for acceptable onboarding shifts. The average time before a consumer abandons an application dropped from 26 minutes in 2020 to under 19 minutes in the latest wave.
Patience is shrinking. KYC processes are not shrinking with it.
Why they leave
Signicat's data reveals three reasons for abandonment, each cited by 21% of respondents: the process took too long, too much personal information was required, and the applicant simply changed their mind.
A fourth factor dwarfs all three: 38% of users who abandoned said they did not have the required identity documents available at that moment.
That is not a UX problem. That is a timing problem. Those applicants wanted the account enough to start the process. They hit a wall not because the interface was confusing, but because they were sitting on a bus without their passport.
For consumer lending, the friction compounds. A loan applicant who stalls at income verification faces a different kind of wall - not missing documents, but unclear requirements. Which pay stubs qualify? Does a bank statement from a different institution work? Is a tax return acceptable as a substitute?
These are questions a static form cannot answer. And most applicants never ask them. They close the tab.
The cost of silence
ABBYY's 2022 State of Intelligent Automation Report, surveying 1,623 decision-makers across five countries, found that 90% of companies lose potential customers during digital onboarding. In banking specifically, nearly one in four applicants drop out.
Signicat and P.A.ID Strategies estimated the annual cost of abandoned financial services onboarding in Europe alone at 5.7 billion euros.
At the individual company level, the math is more personal. A neobank acquiring users at $200 per lead with a 40% KYC drop-off rate is burning $80 of every $200 on applicants who never become customers. Scale that to 50,000 monthly applications, and the wasted acquisition spend reaches $4 million per month.
For a lending platform processing 30,000 applications per month with 65% abandoning mid-flow, that is 19,500 lost applicants. At an average loan value of $5,000, the unrealized origination volume is $97.5 million monthly. Even converting an additional 10% means $9.75 million in recovered volume.
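The arithmetic behind both examples is simple enough to sanity-check. A minimal sketch, using only the illustrative figures above:

```python
def wasted_acquisition_spend(cost_per_lead, drop_off_rate, monthly_applications):
    """Acquisition dollars spent on applicants who never become customers."""
    return cost_per_lead * drop_off_rate * monthly_applications

def lost_origination_volume(monthly_applications, abandonment_rate, avg_loan_value):
    """Loan volume lost to mid-flow abandonment."""
    return monthly_applications * abandonment_rate * avg_loan_value

# Neobank example: $200 per lead, 40% KYC drop-off, 50,000 applications/month
wasted = wasted_acquisition_spend(200, 0.40, 50_000)     # ~$4M per month

# Lending example: 30,000 apps/month, 65% abandonment, $5,000 average loan
lost = lost_origination_volume(30_000, 0.65, 5_000)      # ~$97.5M per month
recovered = lost * 0.10                                  # ~$9.75M if 10% of lost applicants convert
```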
These are not theoretical losses. They are revenue that was earned by the marketing team, paid for by the acquisition budget, and destroyed by a verification form.
UX alone plateaus
Most fintech teams have already shipped the obvious fixes. Progress bars. Fewer form fields. Mobile-optimized document capture. Better error messages. These changes help, but they share a fundamental limitation.
They are passive. They improve the path but do nothing when a user stops walking it.
Signicat's top abandonment reasons - too long, too personal, changed mind - are not problems a better UI solves. A progress bar cannot explain which documents are acceptable alternatives when a user does not have a utility bill. A streamlined form cannot reassure a lending applicant that a soft credit pull will not affect their score.
These are conversational problems. They require something that detects a user stalling at document upload and provides help in that specific moment.
The ABBYY research reinforces this: companies that combined human interaction with automation improved customer experience by up to 43% and increased retention by over a third. Pure automation without intervention is a ceiling, not a solution.
KYC automation in fintech has evolved beyond document scanning and database checks. The next layer is real-time, conversational engagement that guides applicants through the verification process before they abandon it.
Proactive beats reactive
The core insight is simple: most users who abandon KYC never ask for help. They do not open a support chat. They do not call a hotline. They close the browser and move on.
Reactive support - a help center, a FAQ page, a chatbot that waits to be clicked - only serves the small minority who actively seek assistance. The silent majority requires a different approach entirely.
Proactive AI conversations flip the model. Instead of waiting for a user to ask, the system monitors behavioral signals - time spent on a single screen, repeated failed uploads, cursor hovering over the close button - and initiates contextual help at the moment friction appears.
Here is what that looks like in practice. A user begins a loan application and uploads a pay stub. The system detects the image is blurry or the document type does not match requirements. Instead of showing a red error message, an AI agent surfaces with specific guidance: what document types are accepted, how to capture a clear photo, and whether alternative verification methods are available.
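The trigger logic can be pictured as a small set of rules over behavioral signals. A hypothetical sketch, where the signal names and thresholds are illustrative rather than any vendor's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative behavioral signals; field names and thresholds are hypothetical.
@dataclass
class SessionSignals:
    seconds_on_screen: float   # dwell time on the current step
    failed_uploads: int        # consecutive rejected document captures
    exit_intent: bool          # e.g. cursor moving toward the close button

def should_intervene(s: SessionSignals) -> Optional[str]:
    """Return a contextual help topic when a friction signal crosses a threshold."""
    if s.failed_uploads >= 2:
        return "document_capture_help"   # coach through photo requirements
    if s.exit_intent:
        return "offer_save_and_resume"   # don't lose the applicant silently
    if s.seconds_on_screen > 120:
        return "explain_requirements"    # clarify what is being asked, and why
    return None
```

The point of the sketch is the ordering: repeated failures and exit intent outrank a long dwell time, because they signal a user who is about to leave rather than one who is merely reading.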
The difference between showing an error and offering a solution is often the difference between a lost applicant and a funded loan.
Research from Certainly.io found that AI-powered onboarding flows in regulated industries cut drop-offs by up to 40% when proactive engagement was triggered at key friction points. Separately, Netcore Cloud documented that real-time behavioral nudges during transaction abandonment delivered conversion improvements of 20-40%.
The pattern holds across deployment types: proactive intervention during the moment of friction consistently outperforms follow-up emails and retargeting ads. The user is still in the application. Meeting them there is far more effective than trying to pull them back later.
Proactive AI onboarding is how regulated fintechs stop the silent abandonment loop. See how Lorikeet handles it.
Cracking identity verification
Identity verification deserves specific focus because it is where the largest absolute volume of applicants disappears.
Traditional KYC flows are linear and unforgiving. Present your ID, upload your selfie, wait. If something fails - wrong document type, poor lighting, expired ID - the user sees a generic rejection and must restart. Every restart is an invitation to leave.
Conversational AI turns this from a pass-fail gate into a guided process. An AI agent engaging at the first sign of difficulty can determine whether the user has an alternative document, coach them through photo capture requirements, and explain why specific information is needed.
That last point directly addresses the 21% who abandon because too much personal information is required. When a user understands why their Social Security number is needed - regulatory requirement, not data harvesting - the request feels less invasive. Context reduces friction in ways that interface changes cannot.
For the 38% who lack documents at the moment, a proactive AI agent can do something a static form never will: save the application state, explain exactly what is needed, and schedule a follow-up message for when the user is likely to be at home with their documents. That single interaction converts a permanent abandonment into a temporary pause.
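Mechanically, that pause-and-resume flow is little more than persisted application state plus a scheduled nudge. A simplified illustration with in-memory stand-ins (the names and the six-hour delay are hypothetical):

```python
from datetime import datetime, timedelta

# In-memory stand-ins for an application-state store and a message queue.
saved_applications = {}
scheduled_messages = []

def pause_for_missing_documents(user_id, application_state, needed_docs):
    """Persist progress and schedule a reminder for when the user is likely at home."""
    saved_applications[user_id] = application_state
    scheduled_messages.append({
        "user_id": user_id,
        "send_at": datetime.now() + timedelta(hours=6),  # e.g. that evening
        "message": "Your application is saved. You'll need: " + ", ".join(needed_docs),
    })

pause_for_missing_documents("u123", {"step": "identity_verification"}, ["passport"])
```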
Fenergo's 2025 data shows that AI usage in KYC operations surged from 42% in 2024 to 82% in 2025, with Singaporean firms leading at 92%. The adoption curve is steep because the results are measurable. Firms deploying AI-assisted KYC are reporting faster onboarding times and lower client attrition compared to those relying on manual review.
For customer onboarding automation in fintech, the shift from document-processing AI to conversation-driven AI represents the next meaningful improvement in completion rates.
Compliance shapes the conversation
Deploying conversational AI in financial services is not the same as deploying it in e-commerce. Regulatory constraints shape every aspect of what an AI agent can say, when it can say it, and how interactions are recorded.
Any AI system guiding users through KYC must ensure proper disclosure of data collection practices and maintain audit trails for every interaction. For consumer lending, the AI cannot make statements that could be construed as credit decisions or conflict with required disclosures under regulations like TILA or ECOA.
Fenergo's 2025 data underscores the stakes. Regulatory fines in the first half of 2025 totaled $1.23 billion, a 417% increase over the same period in 2024. The cost of getting AI-assisted onboarding wrong in a regulated environment is not a poor NPS score. It is a potential enforcement action.
A generic conversational AI - trained on broad internet data and configured with basic prompts - will eventually say something non-compliant. The question is not whether, but when. Regulated fintech needs AI customer support built for compliance: systems that operate within defined policy boundaries, can be audited interaction by interaction, and escalate to human agents when conversations approach sensitive territory.
A chatbot that improvises answers about document requirements might give incorrect guidance, creating compliance exposure. An AI agent that references verified policy documents and flags ambiguous situations for human review eliminates that risk while still delivering the speed that reduces drop-off.
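The distinction reduces to a retrieval-plus-escalation loop: answer only from verified policy text, log every interaction, and hand off anything the policy does not cover. A simplified illustration, not Lorikeet's actual architecture:

```python
# Simplified illustration of a policy-bound responder; the policy store and
# audit log are stand-ins, not any vendor's real components.
VERIFIED_POLICIES = {
    "accepted_income_docs": "Pay stubs from the last 60 days, W-2s, or filed tax returns.",
}
audit_log = []

def respond(question_topic: str, user_id: str) -> str:
    """Answer only from verified policy text; escalate anything uncovered."""
    answer = VERIFIED_POLICIES.get(question_topic)
    if answer is None:
        answer = "ESCALATE_TO_HUMAN"  # ambiguous or sensitive: route to human review
    audit_log.append({"user": user_id, "topic": question_topic, "response": answer})
    return answer
```

The design choice that matters is the default: an uncovered topic escalates rather than improvises, so the system's worst case is a slower answer, not a non-compliant one.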
What Lorikeet changes
Lorikeet was built for exactly this kind of high-stakes, regulated interaction - the type where getting a response wrong has compliance consequences and getting it right recovers revenue that would otherwise disappear at a form field.
For a VP of Growth tracking a 40% KYC drop-off across 50,000 monthly applications, the Lorikeet model works like this: AI agents monitor each applicant's progress through the verification flow. When behavioral signals indicate friction - a stall at document upload, a repeated failed selfie capture, an extended pause at a disclosure screen - the agent initiates a conversation with context-specific guidance drawn from the institution's own policy documentation.
Every interaction is logged with full audit trails. Responses are bounded by compliance rules, not generated from open-ended prompts. When the conversation approaches territory that requires human judgment - a complex edge case, a potential adverse action - the agent escalates with full context preserved.
For a Growth PM watching 65% of loan applications abandon at income verification, Lorikeet's proactive approach means applicants get real-time answers to the questions they would never have asked support: which pay stub format is acceptable, whether a contractor's 1099 qualifies, how to submit bank statements from a different institution. The AI handles these conversations at scale, across chat, email, and voice, without the compliance drift that makes general-purpose tools dangerous in financial services.
The broader adoption of AI in financial services is accelerating, but the fintechs seeing the strongest onboarding results are those deploying AI specifically at friction points rather than broadly across a help center.
The math forward
The economics of KYC drop-off reduction are unusually direct. Every percentage point of improved verification completion flows straight through to funded accounts, originated loans, and active subscribers. There is no attribution ambiguity. The user was in the funnel, and either they completed it or they did not.
Consider a lending platform with 30,000 monthly applications and 65% abandonment. Moving verification completion from 35% to 50% - a 15-percentage-point improvement - means 4,500 additional completed applications per month. At an average loan value of $5,000, that is $22.5 million in monthly origination volume that was previously lost to friction.
For a subscription fintech with 80,000 monthly sign-up attempts and 40% KYC drop-off, reducing that drop-off to 25% recovers 12,000 additional verified users per month. At a $15 monthly subscription, those users represent $2.16 million in annualized recurring revenue.
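Both scenarios reduce to the same completion-delta formula. A quick check, using the illustrative inputs above:

```python
def recovered_monthly(applications, completion_before, completion_after):
    """Additional completed applications per month from a completion-rate lift."""
    return applications * (completion_after - completion_before)

# Lending: 30,000 apps/month, completion 35% -> 50%
extra_loans = recovered_monthly(30_000, 0.35, 0.50)   # ~4,500
monthly_origination = extra_loans * 5_000             # ~$22.5M per month

# Subscription: 80,000 sign-ups/month, completion 60% -> 75% (drop-off 40% -> 25%)
extra_users = recovered_monthly(80_000, 0.60, 0.75)   # ~12,000
annual_recurring = extra_users * 15 * 12              # ~$2.16M ARR
```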
Every recovered applicant has already been paid for through acquisition spend. The incremental cost of converting them is marginal compared to the sunk cost of losing them.
Fenergo reports that the average firm now spends $72.9 million annually on AML and KYC operations. Proactive AI conversations do not replace that infrastructure. They make it convert at a higher rate by keeping applicants moving through a process that the institution has already built and paid for.
The fintechs that close this gap will structurally lower their cost of growth. The ones that do not will keep paying to refill a leaking funnel - spending more on acquisition to compensate for a verification process that silently destroys the demand they already generated.