DFS25 Recap: Trust, Fraud & AI

Are You Human or a Robot? Trust, Fraud, and Identity in an AI-Driven World

As artificial intelligence rapidly shifts from experimental technology to operational backbone, one question looms large for financial services and digital platforms alike: who or what are we really dealing with online?

This was the central theme of the Roundtable on Trust, Fraud and AI, opened by Sutton Maxwell, Head of Revenue and Growth at Swedish identity and API security company Curity, and followed by a wide-ranging panel discussion featuring experts in fraud prevention, data analytics and technology law.

What emerged was a sobering but pragmatic conclusion: in an AI-driven world, trust cannot be assumed; it must be engineered, continuously validated and legally defensible.

Sutton Maxwell speaking at DFS25

Sutton Maxwell opened with a simple provocation: “Are we human, or are we robots?” For banks, insurers, fintechs and financial infrastructure providers, the answer increasingly matters. Their customers are no longer only people clicking through web portals, but also AI agents accessing services directly via APIs — acting autonomously and at machine speed.

Trust, Maxwell argued, is the common currency of financial services. Customers expect speed and convenience, but also friction where it matters, especially when money, identity and sensitive data are involved.

The challenge, he noted, is that traditional trust mechanisms like KYC checks, authentication flows and user verification patterns were designed for humans. They struggle to map cleanly onto an ecosystem where machines act on behalf of people, often without a visible interface at all.

As AI agents bypass front-end applications and connect directly to backend services, APIs have become the new perimeter.
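
To make the idea of APIs as a perimeter concrete, here is a minimal sketch of token-based access control for such a backend service, written in Python with the PyJWT library. The issuer, audience, scope and key handling are illustrative assumptions, not a description of Curity's products or the panelists' systems:

    # Minimal sketch: validating a JWT access token at the API perimeter.
    # Issuer, audience and the key source are illustrative assumptions.
    import jwt  # PyJWT

    EXPECTED_ISSUER = "https://idp.example.com"   # hypothetical identity provider
    EXPECTED_AUDIENCE = "insurance-api"           # hypothetical API identifier

    def authorize_request(token: str, public_key: str) -> dict:
        """Reject any caller, human or machine, without a valid token."""
        claims = jwt.decode(
            token,
            public_key,
            algorithms=["RS256"],        # pin the algorithm; never accept "none"
            audience=EXPECTED_AUDIENCE,
            issuer=EXPECTED_ISSUER,
        )
        # Machine callers (AI agents) can be given a dedicated scope or
        # grant type, so policy can differ from human sessions.
        if "read:policies" not in claims.get("scope", "").split():
            raise PermissionError("token lacks required scope")
        return claims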

Today, most organizations are seeing a surge in machine-generated traffic. The difficulty lies in distinguishing:

  • good bots from bad bots

  • legitimate AI agents from malicious automation

  • and expected behavior from anomalous or fraudulent patterns.

When behavior itself changes by design, as it does with AI, traditional detection methods lose clarity.
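
As a toy illustration of why such static rules lose their edge, consider a naive traffic classifier built on fixed signals. Every threshold and header check below is a hypothetical example, not a production rule set:

    # Toy sketch: naive API traffic triage using static signals.
    # All thresholds and header checks are hypothetical examples.
    def classify_caller(requests_per_minute: float,
                        user_agent: str,
                        has_valid_token: bool) -> str:
        if not has_valid_token:
            return "block"           # unauthenticated automation
        if "bot" in user_agent.lower():
            return "rate-limit"      # self-declared crawler
        if requests_per_minute > 120:
            return "review"          # faster than human browsing
        return "allow"

    # The weakness: a well-behaved AI agent and a careful attacker can
    # both present valid tokens, plausible user agents and human-like
    # request rates, so rules like these tell them apart poorly.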

Maxwell’s argument was clear: API security and identity management must be treated as product disciplines, not infrastructure afterthoughts. Security has to be designed into the lifecycle of digital services, not bolted on after launch.

To illustrate what “security by design” looks like in practice, Maxwell pointed to a Nordic insurance provider operating across eight markets with roughly €4 billion in revenue.

A decade ago, the company adopted an API-first, digital-first strategy, long before AI became mainstream. They exposed their insurance capabilities as products, complete with lifecycle management, partner access and embedded security.

The results were transformative:

  • Flexibility across markets, products and customer segments

  • Speed, enabling faster launches and lower operational costs

  • Optionality, allowing partners to build on top of their services

Most importantly, this architecture positioned them to evolve naturally toward an AI-first model, where agentic workflows and AI assistants could securely access customer and partner data.

Panel discussion moderated by Marc Lainez

We then moved on to the panel discussion, moderated by Marc Lainez, which explored how attackers are already exploiting AI and how defenders are responding.

Maurits Lucas kicked off the discussion with insights from the front lines of fraud detection. He emphasized that fraudsters are just like ordinary people: curious, adaptive and increasingly aware of AI’s potential. “The underground is buzzing with AI,” Lucas explained. He detailed early attempts by criminals to use AI for coding malware and creating fake live video streams to bypass identity verification, noting that initial results were laughably poor. Yet AI has since enabled the creation of highly convincing phishing content, fake social media profiles and sophisticated catfishing campaigns targeting unsuspecting individuals. Lucas highlighted a recent trend originating in Asia, where AI-generated Facebook profiles targeted senior communities with elaborate scams disguised as social activities.

Folkert de Neve speaking at DFS25

Echoing Lucas’s observations, Folkert de Neve described parallels in the financial sector. “Fraudsters are using AI the same way we use it: speed and sophistication make detection harder,” he said. De Neve illustrated how AI-generated websites and communications can appear almost identical to legitimate ones, increasing the risk of fraud. He also noted that the speed of modern payments leaves financial institutions mere seconds to act, and that incorporating additional data sources, such as geolocation or business verification, can enhance detection models.
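
To illustrate the kind of enrichment de Neve describes, here is a hedged sketch of an in-flight payment check that folds geolocation and payee-verification signals into one explainable risk score. The features, weights and threshold are invented for illustration, not any institution’s actual model:

    # Illustrative sketch: scoring an instant payment in-flight.
    # Features, weights and the review threshold are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Payment:
        amount_eur: float
        new_payee: bool              # first transfer to this account?
        geo_mismatch: bool           # device far from the usual location?
        payee_name_verified: bool    # e.g. a business-registry lookup

    def risk_score(p: Payment) -> float:
        score = 0.0
        score += 0.4 if p.new_payee else 0.0
        score += 0.3 if p.geo_mismatch else 0.0
        score += 0.2 if not p.payee_name_verified else 0.0
        score += min(p.amount_eur / 10_000, 1.0) * 0.1
        return score                 # 0.0 (benign) .. 1.0 (high risk)

    def decide(p: Payment) -> str:
        # With instant payments the whole check must finish in well under
        # a second, which favors simple, explainable features like these.
        return "hold-for-review" if risk_score(p) >= 0.5 else "release"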

Edwin Jacobs speaking at DFS25

Edwin Jacobs provided the legal perspective, outlining the current liability framework in the face of AI-enabled fraud. While sophisticated deepfake attacks may succeed, he noted that under existing payment legislation consumers are generally protected. “The bank will have to reimburse the customer, unless there is gross negligence on behalf of the fraud victim. In that case the victim remains liable,” Jacobs explained, adding that the bank remains liable to regulators for any lapses in KYC or anti-money laundering procedures. Identity solution providers may bear contractual liability, but ultimate accountability often rests with the bank.

Jacobs also highlighted the emerging trend of AI audits and increased regulatory scrutiny, signaling that institutions may soon be questioned not only on whether they use AI, but also on why they may have chosen not to employ state-of-the-art solutions.

de Neve and Lucas discussing AI fraud defense

The discussion then turned to defense strategies. De Neve and Lucas shared practical examples of how AI is used to reduce false positives and improve fraud detection. By analyzing historical transaction data and user behavior, such as typing patterns, device handling and navigation speed, banks can create models that differentiate between legitimate users and potential fraudsters. Lucas noted that even micro-behaviors, like hesitation while using an app, can indicate a potential scam. Importantly, both emphasized that human oversight remains critical: the GDPR restricts fully automated decisions with significant effects on individuals, and regulatory expectations demand explainability.
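
A minimal sketch of what such behavior-based screening could look like, assuming session features like inter-keystroke timing and navigation speed have already been extracted. The model choice (an isolation forest) and the feature set are assumptions for illustration; note that the output feeds a human review queue rather than an automatic block, in line with the panel’s point on oversight:

    # Minimal sketch: flagging anomalous sessions from behavioral features.
    # Feature names and the model choice are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [mean inter-keystroke interval (ms), navigation speed
    # (pages/min), hesitation pauses (count)] for one past session.
    history = np.array([
        [180, 4.0, 1],
        [175, 3.5, 0],
        [190, 4.2, 2],
        [185, 3.8, 1],
    ])

    model = IsolationForest(contamination=0.1, random_state=0).fit(history)

    def screen_session(features: list[float]) -> str:
        # -1 means the session deviates from this user's learned pattern.
        verdict = model.predict(np.array([features]))[0]
        # Deviations are queued for a human analyst, not auto-blocked:
        # solely automated adverse decisions sit badly with GDPR Art. 22.
        return "queue-for-analyst" if verdict == -1 else "pass"

    print(screen_session([420, 1.2, 7]))  # slow, hesitant session -> review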

Addressing concerns about bias in AI, the panelists agreed that while bias can arise from training data, careful design, monitoring and collaboration with clients help mitigate the risks. Jacobs noted that legal frameworks increasingly require institutions to explain AI-based decisions, ensuring that models are accountable, transparent and proportional to the risk.

Looking ahead, the panel considered the next five years of AI-enabled fraud. De Neve expressed hope that collaboration across financial institutions could create an ecosystem for sharing fraud signals and threat intelligence, enhancing defenses collectively. Jacobs predicted a “Return of the Human,” where AI supports but does not replace human judgment, emphasizing the need for new skill sets and training. Lucas highlighted the continued proliferation of AI-generated content, noting that while it may complicate information verification, its impact on fraud will be shaped by the combined efforts of defenders and regulators.

DFS25 panel concluding discussion

The panel concluded with a consensus: AI is both a weapon and a shield in the fight against fraud. While the arms race between criminals and defenders will never fully end, careful use of AI, regulatory oversight, and collaboration across institutions can help tip the balance in favor of security, trust, and resilience.
