
Exploding cross-border fraud and deepfakes challenge fintech’s manual oversight of autonomous AI

Global fintech firms were among the attendees who converged at a conference to discuss agentic AI verification and ethical frameworks amid rising threats.

How can financial firms verify if their AI agents are hallucinating, or falling under the control of bad actors or insiders? In an era of exploding cross-border fraud and deepfake phishing, can manual oversight keep pace with machine-scale threats?

These provocative questions frame the urgent challenges facing global financial technology as it absorbs AI, as articulated in the keynote of an industry conference.

At the Splunk .conf25 event, discussions highlighted the mounting pressure to verify deployed AI agents, chatbots, and applications. One consensus specific to the fintech industry: its next growth phase hinges on ethical frameworks that enforce explainable decisions, from credit approvals to fraud detection, amid regulatory scrutiny in markets such as the US and the Asia Pacific region.

This backdrop set the stage for deeper insights from industry voices navigating these tensions.

Operational efficiency and ethical AI frameworks
In an exclusive interview, Hao Yang, Vice President of Artificial Intelligence at Splunk, the conference organizer, highlighted the core industry challenge: ensuring accurate, high-quality outputs free of errors or manipulation.

He noted: “We deploy applications. How do we know if they are doing what they are supposed to, and not being controlled by bad actors?” This extends to operational efficiency, where firms seek to empower analysts. “Customers want to be more efficient and allow analysts to do more,” Yang explained. “That is the biggest trend among IT leaders right now.”

Drawing from interactions with global customers, Hao Yang revealed: “Whenever I talk to customers, the first question is on agentic AI and how it will work in their domains.” Large corporations are investing heavily to reshape operations through AI and are seeking help to build resilience for AI infrastructure and AI-driven system oversight. His advice? Use a dual approach to digital resilience (a simplified sketch of the second step follows the list):

  1. Fortify all systems for AI
  2. Use AI to detect anomalies

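In practice, the second step often starts small. The sketch below is a hypothetical example, not drawn from Splunk’s products: it flags sudden spikes in an operational metric (here, imagined failed-login counts per minute) using a rolling z-score, with a window size and threshold chosen purely for illustration.

```python
# Hypothetical sketch of "use AI to detect anomalies": flag sudden spikes in
# an operational metric with a rolling z-score. Window and threshold are
# illustrative assumptions, not a description of Splunk functionality.
import numpy as np

def flag_anomalies(series, window=30, threshold=4.0):
    """Return indices where a value deviates sharply from the recent baseline."""
    series = np.asarray(series, dtype=float)
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(series[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# A burst of failures stands out against a quiet baseline.
failed_logins = [3, 2, 4, 3, 2] * 20 + [55]
print(flag_anomalies(failed_logins))  # -> [100]
```
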
Fintech firms, including community banks expanding collaboratively, rely on such tools for comprehensive monitoring, Hao Yang noted, pointing to longstanding AI use in payments, credit, and insurance. “These industries have used AI for years, but with regulations ensuring ethical application.”

Transparency is critical under mandates like those in the United States, where banks must explain credit denials in plain language, not opaque algorithms. “It has to be human-understandable, not just pointing to the algorithm,” Yang said. This enforces accountability, curbing bias in high-stakes decisions. Citing some of his own firm’s case studies in APAC, he asserted that cloud migration with hybrid visibility (i.e., full visibility into both cloud and on-premises resources) enables clients to manage their hybrid operational environment with ease.
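To make the plain-language requirement concrete, here is a hypothetical illustration of how a lender might translate a model’s adverse factors into sentences a customer can understand. The feature names, attribution values, and wording are invented for the example and are not tied to Splunk or to any regulator’s template.

```python
# Hypothetical mapping from a credit model's adverse factors to plain-language
# reasons. Feature names, attribution scores, and wording are invented.
REASON_TEXT = {
    "utilization": "Credit card balances are high relative to credit limits.",
    "recent_delinquency": "One or more recent payments were reported late.",
    "short_history": "The credit history is too short to establish a pattern.",
    "recent_inquiries": "There were several recent applications for new credit.",
}

def explain_denial(adverse_factors, top_n=2):
    """Return the strongest adverse factors as human-readable sentences.

    `adverse_factors` maps a feature name to how much it pushed the score
    toward denial (larger = more adverse), e.g. from an attribution method.
    """
    ranked = sorted(adverse_factors.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_TEXT[name] for name, _ in ranked[:top_n] if name in REASON_TEXT]

print(explain_denial({"utilization": 0.42, "short_history": 0.10,
                      "recent_delinquency": 0.31}))
# ['Credit card balances are high relative to credit limits.',
#  'One or more recent payments were reported late.']
```
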

Three pain points addressed
One area of concern was cross-border fraud, which overwhelms manual detection due to sheer data volume. How do fintechs tackle the problem? “Manual efforts cannot catch it all,” Yang observed. Machine learning enables real-time pattern analysis and remediation.
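A common pattern behind that kind of machine-scale screening is unsupervised anomaly detection over transaction features. The sketch below uses scikit-learn’s IsolationForest on made-up features (amount, hour of day, and a cross-border flag); it is an assumption about how such a pipeline might look, not a description of any vendor’s system.

```python
# Sketch of unsupervised fraud screening: an IsolationForest learns what
# "ordinary" transactions look like, then flags outliers for analyst review.
# The features and thresholds are assumptions made up for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: amount (USD), hour of day, 1 if card country != merchant country.
ordinary = np.column_stack([
    rng.normal(60, 20, 5000).clip(1),   # everyday purchase amounts
    rng.integers(8, 22, 5000),          # daytime activity
    np.zeros(5000),                     # domestic transactions
])
suspicious = np.array([[4800, 3, 1],    # large, nocturnal, cross-border
                       [5200, 4, 1]])

model = IsolationForest(contamination=0.001, random_state=0).fit(ordinary)
print(model.predict(suspicious))  # [-1 -1]: both flagged for review
```
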

Next, security leaders are increasingly facing AI-amplified threats such as deepfake account takeovers. Yang cited a Hong Kong bank incident in which fraudsters convincingly mimicked a Chief Financial Officer on a Zoom call. “Phishing has become way more dangerous with AI, including deepfakes,” he warned. “CISOs must double-check suspicious activity using network signals like source IP.”
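One simple form of that network-signal cross-check is confirming that a sensitive request’s session originated from an expected corporate range before acting on it. The sketch below uses Python’s standard ipaddress module; the ranges and addresses are invented for illustration.

```python
# Illustrative cross-check on a network signal: does the session behind a
# sensitive request come from a trusted corporate range? The ranges and the
# sample addresses are made up (203.0.113.0/24 is a documentation prefix).
import ipaddress

TRUSTED_RANGES = [
    ipaddress.ip_network("10.20.0.0/16"),    # assumed office VPN range
    ipaddress.ip_network("203.0.113.0/24"),  # assumed branch egress range
]

def from_trusted_network(source_ip: str) -> bool:
    """Return True if the source IP falls inside a trusted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_RANGES)

# A "CFO" video call arriving from an unfamiliar network warrants escalation
# before any funds move.
print(from_trusted_network("10.20.4.17"))    # True
print(from_trusted_network("198.51.100.9"))  # False -> verify out of band
```
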

Finally, there was the problem of training gig workers to handle sensitive data, a group of employees in fintech firms who usually lack technical savvy. To this, Yang suggested: “These folks are typically not tech savvy, so you have to make things simple for them, because if you require them to do a lot of steps and processes, these folks may not be able to (cope). You ideally want to make them have basically zero efforts, zero computer and they don’t need to do anything. AI has to play a huge role there making the experience smooth among the gig workers.”

On a concluding note, Yang pointed to the growing number of “Chief AI Officers” in various organizations. As fintech firms expand their digital footprints and confront sophisticated threats, the question, he implied, is no longer whether AI will shape the future of finance, but how responsibly and intelligently it will be governed.
