Dispelling the myths and misunderstandings about AI, its role in financial cybercrime, and how it should be leveraged in digital defense and riskops.
At the recent SuperAI event held in Singapore, Nuno Sebastião, CEO and Co-Founder of Feedzai and one of Europe’s leading voices on AI in financial crime prevention, participated in a panel discussion titled Cybersecurity Redefined: Digital Defense in the Age of AI.
The discussion explored how AI is reshaping the frontlines of digital defense, particularly in sectors like banking, where the stakes are high and the threats increasingly complex. Following up on the subject, we sought further insights from Sebastião in an exclusive Q&A:
How is AI reshaping the frontlines of digital defense today?
Sebastião: Across all industries, and especially in financial services, AI is enabling criminals to launch scams faster and make them far more realistic, using deepfakes, synthetic identities, and hyper-personalized techniques.
In response, banks are fighting back to protect themselves and their consumers by adopting AI-native “riskops” platforms that automate decision-making for new account openings, fraud detection, and anti-money laundering. These platforms help them understand risk so they can process trillions of transactions with minimal friction for consumers.
Yet, not all AI platforms are created equal. Banks need to be able to trust the data their AI platforms are trained on. They need to know their AI systems are ethical and responsible with regard to consumer protection, fair lending, and credit underwriting. Their systems need to be explainable, helping to build trust with both regulators and end users by making every decision transparent and auditable.
And as synthetic content and manipulated communications become more common, AI-native riskops platforms allow banks to spot and neutralize threats before they cause harm.
Is the fear of AI replacing humans in fraud detection a myth? What’s your perspective?
Sebastião: Part myth, part misunderstanding. Our State of AI report revealed that nearly half of fraud professionals worry about being replaced by AI. But here’s the reality: AI isn’t replacing fraud teams — it’s upgrading them.
AI takes the grunt work off their plates — reviewing transactions and triaging alerts — so analysts can focus on what humans do best: complex investigations, nuanced judgment, and strategic decision-making.
Financial institutions, especially, cannot afford to treat AI as a black box. Why is that so, and how is that impacting product design?
Sebastião: In finance, opacity is a liability. Every AI-driven decision, from declining a transaction to flagging a new customer, must be explainable. If a bank can’t justify why its system blocked someone’s account, it isn’t just risking a bad customer experience; it’s courting regulatory trouble.
That’s why explainability is no longer a “nice to have”; it’s a design mandate. The smartest AI platforms today don’t just detect risk; they show their work. Feedzai IQ, for example, gives real-time reasoning behind every decision, making it easy for teams to audit, adjust, and trust the system.
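To make “showing their work” concrete: here is a minimal, purely illustrative sketch (not Feedzai IQ’s implementation; feature names and weights are assumptions) of what per-decision reasoning can look like for a simple linear fraud score, where each feature’s contribution is surfaced alongside the decision:

```python
# Hypothetical illustration only, not Feedzai IQ. For a linear
# (logistic-regression-style) fraud score, per-decision reasoning can be as
# simple as reporting each feature's contribution to the final score.
import numpy as np

FEATURES = ["amount_zscore", "new_device", "foreign_ip", "night_time"]
WEIGHTS = np.array([1.8, 0.9, 1.2, 0.4])   # assumed weights, trained offline
BIAS = -3.0

def explain_decision(x: np.ndarray, threshold: float = 0.5) -> dict:
    """Score one transaction and return the reasoning behind the decision."""
    contributions = WEIGHTS * x                               # per-feature contribution
    score = 1 / (1 + np.exp(-(contributions.sum() + BIAS)))   # sigmoid score
    return {
        "score": round(float(score), 3),
        "decision": "decline" if score >= threshold else "approve",
        "reasons": sorted(
            zip(FEATURES, contributions.round(2)),
            key=lambda kv: -abs(kv[1]),                       # strongest drivers first
        ),
    }

print(explain_decision(np.array([2.5, 1.0, 1.0, 0.0])))
```

Logging that “reasons” list with every decision is what makes the outcome auditable for regulators and contestable for customers.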
We’re also seeing a shift toward privacy-preserving tech like federated learning, which lets banks collaborate on AI training without exposing sensitive data.
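For readers unfamiliar with the idea, here is a minimal, hypothetical sketch of federated averaging, the technique underlying most federated learning setups: each bank trains on its own data and only shares model weights, never the raw transactions. All names and numbers below are illustrative.

```python
# Minimal sketch of federated averaging (FedAvg): each bank computes a model
# update on its own data; only the weights, never the raw transactions,
# leave the premises.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One bank's local training: a few gradient steps on private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))        # logistic regression
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w: np.ndarray, banks: list) -> np.ndarray:
    """Coordinator averages the banks' updated weights (FedAvg)."""
    updates = [local_update(global_w, X, y) for X, y in banks]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
banks = [(rng.normal(size=(200, 4)), rng.integers(0, 2, 200)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):                              # ten federated rounds
    w = federated_round(w, banks)
print("global fraud-model weights:", w.round(3))
```

In real deployments the averaging is handled by a neutral coordinator and typically combined with secure aggregation or differential privacy, so no single participant’s update can be reverse-engineered.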
The key takeaway is, powerful AI alone isn’t enough. In finance, AI must also be transparent, fair, and accountable — because trust is a product requirement, not a bonus.
Responsible and ethical use of AI is a top-of-mind concern for highly regulated industries and organizations handling sensitive data. What is a practical approach to building trustworthy AI from the ground up — ensuring systems enhance security, fairness, and performance as they become central to decision-making under increasing public and regulatory scrutiny?
Sebastião: Don’t build AI like it’s a shiny object. Build it like it’s going to be subpoenaed.
In highly regulated industries, trust isn’t a buzzword — it’s a survival strategy. You’re not just building models; you’re building systems that will be questioned by regulators, tested by bad actors, and scrutinized by customers who don’t care how smart your tech is if it locks them out of their own money.
So, where do you start? With architecture, not aspiration:
- Design for explainability from day one. If a system can’t explain why it made a decision, it’s not a system you should trust or deploy.
- Make fairness and bias mitigation ongoing processes, not one-time audits. Bias creeps in silently. You need tools that detect it before regulators do (see the sketch after this list).
- Build privacy into the DNA. Techniques like federated learning and differential privacy shouldn’t be “add-ons.” They’re the baseline in sectors handling sensitive data.
- Test for failure, not just success. How does your AI behave under stress, in edge cases, when the data gets weird? That’s where real trust is earned.
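As an illustration of what an ongoing bias check can look like in practice (group labels and the 80% threshold below are assumptions, not a Feedzai specification), here is a minimal sketch that compares approval rates across customer segments and flags any group falling below the commonly cited four-fifths ratio:

```python
# Minimal sketch of an ongoing bias check with assumed names and thresholds:
# compare approval rates across groups and flag any group whose rate falls
# below the commonly cited four-fifths (80%) heuristic.
from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]], threshold: float = 0.8):
    """decisions: (group, approved) pairs from a recent batch of AI decisions."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    baseline = max(rates.values())
    flags = {g: r / baseline for g, r in rates.items() if r / baseline < threshold}
    return rates, flags                  # flags = groups below the 80% ratio

batch = [("A", True)] * 90 + [("A", False)] * 10 + \
        [("B", True)] * 60 + [("B", False)] * 40
rates, flags = disparate_impact(batch)
print(rates)   # {'A': 0.9, 'B': 0.6}
print(flags)   # {'B': 0.666...} -> review this segment before a regulator does
```

Run continuously over production decisions rather than once a year, a check like this turns bias detection into monitoring instead of a post-mortem.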
At Feedzai, we call this the TRUST framework — Transparent, Robust, Unbiased, Secure, and Tested. But the big idea is simpler: Trustworthy AI doesn’t happen by accident. It’s engineered. Because at the end of the day, your AI doesn’t just need to be powerful. It needs to be defensible.