Tuesday, December 16, 2025
LOGIN/REGISTER
HomeDigital PaymentsNo-code agentic AI can be used for financial fraud...

No-code agentic AI can be used for financial fraud and workflow hijacking

New findings from Tenable Research demonstrate how democratized AI tools like Microsoft Copilot Studio can inadvertently leak sensitive data and execute unauthorized financial actions. 

Organizations are rapidly adopting “no-code” platforms to enable employees to build their own AI agents. The premise is harmless, efficiency without needing developers. While well-intentioned, automation without strict governance opens the door to catastrophic failure.

To demonstrate how easily AI agents can be manipulated, Tenable Research created an AI travel agent in Microsoft Copilot Studio to manage customer travel reservations, including creating new reservations and modifying existing ones, all without human intervention. The AI travel agent was provided with demo data that included the names, contact information, and credit card details of demo customers and was given strict instructions to verify the customer’s identity before sharing information or modifying bookings.

Using a technique called prompt injection, Tenable Research successfully hijacked the AI agent’s workflow to book a free vacation and extracted sensitive credit card information.

The findings of this research could have significant business implications, including:

  • Data breaches and regulatory exposure: Tenable Research coerced the agent into bypassing identity verification and leaking payment card information (PCI) of other customers. The agent, designed to handle sensitive data, was easily manipulated into exposing full customer records.
  • Revenue loss and fraud: Because the agent had broad “edit” permissions intended for updating travel dates, it could also be manipulated into changing critical financial fields. Tenable Research successfully instructed the agent to change a trip’s price to $0, effectively granting free services without authorization.

“AI agent builders, like Copilot Studio, democratize the ability to build powerful tools, but they also democratize the ability to execute financial fraud, thereby creating significant security risks without even knowing it,” said Keren Katz, Senior Group Manager, AI Security Product and Research, Tenable. “That power can easily turn into a real, tangible security risk.”

AI governance and enforcement are mission-critical for safe and secure AI usage

A key takeaway is that AI agents often possess excessive permissions that are not immediately visible to the non-developers building them. To mitigate this, business leaders must implement robust governance and enforce strict security protocols before deploying these tools.

To avoid data leakage, Tenable recommends:

  • Preemptive visibility: Map exactly which systems and data stores an agent can interact with before deployment.
  • Least privilege access: Minimize write and update capabilities to only what is absolutely necessary for the agent’s core use case.
  • Active monitoring: Track agent actions for signs of data leakage or deviations from intended business logic.

- Advertisement -

SPONSORED

- Advertisement -