Artificial intelligence is now woven into the marketing of almost every regtech product. Agentic AI, AI workflows and AI‑driven client lifecycle management are offered as frictionless routes to regulatory compliance. Adoption is accelerating: a 2025 survey of 90 risk and compliance professionals found that 93 percent of financial institutions planned to implement agentic AI within two years and six percent were already using it. The same study reported that fraud detection, KYC maintenance and transaction monitoring were the top use cases, and over a quarter of respondents anticipated annual savings of more than US$4 million. AI clearly has a place in compliance.

Yet enthusiasm can mask structural risks. This article examines the promise and pitfalls of agentic AI for FSMA‑regulated brokers, currency exchanges and payment services. It also reviews recent UK regulations that tighten operational resilience, data protection and third‑party oversight, and argues that owning your compliance software under a perpetual licence offers a resilient alternative to subscription‑based SaaS. Throughout, we reflect on our experience at Thames Systems—without marketing our products—to illustrate how measured innovation and software ownership can better align with regulatory expectations.

What AI can and cannot do

Agentic AI undeniably enhances efficiency. Fenergo’s research shows firms are adopting AI to speed up due diligence and reduce manual work. Done properly, these systems incorporate built‑in governance and control frameworks, mitigating some of the risks around data privacy and regulation. For example, an AI‑driven risk engine can automatically flag suspicious transactions for human review, and natural‑language processing can help extract relevant information from documents.
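To make that human-in-the-loop point concrete, here is a minimal sketch in Python of a rule-based screening step that queues transactions for analyst review rather than blocking them automatically. The thresholds, country codes and risk scores are illustrative assumptions, not recommendations or any vendor's actual rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical thresholds for illustration only; real rules would come from the
# firm's documented risk appetite and be version-controlled for audit purposes.
HIGH_VALUE_THRESHOLD = 10_000          # transaction amount in GBP
HIGH_RISK_COUNTRIES = {"XX", "YY"}     # placeholder country codes

@dataclass
class Transaction:
    tx_id: str
    amount_gbp: float
    counterparty_country: str
    customer_risk_score: float         # 0.0 (low) to 1.0 (high), e.g. from a model

@dataclass
class ReviewItem:
    tx_id: str
    reasons: list[str]
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def screen_transaction(tx: Transaction) -> ReviewItem | None:
    """Flag a transaction for *human* review; never auto-block or auto-clear."""
    reasons = []
    if tx.amount_gbp >= HIGH_VALUE_THRESHOLD:
        reasons.append("amount above high-value threshold")
    if tx.counterparty_country in HIGH_RISK_COUNTRIES:
        reasons.append("counterparty in high-risk jurisdiction")
    if tx.customer_risk_score >= 0.8:
        reasons.append("elevated customer risk score")
    return ReviewItem(tx.tx_id, reasons) if reasons else None

# Example: a flagged transaction goes onto an analyst queue, not straight to rejection.
item = screen_transaction(Transaction("TX-1", 25_000, "XX", 0.4))
if item:
    print(f"{item.tx_id} queued for review: {', '.join(item.reasons)}")
```

The point of the sketch is the routing, not the rules: whatever model or engine produces the flag, the outcome lands with a human who can be held accountable for the final decision.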

However, over‑reliance on AI introduces three recurring problems:

  1. Technical debt and code quality. Studies of AI‑generated code have found it to be structurally repetitive and prone to security vulnerabilities. Researchers estimate that clearing today’s global software technical debt would take 61 billion workdays. Developers routinely spend more than 11 hours per week correcting hallucinations and weaknesses in AI‑generated code. This “slop layer” becomes a long‑term liability, especially when junior engineers are encouraged to rely on AI rather than learning foundational skills.
  2. Lack of accountability. AI tools make decisions without understanding consequences. In late 2025 a widely publicised “anti‑gravity incident” occurred when a large technology firm’s AI assistant was asked to clear a project cache. Misinterpreting a flag, the system recursively deleted the root directory, erasing 2 TB of production data and forcing months of work to be re‑created. The AI apologised, but apologies do not restore lost data or relieve regulatory obligations. This event underscores that humans must remain accountable for AI‑initiated actions; a minimal human‑approval gate is sketched after this list.
  3. Opaque risk modelling. AI models excel at pattern recognition but do not understand causation. If a compliance decision is based on opaque AI logic and later proves flawed, a regulator may challenge both the decision and the model’s governance. Fenergo’s own research notes that data privacy and regulatory compliance are the top concerns limiting AI adoption. Regulators now expect firms to maintain transparency, audit trails and human oversight for automated decisions.
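The accountability problem in particular lends itself to a simple control: require a named human to approve any destructive action an agent proposes, and record the decision either way so there is an audit trail. The Python sketch below is illustrative, with assumed paths and policies; it does not describe any particular vendor's tooling.

```python
import getpass
import json
import logging
from datetime import datetime, timezone
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-action-gate")

# Paths that an automated agent must never touch without explicit sign-off.
# Illustrative only; a real deployment would load these from policy configuration.
PROTECTED_PATHS = {Path("/"), Path("/etc"), Path("/var/lib/production")}

def confirm_destructive_action(action: str, target: Path, proposed_by: str) -> bool:
    """Require a named human to approve a destructive action an agent proposes,
    and log the decision either way so there is an audit trail."""
    needs_signoff = any(target == p or p in target.parents for p in PROTECTED_PATHS)
    if needs_signoff:
        answer = input(f"Agent '{proposed_by}' wants to {action} {target}. Approve? [y/N] ")
        approved = answer.strip().lower() == "y"
    else:
        approved = True  # low-risk path; still logged below

    # Append-only audit record: who proposed the action, who approved it, and when.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": str(target),
        "proposed_by": proposed_by,
        "approved_by": getpass.getuser() if needs_signoff else "auto-policy",
        "approved": approved,
    }
    log.info(json.dumps(record))
    return approved

# Example: the deletion only proceeds if a human explicitly says yes.
if confirm_destructive_action("recursively delete", Path("/var/lib/production/cache"), "build-agent"):
    pass  # perform the deletion here
```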

AI is not monolithic; many providers have robust engineering practices and governance frameworks. Tools like Fenergo’s agentic AI are built on mature architectures with clear audit trails and regulatory alignment, so they deserve consideration. But the industry also contains “vibe‑coded” projects—quickly assembled products designed to ride the hype wave. Buyers should evaluate AI maturity, code quality, and security practices rather than assuming that any AI‑branded product will be fit for purpose.

The regulatory landscape for 2025–2026

The UK’s regulatory environment is shifting rapidly. Two statutes in particular—the Financial Services and Markets Act 2023 (FSMA 2023) and the Data (Use and Access) Act 2025 (DUAA)—significantly strengthen expectations around software, data hosting and third‑party relationships.

Operational resilience and critical third parties

Under FSMA 2023, HM Treasury can designate critical third parties (CTPs), and the Financial Conduct Authority (FCA), Prudential Regulation Authority (PRA) and Bank of England now have powers to supervise them directly. The final rules, effective 1 January 2025, let regulators intervene where a third party’s failure could threaten financial stability. Importantly, these rules do not absolve regulated firms of responsibility: brokers, exchanges and payment providers remain accountable for operational resilience and must manage their third‑party suppliers in line with existing outsourcing rules. In practice this means firms must perform rigorous vendor risk assessments, map dependencies and plan exit strategies, as recommended by operational resilience guidance.

The FCA’s transition period for operational resilience ended on 31 March 2025. Firms are now expected to maintain important business services within defined impact tolerances even under “severe but plausible” scenarios. The regulator stresses that resilience must account for cyber threats and AI‑driven attacks and should be embedded into culture and design. This includes scenario testing, clear communication plans and board‑level oversight.
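In practice, scenario testing against impact tolerances can start very simply. The Python sketch below uses hypothetical services, tolerances and outage estimates purely for illustration; real mappings would come from the firm's own self-assessment and testing programme.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str                      # e.g. "primary data centre loss", "ransomware at key vendor"
    simulated_outage_hours: float  # estimated time to restore the service under this scenario

# Illustrative impact tolerances per important business service: the hours of
# disruption the firm judges it can absorb before causing intolerable harm.
IMPACT_TOLERANCES = {
    "client payments": 4.0,
    "fx settlement": 8.0,
}

def test_scenarios(service: str, scenarios: list[Scenario]) -> list[str]:
    """Return the severe-but-plausible scenarios under which the service breaches its tolerance."""
    tolerance = IMPACT_TOLERANCES[service]
    return [s.name for s in scenarios if s.simulated_outage_hours > tolerance]

breaches = test_scenarios("client payments", [
    Scenario("primary data centre loss", 3.0),
    Scenario("critical SaaS vendor outage", 36.0),   # echoes the ION-style dependency risk
])
print(breaches)   # ['critical SaaS vendor outage'] -> remediation and board reporting needed
```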

Zero trust security and data hosting

Traditional perimeter‑based security models are no longer adequate. Grant Thornton notes that zero trust security, built on rigorous identity verification, least‑privilege access, network micro‑segmentation and continuous monitoring, is becoming the baseline architecture. Cloud adoption and the rise of AI‑driven ransomware have accelerated this shift. Regulators implicitly require zero trust elements: risk functions should evaluate identity and access controls, network segmentation and third‑party alignment with zero trust principles. Financial firms must also ensure that data hosted in the cloud meets confidentiality and availability requirements and that they retain oversight of third‑party providers.
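As a rough illustration of how identity checks, least‑privilege policy, segmentation and continuous monitoring combine, the Python sketch below evaluates an access request against an assumed role‑permission policy and logs every outcome. The roles, resources and network segments are hypothetical.

```python
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("access-audit")

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    source_segment: str   # network segment the request originates from
    resource: str
    action: str           # "read" or "write"

# Illustrative least-privilege policy: each role gets only the permissions it needs,
# and sensitive resources are reachable only from approved network segments.
ROLE_PERMISSIONS = {"analyst": {("client-ledger", "read")},
                    "ops": {("client-ledger", "read"), ("client-ledger", "write")}}
ALLOWED_SEGMENTS = {"client-ledger": {"compliance-vlan"}}

def authorise(req: AccessRequest, role: str) -> bool:
    """Never trust by default: verify identity, apply least-privilege policy,
    check segmentation, and log the outcome for continuous monitoring."""
    allowed = (
        req.mfa_verified
        and (req.resource, req.action) in ROLE_PERMISSIONS.get(role, set())
        and req.source_segment in ALLOWED_SEGMENTS.get(req.resource, set())
    )
    audit.info("user=%s role=%s resource=%s action=%s allowed=%s",
               req.user, role, req.resource, req.action, allowed)
    return allowed

# An analyst attempting a write is denied: the role has read-only access.
print(authorise(AccessRequest("asmith", True, "compliance-vlan", "client-ledger", "write"), "analyst"))
```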

Data processing and digital identity under the DUAA

The Data (Use and Access) Act 2025 amends UK data protection law in several key ways:

  • Recognised Legitimate Interests (RLIs). DUAA introduces pre‑approved legitimate interests for processing personal data—including detecting, investigating or preventing crime and safeguarding. This provides a clearer legal basis for anti‑fraud and anti‑money‑laundering checks, allowing firms to use databases and AI without conducting a full balancing test.
  • Automated decision‑making safeguards. DUAA relaxes restrictions on automated decision‑making but requires firms to include human oversight and allow individuals to challenge decisions. AI‑driven onboarding or risk scoring therefore needs to be transparent and contestable; a minimal decision‑record sketch follows this list.
  • Digital Verification Services (DVS). To support a national digital identity ecosystem, DUAA establishes a regime for digital verification services. It includes a trust framework of rules and standards, supplementary codes, a public register of DVS providers, an information gateway that allows public authorities to share data with registered providers, and an official trust mark. The Office for Digital Identities and Attributes (OfDIA) will oversee this regime. The aim is to let regulated firms rely on certified digital identity providers rather than building their own KYC infrastructure.
  • Data sanitisation. Emerging standards for secure data destruction, such as NIST SP 800‑88 and IEEE 2883, provide tested methods to ensure that data cannot be recovered once removed. Yet adoption in financial services remains low: only 21 percent of firms report following NIST SP 800‑88 and 19 percent use IEEE 2883. Regulators increasingly expect firms to adopt such standards to support audit readiness, align with frameworks like ISO 27001 and safeguard customer trust.
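To illustrate what a contestable automated decision might look like in data terms, the Python sketch below records the outcome, the reasons, the model version and any subsequent human review. The field names and example values are hypothetical, but this is the kind of trail that DUAA's safeguards and regulators' transparency expectations point towards.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecision:
    """A record that makes an automated onboarding or risk decision explainable and contestable."""
    subject_id: str
    outcome: str                      # e.g. "enhanced due diligence required"
    reasons: list[str]                # human-readable grounds for the outcome
    model_version: str                # which model or ruleset produced it
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_reviewer: Optional[str] = None
    challenge: Optional[str] = None   # the individual's objection, if raised
    final_outcome: Optional[str] = None

    def contest(self, objection: str, reviewer: str, revised_outcome: str) -> None:
        """Record a challenge and the human reviewer's final determination."""
        self.challenge = objection
        self.human_reviewer = reviewer
        self.final_outcome = revised_outcome

# Example: an automated decline is challenged and overturned after human review.
decision = AutomatedDecision("CUST-042", "onboarding declined",
                             ["address mismatch", "sanctions near-match"], "risk-rules-v3.2")
decision.contest("address recently changed; documents attached", "j.doe", "onboarding approved")
```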

Payment services and currency exchange

Payment service providers (PSPs) face new rules designed to combat fraud and strengthen safeguarding. A near‑final statutory instrument published in March 2024 allows PSPs to delay an outbound payment by up to four business days when there are reasonable grounds to suspect authorised push payment (APP) fraud, giving them time to investigate with customers or law enforcement. This risk‑based approach stems from the UK government’s 2023 Fraud Strategy and requires firms to review their transaction monitoring capabilities.
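The four‑business‑day window is straightforward to operationalise. The Python sketch below computes the latest permissible hold date, skipping weekends; UK bank holidays are deliberately omitted to keep the illustration short, and the exact counting rules should of course be taken from the statutory instrument itself.

```python
from datetime import date, timedelta

def hold_deadline(payment_date: date, business_days: int = 4) -> date:
    """Latest date a suspicious outbound payment may be held under a
    four-business-day window (weekends skipped; bank holidays omitted for brevity)."""
    d, remaining = payment_date, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:          # Monday=0 ... Friday=4
            remaining -= 1
    return d

# A payment flagged on Thursday 5 June 2025 could be held until Wednesday 11 June 2025.
print(hold_deadline(date(2025, 6, 5)))   # 2025-06-11
```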

Separately, the FCA has introduced CASS 15, a regime that brings electronic money institutions and payment service providers under safeguarding rules comparable to those applying to investment businesses. CASS 15 requires daily reconciliations of client funds, annual independent audits, monthly reporting to the FCA and detailed resolution packs showing where funds are held and how they would be protected if the firm fails. The framework also mandates enhanced due diligence when appointing third parties to manage or hold client funds, with full implementation required by May 2026.
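A daily safeguarding reconciliation boils down to comparing what the firm owes clients with what is actually segregated. The Python sketch below uses assumed figures and a simplified single‑account model purely for illustration; real reconciliations span multiple safeguarding accounts and record the evidence behind each figure.

```python
from decimal import Decimal

def daily_safeguarding_reconciliation(internal_ledger_total: Decimal,
                                      safeguarding_account_balance: Decimal,
                                      tolerance: Decimal = Decimal("0.00")) -> dict:
    """Compare what the firm owes clients (internal ledger) with what is actually held
    in the safeguarding account, and report any shortfall or excess for same-day action."""
    difference = safeguarding_account_balance - internal_ledger_total
    reconciled = abs(difference) <= tolerance
    return {
        "internal_ledger_total": internal_ledger_total,
        "safeguarding_account_balance": safeguarding_account_balance,
        "difference": difference,
        "reconciled": reconciled,
        "action": "none" if reconciled
                  else ("top up safeguarding account" if difference < 0 else "withdraw excess"),
    }

# Example with assumed balances: a shortfall is detected and must be topped up same day.
result = daily_safeguarding_reconciliation(Decimal("1050000.00"), Decimal("1049500.00"))
print(result["reconciled"], result["action"])   # False top up safeguarding account
```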

These developments, combined with FSMA 2023’s amendments to payment services regulations and MiFIR, underscore a common theme: regulators are demanding stronger governance over how firms handle data, manage vendors and design software. AI may help meet those requirements, but it does not absolve firms of their responsibilities.

Third‑party risk and the lesson of ION

Third‑party risk is not theoretical. In February 2023 the derivatives industry was rattled by a ransomware attack on ION Group, a software provider used by many brokers. The attack disabled ION’s cleared‑derivatives platform and forced more than forty firms to revert to manual processes such as handwritten trade tickets and phone calls. Regulators downplayed systemic risk but acknowledged that the incident highlighted weak supply‑chain security. Importantly, this incident had nothing to do with AI; it was a conventional cyber‑attack exploiting vulnerabilities in a third‑party platform. The takeaway is that dependence on a single vendor can become a single point of failure, whether the platform is AI‑driven or not.

The new FSMA 2023 powers over critical third parties were designed with such events in mind. They remind us that firms cannot outsource accountability: operational resilience requires oversight of suppliers, clear exit strategies and contingency plans.

Why ownership and perpetual licences matter

Most compliance software today is delivered as cloud‑based subscriptions. While convenient, this model ties customers to recurring fees and external dependencies. When valuations and demand for AI models soar, subscription costs can increase dramatically. Moreover, if a vendor experiences an outage or ceases trading, clients may be left scrambling. In the context of operational resilience and DUAA, owning your software outright offers several advantages:

  • Control over hosting and data residency. A perpetual licence allows firms to deploy software on their own infrastructure or with an ISO 27001–certified hosting provider of their choosing. At Thames Systems we help clients set up our platform on certified servers in jurisdictions such as the UK, US or Asia, complying with data‑sovereignty requirements. Clients can restrict access to approved IP ranges and use multi‑factor authentication. By controlling the hosting environment, firms can align with zero trust principles and satisfy data‑localisation obligations.
  • Resilience against vendor disruption. Owning the code means you are not dependent on a SaaS provider’s business model. During the ION incident, firms with perpetual licences could continue operations on their own servers. FSMA’s new CTP rules reinforce the importance of being able to operate independently of critical suppliers.
  • Budget predictability. Perpetual licences convert ongoing subscription fees into a one‑off capital expenditure. Maintenance and support can be purchased separately on predictable terms, reducing exposure to price shocks when demand for AI infrastructure increases.
  • Ability to modify and audit. A perpetual licence gives in‑house developers access to the source code. Thames Systems’ practice is to provide training and knowledge transfer so that clients understand the architecture and can extend or audit the software. This is essential for meeting DUAA’s transparency and contestability requirements and for implementing bespoke controls such as daily reconciliations under CASS 15.
  • Long‑term compliance alignment. Owning the software allows firms to integrate new regulatory modules, such as DVS interfaces or data sanitisation workflows, without waiting for a vendor roadmap. It also facilitates the adoption of standards like NIST SP 800‑88 and IEEE 2883, which require deep integration with IT asset management.

A balanced path forward

Artificial intelligence will remain a powerful tool in financial compliance. When built and governed correctly, AI can enhance risk management, reduce manual effort and improve customer experiences. However, the rush to “AI‑wash” every product has created technical debt, accountability gaps and opaque decision‑making. Regulators are raising the bar on operational resilience, third‑party oversight and data governance. They expect human oversight, clear audit trails and the ability to continue operating if a supplier fails.

For FSMA‑regulated brokers, FX firms and payment services, the safest course is measured innovation: adopt AI where it demonstrably adds value, insist on mature development practices and governance, and retain human control.

Equally important is the question of ownership. A perpetual licence that lets you host software on secure, certified servers, modify it to meet regulatory requirements and operate independently of vendor outages can provide a bedrock of resilience. As the AI bubble inflates and new regulations take effect, combining human expertise with controlled AI and a robust ownership model may prove more sustainable than chasing the latest subscription‑based buzzword.