Zango Strengthens Advisory Board With Senior Santander Exec, Launches Cross-Industry AI Governance Research Initiative

Zango has appointed Dean Nash, Global COO (Legal) at Santander, to its advisory board and launched a cross-industry research project on AI governance in financial services, aiming to develop practical oversight frameworks as AI scales in compliance operations.

In a major strategic move at the intersection of fintech innovation and governance, Zango — a fast-growing UK startup developing artificial intelligence agents for financial services compliance — has appointed Dean Nash, the Global Chief Operating Officer (Legal) at Banco Santander, to its advisory board. The appointment coincides with the launch of Zango’s ‘Future of AI Governance and Compliance in Financial Services’ research project, which brings together industry leaders, academic partners, and compliance experts to tackle one of the most pressing questions facing the financial sector today: how to govern AI as it scales from pilot projects into core operational systems.

The project is set to produce groundbreaking research on AI governance models, accountability frameworks, and oversight mechanisms tailored to the unique demands of highly regulated financial institutions — a topic that has captured global attention as banks, regulators, and technology vendors navigate rapid AI adoption.

Why This Appointment Matters in Financial AI Governance

As financial institutions expand their use of generative and agentic AI across risk management, compliance, customer service and credit decisioning, questions around accountability, explainability and responsible oversight have become urgent. Traditional governance frameworks — largely designed for legacy IT systems and manual controls — are struggling to keep up with the unique risks AI systems introduce, such as automated decision bias, complexity in model behaviour, and opacity in large language models (LLMs).

Zango’s initiative aims to address these gaps by building practical governance frameworks supported by evidence, expert input, and multi-institutional perspectives — a first-of-its-kind effort in the financial services compliance space. The research will be developed in collaboration with academic partners including the University of Oxford and the University of Glasgow, creating a bridge between theoretical research and real-world applications in compliance and enterprise AI governance.

Dean Nash: A Strategic Voice on AI, Legal and Compliance

Dean Nash, a seasoned executive overseeing legal operations for one of the largest banking groups in the world, brings deep experience in regulatory risk, corporate governance, and legal strategy to Zango’s advisory board. His participation signals a broader industry recognition of the importance of governance as AI transitions from experimentation to deeply embedded enterprise workflows.

In his advisory role, Nash will help shape the methodology and direction of the research project, providing practical insight into how complex institutions can manage AI responsibly without stifling innovation. He said the initiative presents “an opportunity to examine how existing governance models hold up in practice, and where they need to evolve,” especially as AI is embedded into critical systems. The appointment underscores the need for industry-level collaboration on AI governance across legal, compliance, risk and technology domains.

Overview: Zango’s AI Governance Research Project

The research initiative — titled ‘Future of AI Governance and Compliance in Financial Services’ — is positioned as a year-long, cross-industry exploration into practical governance structures that financial firms will need as they scale AI adoption. It is backed by a consortium of academic researchers, compliance leaders, and industry experts who will conduct interviews, analyse emerging best practices and document gaps in current governance frameworks.

Working with independent researchers from prestigious institutions, the project will look at:

  • Where existing governance frameworks succeed and fail when applied to adaptive and agentic AI systems;
  • How accountability and oversight structures should be configured for AI involving significant decision-making;
  • Human-in-the-loop expectations and how compliance teams should integrate AI into established controls;
  • Policy recommendations for regulators and boards on risk management, audit trails, and evidence-ready documentation for oversight bodies.

This kind of practical, academically anchored analysis is critical because many financial organizations are still evaluating how to evolve governance to manage risks such as algorithmic bias, regulatory auditability, model explainability and the potential for unintended outcomes in live environments.

What Zango Does: AI for Compliance at Scale

Founded in 2024, Zango specialises in building AI-powered agents that automate manual regulatory compliance workflows for financial institutions — a segment otherwise dominated by expensive, consultant-driven processes. Its AI technology uses regulation-aware large language models (LLMs) to interpret complex regulatory texts, perform horizon scanning, conduct gap analysis, and produce audit-ready compliance documentation.

Zango’s platform has been adopted by banks and fintechs across Europe, helping organisations reduce manual effort, improve accuracy, and scale compliance operations at a fraction of the time and cost of traditional approaches. Its approach blends advanced AI with human subject-matter expertise to ensure reliability, transparency and human-in-the-loop checks, which are crucial in regulated environments.
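Zango has not published the technical details of its pipeline, so the Python sketch below is purely illustrative rather than a description of the product. It assumes hypothetical names (RegulatoryClause, GapFinding, classify_with_llm), stubs out the LLM call with a naive keyword check so the example runs end to end, and shows how a regulation-to-policy gap-analysis step might route every finding to a human reviewer.

```python
# Illustrative only: a hypothetical gap-analysis step of the kind described above.
# This is NOT Zango's implementation; the LLM call is stubbed out, and all names
# (RegulatoryClause, GapFinding, classify_with_llm) are assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class RegulatoryClause:
    reference: str          # e.g. "PSD2 Art. 97(1)" (example reference, not verified)
    requirement: str        # plain-language obligation extracted from the text


@dataclass
class GapFinding:
    clause: RegulatoryClause
    covered: bool
    rationale: str
    needs_human_review: bool = True   # every finding is routed to a compliance analyst


def classify_with_llm(requirement: str, policy_text: str) -> tuple[bool, str]:
    """Hypothetical stand-in for a regulation-aware LLM call.

    A real system would prompt a model to judge whether the internal policy
    satisfies the requirement and return an evidence-backed rationale.
    Here we only do a naive keyword check so the sketch runs end to end.
    """
    covered = all(word.lower() in policy_text.lower()
                  for word in requirement.split()[:3])
    rationale = "Keyword overlap found." if covered else "No clear coverage located."
    return covered, rationale


def gap_analysis(clauses: list[RegulatoryClause], policy_text: str) -> list[GapFinding]:
    """Compare each regulatory clause against an internal policy and record findings."""
    findings = []
    for clause in clauses:
        covered, rationale = classify_with_llm(clause.requirement, policy_text)
        findings.append(GapFinding(clause=clause, covered=covered, rationale=rationale))
    return findings


if __name__ == "__main__":
    clauses = [RegulatoryClause("PSD2 Art. 97(1)",
                                "strong customer authentication for remote payments")]
    policy = "All remote payments require strong customer authentication via two factors."
    for finding in gap_analysis(clauses, policy):
        status = "covered" if finding.covered else "GAP"
        print(f"{finding.clause.reference}: {status} - {finding.rationale} "
              f"(human review: {finding.needs_human_review})")
```

The point of the sketch is the shape of the workflow, not the classifier: each regulatory clause produces an explicit finding with a rationale, and nothing is closed out without a human check, mirroring the human-in-the-loop controls described above.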

The appointment of Nash and the launch of this project further elevate Zango’s position in the regtech and AI governance landscape, expanding its role from product provider to thought leader helping define how financial services firms shape governance strategy for AI systems.

Industry Momentum Behind AI Governance

Zango’s initiative aligns with a broader trend in financial services: organisations are grappling with how to govern increasingly complex AI agents and generative systems. Regulators and industry groups are asking tough questions about trustworthy AI, ethics, explainability, and control frameworks — all while banks and insurers accelerate AI deployment to gain competitive edge in areas like risk analytics and operational efficiency.

For example, efforts by central banks and regulators around the world — including the Reserve Bank of India’s recent move to form ethical AI panels — reflect this global focus on responsible AI governance in financial markets.

Yet many organisations still lack the governance models needed to marry enterprise risk management with AI-specific concerns. Zango’s research project, supported by senior industry voices and academic partners, could help provide blueprints for governance frameworks that are practical, measurable and regulation-ready.

What This Means for Financial Institutions

Financial firms weighing AI adoption must consider a host of governance issues, including:

1. Accountability and Oversight Structures

Who is responsible for AI decision outcomes? What approval frameworks are needed to ensure human oversight without slowing innovation?

2. Explainability and Audit Readiness

How can institutions ensure models are explainable and demonstrably compliant when audited by internal or external regulators?

3. Continuous Monitoring and Risk Controls

What strategies balance automated AI operations with continuous compliance, risk scanning and regulatory reporting?

Zango’s research aims to provide direct insights into these questions, offering comparison points across institutions and shaping potential best practices for boards, risk committees and executive leadership.
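To make questions 2 and 3 above concrete, the sketch below illustrates one way a firm might record an AI-assisted compliance decision in an append-only, hash-chained log. It is a hypothetical example, not Zango’s product or any regulator’s prescribed format; the field names and the record_decision helper are assumptions made for illustration.

```python
# Illustrative only: a minimal audit-trail entry for an AI-assisted decision.
# Field names and structure are assumptions for this sketch, not a regulatory standard.
import hashlib
import json
from datetime import datetime, timezone


def record_decision(log_path: str,
                    model_version: str,
                    input_summary: str,
                    model_output: str,
                    reviewer: str,
                    approved: bool) -> dict:
    """Append one audit entry and chain it to the previous entry's hash."""
    try:
        with open(log_path, "r", encoding="utf-8") as f:
            last_hash = json.loads(f.readlines()[-1])["entry_hash"]
    except (FileNotFoundError, IndexError):
        last_hash = "GENESIS"   # first entry in a new log

    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,
        "model_output": model_output,
        "human_reviewer": reviewer,
        "approved": approved,
        "previous_hash": last_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()

    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    record_decision("audit_log.jsonl",
                    model_version="compliance-llm-2025-10",      # hypothetical identifier
                    input_summary="Horizon scan: new outsourcing guidance",
                    model_output="Flagged 3 policy sections as potentially affected",
                    reviewer="j.smith",
                    approved=True)
```

Chaining each entry to the previous entry’s hash makes retrospective edits detectable, which is one simple way to support the kind of evidence-ready documentation the research agenda calls for.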

Future Outlook: Shaping AI Governance Practice

The findings of Zango’s research project — expected to be released publicly in spring 2026 — may serve as a benchmark for regulatory compliance and governance teams across the financial industry. By bringing together leaders from banks, regulators, academia and research institutions, Zango is positioning itself at the forefront of a movement to define governance by design — embedding oversight mechanisms into AI workflows rather than retrofitting them after adoption.

As AI becomes increasingly integrated into mission-critical compliance workflows, the need for formalized governance, clear accountability boundaries and robust oversight mechanisms will only grow more urgent — and projects like this could be pivotal in helping institutions navigate the complex regulatory and ethical landscape ahead.