UAE Central Bank issues new guidance on the responsible use of artificial intelligence in financial services, outlining governance, risk management and compliance standards for regulated entities.
The Central Bank of the United Arab Emirates has issued new guidance on the responsible use of artificial intelligence (AI) in financial services, setting out governance, risk management, and compliance expectations for banks, insurers, and other regulated institutions operating in the country. The move comes as financial institutions increasingly integrate AI-driven tools across credit underwriting, fraud detection, customer service automation, and risk analytics.
The guidance establishes supervisory principles aimed at ensuring that AI systems deployed within regulated entities are transparent, explainable, secure, and aligned with existing regulatory frameworks. Institutions are expected to implement robust internal governance structures, maintain human oversight of critical decision-making processes, and adopt controls to mitigate risks related to bias, data privacy, cybersecurity, and model drift. The central bank emphasized that accountability for AI-enabled decisions remains with the regulated entity, regardless of third-party vendor involvement.
The UAE has positioned itself as a regional hub for AI development and deployment, previously launching a national AI strategy and establishing oversight frameworks through various government bodies. The central bank’s latest directive aligns financial sector oversight with the country’s broader digital transformation agenda, ensuring that technological innovation proceeds within prudential safeguards.
Globally, regulators have been accelerating efforts to define supervisory standards for AI in financial services. The Bank for International Settlements has published guidance highlighting the need for governance, auditability, and risk management in AI-driven financial applications, while the European Central Bank and the Federal Reserve System have also issued supervisory statements addressing model risk management and algorithmic accountability. These efforts reflect concerns that opaque or poorly governed AI systems could amplify operational, compliance, and systemic risks.
Within the Gulf region, financial institutions have been rapidly adopting AI-powered solutions to improve efficiency and customer experience. Banks are deploying machine learning models to enhance anti-money laundering (AML) monitoring, automate loan approvals, and personalize digital banking interfaces. Fintech platforms are embedding AI into payments infrastructure and credit scoring mechanisms, often leveraging large-scale data analytics capabilities.
The central bank’s guidance is expected to require regulated entities to conduct risk assessments before deploying AI systems, document model development processes, and maintain testing frameworks to validate performance and fairness. Institutions may also be required to establish board-level oversight and ensure that staff possess adequate technical understanding to supervise AI-enabled functions effectively.
By formalizing expectations around responsible AI use, the central bank is reinforcing the importance of balancing innovation with financial stability and consumer protection. As AI adoption accelerates across the sector, supervisory clarity is likely to play a central role in shaping how financial institutions deploy advanced technologies within regulated environments.
The issuance of the guidance marks another step in the UAE’s broader effort to integrate advanced digital tools into its financial ecosystem while maintaining regulatory integrity and international alignment.