For banks trying to put AI into real use, the hardest questions often come before any model is trained. Can the data be used at all? Where is it allowed to be stored? Who is responsible once the system goes live? At Standard Chartered, these privacy-driven questions now shape how AI systems are built and deployed at the bank.
For global banks operating in many jurisdictions, these early decisions are rarely straightforward. Privacy rules differ by market, and the same AI system may face very different constraints depending on where it is deployed. At Standard Chartered, this has pushed privacy teams into a more active role in shaping how AI systems are designed, approved, and monitored across the organisation.
“Data privacy functions have become the starting point of most AI regulations,” says David Hardoon, Global Head of AI Enablement at Standard Chartered. In practice, that means privacy requirements shape the type of data that can be used in AI systems, how transparent those systems need to be, and how they are monitored once they are live.
Privacy shaping how AI runs
The bank is already running AI systems in live environments. The transition from pilots brings practical challenges that are easy to underestimate early on. In small trials, data sources are limited and well understood. In production, AI systems often pull data from many upstream platforms, each with its own structure and quality issues. “When moving from a contained pilot into live operations, ensuring data quality becomes more challenging with multiple upstream systems and potential schema differences,” Hardoon says.
Privacy rules add further constraints. In some cases, real customer data cannot be used to train models. Instead, teams may rely on anonymised data, which can affect how quickly systems are developed or how well they perform. Live deployments also operate at a much larger scale, increasing the impact of any gaps in controls. As Hardoon puts it, “As part of responsible and client-centric AI adoption, we prioritise adhering to principles of fairness, ethics, accountability, and transparency as data processing scope expands.”
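The article does not describe the bank's tooling, but as a rough illustration of the kind of preparation step involved, the sketch below (plain Python, with invented field names) strips direct identifiers from customer records before they reach a training pipeline. Strictly speaking, salted hashing is pseudonymisation rather than full anonymisation, which would typically also involve aggregation or similar techniques.

```python
import hashlib

# Hypothetical customer records; field names are illustrative, not the bank's schema.
records = [
    {"customer_id": "C-1001", "name": "A. Tan", "dob": "1984-03-17", "balance": 1520.0},
    {"customer_id": "C-1002", "name": "B. Osei", "dob": "1991-11-02", "balance": 87.5},
]

SALT = "rotate-me-per-environment"  # in practice, a managed secret

def de_identify(record: dict) -> dict:
    """Remove direct identifiers: hash the ID, drop the name, keep only the birth year."""
    return {
        # A one-way salted hash keeps rows linkable within the dataset without exposing the ID.
        "customer_ref": hashlib.sha256((SALT + record["customer_id"]).encode()).hexdigest()[:16],
        "birth_year": record["dob"][:4],   # coarsen date of birth
        "balance": record["balance"],      # non-identifying features pass through unchanged
    }

training_rows = [de_identify(r) for r in records]
print(training_rows)
```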
Geography and regulation decide where AI works
Where AI systems are built and deployed is also shaped by geography. Data protection laws vary across regions, and some countries impose strict rules on where data must be stored and who can access it. These requirements play a direct role in how Standard Chartered deploys AI, particularly for systems that rely on client or personally identifiable information.
“Data sovereignty is often a key consideration when operating in different markets and regions,” Hardoon says. In markets with data localisation rules, AI systems may need to be deployed locally, or designed so that sensitive data does not cross borders. In other cases, shared platforms can be used, provided the right controls are in place. The result is a mix of global and market-specific AI deployments, shaped by local regulation rather than a single technical preference.
The same trade-offs appear in decisions about centralised AI platforms versus local solutions. Large organisations often aim to share models, tools, and oversight across markets to reduce duplication. Privacy laws do not always block this approach. “In general, privacy regulations do not explicitly prohibit transfer of data, but rather expect appropriate controls to be in place,” Hardoon says.
There are limits: some data cannot move across borders at all, and certain privacy laws apply beyond the country where data was collected. These details can restrict which markets a central platform can serve and where local systems remain necessary. For banks, this often leads to a layered setup: shared foundations combined with localised AI use cases where regulation demands it.
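To make the idea of a layered setup concrete, here is a minimal, hypothetical sketch of how a routing rule of this kind might be expressed in code. The market codes, residency rules, and function names are invented for illustration and are not drawn from the bank.

```python
# Hypothetical data-residency rules per market; the values are illustrative only.
LOCALISATION_REQUIRED = {"IN", "CN", "ID"}   # markets assumed to require in-country processing
SHARED_PLATFORM = "global-hub"

def deployment_target(market: str, uses_personal_data: bool) -> str:
    """Decide whether a use case runs on the shared platform or a local deployment."""
    if uses_personal_data and market in LOCALISATION_REQUIRED:
        # Sensitive data must not leave the market, so the workload is served locally.
        return f"local:{market}"
    # Otherwise the central platform can be used, with transfer controls in place.
    return SHARED_PLATFORM

print(deployment_target("IN", uses_personal_data=True))    # -> local:IN
print(deployment_target("GB", uses_personal_data=True))    # -> global-hub
print(deployment_target("IN", uses_personal_data=False))   # -> global-hub
```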
Human oversight remains central
As AI becomes more embedded in decision-making, questions around explainability and consent grow harder to avoid. Automation may speed up processes, but it does not remove responsibility. “Transparency and explainability have become more crucial than before,” Hardoon says. Even when working with external vendors, accountability remains internal. This has reinforced the need for human oversight in AI systems, particularly where outcomes affect customers or regulatory obligations.
People also play a larger role in privacy risk than technology alone. Processes and controls can be well designed, but they depend on how staff understand and handle data. “People remain the most important factor when it comes to implementing privacy controls,” Hardoon says. At Standard Chartered, this has driven a focus on training and awareness, so teams know what data can be used, how it should be handled, and where the boundaries lie.
Scaling AI under growing regulatory scrutiny requires making privacy and governance easier to apply in practice. One approach the bank is taking is standardisation. By creating pre-approved templates, architectures, and data classifications, teams can move faster without bypassing controls. “Standardisation and re-usability are important,” Hardoon explains. Codifying rules around data residency, retention, and access helps turn complex requirements into clearer components that can be reused in AI projects.
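As a hypothetical illustration of what codified, reusable requirements could look like, the sketch below defines a small policy object per data classification. The classification names, retention periods, residency flags, and roles are invented for the example and are not Standard Chartered's actual standards.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    """A reusable, pre-approved bundle of controls for one data classification."""
    classification: str
    retention_days: int
    residency_restricted: bool      # data must stay in the market of collection
    allowed_roles: tuple[str, ...]  # roles permitted to access the data

# Hypothetical pre-approved templates that AI projects can reuse instead of
# negotiating controls from scratch for every use case.
POLICIES = {
    "public": DataPolicy("public", retention_days=3650, residency_restricted=False,
                         allowed_roles=("any",)),
    "client_pii": DataPolicy("client_pii", retention_days=2555, residency_restricted=True,
                             allowed_roles=("model_developer", "privacy_officer")),
}

def check_access(classification: str, role: str) -> bool:
    """Return True if the role is allowed to access data of this classification."""
    policy = POLICIES[classification]
    return "any" in policy.allowed_roles or role in policy.allowed_roles

print(check_access("client_pii", "data_scientist"))   # -> False
print(check_access("client_pii", "model_developer"))  # -> True
```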
As more organisations move AI into everyday operations, privacy is not just a compliance hurdle. It is shaping how AI systems are built, where they run, and how much trust they can earn. In banking, that shift is already influencing what AI looks like in practice – and where its limits are set.
(Photo by Corporate Locations)
See also: The quiet work behind Citi’s 4,000-person internal AI rollout