Customer due diligence (CDD), closely related to know-your-customer (KYC), is a practice used in banks and other businesses in the financial services sector. It involves gathering information about who’s using these services and for what purpose, in order to reduce the risk of financial crime.
Some have suggested that CDD could be adapted for the AI industry. One proposal is that CDD could be performed by cloud compute providers (to monitor AI developers), and another is that it could be performed by AI companies (to monitor users).
Below, we explain both ways that CDD might be useful for AI safety.
Customer due diligence for AI cloud compute providers
Training large AI models requires huge amounts of computing power, or “compute”. This compute takes the form of cutting-edge AI chips, housed in datacenters.
Some AI developers access computing power by buying large numbers of chips and constructing their own datacenters, but this is not always practical. Instead, many developers rent hardware infrastructure remotely, through cloud compute providers like Google, Amazon and Microsoft.
Image: An Amazon datacenter in the US. Source: CC-BY-SA by Tedder.
A CDD scheme in this context would involve cloud providers gathering information about their customers (AI developers).
The most detailed proposal in this space involves cloud providers reporting to the US government, since most major cloud providers are based in the US. The scheme would require cloud providers to keep records about their customers and to create a risk profile for each one. They would inform the government of any suspicious or potentially high-risk activity (for example, if a developer appears to be using far more compute than the project they claim to be working on would require). The scheme could also give the US government the power to require providers to stop serving a developer that violates its rules.
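As a rough illustration, here is what one such check might look like in code. This is only a sketch: the record fields, the thresholds, and the flag_for_review rule are all hypothetical, not part of any real proposal or of any cloud provider's actual systems.

```python
# A minimal sketch of one rule a cloud provider might run as part of a CDD
# scheme. All names, fields, and thresholds are hypothetical illustrations,
# not part of any actual proposal or provider API.

from dataclasses import dataclass


@dataclass
class CustomerRecord:
    customer_id: str
    declared_purpose: str          # what the developer says they are training
    declared_compute_hours: float  # chip-hours they estimated needing
    observed_compute_hours: float  # chip-hours actually consumed so far
    risk_score: float              # output of the provider's risk profiling


def flag_for_review(record: CustomerRecord,
                    usage_multiple: float = 3.0,
                    risk_threshold: float = 0.8) -> bool:
    """Flag customers whose usage is far above their declared needs,
    or whose risk profile is already high."""
    overuse = record.observed_compute_hours > usage_multiple * record.declared_compute_hours
    high_risk = record.risk_score >= risk_threshold
    return overuse or high_risk


if __name__ == "__main__":
    example = CustomerRecord(
        customer_id="dev-001",
        declared_purpose="fine-tuning a translation model",
        declared_compute_hours=10_000,
        observed_compute_hours=120_000,
        risk_score=0.4,
    )
    if flag_for_review(example):
        print(f"{example.customer_id}: escalate for review and possible reporting")
```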
A CDD regime for cloud providers would help the US government exercise greater oversight of risky AI development. It could also help uphold existing policies (like export controls), or form a basis for new ones, like reporting requirements or mandated safety testing.
This kind of scheme falls under the broader umbrella of compute governance. We provide an overview of some other forms of compute governance in a separate post.
Customer due diligence for AI companies
The second way that CDD could be applied in the AI context is by requiring AI companies to monitor users of AI systems that have potentially dangerous capabilities, such as skills in synthetic biology, offensive cybersecurity, or weapons manufacturing.
This could help prevent powerful AI models from being misused for malicious purposes like developing biological weapons or carrying out cyberattacks.
For example, perhaps it’s fine for a genuine cybersecurity firm to use AI to discover security vulnerabilities at a bank so they can fix them. But an AI company should not provide the same access to an unknown entity based in North Korea, a state known to conduct cyberattacks.
There are a number of ways this could be achieved (a rough sketch of how some of these pieces might fit together follows the list), including:
Verifying customer identity, for example by requiring users to upload an ID or take a photo of themselves.
Building an understanding of customers, for example by asking about their intended use and for evidence that they are using high-risk capabilities responsibly.
Developing a customer risk profile to determine what kinds of activity would be considered suspicious.
Monitoring AI inputs and outputs to detect suspicious patterns.
Flagging suspicious activity to law enforcement through a suspicious activity report.
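Here is a rough sketch of how these pieces might fit together for an AI company. Again, this is only an illustration: the customer fields, the keyword list, the risk scores, and the thresholds are placeholders, not a description of how any company actually does this.

```python
# A minimal sketch of the steps above: identity verification, screening of
# inputs, risk scoring, and escalation. Every class, keyword, and threshold
# here is a hypothetical illustration.

from dataclasses import dataclass, field


@dataclass
class Customer:
    customer_id: str
    identity_verified: bool        # e.g. via ID upload or photo check
    stated_purpose: str
    risk_score: float = 0.0
    flagged_prompts: list[str] = field(default_factory=list)


# Stand-in for a real classifier that would score prompts for misuse risk.
SUSPICIOUS_TOPICS = ("synthesize pathogen", "exploit development", "weapon design")


def screen_prompt(customer: Customer, prompt: str) -> bool:
    """Return True if the prompt should be served, False if it is withheld."""
    if not customer.identity_verified:
        return False  # gate high-risk capabilities behind identity checks
    if any(topic in prompt.lower() for topic in SUSPICIOUS_TOPICS):
        customer.flagged_prompts.append(prompt)
        customer.risk_score += 0.3
        return False
    return True


def maybe_escalate(customer: Customer, threshold: float = 0.5) -> None:
    """If a customer's risk score crosses a threshold, escalate for human
    review and a possible suspicious activity report."""
    if customer.risk_score >= threshold:
        print(f"Escalating {customer.customer_id}: "
              f"{len(customer.flagged_prompts)} flagged prompts")


if __name__ == "__main__":
    alice = Customer("user-42", identity_verified=True,
                     stated_purpose="penetration testing for a bank")
    screen_prompt(alice, "Write a phishing-resistant login flow")
    screen_prompt(alice, "Step-by-step exploit development for this firmware")
    screen_prompt(alice, "How is exploit development typically detected?")
    maybe_escalate(alice)
```

In practice, the screening step would likely be a trained classifier rather than keyword matching, and reports would go through human review before reaching law enforcement.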
These measures could be applied in parallel while other interventions are put in place, such as giving society time to adapt to more powerful capabilities. Once society reaches the point where a capability is no longer dangerous, models with that capability could be made more widely available.
There are some downsides to customer due diligence, though:
Implementing these measures costs time and money. This could create barriers for some people and businesses that want to use these AI systems, and it adds regulatory burden for AI companies.
Connecting people’s identities to their AI usage has privacy implications. With more people using AI as a therapist or treating it like a close friend, they might reveal sensitive personal information. Attaching this information to a verified identity could result in a wide range of privacy harms. (Although in both of those cases, it seems unlikely the model needs very dangerous capabilities, as long as you’re not discussing how to manufacture nuclear weapons in your therapy sessions.)
Customer due diligence is one model that could be useful for overseeing both the development and the use of powerful AI systems. No such scheme currently exists, and more research could help adapt existing practices from the financial sector for the AI context.