We spoke to ~30 people working on AI safety — from think tanks and policy research groups to grassroots campaigning organisations — to map the landscape and identify gaps, primarily in the UK.
One thing that is clear: there aren’t enough people doing public-facing AI safety advocacy.
Not all advocacy looks like a protest
An important note on terminology.
Advocacy isn’t just protests or lobbying. It’s any activity aimed at influencing decisions within political, economic, and social institutions, which includes public education and writing policy recommendations.
The terminology matters: framing your work as advocacy can invite pushback, and you might even feel personally averse to the label. Being clear about what advocacy actually covers makes these conversations more productive.
Gaps in the UK AI safety advocacy landscape
Almost every organisation expressed a desire for more public-facing AI safety advocacy work. The existing work covers tiny slices of the advocacy pie, with each group reaching limited audiences through specific channels.
Our conversations revealed specific gaps where additional work could lead to meaningful AI policies in the UK.
Building public support
Policymakers need signals that their constituents care about AI risk. Though most Brits express concern about AI and support its regulation, far fewer consider AI a priority (YouGov). Without public pressure, there’s little incentive for politicians to act.
This isn’t unique to the UK. Several people working on AI safety in the US and EU have also strongly recommended building general public awareness or support for broad principles, like ensuring AI development has appropriate oversight. Many highlighted the lack of public support as one of the main blockers to meaningful AI policy and would encourage more work to shift the Overton window on AI risk.
Some explicitly advise against mobilising support for narrow policies without carefully considering their implications for the field.
A compelling narrative for AI safety
AI safety lacks a positive, inspiring narrative. Current messaging focuses almost entirely on what we want to avoid — extinction, loss of control, catastrophic outcomes — rather than what we’re working towards. Even when people accept the risks, this creates helplessness rather than motivation.
Compare this to AI acceleration advocates who use narratives like “winning the race against China” or “humanity’s inevitable march towards progress.” These framings give people a sense of agency and excitement about the future, making them far more motivating than warnings about doom.
Several people highlighted the need to position AI safety as a path to enabling AI’s transformative potential rather than constraining it. In other words, frame safety as what makes progress possible, instead of presenting safety and progress as a false choice.
A more positive framing also helps politicians justify their priorities to constituents and show they’re delivering concrete benefits, not just preventing abstract disasters. If AI safety were pitted against online safety, it would lose because the public thinks catastrophic AI risks are too conceptual. It’s much easier to champion “ensuring Britain leads in safe AI innovation” than “preventing human extinction from superintelligence.”
The narrative should also be accessible, using tangible examples or demonstrations that people without context on AI safety can understand. This includes steering away from vague, in-group language like “catastrophic risks” and “artificial superintelligence” towards terms people actually understand.
The UK (Frontier?) AI Bill
As of June 19, 2025, a comprehensive AI bill seems planned for 2026. Opinions split dramatically on whether this bill matters for AI safety and whether public pressure helps or hinders meaningful policies.
Some believe the UK’s AI bill won’t meaningfully influence US regulation. They think following the EU AI Act’s Code of Practice would be sufficient for the UK without expending political capital, or that forcing a bill through might waste that capital on insufficient regulations or risk UK AISI’s privileged model access.
Others see this as a unique policy window, given the current government’s concern about catastrophic AI risks. They argue we shouldn’t underestimate the UK’s soft power in signalling that AI risk is urgent.
But most agreed that building public support for the UK to treat AI as a high-priority risk would be useful, even absent a specific policy ask, so that AI safety becomes a high enough priority for voters that policymakers will act, especially in the face of competing pressure to grow the economy with AI.
Building leverage for the UK
The UK has high potential to lead in AI safety:
High talent concentration: AISI hosts more top AI safety researchers outside frontier AI companies than anywhere else
AI safety leadership: The UK birthed the AI Summits, where major powers acknowledged that frontier AI could cause serious, even catastrophic, harm
International relationships: Good standing with the US and its network of Commonwealth and G7 countries
Economic heft: Second largest economy in Europe and the sixth largest in the world
Several people suggested trying to build more leverage in the UK to improve its ability to lead on safety. There’s room for more thinking on how UK policy positions could influence the EU AI Act’s Code of Practice, how UK-EU coordination could pressure the US, and how the UK could position itself as a trusted partner to the US. Some ideas included building up the UK’s data centres or attracting more AI talent.
Better coordination could help make AI safety advocacy orgs more effective
Coordination happens through informal channels: group chats, monthly meetings, and ad hoc conversations. Most organisations don’t know what the others are doing, despite wanting to exchange learnings and collaborate.
An organisation might conduct expensive polling on public attitudes toward AI risk, but other organisations might not know these findings exist while running their own public campaigns. An organisation might develop insights on tackling field issues but not know how to coordinate with others to solve them.
Different organisations sometimes inadvertently undermine each other by promoting inconsistent framings or making incompatible policy asks. One person we spoke to highlighted how this makes the UK government feel like civil society groups can’t agree on what they want. This lack of agreement on policy demands makes it easier for the government to do nothing.
As far as I know, no formal coordination effort exists in the UK. (Happy to correct this if I’m wrong!) There seems to be space for a dedicated effort to coordinate between orgs.
What this suggests
Each organisation can only reach so many people through so many channels. With advocacy spanning policymakers, public, industry, and academia — all needing different approaches, messages, and messengers — there’s enormous space for more people working on different slices of this complex challenge.
The field needs more people precisely because AI safety is a wicked problem requiring multiple simultaneous approaches, not a simple problem with one clear solution.
Key questions worth considering:
What audiences are you uniquely positioned to reach? Your professional background, network, location, and skills might give you access to groups that existing organisations struggle to engage.
What strategic gaps match your strengths? The field needs everything from technical policy analysis to grassroots organising to international diplomacy.
How can you coordinate rather than compete? The most effective work will come from those who complement existing work rather than duplicate it.
What timeline are you optimising for? Different approaches to AI safety advocacy work on different timescales, from immediate policy wins to longer-term narrative change.
For people considering this work, the question isn’t whether there’s room for more people (that’s a definite yes!) — it’s how to use your unique position to strengthen the whole effort.


