Navigating dual-use projects in biosecurity
Advice for the BlueDot biosecurity course and hackathons
Some projects designed to reduce biological risk can themselves be misused: they create tools, datasets, or capabilities that bad actors could exploit.
Examples include:
Building benchmarks to test whether AI models can assist with dangerous biology. These can help AI developers identify and constrain dangerous capabilities before deployment, or inform policymakers about emerging risks. But the benchmark code or collated data could also help malicious actors test which models are most useful for their purposes, or provide a roadmap for extracting harmful outputs.
Developing AI agents with access to biological tools. Such agents could accelerate defensive research, drug discovery, or outbreak response. But the same capabilities could be repurposed to help bad actors navigate complex biological procedures or identify vulnerabilities in biosecurity infrastructure.
Red-teaming exercises that probe biosecurity vulnerabilities. These can reveal weaknesses in time to fix them, and build the case for policy action. But detailed findings about what’s broken, if shared too broadly, hand attackers a list of exploitable gaps.
Threat modelling that produces detailed attack scenarios. This can help defenders prioritise resources and anticipate adversary behaviour. But concrete scenarios, if they escape controlled settings, can serve as instruction manuals.
Whether any particular project does more good than harm depends heavily on context: who’s doing it, where, with what safeguards, and how the outputs are managed. There’s genuine uncertainty here, and reasonable people disagree. The point isn’t that these projects should never happen, but that they require more care than work without dual-use dimensions.
Dual-use is a spectrum, not a binary
Not all dual-use work is equal:
Some should be done openly: low-risk contributions to detection, broad-spectrum countermeasures, or policy analysis.
Some should be done carefully: in controlled settings with experienced collaborators, perhaps with government stakeholder input.
Some requires serious information security: perhaps even classification, with appropriate institutional backing.
It’s important to remember that your good intentions don’t eliminate the risk. Someone building a benchmark to measure misuse-relevant AI capabilities, even with the goal of demonstrating that those capabilities should be restricted, is producing something that could itself be useful to bad actors.
What to do
If you’re considering a project with dual-use dimensions:
1. Get experienced mentors. This is the single most important step. Senior people in biosecurity have developed intuitions about what’s safe and what isn’t. They’ve seen projects go wrong. They have relationships with funders and policymakers who can help navigate tricky situations.
2. Don’t do it alone. The “unilateralist’s curse” applies here: if twenty-four people independently decide a risky project isn’t worth pursuing, but the twenty-fifth goes ahead anyway, the damage is done regardless. Check your reasoning with others before acting.
3. Consider the institutional setting. Some work genuinely belongs in organisations with government relationships, security clearances, and established protocols for handling sensitive material.
4. Think about outputs. Before you create a dataset, tool, or benchmark, ask: if this were publicly released, who would benefit? If the answer includes potential bad actors, you need a plan for controlled access, or you need to reconsider whether this project should exist at all.
And remember: defensive value doesn’t happen automatically. If you’re building something intended to help biosecurity professionals, you need to actively get it into their hands. Don’t assume the right people will find your tool or analysis and use it. Test whether it’s actually useful to them before you start, tailor it to their expressed needs, and build relationships that ensure it reaches them. A dual-use tool that only bad actors end up using is worse than no tool at all.
When in doubt
If you’re unsure whether a project idea has dual-use concerns, that uncertainty is itself informative. Reach out to your course facilitators, or to experienced biosecurity professionals who can help you navigate these questions.
The biosecurity field needs capable, ambitious people. But capability and ambition should be paired with caution and collaboration. The goal isn’t to discourage you from working on hard problems; it’s to ensure that when you do, the work makes things better rather than worse.