<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[BlueDot Impact: Blog]]></title><description><![CDATA[Writing from the team.]]></description><link>https://blog.bluedot.org/s/blog</link><image><url>https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png</url><title>BlueDot Impact: Blog</title><link>https://blog.bluedot.org/s/blog</link></image><generator>Substack</generator><lastBuildDate>Wed, 22 Apr 2026 05:48:16 GMT</lastBuildDate><atom:link href="https://blog.bluedot.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dewi Erwan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[bluedotimpact@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[bluedotimpact@substack.com]]></itunes:email><itunes:name><![CDATA[Dewi Erwan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dewi Erwan]]></itunes:author><googleplay:owner><![CDATA[bluedotimpact@substack.com]]></googleplay:owner><googleplay:email><![CDATA[bluedotimpact@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dewi Erwan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[We’ve given out $50,000 in rapid grants. Now we want to triple that.]]></title><description><![CDATA[Bigger grants, broader scope, same quick decisions. Apply now.]]></description><link>https://blog.bluedot.org/p/weve-given-out-50000-in-rapid-grants</link><guid isPermaLink="false">https://blog.bluedot.org/p/weve-given-out-50000-in-rapid-grants</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Tue, 14 Apr 2026 20:44:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/950bdfc8-4860-492e-8325-1bd6d0af52bf_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past few months, BlueDot Impact&#8217;s <a href="https://bluedot.org/programs/rapid-grants">Rapid Grants</a> program has quietly funded 77 rapid grants totaling over $50,000 - mostly to course participants and facilitators working on technical AI safety research. Grants are small, decisions are fast, and the process is lightweight.</p><p>That program worked. So we&#8217;re making it bigger.</p><p>The majority of our grants have gone toward compute and API credits for people running evals, training small models, and replicating safety research, typically for a few hundred dollars. Our <a href="https://bluedot.org/courses/technical-ai-safety-project">Technical AI Safety Projects Sprint</a> has been great at surfacing these.</p><p>But some of our most impactful grants didn&#8217;t look like that at all:</p><ul><li><p><strong><a href="https://aisafety.org.pl/">AI Safety Poland</a></strong> received $4,800 to organize meetups across the country, covering venue and tooling costs, and building a national community from scratch. 
They&#8217;ve had hundreds of attendees, and dozens of people they&#8217;ve referred have ended up taking a BlueDot course.</p></li><li><p><strong><a href="https://www.linkedin.com/in/jpauldoll/">Justin Dollman</a></strong> received $3,000 to lead our weekly <a href="https://evalsreadinggroup.com/">evals reading group</a>, coordinating a growing community of people learning about and working on model evaluations. We&#8217;ve since funded more reading groups on <a href="https://aigovernanceclub.com/">AI governance</a> and <a href="https://luma.com/ztcm9xuq">AI x cyber</a> and are excited about scaling these up from ~120 to 300 attendees per week.</p></li><li><p><strong><a href="https://www.linkedin.com/in/sally-g-a2869346/?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base%3B%2FMFE0SGdSP2MYQoU1wa0zA%3D%3D">Sally Gao</a></strong> received $1,000 to run AI Safety meetups in New York, hosting guests such as Alex Bores. Her next event on the <a href="https://luma.com/state-of-ai-safety">State of AI Safety</a> is coming up on April 23rd!</p></li><li><p><strong><a href="https://www.linkedin.com/in/eitan-sprejer-574380204/">Eit&#225;n Sprejer</a></strong> received $4,200 in facilitator stipends to run our Technical AI Safety Projects Sprint at <a href="https://www.baish.com.ar/en">BAISH</a> in Buenos Aires, accessing a new and underleveraged talent pool for AI safety work.</p></li><li><p><strong><a href="https://www.linkedin.com/in/aaron-maiwald-939b891b3/">Aaron Maiwald</a></strong> received $5,000 to attend a biosecurity conference in DC, travel to SF, and connect with senior folks in the field to accelerate his journey.</p></li><li><p><strong><a href="https://www.linkedin.com/in/z-saber/">Zac Saber</a></strong> received $8,000 to drop out of EF and validate an AI safety-focused startup instead.</p></li></ul><p>None of these grants fit neatly into &#8220;compute for a project.&#8221; But they were some of our highest-impact bets. As the program picked up speed and more of these came in, we started treating rapid grants as small, focused bets on talented people doing or experimenting with high-impact projects - research, fieldbuilding, talent acceleration, you name it. In some cases, we approached people directly and pitched them on applying for work they were already doing.</p><p>Going forward, we&#8217;re expanding the official scope to match our internal understanding: Rapid Grants now fund much more than compute - we&#8217;re excited to back work on events, talent acceleration, BlueDot community building, and more. The bar for funding hasn&#8217;t changed - if anything we&#8217;ve raised it as we became more calibrated - we still look for concrete work in progress, a specific cost that&#8217;s the bottleneck, and a reason for us to believe the work matters for making AI go well.</p><p>Grant sizes now go up to $10,000 to allow us to make bigger bets for more impact. Decisions will still be fast - we&#8217;re targeting around five working days. 
For grants above $5,000, we may hop on a quick call.</p><p><strong>Who should apply</strong></p><p>If you&#8217;re in the BlueDot community - a course participant, alum, facilitator, or wider community member - and you&#8217;re doing something high-impact in one of our focus areas that a grant would accelerate, apply.</p><p>If you&#8217;re on the fence, the default, as always, is simple: apply.</p><p>Apply and see the full list of public grantees and program details at <a href="https://bluedot.org/grants/rapid">bluedot.org/grants/rapid</a>.</p>]]></content:encoded></item><item><title><![CDATA[Is the Frontier AI Governance course right for you?]]></title><description><![CDATA[3 questions to help you decide.]]></description><link>https://blog.bluedot.org/p/is-the-frontier-ai-governance-course</link><guid isPermaLink="false">https://blog.bluedot.org/p/is-the-frontier-ai-governance-course</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Sun, 05 Apr 2026 23:46:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3bdabe64-7a96-4abb-8793-deb907a24ff3_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We currently reject about 75% of applicants to the <a href="https://bluedot.org/courses/ai-governance">Frontier AI Governance course</a>. Most of them aren&#8217;t bad candidates - in our view they are applying to the wrong course.</p><p>This short blog post is intended to help you figure out if you should spend time on an application.</p><p>FAIGC is at its core about the governance of frontier AI systems - AGI, ASI - the most capable models being built by a handful of labs, the decisions governments and other actors need to make as those systems become more powerful, and what happens as general capabilities rapidly approach and exceed human-level.</p><p>If your guiding question is &#8220;how should we govern AI systems that might be smarter than humans within the next few years&#8221; you&#8217;re in the right place!</p><p><strong>What FAIGC is not</strong></p><p>It is not a corporate AI governance course</p><ul><li><p>If your goal is ISO 42001 compliance, responsible AI frameworks for your product team, or helping your organization navigate the EU AI Act - that&#8217;s real work and we respect it, but it&#8217;s not what we focus on. You should look at something like <a href="https://iapp.org/certify/aigp">IAPP&#8217;s AIGP certification</a> or <a href="https://www.bsigroup.com/en-NZ/products-and-services/training-courses-and-qualifications/iso-iec-42001-training-courses/">ISO 42001 programs</a> instead.</p></li></ul><p>It is not an AI ethics course</p><ul><li><p>We don&#8217;t cover algorithmic bias in hiring tools, fairness metrics, or the full range of social impacts of current AI systems. If that&#8217;s your focus, you&#8217;re better served by something <a href="https://www.lse.ac.uk/study-at-lse/executive-education/programmes/ethics-of-ai">like</a> this or <a href="https://www.coursera.org/courses?query=ai%20ethics">this</a>.</p></li></ul><p>It is not introductory</p><ul><li><p>We require the <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy course</a> or some equivalent background. If you&#8217;re still building your understanding of why this matters, <a href="https://web.miniextensions.com/9Kuya4AzFGWgayC3gQaX?prefill_PostHog%20Session%20ID=019d5ff7-5050-784c-9de4-d76c8bd0892c">apply</a> to the AGISC first. 
It too is free and runs every month!</p></li></ul><p><strong>What gets you in</strong></p><p>It&#8217;s not (just) your CV - we&#8217;ve rejected plenty of people from prestigious institutions with impressive resumes who wrote three-word applications, and we&#8217;ve also accepted sharp people with more modest titles who showed they were already sprinting ahead.</p><p>What we&#8217;re looking for: you&#8217;ve already started engaging with frontier AI governance specifically. You can name a concrete gap between where you are and what the course provides. And your post-course plan better be more specific than &#8220;apply for fellowships&#8221; - a line we see in dozens of applications every round.</p><p>Your application itself is evidence for us too. We use it to infer how you&#8217;ll show up in a live discussion session of seven to nine people working through hard material together. Effort, agency and clarity go a long way.</p><p><strong>A note for AGI Strategy course graduates</strong></p><p>We view completing our AGI Strategy course as a prerequisite and not as a ticket! We may still reject AGI Strategy grads who apply. If you&#8217;ve completed the AGISC and then started building something, writing something, or working on something in frontier AI governance, tell us about that. But if you completed it and your main next step is taking this course, you may need more time.</p><p><strong>As a general heuristic, before you apply, ask yourself these three things</strong></p><ul><li><p>Am I focused on frontier AI, or AI in general?</p></li><li><p>Do I have a specific reason to take this course now?</p></li><li><p>Will I act on this fully within six months?</p></li></ul><p>We&#8217;re building a pipeline into the institutions that govern frontier AI and are looking for people who have a good chance to be in those institutions - not people adding a line to their CV.</p><p>If you&#8217;ve read this and you&#8217;re thinking <em>yes, this is me</em> - apply <a href="https://web.miniextensions.com/BSUqN3WHmeL9MbzAj2P6?prefill_PostHog%20Session%20ID=019d5ffd-3e8f-77c8-b982-1b6535777097">here</a>.</p>]]></content:encoded></item><item><title><![CDATA[Rapid Small Grants for the BlueDot Technical AI Safety Project Sprint]]></title><description><![CDATA[Small grants for BlueDot participants to build their portfolio]]></description><link>https://blog.bluedot.org/p/rapid-small-grants-for-the-bluedot</link><guid isPermaLink="false">https://blog.bluedot.org/p/rapid-small-grants-for-the-bluedot</guid><dc:creator><![CDATA[Sam Dower]]></dc:creator><pubDate>Tue, 17 Mar 2026 11:11:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You are enrolled in our <a href="https://bluedot.org/courses/technical-ai-safety-project">Technical AI Safety Project Sprint</a>. Many technical projects require access to compute (e.g. renting GPUs) or expensive API credits to use frontier models. We want your project to be excellent, and don&#8217;t want money to be a barrier for you building your portfolio. If your project will benefit from funding we encourage you to apply to our rapid small grant!</p><h2><strong>How it works</strong></h2><p>At all times, only assume we&#8217;ll cover costs we&#8217;ve confirmed in writing for your specific project. 
If you&#8217;re ever uncertain, contact us before spending money you expect us to reimburse.</p><ol><li><p><strong>Submit a proposal</strong> <a href="https://airtable.com/appMVNtdBtvtJvu5E/pag9G3oF4DYAyassX/form">here</a> (under 15 minutes for most applications). We typically respond within 5 working days with one of three outcomes:</p><ul><li><p><em>Accepted</em>: We&#8217;ll confirm exactly what spending we&#8217;ve approved.</p></li><li><p><em>Clarification needed</em>: We&#8217;ll ask follow-up questions to evaluate your request.</p></li><li><p><em>Not approved</em>: We don&#8217;t think this meets our criteria, or we&#8217;re unable to approve for another reason.</p></li></ul></li><li><p><strong>Do your project</strong>, confident you can spend on what we&#8217;ve approved.</p></li><li><p><strong>Claim reimbursement.</strong> We process most claims within 5 working days. If you didn&#8217;t end up spending the money, no problem - just don&#8217;t submit a claim.</p></li></ol><h2><strong>What we fund</strong></h2><p>The following examples are illustrative; all decisions are ultimately at our discretion:</p><ul><li><p><strong>Compute</strong> for a technical AI safety project (API costs, cloud GPU, training runs).</p></li><li><p><strong>Access to paywalled resources</strong> like articles, research papers, datasets, or textbooks.</p></li><li><p><strong>Hosting costs</strong> for your application or tool.</p></li></ul><p>We expect most grants to fall between <strong>$50 and $100</strong> for initial experiments to develop your proof of concept, then up to <strong>$500</strong> once you have evidence of initial traction (a strong proof of concept or promising preliminary results). We encourage you to submit a separate application for each of these stages.</p><h2><strong>What we don&#8217;t fund</strong></h2><ul><li><p><strong>Compensation for your time</strong> on the project.</p></li><li><p><strong>Equipment</strong> you&#8217;d reasonably already have (laptops, phones, external drives, etc.).</p></li><li><p><strong>General productivity subscriptions</strong> (ChatGPT Plus, Claude Pro, Cursor, Grammarly, etc.), unless these are highly leveraged.</p></li><li><p><strong>Personal expenses.</strong></p></li></ul><p>Other funders may cover some of these - see <a href="https://www.aisafety.com/funding">this resources page for AI safety funding opportunities</a>.</p><h2>When should you apply?</h2><p>Before applying, you should have a project idea. It doesn&#8217;t need to be a slam dunk, but you should:</p><ul><li><p>Be able to briefly articulate how it relates to AI safety.</p></li><li><p>Have ideas for initial experiments you want to run.</p></li><li><p>Have a rough estimate for how much those experiments will cost (I recommend using an LLM to help you calculate this).</p></li></ul><p>If you&#8217;re unsure whether your application is strong enough, submit an application anyway! We&#8217;re not looking for perfection.
Your facilitator can also help you stress test your project, so have a low bar for reaching out to them!</p><p>While you wait for a grant decision, you should continue to develop your project idea, do a deeper literature review, or start to run smaller experiments that fit on the free <a href="https://colab.research.google.com/">Google Colab</a> GPU.</p><h2><strong>Eligibility</strong></h2><ul><li><p>You must be a current or past participant of BlueDot Impact&#8217;s Technical AI Safety Project Sprint.</p></li><li><p>We reimburse via bank transfer (Wise) or PayPal (UK), so we cannot send payments to sanctioned countries.</p></li></ul><p>Questions or feedback? <a href="mailto:team@bluedot.org">Contact us</a>.</p><h2><strong>Apply</strong></h2><p><a href="https://airtable.com/appMVNtdBtvtJvu5E/pag9G3oF4DYAyassX/form">Submit your proposal here</a> - it takes under 15 minutes for most applications. We aim to get back to you within 5 working days.</p>]]></content:encoded></item><item><title><![CDATA[Why you should vibe-code your AI safety research sprint project]]></title><description><![CDATA[Your research hours are limited. Don't spend them coding.]]></description><link>https://blog.bluedot.org/p/why-you-should-vibe-code-your-ai</link><guid isPermaLink="false">https://blog.bluedot.org/p/why-you-should-vibe-code-your-ai</guid><dc:creator><![CDATA[Sam Dower]]></dc:creator><pubDate>Fri, 13 Mar 2026 17:36:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f5af6a82-3ef6-4d51-b718-3100da101a82_1080x607.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Intro</h3><p>When I first asked an LLM to code me that annoying function or, worse, that whole experiment pipeline, I felt guilty and anxious as if I was cheating and getting away with it. I&#8217;m sure you&#8217;ve felt this way too.</p><p>It can be easy (especially if you are new to AI safety) to feel like you should avoid this, to feel like you should write all of the code in your project yourself, understanding each line and all the libraries that you use.</p><p>There is a time and a place for this, but a 30hr research sprint is not it.</p><p>If you are doing a research sprint, you should use LLMs / coding agents for all of your coding. Below I give you three reasons why.</p><h3>You don&#8217;t have time to learn both engineering and research</h3><p>The skill you should build in a research sprint is <a href="https://www.lesswrong.com/posts/Ldrss6o3tiKT6NdMm/my-research-process-understanding-and-cultivating-research">research taste</a>: choosing what experiments to run, forming hypotheses worth testing, and updating your intuitions when the results surprise you. This only develops through many iterations and 30hrs is not a lot of time, especially if you spend hours coding experiments that could take minutes with AI. You should be very explicit about which skill you are trying to train, and optimise for that skill alone.</p><p>You might be thinking &#8220;but I need to be sure my code is actually implementing the experiments I intended&#8221; and you&#8217;re absolutely correct. However, this is entirely compatible with vibe coding your whole experiment pipeline. Just get the same model to provide you with a summary of the code and keep asking it questions until you are confident it is doing it right.</p><h3>You&#8217;ll do more research and do more writing</h3><p>If you spend less time coding, you will cover more ground and do better research. 
You can go deeper, ask that extra question, run that extra experiment, discover that extra awesome result. You&#8217;ll develop better intuitions about how AI models behave and which research directions are worth pushing.</p><p>You&#8217;ll also have more time for writing. Sharing your project is what lands you that next opportunity and where all the best feedback comes from. Nobody will be impressed by how &#8220;human written&#8221; your code is. They will be impressed by your clarity of thought and the depth of your research.</p><h3>Coding with AI is a skill worth practicing</h3><p>AI models are amazing at coding. With Claude Code, you can now build an app in a weekend and a whole machine learning experiment setup in minutes. And this won&#8217;t change. For the rest of your life, LLMs will be faster at coding than you, no matter how much you practice. The most productive AI safety researchers use AI for coding, and so should you.</p><p>If you want to learn how to use AI for coding, ask your favourite LLM! They&#8217;re very good at giving you tips and tricks on how to use them better.</p><h3>Conclusion</h3><p>Don&#8217;t feel guilty about using AI for your coding. Learn to operate fast while staying in control. Be explicit in the skills you want to develop, and be ruthless in pursuing them. So, which skills do you want to develop in this sprint?</p>]]></content:encoded></item><item><title><![CDATA[AI Safety Needs Startups]]></title><description><![CDATA[Why the best way to deploy safety at scale might be to sell it.]]></description><link>https://blog.bluedot.org/p/ai-safety-needs-startups</link><guid isPermaLink="false">https://blog.bluedot.org/p/ai-safety-needs-startups</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:13:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c232e665-4aab-4f4a-a76b-bb4287506372_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Summary:</h2><ul><li><p>Startups can become integrated in the AI supply chain, giving them good information about valuable safety interventions. Safety becomes a feature to be shipped directly to users by virtue of this market position.</p></li><li><p>Better access to capital, talent, and ecosystem-building is available to for-profits than non-profits. VC funding dwarfs philanthropic funding, and there is little reason to believe that profitable safety-focused businesses aren&#8217;t possible.</p></li><li><p>Joining a frontier lab is a clear alternative, but most AI deployment happens outside labs. Your marginal impact inside a large organisation is often smaller than your impact when founding something new. Equally, profitable businesses aren&#8217;t an inevitability. You should seriously consider working for or founding an AI safety startup.</p></li></ul><h2>Introduction</h2><p>Markets are terrible at pricing safety. In the absence of regulation, companies cut corners and externalise risks to society. And yet, for-profits may be the most effective vehicle we have for deploying safety at scale. Not because the incentives of capitalism align by chance with broader human values, but because the alternatives lack the resources, feedback loops, and distribution channels to turn safety insights into safer outcomes. For-profits are far from perfect, but have many advantages and a latent potential we should not ignore.</p><h2>Information, Integration, and Safety as a Product</h2><p>For advanced AI, the attack surface is phenomenally broad. 
It makes <a href="https://aisle.com/blog/aisle-discovered-12-out-of-12-openssl-vulnerabilities">existing code easier to crack</a>. <a href="https://en.wikipedia.org/wiki/Doppelganger_%28disinformation_campaign%29">Propaganda</a> becomes cheaper to produce and distribution of it becomes more effective. As jailbreaking AI recruiters becomes possible, so does the data-poisoning of entire companies.</p><p>Information about new threats and evolving issues isn&#8217;t broadcast to the world. Understanding where risk is most severe and how it can be mitigated is an empirical question. We need entities embedded all across the stack, from model development to deployment to evaluation. We need visibility over how this technology is used and misused, and enough presence to intervene when needed. &#8216;AI safety central command&#8217; cannot provide all these insights. Researchers acting without direct and constant experience with AI deployment cannot identify the relevant details.</p><p>Revenue is a reality check. If your product is being bought, people want it. If it isn&#8217;t, they either don&#8217;t know about it, don&#8217;t think it&#8217;s worth it, or don&#8217;t want it at all. For-profits learn what matters in an industry directly from the people they serve, giving the best insights money could buy.</p><p>This is not to say that AI safety non-profits aren&#8217;t valuable. Many do critical work which is difficult to support commercially. But by focusing entirely on research or advocacy and ignoring the commercial potential of their work, organisations cut themselves off from a powerful source of feedback. Research directions, careers, and even whole organisations can be sustained for years by persuading grantmakers and fellow researchers of a thesis, rather than proving value to people who would actually use the work. Without this corrective pressure, even well-intentioned research may drift from what the field actually needs. Commercialisation should not be seen as a distraction or a response to limited funding, but as a tool for staying at the bleeding edge of what is useful for the world.</p><h2>Productification</h2><p>Turning research into a product people can buy is extremely powerful for distribution. You are no longer hoping that executives, engineers, and politicians see value in work they do not understand tackling risks they may not believe in. It becomes a purchase. A budget decision. A risk-reward tradeoff that large organisations are very well suited to engage with.</p><p>There are clear gaps in securing AI infrastructure which can be filled today. If you&#8217;re wondering what an AI safety startup might actually <em>do</em>, here are some suggestions for commercial interventions targeting different parts of the stack.</p><ul><li><p><strong>Frontier Models:</strong> Interpretability tooling, evaluations infrastructure, and formal verification environments. Tools which might be implemented by labs and companies with direct access to frontier models to understand and control them better.</p></li><li><p><strong>Applications: </strong>Content screening, red-teaming as a service, and monitoring for misuse. Helping startups building on frontier models catch accidental or deliberate misuse of their platforms.</p></li><li><p><strong>Enterprise Deployments:</strong> Observability platforms, run-time guardrails, and hallucination detection. 
Enterprises and governments using AI to automate critical work should be able to catch issues early and reliably.</p></li><li><p><strong>Market Incentives: </strong>Model audit and certification, and safety-linked insurance. Creating market incentives which reward safer models when they&#8217;re released into the world.</p></li></ul><p>None of these require waiting for frontier labs to solve alignment, or hoping that someone else finds your work and decides to implement it. Instead of writing white papers hoping governments will regulate or frontier labs will dutifully listen, you build safety directly into products that customers come to rely on. One path hopes someone will do the work, whereas the other <em>is</em> the work.</p><h2>Safety Across The Stack</h2><p>When you tap your card at a shop to make a purchase, a network of financial institutions plays a role in processing your transaction. The point of sale system reads your card information, sends it on to a payment processor, who forwards the request to the appropriate card network. The issuing bank for your card authorises the transaction, the money is sent to cash clearing systems, and cash settlement is performed often through a central bank settlement system.</p><p>There is a sense in which all fraud happens at a bank. They have to release the fraudulent funds, after all. But the declaration that all fraud prevention initiatives should be focused on banks and banks alone comes across as fundamentally confused. Fraud prevention might be easier at other layers, and refusing to take those opportunities simply because it is in principle preventable at some more central stage would not lead to the best allocation of resources.</p><p>Similarly, when a user prompts an AI application, they are not simply submitting an instruction directly to a frontier model company. Just as tapping your card does more than instruct your bank, such a message goes through guardrails, model routing, observability layers, and finally frontier model safety measures. Every step of this process is an opportunity for robustness we should not let go to waste.</p><p>This becomes even more critical as AI agents begin acting autonomously in the world, doing everything from browsing and transacting to writing and executing sophisticated code. When an agent&#8217;s action passes through multiple services before having an effect, every link in that chain is both a potential failure point and an opportunity for a safety check.</p><p>Exclusively focusing AI safety interventions on frontier labs would be like securing the entire financial system by regulating only banks. Necessary, but nowhere near the most efficient or robust approach.</p><h2>Capital, Talent, and Credibility</h2><p>Successful for-profits are in an inherently better position to acquire resources than non-profits. Their path to funding, talent acquisition, and long-term influence is far stronger than that of their charitable counterparts.</p><p>There is an immense amount of venture capital washing around the AI space, estimated at <a href="https://www.bloomberg.com/news/articles/2025-10-03/ai-is-dominating-2025-vc-investing-pulling-in-192-7-billion">~$190 billion</a> in 2025. Flapping Airplanes raised <a href="https://flappingairplanes.com/">$180 million</a> in one round, comparable to what some of the largest AI safety grantmakers deploy annually, raised in a fraction of the time. 
VC allows you to raise at speed, try many approaches, and pivot more freely than would be possible in academia or when reliant on slower charitable funders.</p><p>Compared with other sectors, non-profits in AI safety are less likely to be trapped in an economic struggle for survival. However, even in the AI safety ecosystem, philanthropy is much more limited than venture capital and more tightly concentrated among fewer funders. Non-profits are vulnerable, not just to the total capital available, but to the shifting attitudes of the specific grant makers they rely on. VC-backed companies, by contrast, are much more resilient to the ideological priorities of funders. If one loses interest, <a href="https://link.springer.com/rwe/10.1007/978-3-031-81653-6_34">many others remain available</a> as long as you have a strong business case.</p><p>Yes, there is a large amount of philanthropic capital in AI safety compared with typical non-profit sectors. Safety products can also be difficult to sell. But whether safety-focused products can sell well, as they do in other industries, is a hypothesis you can go out and test. If it turns out that they do, there could be an immense amount of capital available which should be used to make our world safer.</p><p>For-profits attract talented people not just through hefty pay packages, but also through their institutional prestige and the social capital they confer. You can offer equity to early employees, which is extremely useful for attracting top technical talent and is entirely unavailable to non-profits. Your employees can point with pride to growing valuations, exciting products with sometimes millions or even billions of users, and influential integration of their technology. For many talented and competent people, this is far more gratifying than publishing research reports or ever so slightly nudging at the Overton window.</p><p>All of this - increased access to talent, capital, and credibility - makes for-profits far easier to scale. And safety needs to scale. The amount of time we have until transformative AI arrives differs wildly between forecasts, though it seems frighteningly plausible that we have <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/">less than a decade</a> to prepare. If we are to scale up the workforce, research capacity, and integration into the economy of safety-focused products, we cannot afford anything other than the fastest approach to building capacity.</p><p>Success compounds. Founders, early employees, and investors in a successful for-profit acquire capital, credibility, and influence that they can reinvest in safety, whether by starting new ventures, funding others, or shaping policy. This virtuous cycle is largely unavailable to non-profit founders, unless they later endow a foundation with, as it happens, money from for-profits.</p><p>In addition to tangible resources, a mature ecosystem of advisors and support networks exists to help startups succeed. VC funds, often staffed by ex-founders, provide strategic guidance and industry connections that are crucial for closing sales. There are many talented people who understand what startups offer and actively seek them out.
An equivalent ecosystem just doesn&#8217;t exist for non-profits.</p><h3>Shaping The Industry From Within</h3><p>Being inside an industry is fundamentally different from being adjacent to it.</p><p>Embedding an organisation inside AI ecosystems enables both better information gathering and opportunities for intervention. If you can build safe products appropriate to the problems in an industry, you allow companies to easily purchase safety. If companies can purchase safety, then governments can mandate safety. But to get there, it is not enough to make this technology exist; the technology must be something you can buy.</p><p>Cloudflare started as a CDN. By becoming technically integrated, they slowly transformed into part of the critical infrastructure of the internet. Now, they make security decisions which shape the entire internet and impact billions of users every day. A safety-focused company embedded in AI infrastructure could do the same.</p><h2>Will Markets Corrupt Safety?</h2><p>Market incentives are not purely aligned with safety. The drive to improve capabilities, maximise revenue, and keep research proprietary will harm a profit-seeking organisation&#8217;s ability to make AI safer.</p><p>However, every institution has its pathologies. The incentives steering research-driven non-profits and academics are not necessarily better.</p><h3>Pure Research Also Has Misaligned Incentives.</h3><p>The incentives of safety and capitalism rarely align. The pressure to drive revenue and ship fast pushes towards recklessly cutting corners, and building what your customers demand in the short term rather than investing in long-term safety.</p><p>However, research organisations have similar harmful incentives driving them away from research which is productive in the long term: the need to seek high-profile conference publications, please grant makers, and build empires. Incentives of research organisations and individual researchers are notoriously misaligned with funders&#8217; goals in academia and industry alike. Pursuing a pure goal with limited feedback signals is extremely difficult as an organisation, regardless of structure.</p><p>Ideally, we would have both: for-profits which can use revenue as feedback and learn from market realities, alongside non-profits which can take longer-term bets on work needed for safety. The question is how to build a working ecosystem, not which structure is more purely focused on safety.</p><h3>Proprietary Knowledge Is Not Always Hoarded.</h3><p>For-profits have an incentive to keep information hidden to retain a competitive advantage. This could block broader adoption of safety techniques, and restrain researchers from making optimal progress.</p><p>Assuming that for-profits add resources and people to the AI safety ecosystem, rather than simply moving employees from non-profits, this is still advantageous. We are not choosing between having this research out in the open or hidden inside organisations. We are choosing between having this research hidden or having it not exist at all. In many sectors, the price of innovation is that incumbents conceal and extract rents from their IP for years.</p><p>Despite this, for-profits do have agency over what they choose to publish. Volvo famously gave away the patent to their <a href="https://www.volvogroup.com/en/about-us/heritage/three-point-safety-belt.html">three-point seatbelt</a> at the cost of their own market share, saving an estimated 1 million lives.
Tesla gave away all of their <a href="https://www.forbes.com/sites/investor/2014/06/13/tesla-giving-away-its-patents-makes-sense/">electric vehicle patents</a> to help drive adoption of the technology, with <a href="https://global.toyota/en/newsroom/corporate/27512455.html">Toyota</a> following suit a few years later. Some of this additional knowledge created by expanding the resources in AI safety may still wind up in public hands.</p><h3>Markets Force Discovery Of Real Problems.</h3><p>The constant drive to raise money and make a profit is frequently counter to the best long-term interests of the customer. Investment which should be put into making a product safer today instead goes into sales teams, salaries, and metrics designed to reel in investors. It is true that many startups which begin with a strong safety thesis will drift into pure capabilities work or adjacent markets which show higher short-term growth prospects.</p><p>However, many initiatives operating without revenue pressure, such as researchers on grants or philanthropically-funded non-profits, can work for years on the wrong problem. For-profits will be able to see that they are working on the wrong thing, and are driven by the pressure to raise revenue to work on something else.</p><p>This is not to say that researchers are doing valueless work simply because they are not receiving revenue in the short term. Plenty of work should be done to secure a prosperous future for humanity which businesses will not currently pay for. Rather, mission drift is often a feature rather than a bug when your initial mission was ill-conceived. The discipline markets provide, forcing you to find problems people will pay to solve, is valuable.</p><h3>Failure Is A Strong Signal.</h3><p>The institutional failure modes of non-profits and grant-funded research are mostly benign. The research done is not impactful, and time is wasted. On the other hand, for-profits can truly fail in the sense that they fail to drive revenue and go bankrupt, or they can fail in more spectacular ways where they acquire vast resources which are misallocated. The difference is not that for-profits are inherently more likely to steer from their initial goals.</p><p>Uncertainty about impact is common across approaches. Whereas research that goes unadopted fails silently, and advocacy which fails to grab attention disappears without effect, for-profits are granted the opportunity to visibly and transparently fail. The AI safety ecosystem already funds work which fails silently, and is effectively taking larger risks with spending than we realise. Startups aren&#8217;t any more likely to fail to achieve their goals; they are in the pleasant position of knowing when they have failed.</p><p>Visible failure generates information the ecosystem can learn from. Silent failure vanishes unnoticed.</p><h2>Your Counterfactual Is Larger Than You Think.</h2><p>Markets are not efficient. The economy is filled with billion-dollar holes, which are uncovered not only by shifts in the technological and financial landscape but by the tireless work of individuals determined to find them. Just because there is money to be made by providing safety does not mean that it will happen by default without you.</p><p>Stripe was founded in 2010. Online payments had existed since the 1990s, and credit-card processing APIs were available for years. 
Yet it took until 2010 for someone to build a genuinely developer-friendly API, simply because nobody had worked on the problem as hard and as effectively as the Collison brothers.</p><p>Despite online messaging being widely available since the 1980s, Slack didn&#8217;t launch until 2013. The focus, grit, and attention of competent people being applied to a problem can solve issues where the technology has existed for decades.</p><p>Markets are terrible at pricing in products which don&#8217;t exist yet. Innovation can come in the form of technical breakthroughs, superior product design, or a unique go-to-market strategy. In the case of products and services relevant to improving AI safety, an immense amount of opportunity has appeared in a short amount of time. You cannot assume that all necessary gaps will be filled simply because there is money to be made there.</p><p>If your timelines are short, then the imperative to build necessary products sooner rather than later grows even greater. Even if a company is inevitably going to be built in a space, ensuring that it is built 6 months sooner could be the difference between safety being on the market and unsafe AI deployment being the norm.</p><p>For many, the alternative to founding a safety company is joining a frontier lab. However, most AI deployment happens <em>outside</em> labs, in enterprises, government systems, and consumer-facing applications. If you want to impact how AI meets the world, you may have to go outside the lab to do it. Your marginal impact inside a large organisation is often, counterintuitively, smaller than the marginal impact you could have by building something new outside it.</p><h2>Historical Precedents</h2><p>History is littered with examples of companies using their expertise and market position to ship safety without first waiting around for permission.</p><p>Sometimes this means investing significant resources and domain expertise to develop something new.</p><ul><li><p><strong>Three-point seatbelt:</strong> Volvo developed the three-point seatbelt and gave away the patent. Their combination of in-house technical expertise and industry credibility enabled a safety innovation that transformed the global automotive industry.</p></li><li><p><strong>Toyota&#8217;s hybrid vehicle patents:</strong> Toyota gave away many hybrid vehicle patents in an attempt to accelerate the energy transition.</p></li><li><p><strong>Meta&#8217;s release of Llama3:</strong> At a time when only a small number of organisations had the resources to train LLMs from scratch, Meta open-sourced <a href="https://ai.meta.com/blog/meta-llama-3/">Llama3</a>, making it available to safety researchers when little else was in public hands.</p></li></ul><p>Or perhaps the technology already exists, and what matters is having the market position to distribute it or the credibility to change an industry&#8217;s standards.</p><ul><li><p><strong>Levi-Strauss supply chain audit:</strong> At the peak of their market influence, Levi-Strauss audited their supply chain, insisting on certain minimum worker standards to continue dealing with suppliers. They enforced workers&#8217; rights in jurisdictions where mistreatment of employees was either legal or poorly monitored, doing what governments couldn&#8217;t or weren&#8217;t prepared to do.</p></li><li><p><strong>Cloudflare&#8217;s Project Galileo:</strong> Cloudflare provides security for small, sensitive websites at no cost.
This helps journalists and activists operating in repressive countries avoid being knocked off the internet, and is entirely enabled by Cloudflare&#8217;s technology.</p></li><li><p><strong>WhatsApp end-to-end encryption:</strong> The technology existed, and the cryptography research was mature by this point. WhatsApp just built it into their product, delivering privacy protection to billions of users worldwide.</p></li><li><p><strong>Security for fingerprint and face recognition:</strong> Apple stores face and fingerprint data in a separate chip, making it impossible to steal or legally demand. This did not require regulation; this decision actually led to clashes with the US government. Because of their market position, Apple was able to push this security feature and protect hundreds of millions of users unilaterally.</p></li></ul><p>All of these required a large company&#8217;s resources, expertise, credibility, and market integration to create and distribute valuable technology to the world.</p><p>Building a for-profit which customers depend on, be it for observability, routing, or safety-tooling, lets you ship safety improvements directly into the ecosystem. When the research exists and the technology is straightforward, a market leader choosing to build it may be the only path to real-world implementation.</p><h2>It&#8217;s Up To You.</h2><p>For-profits are in a fundamentally strong position to access capital, talent, and information. By selling to other businesses and becoming integrated in AI development, they can not only identify the most pressing issues but directly intervene in them. They build the technological and social environment that makes unsafe products unacceptable and security a commodity to be purchased and relied upon.</p><p>Non-profits have done, and will continue to do, critical work in AI safety. But the ecosystem is lopsided. We have researchers and advocates, but not enough builders turning their insights into products that companies buy and depend on. The feedback loops, distribution channels, and ability to rapidly scale that for-profits provide are a necessity if safety is to keep pace with capabilities.</p><p>The research exists. The techniques are maturing. Historical precedents show us that companies embedded in an industry can ship safety in ways that outsiders cannot. What&#8217;s missing are the people willing to found, join, and build companies that close the gap between safety as a research topic and safety as a market expectation. We cannot assume that markets will bridge this divide on their own in the time we have left. If you have the skills and the conviction, this is a gap you can fill! </p><p>If you&#8217;re thinking about founding something, joining an early-stage AI safety company, or want to pressure-test an idea - reach out at <a href="mailto:team@bluedot.org">team@bluedot.org</a>. We&#8217;re always happy to talk.</p><p>BlueDot&#8217;s <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy Course</a> is also a great starting point - at least 4 startups have come out of it so far, and many participants are working on exciting ideas. Apply <a href="https://web.miniextensions.com/9Kuya4AzFGWgayC3gQaX?prefill_PostHog%20Session%20ID=019cc3a5-b4c0-7bfa-971b-fb11538a475e">here</a>.</p><div><hr></div><p><em>Thanks to Ben Norman, Daniel Reti, Maham Saleem, and Aniket Chakravorty for their comments.</em></p><p><em>Lysander Mawby is a graduate of BlueDot&#8217;s first Incubator Week, which he went on to help run in v2 and v3. 
He is now building an AI safety company and taking part in <a href="https://fr8.so/">FR8</a>. Josh Landes is Head of Community and Events at BlueDot and, with Aniket Chakravorty, the initiator of <a href="https://bluedot.org/courses/incubator-week">Incubator Week</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[We asked 10+ AI safety orgs about their hiring needs. What they need most — people who can hit the ground running.]]></title><description><![CDATA[We spoke with 10+ hiring managers from AI safety organisations engaged in technical research, policy work, and communications to understand their hiring needs.]]></description><link>https://blog.bluedot.org/p/we-asked-ai-safety-orgs-about-their-hiring-needs</link><guid isPermaLink="false">https://blog.bluedot.org/p/we-asked-ai-safety-orgs-about-their-hiring-needs</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Fri, 20 Feb 2026 10:45:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/564e0f94-792b-4abd-8a73-49a05e89c507_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We spoke with 10+ hiring managers from AI safety organisations engaged in technical research, policy work, and communications to understand their hiring needs.</p><p>The key challenge that surfaced was sourcing talent who can own and lead work independently, and care deeply about AI risk.</p><p>These are people with a strong track record, whether through years of professional experience or a few impressive projects, who can hit the ground running.</p><p>Here are some examples of what this experience looks like (these are more illustrative than specific!):</p><p><strong>Engineering (Lead / IC)</strong></p><ul><li><p>You have the technical depth to make good architectural calls on complex systems &#8212; databases, ML pipelines, large codebases.</p></li><li><p>You&#8217;ve shipped high-quality code in demanding environments (e.g. 
big tech, high-velocity startup)</p></li><li><p>You know how to get a junior colleague unstuck on a tricky bug, suggest better tooling, or propose a new process when the current one isn&#8217;t working.</p></li><li><p>(Lead) You&#8217;ve led and grown engineering teams, and can set a high bar for your team.</p></li><li><p>(IC) You can take vague requirements for a system and turn them into working systems, even in unfamiliar contexts.</p></li></ul><p><strong>Research lead</strong></p><ul><li><p>You know what it takes to publish high-quality research end-to-end &#8212; scoping questions, choosing methodology, organising the work, and communicating findings to diverse stakeholders.</p></li><li><p>You&#8217;re a recognised expert in your domain, with the publications and collaborations to show for it.</p></li><li><p>You ask good questions that open up impactful research directions, and you lean on a strong professional network to get things done.</p></li></ul><p><strong>Communications lead</strong></p><ul><li><p>You have a track record of producing high-quality content tailored to specific audiences, and your public profile reflects that.</p></li><li><p>You&#8217;ve managed large project budgets and timelines, making evidence-based decisions to improve outcomes.</p></li><li><p>You&#8217;ve engaged seriously with AI safety &#8212; perhaps even producing content on it already.</p></li></ul><p><strong>Policy lead</strong></p><ul><li><p>You&#8217;ve worked directly with government and understand how to navigate political institutions.</p></li><li><p>You have a strong network in policy environments and know how to get things in front of the right people.</p></li><li><p>You have a track record of translating complex technical research into clear, actionable policy memos and briefings.</p></li></ul><p>Note: these profiles are composites from the organisations we spoke to and are meant to be illustrative rather than specific.</p><h2><strong>You don&#8217;t need years of AI safety experience but you do need to care</strong></h2><p>Given that relatively few people can claim 5+ years of direct AI safety experience, orgs are looking for the next best thing &#8212; people with a strong track record of excellence on similar projects outside the field.</p><p>Mission alignment helps bridge the gap.</p><p>Orgs are looking for people who&#8217;ve engaged seriously enough with AI risk to articulate why this work matters from their own perspective, and ideally made career choices that reflect that conviction.</p><p>These are small (&lt;50), mission-driven orgs with salaries that can&#8217;t compete with big tech companies. As a result, they are looking for those whose motivation goes beyond compensation.</p><h2><strong>How to get started</strong></h2><p>The biggest barrier for capable talent entering the field is context.</p><p>Our <a href="https://bluedot.org/courses/agi-strategy?utm_source=substack&amp;utm_content=hiring%20managers">AGI strategy course</a> is designed to give you that context. You&#8217;ll develop your understanding of the risks AI poses and explore how you might contribute your expertise.</p><p>It&#8217;s a free, 30-hour course that AI safety orgs view as a strong signal of motivation. It is the first step most take in their AI safety careers.</p><p>AI safety needs your expertise to help ensure a positive future with AI. 
<a href="https://bluedot.org/courses/agi-strategy?utm_source=substack&amp;utm_content=hiring%20managers">Apply here</a> to get started.</p>]]></content:encoded></item><item><title><![CDATA[Running Versions of Our Courses (2026)]]></title><description><![CDATA[We're excited to hear you want to run a version of our courses! This page explains how you can use our materials and what options are available depending on your goals.]]></description><link>https://blog.bluedot.org/p/running-versions-of-our-courses-2026</link><guid isPermaLink="false">https://blog.bluedot.org/p/running-versions-of-our-courses-2026</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Sun, 01 Feb 2026 17:05:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f569d33a-07a3-4a0a-8b21-a179e457b245_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To take part in our facilitated courses instead, see the courses linked on <a href="http://bluedot.org">our homepage</a>.</p><h2>Our Courses</h2><p>We currently run four courses: the <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy Course</a>, the <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety Course</a>, the (Frontier) <a href="https://bluedot.org/courses/ai-governance">AI Governance Course</a>, and the <a href="https://bluedot.org/courses/biosecurity">Biosecurity Course</a>.</p><h2>Using Our Curricula (Permissionless)</h2><p>Running an independent version of our courses is largely permissionless. You can pick up our curricula and start running sessions with your group right away. This works well for friends, workplace groups, student societies, and other local organizations who want to engage with the material together.</p><p>You can share and adapt our course materials including resource lists, exercises, sample answers, and discussion docs, provided that you:</p><p><strong>Brand it distinctly:</strong> Market the course in a way that makes clear who&#8217;s running it. Don&#8217;t use our name in a way that could confuse people as to who is offering the course.</p><p><strong>Give appropriate credit:</strong> When presenting these materials, please credit us as the source as &#8220;BlueDot Impact&#8221;.</p><p><strong>Don&#8217;t hold us liable for errors:</strong> While we try our best to ensure the accuracy and relevancy of our course materials, they&#8217;re provided solely on an &#8220;as is&#8221; basis, without warranty of any kind.</p><p>Note that we don&#8217;t own many of the linked-to resources themselves, so you might need permission from their original owners if you want to copy or translate them.</p><p><strong>Facilitator resources:</strong> We also provide access to our facilitator templates and documents to help you run effective sessions. You can find these linked <a href="https://docs.google.com/document/d/1pTlebk-quCh3DGBIqwW_hbdn-wcDWQKPkRFLpQNHp2s/edit?usp=sharing">here</a>. </p><p><strong>Tell us how it went:</strong> After you&#8217;ve run your course, we&#8217;d love to hear from you. Email us at <a href="mailto:team@bluedot.org">team@bluedot.org</a> with how many people completed the course, what challenges you ran into, what worked well, and whether there were any standout participants you think we should know about. 
This helps us improve our materials and potentially connect promising people with future opportunities.</p><h2>Running an Official BlueDot Cohort</h2><p>If you&#8217;re interested in running an official BlueDot cohort in your location, reach out to us first at <a href="mailto:team@bluedot.org">team@bluedot.org</a>  so we can discuss whether this makes sense and what it would involve.</p><p><strong>Requirements:</strong> You must either have completed our facilitator training program, or be someone with deep existing context in the relevant field.</p><p><strong>How it works:</strong> We would pay you our normal facilitator rate. However, all participants would need to apply through our standard application review process. If not enough applicants from your location clear our bar, the cohort cannot proceed. We&#8217;ll discuss expected numbers and logistics with you before you commit to anything.</p><p><strong>What accepted participants receive:</strong> Access to our Slack community, a certificate upon completion, and additional career acceleration support.</p><p><strong>Who this is for:</strong> This option is most relevant for local hubs in places with sufficient talent density. We maintain a high bar for our facilitated cohorts, so this works best in locations where you&#8217;re confident there&#8217;s a strong pool of potential applicants.</p><h2>Running Courses at Companies or Institutions</h2><p>If you want to run our courses internally at a company or institution - or if you&#8217;d like us to run them for you - please email us at <a href="mailto:team@bluedot.org">team@bluedot.org</a> to discuss what this may look like. We typically don't do this but may make exceptions for particularly high-impact opportunities.</p><h2>Example Marketing (Independent Version)</h2><p>Here&#8217;s an example of how you might market an independent version of our course, following the guidelines above:</p><blockquote><p><strong>Calling all students interested in the future of AI safety and alignment!</strong></p><p>&#129302; The Greendale Community College AI Society is excited to be hosting an AI safety course this semester, based on the popular AGI Strategy Course developed by BlueDot Impact.</p><p>We&#8217;ll be following their curriculum. You should do the readings and exercises beforehand, and then each week we&#8217;ll host a facilitated group discussion adapted for our university setting.</p><p>The discussions will run every Wednesday from 6-8pm in Group Study Room F starting on 17 September. To join the seminar series, RSVP at this link.</p><p>If you have any other questions, email <a href="mailto:ai.soc@greendale.edu">ai.soc@greendale.edu</a>. Looking forward to some fascinating discussions! 
&#128578;</p></blockquote><h2>What&#8217;s Not Okay</h2><p>Examples of things that would not be okay:</p><ul><li><p>&#8220;GCC AI Society is launching a round of the BlueDot Impact AGI Strategy course&#8221;</p></li><li><p>&#8220;We&#8217;re running the BlueDot AGI Strategy Course in Greendale, Colorado&#8221;</p></li><li><p>&#8220;Apply to the AGI Strategy Course&#8221;, linking to your version of the course</p></li><li><p>&#8220;Run in collaboration with BlueDot Impact&#8221;, unless we&#8217;ve explicitly agreed this</p></li><li><p>Issuing certificates &#8220;for completing the BlueDot AGI Strategy course&#8221; (but it&#8217;s fine to issue certificates &#8220;for completing GCC AI Society&#8217;s AI Alignment course, based on the BlueDot Impact curriculum&#8221;)</p></li></ul><h2>FAQs</h2><p><strong>Can local groups get copies of the discussion docs?</strong></p><p>Yes! These are linked <a href="https://docs.google.com/document/d/1pTlebk-quCh3DGBIqwW_hbdn-wcDWQKPkRFLpQNHp2s/edit?usp=sharing">here</a>.</p><p><strong>Can local groups use BlueDot&#8217;s infrastructure for running courses?</strong></p><p>Yes, almost all our software and corresponding documentation is available on <a href="https://github.com/bluedotimpact">GitHub</a>. You can raise issues there if you get stuck. We aren&#8217;t currently able to provide hosted versions of our software, or technical support beyond this. Local groups are also welcome to direct users to our course hub to help learners track their own reading completions and exercises.</p><p><strong>Can we use your facilitator training program?</strong></p><p>Yes! The resources, exercises and session plans for our facilitator training course are <a href="https://docs.google.com/document/d/1pTlebk-quCh3DGBIqwW_hbdn-wcDWQKPkRFLpQNHp2s/edit?usp=sharing">available online</a>. However, we can&#8217;t run facilitator training for you.</p><p><strong>I&#8217;m planning to run a local group / previously participated in a local group version. Could I join the BlueDot facilitated course?</strong></p><p>Yes, please apply in the normal way and mention this in your application. Note that this does not guarantee you a place on our course.</p><p><strong>What&#8217;s the difference between running an independent version and an official BlueDot cohort?</strong></p><p>An independent version uses our curricula but is entirely run by you, for your community. An official BlueDot cohort means participants apply through our process, get access to our full resources (Slack, certificates, career support), and you&#8217;re compensated as a facilitator. The former is permissionless; the latter has a higher bar.</p><p>If you have any other questions, feel free to contact us at <a href="mailto:team@bluedot.org">team@bluedot.org</a>.</p>]]></content:encoded></item><item><title><![CDATA[Give AI companies something to aim for. 
The case for beneficial capability evals.]]></title><description><![CDATA[When AI companies release a new model, they publish model cards detailing what it can and can&#8217;t do.]]></description><link>https://blog.bluedot.org/p/give-ai-companies-something-to-aim</link><guid isPermaLink="false">https://blog.bluedot.org/p/give-ai-companies-something-to-aim</guid><dc:creator><![CDATA[Adam Jones]]></dc:creator><pubDate>Sat, 10 Jan 2026 09:24:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4e2f1f06-7c71-422d-a935-f31f22dec1b2_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When AI companies release a new model, they publish model cards detailing what it can and can&#8217;t do. Looking across major model releases, about half of evaluations measure general capabilities (e.g. coding, reasoning, writing) while the other half focus on dangerous capabilities: can the model help create bioweapons? Can it assist with cyberattacks? How persuasive is it at spreading misinformation?</p><p>Dangerous capability evals make sense for safety. We need to know how worried we should be about models causing harm.</p><p>But we&#8217;re missing benchmarks for the specific beneficial capabilities we want AI to have: capabilities that could actively keep us safe or steer us toward better futures.</p><h2><strong>What beneficial evals would measure</strong></h2><p>Imagine if model cards included sections on how well AI systems perform at:</p><ul><li><p><strong>Safety research:</strong> Assisting with interpretability work, alignment research, or red-teaming other AI systems</p></li><li><p><strong>Cyber defense:</strong> Not just penetration testing, but actually defending networks, detecting intrusions, and patching vulnerabilities</p></li><li><p><strong>Pandemic preparedness:</strong> Accelerating vaccine development, optimising PPE distribution, or improving disease surveillance</p></li><li><p><strong>Information integrity:</strong> Helping people find truth in a world full of mis/disinformation: fact-checking claims, detecting coordinated inauthentic behaviour, or surfacing reliable sources on contested topics</p></li></ul><p>Models have graduated from &#8216;confidently wrong intern&#8217; to &#8216;occasionally helpful new grad&#8217;, at least for some coding tasks. Evals and reinforcement learning (RL) environments could extend that progress into beneficial capabilities. If we&#8217;re heading toward a capabilities explosion, we need to ensure that beneficial capabilities outpace dangerous ones by a wide margin.</p><h2><strong>From evals to RL environments</strong></h2><p>Evals and RL environments are two sides of the same coin. An eval asks &#8220;can the model do X?&#8221; and produces a score. An RL environment asks the same question, but feeds that score back as a training signal.</p><p>In practice, this means a well-designed eval is most of the work of building an RL environment. If you can measure whether a model successfully identified the right papers, traced activations correctly, or generated a viable therapeutic drug candidate, you can turn that measurement into a reward signal. The eval&#8217;s scoring rubric becomes the RL environment&#8217;s reward function. The eval&#8217;s test cases become the RL environment&#8217;s training distribution.</p>
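<p>To make that concrete, here&#8217;s a minimal sketch of one scoring function doing double duty as an eval metric and as an RL reward. The task and paper IDs are made up for illustration; they aren&#8217;t from any existing benchmark:</p><pre><code># One scoring function, two uses: eval metric and RL reward.
# The task and paper IDs below are illustrative, not a real benchmark.

def score_literature_review(selected: list[str], reference: set[str]) -> float:
    """Return the fraction of reference papers the model's answer recovered."""
    if not reference:
        return 0.0
    return sum(1 for paper in selected if paper in reference) / len(reference)

# Hypothetical reference answer for one task instance.
reference_papers = {"arxiv:2301.00001", "arxiv:2302.00002", "arxiv:2303.00003"}
model_answer = ["arxiv:2301.00001", "arxiv:2304.99999"]

# 1) As an eval: run the model once and report the score on a model card.
print("eval score:", score_literature_review(model_answer, reference_papers))

# 2) As an RL environment: the same number becomes the per-episode reward
#    fed back to the policy during training.
reward = score_literature_review(model_answer, reference_papers)
</code></pre>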
<h2><strong>A word of caution</strong></h2><p>Not everything that sounds safety-relevant actually is. Be careful that you&#8217;re measuring something genuinely useful for safety, not just general capabilities dressed up in safety language.</p><p>Getting models to be better at coding in Python is a general capability, <em>not</em> safety-specific. But improving models&#8217; use of TransformerLens for interpretability research unblocks a specific safety-relevant bottleneck.</p><p>The goal is to create targets that, if hit, would actually make AI development go better, without inadvertently pushing general capabilities that could be misused.</p><h2><strong>How to build beneficial capability evals</strong></h2><h3><strong>Approach 1: Reproduce existing safety projects</strong></h3><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c7a80fdc-c47c-4966-a00e-43f29cc0bce0_1600x790.png" alt="" /></figure></div><p>Take existing safety-relevant projects and try to get AI to reproduce them. Where does the model fail? Those failure points become eval problems you can measure against.</p><p>For example, <a href="https://bluedot.org/">BlueDot Impact</a> runs courses where participants complete <a href="https://bluedot.org/projects">projects</a> on AI safety, biosecurity, and other safety-relevant domains. Take a completed project &#8212; say, a mechanistic interpretability analysis of a small transformer &#8212; and see if a model can replicate it. 
If the model gets stuck on a specific step (maybe it can&#8217;t correctly identify attention head functions), that&#8217;s a concrete eval task.</p><p>This approach grounds you in real work that humans have already validated as useful.</p><h3><strong>Approach 2: Talk to safety researchers</strong></h3><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/bbc2509f-30e5-4890-aefb-54eb98acaa18_1600x947.png" alt="" /></figure></div><p>Go directly to people doing safety-relevant work. Ask them to keep a log for a couple of weeks: every time AI tools frustrate them, slow them down, or just can&#8217;t do something they need, write it down. Then collect these logs and look for patterns.</p><p>You might find things like: &#8220;Claude keeps hallucinating paper citations when I ask for related work,&#8221; or &#8220;I can&#8217;t get any model to correctly trace activations through residual stream connections,&#8221; or &#8220;It takes 10 back-and-forths to get the model to understand my codebase structure.&#8221;</p><p>These frustrations point directly to capability gaps. Turn them into evals.</p><h3><strong>Breaking down the work</strong></h3><p>Whatever approach you take, you&#8217;ll need to:</p><p><strong>Map the process.</strong> Break down the safety-relevant task into all its component steps. For example, with vaccine development: literature review &#8594; identifying which parts the immune system can target &#8594; choosing an approach (mRNA, viral vector, etc.) &#8594; designing candidates &#8594; scaling up manufacturing &#8594; running clinical trials.</p><p><strong>Prioritise.</strong> Figure out which parts of the pipeline are most important to work on first. Maybe running clinical trials is the biggest bottleneck, or maybe it&#8217;s the literature review step that takes researchers months.</p><p><strong>Define tasks.</strong> For each component, specify what success looks like in a way that&#8217;s measurable. For the literature review step, this might be: &#8220;Given a novel pathogen genome sequence, identify the 10 most relevant prior papers on similar pathogens, ranked by relevance to vaccine development&#8221;. Include a rubric for scoring relevance and completeness.</p><p><strong>Build evaluation infrastructure.</strong> Many modern evals are &#8220;agent-evaluated&#8221;&#8212;you get one model to attempt a task, then use another model as a judge to verify whether specific conditions have been met. <a href="https://ukgovernmentbeis.github.io/inspect_ai/">Inspect</a> is a good framework for getting started with building evals.</p>
<p>Consider submitting your work to <a href="https://github.com/UKGovernmentBEIS/inspect_evals">inspect-evals</a> to make it discoverable.</p><p><strong>Iterate and improve.</strong> A key property is desirable difficulty: we need evaluations that remain challenging and informative as capabilities improve. If in doubt, err on the side of too difficult.</p><h3><strong>Getting your work used</strong></h3><p>Building good evals isn&#8217;t enough. You need companies to actually find and use them.</p><ul><li><p><strong>Make it discoverable:</strong> Publish on your blog, get it circulating on Twitter/X, or submit to relevant publications</p></li><li><p><strong>Make it easy to run:</strong> Companies are more likely to adopt your eval if they can spin it up quickly</p></li><li><p><strong>Work directly with companies:</strong> Offer to help them run it, troubleshoot issues, and interpret results</p></li></ul><h2><strong>The business opportunity</strong></h2><p>Building good evals is a serious undertaking. No single person or team is going to create comprehensive positive capability benchmarks across all safety-relevant domains.</p><p>But there&#8217;s a business opportunity here. Companies will pay for RL environments: the Docker containers, code infrastructure, and carefully written rubrics for each task. A path here might be:</p><ul><li><p>Start with a small batch or proof of concept</p></li><li><p>Pitch it to a company</p></li><li><p>If they&#8217;re interested, they might pay for a set of ~100 problems</p></li><li><p>They&#8217;ll QA the work, and if it&#8217;s good, they&#8217;ll ask for more</p></li></ul><p>Two people working full-time could potentially (with a lot of coffee) get an initial batch done in a month.</p><h2><strong>Start building</strong></h2><p>Pick a safety-relevant domain you know, map the steps, find the bottlenecks, and build something. You can also:</p><ul><li><p><a href="https://web.miniextensions.com/9UQbEOQ10oTYrqpIOwVS">Apply</a> to our <a href="https://blog.bluedot.org/p/announcing-incubator-week-v2">incubator weeks</a>.</p></li><li><p>Use or apply to the <a href="https://bluedot.org/courses/technical-ai-safety-project">Technical AI Safety Project</a> sprint.</p></li><li><p>Learn more about plans to make AI go well on the <a href="https://bluedot.org/courses/agi-strategy">AGI strategy course</a>.</p></li><li><p>Tag @BlueDotImpact on <a href="https://x.com/BlueDotImpact">Twitter</a> or <a href="https://www.linkedin.com/company/bluedotimpact">LinkedIn</a>. 
We&#8217;d love to see what you build!</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Common biochemistry prefixes and suffixes]]></title><description><![CDATA[Memorise these if you want to have an easier time reading biology literature]]></description><link>https://blog.bluedot.org/p/bio-terminology</link><guid isPermaLink="false">https://blog.bluedot.org/p/bio-terminology</guid><dc:creator><![CDATA[Will Saunter]]></dc:creator><pubDate>Tue, 06 Jan 2026 16:38:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7d0fae82-2128-4fb1-ba82-7036180dc394_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Biology papers can sometimes feel impenetrable:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/257ea93e-18a0-4bcc-b15c-1c463a2b1e2a_1478x648.png" alt="" /><figcaption class="image-caption"><a href="https://www.mdpi.com/2218-273X/14/9/1186#">Source</a></figcaption></figure></div><p>But luckily there are a few common prefixes and suffixes that crop up again and again. 
Once you&#8217;ve learned this lingo, you&#8217;ll have a far easier time fighting through biology papers and wikipedia articles.</p><p><strong>Cell structure and location</strong></p><ul><li><p><strong>Cyto-</strong> &#8594; cell (cytoplasm, cytoskeleton, cytokine)</p></li><li><p><strong>-some</strong> &#8594; discrete body/structure (ribosome, lysosome, chromosome, endosome)</p></li><li><p><strong>Endo-</strong> &#8594; within/inside (endocytosis, endoplasmic reticulum, endotoxin)</p></li><li><p><strong>Exo-</strong> &#8594; outside/outward (exocytosis, exotoxin, exonuclease)</p></li><li><p><strong>Peri-</strong> &#8594; around (periplasm, perinuclear)</p></li><li><p><strong>Trans-</strong> &#8594; across (transmembrane, transcription, translation)</p></li><li><p><strong>Intra-</strong> &#8594; within (intracellular, intranuclear)</p></li><li><p><strong>Extra-</strong> &#8594; outside (extracellular, extrachromosomal)</p></li></ul><p><strong>Enzymes and processes</strong></p><ul><li><p><strong>-ase</strong> &#8594; enzyme (protease, kinase, polymerase, lipase)</p></li><li><p><strong>-sis</strong> &#8594; process/action (mitosis, apoptosis, lysis, synthesis)</p></li><li><p><strong>-lysis</strong> &#8594; breaking down/splitting (hydrolysis, glycolysis, proteolysis)</p></li><li><p><strong>-genesis</strong> &#8594; creation/origin (biogenesis, pathogenesis, oncogenesis)</p></li><li><p><strong>Kinase</strong> &#8594; adds phosphate groups (protein kinase, tyrosine kinase)</p></li><li><p><strong>Phosphatase</strong> &#8594; removes phosphate groups</p></li><li><p><strong>-merase</strong> &#8594; polymerising enzyme (polymerase, telomerase)</p></li><li><p>Enzyme names usually end in <strong>-ase</strong> with the substrate as prefix (lactase acts on lactose, protease on proteins)</p></li></ul><p><strong>Eating, destroying, killing</strong></p><ul><li><p><strong>Phago-/-phage</strong> &#8594; eat/engulf (phagocyte, macrophage, phagocytosis)</p></li><li><p><strong>Bacteriophage/phage</strong> &#8594; virus that infects bacteria (you&#8217;ll see &#8220;phage&#8221; used standalone constantly in molecular biology)</p></li><li><p><strong>-cidal/-cide</strong> &#8594; killing (bactericidal, virucidal, fungicidal)</p></li><li><p><strong>-static</strong> &#8594; inhibiting growth without killing (bacteriostatic)</p></li><li><p><strong>-lytic</strong> &#8594; causing lysis/destruction (cytolytic, haemolytic)</p></li></ul><p><strong>Molecules and macromolecules</strong></p><ul><li><p><strong>Glyco-/-glycan</strong> &#8594; sugar/carbohydrate (glycoprotein, glycolysis, proteoglycan)</p></li><li><p><strong>Lipo-/Lip-</strong> &#8594; fat/lipid (lipoprotein, lipase, lipid bilayer)</p></li><li><p><strong>Proteo-/Prot-</strong> &#8594; protein (proteome, protease, proteolysis)</p></li><li><p><strong>Nucleo-</strong> &#8594; nucleus or nucleic acid (nucleotide, nucleosome, endonuclease)</p></li><li><p><strong>Poly-</strong> &#8594; many (polymer, polypeptide, polysaccharide)</p></li><li><p><strong>Oligo-</strong> &#8594; few (oligonucleotide, oligosaccharide)</p></li><li><p><strong>Mono-</strong> &#8594; one (monomer, monosaccharide)</p></li><li><p><strong>-ome<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></strong> &#8594; complete set (genome = complete set of genes, proteome = complete set of proteins)</p></li></ul><p><strong>Useful general prefixes</strong></p><ul><li><p><strong>Bio-</strong> &#8594; life/living 
(biosynthesis, biomarker)</p></li><li><p><strong>Hetero-</strong> &#8594; different (heterozygous, heterogeneous)</p></li><li><p><strong>Homo-</strong> &#8594; same (homozygous, homologous)</p></li><li><p><strong>Iso-</strong> &#8594; equal/same (isomer, isotonic)</p></li><li><p><strong>Hyper-</strong> &#8594; excess/above (hyperactivation, hypertonic)</p></li><li><p><strong>Hypo-</strong> &#8594; deficient/below (hypotonic, hypoxia)</p></li><li><p><strong>Anti-</strong> &#8594; against (antibody, antigen, antibiotic)</p></li><li><p><strong>Pro-</strong> &#8594; before/precursor (promoter, prophage, proinflammatory)</p></li></ul><h3>Test yourself!</h3><p>Go through the abstract from the start of this post word-by-word and try to rewrite it in simple language with the help of the terminology above. You might not get everything, but I bet you&#8217;ll get further than you expect.</p><p>Use an LLM to check your work and help you with any parts you&#8217;re still confused about.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/257ea93e-18a0-4bcc-b15c-1c463a2b1e2a_1478x648.png" alt="" /></figure></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Note this is different from &#8220;-some&#8221; in the cell structure section</p></div></div>]]></content:encoded></item><item><title><![CDATA[A playbook for field strategy]]></title><description><![CDATA[This is how you can produce a good strategy for solving important global problems, fast.]]></description><link>https://blog.bluedot.org/p/field-strategy-playbook</link><guid isPermaLink="false">https://blog.bluedot.org/p/field-strategy-playbook</guid><dc:creator><![CDATA[Dewi Erwan]]></dc:creator><pubDate>Tue, 30 Dec 2025 13:25:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8154fc59-a752-4747-a5ec-7c8192bba116_3040x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At BlueDot, we&#8217;ve used this playbook to design draft strategies for specific biosecurity interventions, including <a href="https://docs.google.com/document/d/1siKzEQvZqkKI1oPWpKLPPOWSqJCTB_cIr3PYHSJRYpg/edit?tab=t.0">UV air disinfection</a>, <a 
href="https://docs.google.com/document/d/1w-KqzvdN-GqH6pA4I0koqYnKEySwKHtS_945Bgp-fXk/edit?tab=t.0#heading=h.2gsm4b1fvo18">DNA synthesis screening</a>, and <a href="https://docs.google.com/document/d/1NkjrwATO3asR4tvQcM7imBpOLgXs04Bb35HLq5cnWNs/edit?tab=t.0#heading=h.2gsm4b1fvo18">pandemic early warning systems</a>. These strategies inform where we direct top talent and which new projects we help launch.</p><h2><strong>The need for field-level strategies</strong></h2><p>A <a href="https://www.bridgespan.org/getmedia/6d7adede-31e8-4a7b-ab87-3a4851a8abac/field-building-for-population-level-change-march-2020.pdf">field</a> is a community of individuals and organisations working to address a shared problem. In our case, the fields we operate in are trying to mitigate risks from AI and synthetic biology.</p><p>These fields are under-resourced and confused. Without a strategy for what we&#8217;re trying to achieve and how we&#8217;re going to achieve it, we risk producing elegant answers to pointless questions, focusing our scarce resources on unimportant problems, and failing to protect humanity from harm.</p><p>It&#8217;s nobody&#8217;s job to produce field-level strategy &#8212; most actors in the community focus on their own organization or their own role.</p><p>Many &#8220;strategies&#8221; that people do produce are shallow, one-off wishlists that don&#8217;t get distributed to relevant stakeholders. They set goals before deeply understanding the problem and analysing what matters most for whether or not we&#8217;ll succeed. Strategy isn&#8217;t a goal, it&#8217;s a form of problem-solving, and you can&#8217;t solve a problem you don&#8217;t understand and haven&#8217;t defined.</p><p>Field strategy is still challenging to do well. Defining the &#8220;problem-to-be-solved&#8221; is contentious and a moving target. Information is privately held or not written down. Experts disagree because they have different values and world-models. You might just aggregate people&#8217;s opinions and produce a vague mush. Few people have done this well before. You&#8217;ve not been appointed to do this, so you need to earn legitimacy and trust by doing the hard work well.</p><p>Here&#8217;s how you can overcome these challenges. </p><h2><strong>How to produce a field strategy</strong></h2><p>These steps should be done in parallel.</p><h3><strong>Read and take notes</strong></h3><p>You&#8217;re not the first person to think about this problem. Start by reading the field&#8217;s seminal literature, and important pieces from adjacent fields. You can use AI to search for the best papers and blog posts, and they&#8217;re also great thought partners to ask you questions, give you feedback, and prod your mental models for how things work.</p><p>Take lots of scrappy notes. Don&#8217;t write prose; stick to <a href="https://sive.rs/1s">one-sentence bullet points</a>. Synthesise what you&#8217;re learning into the most critical insights, and group your notes into sensible-seeming categories to help you build a stronger mental model of the field. Keep track of your confusions and uncertainties, as these can steer further research and interview questions.</p><h3><strong>Ask and listen</strong></h3><p>You&#8217;re unlikely to find the most important insights on the public web. They live inside people&#8217;s heads or in private google docs. To acquire this information, you need to talk to lots of the right people. 
Connect with them via cold emails or via <a href="https://www.forbes.com/sites/bruceupbin/2013/03/27/the-art-of-the-email-introduction-10-rules-for-emailing-busy-people/#4e8385617f7b">warm</a> <a href="https://medium.com/1517/advice-from-1517-how-to-get-a-killer-intro-440e0619128f">introductions</a>. As a heuristic, try to have 50 calls in your first 50 days of working full-time on this.</p><p>The usefulness of conversations tends to follow a power-law: most conversations will be fine, a few will be extraordinary, some you won&#8217;t realize were useful until weeks later. So you need to constantly search for the most insightful people.</p><p>To gather the most juicy insights during your calls, you need to build a connection quickly. At the start of every conversation, share your backstory, ask them for theirs, and find common ground. Demonstrate you&#8217;re competent, be vulnerable, and mirror their energy.</p><p>Avoid biasing them with your ideas early on. Ask open questions that give them a lot of space to explore and share things you don&#8217;t expect. For example, &#8220;If we&#8217;ve solved X problem in 10 years, what&#8217;s the most important thing that needs to happen next year?&#8221;. Read <a href="https://wilselby.com/2020/06/the-mom-test-summary-and-insights/">The Mom Test</a> for question-asking advice.</p><p>After you&#8217;ve asked a broad open question, listen intently and ask a follow-up question that zooms in on a specific thing they said that seems important. You know it&#8217;s a good question if their eyes light up or if they have to think super hard and then say something really valuable.</p><p>You should also come prepared with sharp, open questions that enable them to build on what you already know. For example, &#8220;In X paper, I noticed Y, but this conflicts with my intuition Z. What&#8217;s your intuition about Z, and what drives that?&#8221;.</p><p>If you&#8217;re confused, express that confusion and give them the opportunity to help you. If you hear them say something that doesn&#8217;t make sense to you or doesn&#8217;t align with your model for how things work, chase it. Ask follow-up questions. This is how you discover unknown unknowns.</p><h3><strong>Move fast through the network</strong></h3><p>Every call is an opportunity to bounce into the next conversation. In the final 5 minutes, prioritize generating names for potential introductions. If you&#8217;ve asked great questions, demonstrated that you&#8217;re determined to solve this problem, and they&#8217;ve had fun, they&#8217;ll feel excited to introduce you to people they admire.</p><p>To help them generate names, prompt them with:</p><ul><li><p>Who&#8217;s done the best writing on this problem?</p></li><li><p>Whose work are you most impressed with?</p></li><li><p>Who do you turn to for help when you&#8217;re stuck?</p></li><li><p>Who would I learn the most from?</p></li></ul><p>Explicitly ask for introductions before the call ends. Then you need to make it as easy as possible for them to make those introductions. Immediately after the call ends, send them a complete draft email they can use to make the introductions. This should include a subject, a bio of yourself and a description of this project. Aim for &lt;100 words.</p><p>Once you&#8217;ve been introduced, you need to move fast. If you send them a scheduling link, it might take them 1-2 weeks to click on it, and they might schedule a meeting for 1-2 weeks after that. 
But you need to move as deep into the &#8220;intros chain&#8221; as quickly as possible.</p><p>Immediately after you&#8217;ve been introduced to someone, send them a calendar invite for the next day at a reasonable time for their timezone. Use this format for the calendar event title: &#8220;[TBC] YourName&lt;&gt;TheirName&#8221;. Reply to the introduction asking if that time works, and if not, when works best for them. This tactic runs a higher risk of annoying people and shouldn&#8217;t be used with everyone, but I think it&#8217;s worth the speed benefits most of the time. Use it thoughtfully.</p><h3><strong>Write and rewrite</strong></h3><p>After your first few calls, start writing your v1 scrappy strategy doc. In separate sections, answer these questions:</p><ul><li><p>What is the precise problem/threat you&#8217;re trying to solve?</p></li><li><p>If this field succeeds in 5-10 years, what&#8217;s different about the world?</p></li><li><p>Where are things at today? Who&#8217;s working on this, what&#8217;s been tried, what&#8217;s the funding and policy landscape?</p></li><li><p>What&#8217;s the biggest obstacle between here and success?</p></li><li><p>What approach could overcome this obstacle, and why would it work?</p></li><li><p>What needs to happen in the next 6-12 months?</p></li></ul><p>Here are some work-in-progress examples: <a href="https://docs.google.com/document/d/1siKzEQvZqkKI1oPWpKLPPOWSqJCTB_cIr3PYHSJRYpg/edit?tab=t.0">UV air disinfection</a>, <a href="https://docs.google.com/document/d/1w-KqzvdN-GqH6pA4I0koqYnKEySwKHtS_945Bgp-fXk/edit?tab=t.0">DNA synthesis screening</a>, <a href="https://docs.google.com/document/d/1NkjrwATO3asR4tvQcM7imBpOLgXs04Bb35HLq5cnWNs/edit?tab=t.0#heading=h.2gsm4b1fvo18">pandemic early warning systems</a>.</p><p>This structure is adapted from Richard Rumelt&#8217;s &#8220;<a href="https://www.willpatrick.co.uk/notes/good-strategy-bad-strategy-richard-rumelt">Good Strategy, Bad Strategy</a>,&#8221; which is well worth the read. The most common failure mode when doing strategy is jumping from &#8220;what success looks like&#8221; to &#8220;what should we do&#8221;, without diagnosing <em>why</em> we&#8217;re not already succeeding.</p><p>Remember that your first call with an expert is only the beginning of your relationship with them. Give them commenting access on your google doc, thank them for their input, and ask for their feedback. You know this is working when people spend a lot of time in your doc, when experts debate each other in the comments and generate new insights, and when people ask you if they can share the doc with their peers.</p><p>This isn&#8217;t a one-off, linear process. It&#8217;s messy, and you&#8217;ll jump around a lot. You&#8217;ll discover big new questions and conflicting evidence that undermine your confidence and makes you feel confused. You&#8217;ll get lost down rabbit holes. And you should be prepared to rewrite everything from scratch a few times.</p><p>But after a few weeks or months, you&#8217;ll have a clear sense for where this field needs to be, a sharp diagnosis for the biggest obstacle blocking progress, a plan of action for overcoming that obstacle, and widespread buy-in from the most powerful and influential stakeholders in the field.</p><h2><strong>Taking action</strong></h2><p>This process doesn&#8217;t just produce a document. It gives you strong relationships with influential players throughout the field. 
It gives you situational awareness about what everyone&#8217;s doing and why, and what&#8217;s blocking them. It makes you a person people turn to when they want to know what&#8217;s happening, what matters most, and what to do next.</p><p>Then the next phase begins: obsessively communicating the strategy, overseeing implementation, removing obstacles to progress, and updating and refining the strategy as the field evolves and we learn from reality. More details on this in a future blog post.</p><p>If you&#8217;re working on an important global problem and you don&#8217;t know what to do (and seemingly neither does anyone else), this is how you can make yourself useful to a field that needs direction.</p><h2><strong>Further reading</strong></h2><p>These blog posts will help you understand what we mean by <em>strategy</em>. I believe it&#8217;s worth spending 5+ hours engaging with these.</p><ul><li><p><a href="https://publications.armywarcollege.edu/News/Display/Article/3706569/strategy-as-problem-solving/">Strategy as Problem-Solving &#8211; US Army War College</a></p></li><li><p><a href="https://www.lennysnewsletter.com/p/good-strategy-bad-strategy-richard">Good Strategy, Bad Strategy &#8211; Richard Rumelt</a></p></li><li><p><a href="https://longform.asmartbear.com/great-strategy/">What makes a strategy great</a> &#8211; Jason Cohen</p></li><li><p><a href="https://www.belfercenter.org/sites/default/files/pantheon_files/files/publication/DavidPetraeusTranscript.pdf">Strategic Command &#8211; David Petraeus</a></p></li><li><p><a href="https://www.bridgespan.org/getmedia/6d7adede-31e8-4a7b-ab87-3a4851a8abac/field-building-for-population-level-change-march-2020.pdf">Field Building for Population-Level Change &#8211; Bridgespan Group</a></p></li><li><p><a href="https://drive.google.com/file/d/1pJhPdSJioMTURrGApIqf4US-KjqRluO4/view?usp=sharing">The Case for a Partnership Between Field Strategists And Philanthropists &#8211; Tom Kalil</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Should you do an AI safety research / engineering project?]]></title><description><![CDATA[Probably yes if you can code, but it also depends on how you plan to contribute to AI safety.]]></description><link>https://blog.bluedot.org/p/should-you-do-an-ai-safety-research</link><guid isPermaLink="false">https://blog.bluedot.org/p/should-you-do-an-ai-safety-research</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Mon, 29 Dec 2025 11:20:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2aa5be33-f5a2-4564-bb7e-73d15fbd6f89_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The short answer is probably yes if you can code.</p><p>The long answer depends on what you&#8217;re trying to do. 
Are you trying to figure out how you might contribute your technical skills to AI safety, or get opportunities to do it?</p><p><em>This blog post is written for graduates of the <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety course</a> who have spent 30 hours learning about current safety techniques and the gaps in building safer AI.</em></p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/3bc7be07-e08d-4fdb-b7df-69a45a57add0_1600x846.png" alt="" /></figure></div><h2><strong>What counts as &#8216;can code&#8217;?</strong></h2><p>I don&#8217;t mean you have to have professional software engineering experience. I mean that you at least feel comfortable independently completing a simple coding project.</p><p>This means you should be able to:</p><ul><li><p>Read and review code</p></li><li><p>Debug errors and figure things out when you get stuck</p></li><li><p>Write basic functions and loops</p></li></ul><p>If that&#8217;s not you yet and you want to do a research / engineering project, you&#8217;ll want to build those skills first.</p><h2><strong>You should probably do a project if&#8230;</strong></h2><p><strong>You&#8217;re applying for research and engineering roles, but not getting the roles you want.</strong> We&#8217;ve spoken to hiring managers from a variety of AI safety organisations, including government agencies, non-profits, and frontier AI companies, who all tell us the same thing. An excellent project is one of the strongest hiring signals.</p><p>Most AI safety orgs are not credentialist, so having a project also puts you on more even footing, even if you don&#8217;t have years of professional software engineering or research experience. This is also why there are so many AI safety research fellowships like <a href="https://www.arena.education/">ARENA</a>, <a href="https://www.matsprogram.org/">MATS</a>, <a href="https://www.pivotal-research.org/fellowship">Pivotal</a>, <a href="https://www.lasrlabs.org/">LASR</a>, and <a href="https://princint.ai/">PIBBS</a>.</p><p>Moreover, doing a project will give you some of the skills you&#8217;ll need for technical interviews when applying.</p><p><strong>You&#8217;re still figuring out how you want to contribute.</strong> While you could (and should!) get others&#8217; takes on what technical work is like, just trying the thing can give you way more information than just reading or talking to people. It can be a cheap test for you to see for yourself what doing this work is like.</p><p>Before you devote time to upskilling, see for yourself what that work entails. 
You&#8217;ll surprise yourself by how much you can just jump in and learn as you go, rather than upskilling in general first.</p><p><strong>You&#8217;re earlier in building your technical skills.</strong> Doing the real thing is one of the most effective ways to learn because it forces you to focus on the actual skills you need. Even if you don&#8217;t land a role immediately, you&#8217;ll be building your portfolio along the way.</p><h2><strong>You might not need this if&#8230;</strong></h2><p><strong>You already have a strong portfolio.</strong> When I speak to hiring managers, AI safety researchers and engineers, they recommend just applying: people often underestimate how experienced they are, or overestimate how experienced they need to be for the role.</p><p>They&#8217;ve also said that you&#8217;re not penalised for reapplying. Just make sure to highlight what&#8217;s changed since your last application. In fact, this is often a strong signal of how high-agency you&#8217;ve been in upskilling since then.</p><p>If you already have a strong portfolio or technical skills, apply first. Do a project later.</p><h2><strong>This probably isn&#8217;t for you if&#8230;</strong></h2><p><strong>You&#8217;re doing this because you think it&#8217;s the only way to contribute.</strong> It&#8217;s not.</p><p>Especially if you&#8217;re several years into your career, you&#8217;ve racked up expertise that others are spending months trying to acquire. Leverage that!</p><p>Look for areas where you can contribute your unique skills and experience. You can read <a href="https://80000hours.org/career-reviews/">80,000 Hours&#8217; career reviews</a> or check out our <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy course</a> to learn about other pathways. There are many high-impact paths that don&#8217;t require touching code.</p><h2><strong>So you&#8217;ve decided to do a project</strong></h2><p>You can start right now by following our <a href="https://bluedot.org/courses/technical-ai-safety-project">Technical AI Safety Project sprint</a>.</p><p>You can get a richer experience by applying, where we&#8217;ll provide:</p><ul><li><p><strong>Mentorship:</strong> Someone familiar with AI safety can help you pick and scope your project, choose what tools to use, and decide who to talk to.</p></li><li><p><strong>Accountability:</strong> <a href="https://xkcd.com/874/">Setting goals is hard</a>; sticking to them is even harder. Working on this with someone else will be a major boost!</p></li><li><p><strong>Rapid feedback</strong>: A point person to review your work as you go, so you can iterate faster.</p></li></ul><p>Whether you follow the guide independently or apply to join a cohort, we&#8217;re excited to see what you come up with! 
Tag @BlueDotImpact on LinkedIn or Twitter with your project.</p>]]></content:encoded></item><item><title><![CDATA[How to host your own women in AI safety event in <2 hours]]></title><description><![CDATA[30+ women have come to each of our last 5 women in AI safety events.]]></description><link>https://blog.bluedot.org/p/how-to-host-your-own-women-in-ai</link><guid isPermaLink="false">https://blog.bluedot.org/p/how-to-host-your-own-women-in-ai</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Wed, 17 Dec 2025 02:10:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VHof!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff74d50d8-6670-44d7-8301-3f90ad69e9ba_2048x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>30+ women have come to each of our last 5 women in AI safety events. At the end of every event, they ask us for more!</p><p>I&#8217;d thought about running an event like this before, but felt daunted. It seemed like a huge lift. I wasn&#8217;t even sure women would <em>want</em> to attend.</p><p>When I went to one that Monika Jotautait&#279; hosted, I realised that (1) there&#8217;s a LOT of demand for this event and (2) it&#8217;s pretty straightforward to organise.</p><p>Here&#8217;s how you can plan this event in less than 2 hours.</p><p><em>This blog post is written for graduates of <a href="https://bluedot.org/courses">BlueDot&#8217;s courses</a>.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!VHof!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff74d50d8-6670-44d7-8301-3f90ad69e9ba_2048x1536.jpeg" width="1456" height="1092" alt="30+ women at our speed-friending event in December"><figcaption class="image-caption">30+ women from our speed-friending event in December.</figcaption></figure></div><h2>What you&#8217;ll create</h2><p>When I look around, I don&#8217;t see many women in AI safety. 
I&#8217;m not sure why, but what I do know is that these events help create:</p><ul><li><p>A space where women feel comfortable showing up</p></li><li><p>Connections with people navigating similar paths</p></li><li><p>Role models whose journeys they can learn from</p></li></ul><p>My hope is that by concentrating women in one place, we make it easier to show up, stay in the field, and maybe even bring friends along.</p><h2>Two event formats</h2><p>Here are two that worked well and are simple to organise:</p><h3>Speed friending</h3><p>Have people pair up for 7-minute conversations, give people a minute to exchange contacts at the end, then swap over to a new pair. After 6-8 rounds, have an open social.</p><p>I recommend placing <a href="https://bjj-timer.com/">this timer</a> where everyone can see it. </p><h3>Talk + socials</h3><p>Invite someone to give a 20-minute talk with a 20-minute Q&amp;A about something that other women in AI safety might find useful. </p><p>The talk encourages attendance and gives people something to discuss during the social.</p><h2>Who to invite</h2><p>A rule of thumb is to organise an event that YOU would be excited to attend. That includes deciding who it&#8217;s for.</p><p>&#8220;Women in AI safety&#8221; is still a broad category. Maybe you want women already working in the field. Maybe you want women curious about entering. Maybe you want a mix. All are valid &#8212; just be intentional.</p><p>It can feel painful to tell people no, but <strong>thoughtful, considered exclusion is vital to any gathering.</strong> (Take it from <a href="https://voltagecontrol.com/blog/my-favorite-learnings-from-priya-parkers-the-art-of-gathering/">Priya Parker</a>, an expert in event planning!)</p><h2>4 steps to bring your event to life</h2><ol><li><p><strong>Pick a date and time</strong>: I find that weekday evenings (6/7pm) tend to work well.</p></li><li><p><strong>Find a space (or go online)</strong>: If you don&#8217;t have access to an events space, reach out to other AI safety orgs to see if you can use their office space. Otherwise, you can run it online using our Zoom account.</p></li><li><p><strong>Submit our <a href="https://airtable.com/appkKc02xe9D0le4B/pagwYlzb2uPf9L8XF/form">events form</a>:</strong> We&#8217;ll get you set up on our <a href="https://luma.com/bluedotevents?k=c">events calendar</a>, which handles invites, feedback forms, reminder emails and guest list. We can also provide a budget for snacks.</p></li><li><p><strong>Start sending out invites:</strong> You&#8217;ll surprise yourself by how many people are excited by this type of event. Having 10-20 people show up is plenty for great conversations.</p></li></ol><h2>It&#8217;s that simple</h2><p>We&#8217;d love to see more women in AI safety, and I believe that events like these can help make AI safety feel more approachable. This isn&#8217;t the only thing we can do to encourage diversity in the field, but it&#8217;s one way to start.</p><p>We&#8217;re excited to see your event come to life! 
<a href="https://airtable.com/appkKc02xe9D0le4B/pagwYlzb2uPf9L8XF/form">Fill out the form</a> and let&#8217;s make it happen.</p>]]></content:encoded></item><item><title><![CDATA[The software engineer’s guide to making your first AI safety contribution in <1 week]]></title><description><![CDATA[You&#8217;re an experienced software engineer who&#8217;s ready to start contributing to making AI go well.]]></description><link>https://blog.bluedot.org/p/swe-ai-safety-project-guide</link><guid isPermaLink="false">https://blog.bluedot.org/p/swe-ai-safety-project-guide</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Fri, 05 Dec 2025 15:11:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You&#8217;re an experienced software engineer who&#8217;s ready to start contributing to making AI go well. You&#8217;re unsure which of these <a href="https://blog.bluedot.org/p/im-an-experienced-swe">three areas</a> you&#8217;re best placed to leverage your engineering skills:</p><ul><li><p>Scale AI safety research</p></li><li><p>Build tools for AI safety researchers</p></li><li><p>Contribute directly to AI safety research</p></li></ul><p>This guide walks through a project you can complete in &lt;1 week to make your first contribution.</p><p><em>This blog post was written for graduates of BlueDot&#8217;s <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety course</a> who want to contribute their software engineering skills.</em></p><h2><strong>Why do a project?</strong></h2><p>Projects help you figure out where to apply your skills. Does your experience programming GPUs apply to training infrastructure? Does your agent scaffolding experience translate to evals? You&#8217;ll learn more from trying than from months of deliberation.</p><p>If you find a gap you can fill, it could lead you to orgs doing that work &#8212; or inspire you to start something yourself.</p><p>When applying to AI safety roles, this project could also serve as a helpful signal for your skills and motivation. A well-executed project can help demonstrate clear reasoning, good communication, and high agency. I&#8217;ve spoken to several hiring managers who made offers or fast-tracked candidates because of excellent projects.</p><p>While you could complete a project through programs and fellowships which provide more structure, mentorship and stipends, you could also do it yourself!</p><p>You already have what it takes to start. Here&#8217;s how.</p><h2><strong>Getting started</strong></h2><ul><li><p><strong>Block out 20-40 hours in your calendar.</strong></p><ul><li><p>If you want to keep going after that, schedule more time later. Right now, focus on finishing something by the end of the week.</p></li></ul></li><li><p><strong>Schedule focused blocks.</strong></p><ul><li><p>Aim for 2-4 hour sessions where you can get <a href="https://www.scotthyoung.com/blog/2013/04/12/how-to-focus/">&#8220;stuck in&#8221;</a>. The more fragmented your time, the more time you&#8217;ll burn context switching.</p></li><li><p>Rope someone in to work on this with you or be accountable to.</p></li></ul></li><li><p><strong>Protect this time.</strong></p><ul><li><p>It&#8217;s easy to let social events or work meetings eat up your project hours. 
If this matters to you, treat these blocks like any other important commitment.</p></li><li><p>Make a calendar event for yourself!</p></li></ul></li><li><p><strong>Build a routine.</strong></p><ul><li><p>Work at the same time each day if possible.</p></li><li><p>Set a clear intention and put it somewhere you can see it. E.g. &#8220;Every day for the next 5 days, I&#8217;ll spend 4 hours working on my project at my desk.&#8221;</p></li></ul></li></ul><h1><strong>Choose your path</strong></h1><h2><strong>Option 1: Fix open issues in AI safety tools</strong></h2><p><em>Contribute to tools that AI safety researchers use every day.</em></p><p>Pick 2-5 good first issues to solve from an open source AI safety repo, like:</p><ul><li><p><a href="https://github.com/UKGovernmentBEIS/inspect_evals">Inspect Evals</a></p></li><li><p><a href="https://github.com/UKGovernmentBEIS/control-arena">ControlArena</a></p></li><li><p><a href="https://github.com/UKGovernmentBEIS/inspect_ai">Inspect</a></p></li><li><p><a href="https://github.com/TransformerLensOrg/TransformerLens">TransformerLens</a></p></li><li><p><a href="https://github.com/hijohnnylin/neuronpedia#readme">Neuronpedia</a></p></li><li><p><a href="https://github.com/ndif-team/nnsight">nnsight</a></p></li><li><p><a href="https://github.com/safety-research/safety-tooling">safety-tooling</a></p></li><li><p><a href="https://www.aisafety.com/projects">Other volunteer projects</a></p></li><li><p><em>(email me if you know of more! anglilian@bluedot.org)</em></p></li></ul><p>You can also message the maintainers, join their Discord / Slack communities or just try out the tools to figure out what needs improving.</p><p>For example,<a href="https://medium.com/@anthonyduong1/my-ai-alignment-project-fixing-open-source-issues-25e59d32a16a"> Anthony Duong</a> looked at the issues on TransformerLens and spent a few weekends making PRs.</p><h2><strong>Option 2: Replicate and extend a research finding</strong></h2><p><em>Get closer to research by reproducing and extending a published result.</em></p><p>The goal is to reproduce and add a small tweak to ONE interesting finding from a paper.</p><p>Some ideas for picking a starting point:</p><ul><li><p>Reviewing the resources from the Technical AI safety course</p></li><li><p>Get inspired by past projects from <a href="https://bluedot.org/projects">BlueDot</a> or <a href="https://www.arena.education/previous-capstone-projects">ARENA</a> alumni</p></li><li><p>Replicate and extend Anthropic&#8217;s <a href="https://www.lesswrong.com/posts/y5EniHFSpNxhLbmq6/how-to-replicate-and-extend-our-alignment-faking-demo">alignment faking demo</a></p></li><li><p>Pick an <a href="https://docs.google.com/document/d/1gi32-HZozxVimNg5Mhvk4CvW4zq8J12rGmK_j2zxNEg/edit?tab=t.0#heading=h.q14pjbvzx1x">open problem in evals</a></p></li><li><p>Pick an <a href="https://docs.google.com/document/d/1p-ggQV3vVWIQuCccXEl1fD0thJOgXimlbBpGk6FI32I/edit?usp=drivesdkhttps://docs.google.com/document/d/1p-ggQV3vVWIQuCccXEl1fD0thJOgXimlbBpGk6FI32I/edit?usp=drivesdk">open problem in mech interp</a></p></li></ul><p>If you want to spend closer to 20 hours on the project, pick papers that have code (and datasets if applicable) available for you to run. 
Otherwise, expect to spend a lot more time working out how to implement the code.</p><p>Replicating a finding from scratch is more feasible for papers on evals or elicitation techniques that don&#8217;t require deep ML expertise.</p><p>Don&#8217;t get too bogged down in trying to make a novel research contribution. It takes months to develop good <a href="https://www.alignmentforum.org/s/5GT3yoYM9gRmMEKqL/p/Ldrss6o3tiKT6NdMm">research taste</a>. Instead, follow where your curiosity leads you.</p><p>Then, find the fastest way to get signal on whether this is an idea worth pursuing before running a high volume of tests. Can you <a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/overview">prompt the model</a> and see what happens? Can you use a fine-tuning API like <a href="https://docs.together.ai/docs/fine-tuning-quickstart">TogetherAI</a> or <a href="https://platform.openai.com/docs/api-reference/fine-tuning/create">OpenAI</a>?</p><p>Remember, this is meant to be a <em>short</em> project. You can always build on this in your next iteration.</p><p>You can get compute for your project from providers like <a href="https://www.runpod.io/">RunPod</a> or <a href="https://www.hyperbolic.ai/">Hyperbolic</a>, and access open source models via <a href="https://openrouter.ai/">OpenRouter</a>. If funding becomes a constraint, you can apply for a <a href="https://bluedot.org/blog/rapid-grants-for-bluedot-projects">small grant</a> as a BlueDot course graduate.</p><p>Ethan Perez has written useful tips for empirical research <a href="https://www.alignmentforum.org/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research">here</a>.</p><h2><strong>Option 3: Make research reproducible</strong></h2><p><em>Unblock other researchers by fixing what&#8217;s broken in the replication process.</em></p><p>Many published papers are hard to replicate because the code is buggy, dependencies are missing, or the workflow is unnecessarily painful.</p><p>Your goal is to pick a paper with available code, try to run it, and fix whatever breaks or makes it painful to work with.</p><p>This could mean:</p><ul><li><p>Fixing broken code or missing dependencies</p></li><li><p>Writing clearer setup instructions or documentation</p></li><li><p>Building micro-tooling for repetitive, manual steps (e.g., a <a href="https://github.com/safety-research/safety-examples/blob/main/examples/inference/get_responses.py">script for batch querying LLMs</a>, a config manager for hyperparameters, or a notebook that visualises results)</p></li><li><p>Packaging the reproduction in a way that &#8220;just works&#8221; for the next person</p></li></ul><p>Focus on making it easy for everyone who comes after you to build on it. By the end, you should have created an issue or, ideally, opened a PR that gets merged!</p><h1><strong>Write it up!</strong></h1><p>The biggest mistake people make is treating the write-up as an afterthought. Your work won&#8217;t advance AI safety if no one engages with it.</p><p><strong>Plan to spend at least one full day writing this up. </strong>If you&#8217;re spending less, you&#8217;re not spending enough.</p><p>This includes the different forms of write-ups, like your longer-form blog post and the Twitter/X or LinkedIn post that helps with distribution.</p><p>As a <a href="https://arc.net/l/quote/crorikxf">rule of thumb</a>, you should allocate your writing time in proportion to how much total reading time each format will receive. 
Writing a viral thread deserves as much effort as writing a detailed post.</p><h2><strong>Why write it up</strong></h2><p>Your write-up is how you get feedback and find a home for your work. It&#8217;s also how others gauge how well you know your stuff and how much you&#8217;ve thought it through. </p><p>Many of our past graduates have found their co-founders, collaborators, roles and funding opportunities from posting their projects.</p><p>You might be thinking: &#8220;I can write it up much faster than that&#8221; or &#8220;I&#8217;d rather spend time working on the project&#8221;. But if you want your work to reach people, it&#8217;s worth communicating well.</p><p>Writing clearly demonstrates your understanding. You can&#8217;t write clearly about something you don&#8217;t fully get.</p><p>Think about explaining your own codebase to a new hire. If you&#8217;re fumbling through the explanation, you probably don&#8217;t understand the architecture as well as you thought. Clear communication is an indicator of understanding, and it takes work to achieve.</p><h2><strong>How to write it well</strong></h2><p><strong>Lead with what you did.</strong> Don&#8217;t bury your insights in <a href="https://postimg.cc/WtzSPc3d">walls of text</a> or unnecessary jargon. State it clearly upfront.</p><p><strong>Explain why it matters.</strong> Motivate why you did this and why anyone should care. People are more likely to read your work (and remember it) when they understand the why.</p><p><strong>Keep it simple.</strong> The goal is for people to actually read and understand what you&#8217;ve done. This is not easy. Writing something short and clear is much harder than rambling on. But that&#8217;s exactly why you need to devote real time to it.</p><p><strong>Get feedback constantly.</strong> Good writing requires iteration. Explain your project to others as you work on it. See if they understand. See if they&#8217;re convinced it&#8217;s compelling. If not, workshop your idea or delivery.</p><p>Don&#8217;t wait until the eleventh hour. Many AI safety researchers like<a href="https://www.alignmentforum.org/posts/jP9KDyMkchuv6tHwm/how-to-become-a-mechanistic-interpretability-researcher#Write_up_your_work_"> Neel Nanda</a> have highlighted how important this is. Start early and keep refining as you go.</p><h1><strong>Share your work</strong></h1><p>I know it feels scary to stake your name publicly on what you&#8217;ve done. But here&#8217;s the thing: <strong>your work is far more likely to get drowned out in the noise than criticised. </strong>You have to put in a LOT of work to be seen (there&#8217;s a whole industry around marketing!). And being seen is exactly what you want.</p><p>If you&#8217;re working on research, much of the community is on Twitter/X, so focus on making a thread and posting on LessWrong and the Alignment Forum.</p><p>Star this project on your GitHub, feature it on your blog (make one if you don&#8217;t have one!), post it on LinkedIn and keep talking to people about it.</p><p>So what are you waiting for? 
Let&#8217;s get started!</p><p><em>PS: Use/apply to our <a href="https://bluedot.org/courses/technical-ai-safety-project">Technical AI Safety Project</a> sprint.</em></p>]]></content:encoded></item><item><title><![CDATA[Running versions of our courses]]></title><description><![CDATA[To take part in our facilitated courses instead, see the courses linked on our homepage.]]></description><link>https://blog.bluedot.org/p/running-versions-of-our-courses</link><guid isPermaLink="false">https://blog.bluedot.org/p/running-versions-of-our-courses</guid><dc:creator><![CDATA[BlueDot Impact]]></dc:creator><pubDate>Wed, 03 Dec 2025 05:53:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>To take part in our facilitated courses instead, see the courses linked on <a href="https://bluedot.org/">our homepage</a>.</em></p><p><em>Let us know how you&#8217;re using our courses <a href="https://forms.bluedot.org/UdXLJKhY463CAI0Th2cj">here</a>, so we can potentially offer more support in the future.</em></p><p>We&#8217;re thrilled you&#8217;re excited to run an independent version of our course! This page sets out guidance for branding your course, and explains what support we can offer local groups.</p><p>It&#8217;s great when friends, workplace groups, student societies and other local organisations run versions of our courses. This helps further our mission of accelerating driven individuals to develop the knowledge, skills and connections needed to have a significant positive impact.</p><p>To support this, you can share and adapt our course materials, including resource lists, exercises, sample answers and discussion docs, provided that you:</p><ul><li><p><strong>Brand it distinctly:</strong> Market the course in a way that makes it clear who&#8217;s running it. Don&#8217;t use our names like &#8220;AI Safety Fundamentals&#8221; or &#8220;BlueDot Impact&#8221; in a way that could confuse people as to who is offering the course.</p></li><li><p><strong>Give appropriate credit:</strong> When presenting these materials, please give credit to the source, for example &#8220;AI Safety Fundamentals&#8221; or &#8220;BlueDot Impact&#8221;.</p></li><li><p><strong>Don&#8217;t hold us liable for errors:</strong> While we try our best to ensure the accuracy and relevance of our course materials, they&#8217;re provided solely on an &#8220;as is&#8221; basis, without warranty of any kind.</p></li></ul><p>Note that we don&#8217;t own many of the linked-to resources themselves, so you might need permission from their original owners if you want to copy or translate them.</p><h2><strong>Example</strong></h2><p>Here&#8217;s an example of how you might market your course, following the guidelines above:</p><blockquote><p>Calling all students interested in the future of AI safety and alignment!</p><p>&#129302; The Greendale Community College AI Society is excited to be hosting an AI alignment course this semester, based on the popular AI Safety Fundamentals courses developed by BlueDot Impact.</p><p>We&#8217;ll be following <a href="https://bluedot.org/courses/alignment">their curriculum</a>. 
You should do the readings and exercises beforehand, and then each week we&#8217;ll host a facilitated group discussion adapted for our university setting.</p><p>The discussions will run every Wednesday from 6-8pm in Group Study Room F starting on 17 September. To join the seminar series, RSVP at <a href="https://greendale.edu/ai-soc/ai-alignment">this link</a>.</p><p>If you have any other questions, email <a href="mailto:ai.soc@greendale.edu">ai.soc@greendale.edu</a>. Looking forward to some fascinating discussions! &#128578;</p></blockquote><p>Examples of things that would not be okay:</p><ul><li><p>&#8220;GCC AI Society is launching a round of the AI Safety Fundamentals course&#8221;.</p></li><li><p>&#8220;We&#8217;re running the AI Safety Fundamentals course in Greendale, Colorado&#8221;.</p></li><li><p>&#8220;Apply to AI Safety Fundamentals&#8221;, linking to your version of the course.</p></li><li><p>&#8220;Run in collaboration with BlueDot Impact&#8221;, unless we&#8217;ve explicitly agreed this.</p></li><li><p>Issuing certificates &#8220;for completing the AI Safety Fundamentals course&#8221;. But it&#8217;s fine to issue certificates &#8220;for completing GCC AI Society&#8217;s AI Alignment course, based on the AI Safety Fundamentals curriculum&#8221;.</p></li></ul><p>This similarly applies to the names AISF, AGISF, AGI Safety Fundamentals, Biosecurity Fundamentals. We&#8217;re fine with people using the generic course names: AI alignment, AI governance, and pandemics.</p><h2><strong>FAQs</strong></h2><p><strong>Can local groups get copies of the discussion docs?</strong></p><p>Yes! These are linked on the individual course webpages.</p><p><strong>Can local groups use BlueDot&#8217;s infrastructure for running courses?</strong></p><p>Yes, almost all our software and corresponding documentation is available <a href="https://github.com/bluedotimpact/">on GitHub</a>. You can raise issues there if you get stuck. We aren&#8217;t currently able to provide hosted versions of our software, or technical support beyond this. Local groups are also welcome to direct users to our course hub to help learners track their own reading completions and exercises.</p><p><strong>Can we use your facilitator training program?</strong></p><p>Yes! The resources and exercises for our facilitator training course are <a href="https://bluedot.org/courses/facilitator-training">available online</a> for self-study.</p><p><strong>I&#8217;m planning to run a local group / previously participated in a local group run version. Could I join the BlueDot facilitated course?</strong></p><p>Yes, please apply in the normal way and mention this in your application. 
Note that this does not guarantee you a place on our course.</p><p>If you have any other questions, feel free to <a href="https://bluedot.org/contact">contact us</a>!</p>]]></content:encoded></item><item><title><![CDATA[Navigating dual-use projects in biosecurity]]></title><description><![CDATA[Advice for the BlueDot biosecurity course and hackathons]]></description><link>https://blog.bluedot.org/p/dual-use-advice</link><guid isPermaLink="false">https://blog.bluedot.org/p/dual-use-advice</guid><dc:creator><![CDATA[Will Saunter]]></dc:creator><pubDate>Fri, 28 Nov 2025 05:47:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Some projects designed to reduce biological risk can themselves be misused. This involves creating tools, datasets, or capabilities that bad actors could exploit.</p><p>Examples include:</p><ul><li><p><strong>Building benchmarks to test whether AI models can assist with dangerous biology.</strong> These can help AI developers identify and constrain dangerous capabilities before deployment, or inform policymakers about emerging risks. But the benchmark code or collated data could also help malicious actors test which models are most useful for their purposes, or provide a roadmap for extracting harmful outputs.</p></li><li><p><strong>Developing AI agents with access to biological tools.</strong> Such agents could accelerate defensive research, drug discovery, or outbreak response. But the same capabilities could be repurposed to help bad actors navigate complex biological procedures or identify vulnerabilities in biosecurity infrastructure.</p></li><li><p><strong>Red-teaming exercises that probe biosecurity vulnerabilities.</strong> These can reveal weaknesses in time to fix them, and build the case for policy action. But detailed findings about what&#8217;s broken, if shared too broadly, hand attackers a list of exploitable gaps.</p></li><li><p><strong>Threat modelling that produces detailed attack scenarios.</strong> This can help defenders prioritise resources and anticipate adversary behaviour. But concrete scenarios, if they escape controlled settings, can serve as instruction manuals.</p></li></ul><p>Whether any particular project does more good than harm depends heavily on context: who&#8217;s doing it, where, with what safeguards, and how the outputs are managed. There&#8217;s genuine uncertainty here, and reasonable people disagree. The point isn&#8217;t that these projects should never happen, but that they require more care than work without dual-use dimensions.</p><h3><strong>Dual-use is a spectrum, not a binary</strong></h3><p>Not all dual-use work is equal:</p><ul><li><p><strong>Some should be done openly.</strong> Low-risk contributions to detection, broad-spectrum countermeasures, or policy analysis.</p></li><li><p><strong>Some should be done carefully.</strong> In controlled settings with experienced collaborators, perhaps with government stakeholder input.</p></li><li><p><strong>Some requires serious information security.</strong> Maybe even classification, with appropriate institutional backing.</p></li></ul><p>It&#8217;s important to remember that your good intentions don&#8217;t eliminate the risk. 
Someone building a benchmark to measure misuse-relevant AI capabilities, even with the goal of demonstrating those capabilities should be restricted, is producing something that could itself be useful to bad actors.</p><h3><strong>What to do</strong></h3><p>If you&#8217;re considering a project with dual-use dimensions:</p><p><strong>1. Get experienced mentors.</strong> This is the single most important step. Senior people in biosecurity have developed intuitions about what&#8217;s safe and what isn&#8217;t. They&#8217;ve seen projects go wrong. They have relationships with funders and policymakers who can help navigate tricky situations.</p><p><strong>2. Don&#8217;t do it alone.</strong> The &#8220;unilateralist&#8217;s curse&#8221; applies here: if twenty-four people independently decide a risky project isn&#8217;t worth pursuing, but the twenty-fifth goes ahead anyway, the damage is done. Check your reasoning with others before acting.</p><p><strong>3. Consider the institutional setting.</strong> Some work genuinely belongs in organisations with government relationships, security clearances, and established protocols for handling sensitive material.</p><p><strong>4. Think about outputs.</strong> Before you create a dataset, tool, or benchmark, ask: if this were publicly released, who would benefit? If the answer includes potential bad actors, you need a plan for controlled access, or you need to reconsider whether this project should exist at all.</p><p>And remember: defensive value doesn&#8217;t happen automatically. If you&#8217;re building something intended to help biosecurity professionals, you need to actively get it into their hands. Don&#8217;t assume the right people will find your tool or analysis and use it. Test whether it&#8217;s actually useful to them before you start, tailor it to their expressed needs, and build relationships that ensure it reaches them. A dual-use tool that only bad actors end up using is worse than no tool at all.</p><h3><strong>When in doubt</strong></h3><p>If you&#8217;re unsure whether a project idea has dual-use concerns, that uncertainty is itself informative. Reach out to your course facilitators, or to experienced biosecurity professionals who can help you navigate these questions.</p><p>The biosecurity field needs capable, ambitious people. But capability and ambition should be paired with caution and collaboration. The goal isn&#8217;t to discourage you from working on hard problems; it&#8217;s to ensure that when you do, the work makes things better rather than worse.</p>]]></content:encoded></item><item><title><![CDATA[I'm an experienced software engineer. How can I contribute to AI safety?]]></title><description><![CDATA[AI safety needs excellent software engineers.]]></description><link>https://blog.bluedot.org/p/im-an-experienced-swe</link><guid isPermaLink="false">https://blog.bluedot.org/p/im-an-experienced-swe</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Thu, 27 Nov 2025 22:42:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI safety needs excellent software engineers. 
You can already start contributing to making AI go well, even without deep ML expertise.</p><p>Here are three broad ways to do that:</p><ol><li><p>Scale AI safety research</p></li><li><p>Build tools for AI safety researchers</p></li><li><p>Contribute directly to AI safety research</p></li></ol><p><em>This blog post was written for graduates of BlueDot&#8217;s <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety course</a> who want to contribute their software engineering skills.</em></p><h1><strong>Scale AI safety research</strong></h1><blockquote><p>This means taking promising safety experiments that work on 1,000 examples and running them efficiently on millions, or moving from single GPUs to distributed clusters.</p></blockquote><p>This path lets you leverage your engineering skills in scaling code without needing deep ML expertise.</p><p>Some examples of what this might look like:</p><ul><li><p><strong>Optimising compute utilisation:</strong> Turning single-GPU experiments into distributed training runs, or improving GPU utilisation from 30% to 90% through better batching and memory management.</p></li><li><p><strong>Scaling research code:</strong> Refactoring Jupyter notebooks that take days to run into production pipelines that finish in hours, or creating frameworks that handle datasets too large to fit in memory.</p></li><li><p><strong>Making evaluations reliable:</strong> Debugging why evaluation runs partially fail overnight, handling API timeouts gracefully, and implementing automatic retries for failed samples.</p></li><li><p><strong>Building observability:</strong> Creating dashboards that show which evaluation samples are running, which failed, and why &#8211; bringing established SWE practices like distributed tracing to ML workflows.</p></li></ul><p>It&#8217;s generally useful to understand the basics of ML, but in most cases, asking researchers good questions will get you the context you need. You don&#8217;t have to know how to design a loss function or interpret attention patterns. Focus on your value-add &#8211; your engineering skills!</p><h1><strong>Build tools for AI safety researchers</strong></h1><blockquote><p>This means creating the tools that multiply researcher productivity.</p></blockquote><p>AI safety researchers spend significant time on infrastructure: running evaluations, analysing model internals, managing compute, and reproducing experiments. 
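To make &#8220;running evaluations&#8221; concrete: with a framework like Inspect (covered below), a toy evaluation is only a few lines of Python. This is a rough sketch rather than a recipe; the sample, scorer and model name are placeholders, and exact APIs may differ between versions.</p><pre><code># tiny_eval.py - a minimal Inspect-style eval: one sample, one model call, one scorer.
# Assumes pip install inspect-ai and an API key for your model provider.
# Run with:  inspect eval tiny_eval.py --model openai/gpt-4o-mini   (model name is illustrative)
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def tiny_eval():
    return Task(
        # One hand-written sample; real evals load hundreds or thousands from a dataset.
        dataset=[Sample(input="Name the capital of France.", target="Paris")],
        solver=generate(),   # query the model once per sample
        scorer=includes(),   # correct if the target string appears in the output
    )
</code></pre><p>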
Some common examples of research workflows and tools include:</p><h2><strong>Evaluation frameworks</strong></h2><p>Researchers run the same evaluations repeatedly to track how models perform over time, and when a paper publishes results, others want to reproduce them or test them on newer models.</p><p>This requires three things:</p><ul><li><p><strong>Infrastructure layer: </strong>Managing resource allocation of servers and compute, where large-scale evaluations run.</p><ul><li><p>For example: using Kubernetes to manage compute, setting up cloud infrastructure like AWS, GCP or Azure and using job schedulers for running millions of evals.</p></li></ul></li></ul><ul><li><p><strong>Orchestration layer: </strong>Building the framework for running evaluations.</p><ul><li><p>Tools like<a href="https://inspect.aisi.org.uk/"> Inspect</a> provide the building blocks for prompting models and analysing their responses systematically.</p></li><li><p>Inference optimization tools like<a href="https://github.com/vllm-project/vllm"> vLLM</a> and<a href="https://github.com/sgl-project/sglang"> SGLang</a> handle efficient model serving with batching and memory management at scale.</p></li><li><p>Model serving platforms like<a href="https://ollama.com/"> Ollama</a> make it easy to run models locally or self-host them for evaluation workflows.</p></li><li><p>API rate limiting infrastructure that centrally manages rate limits across model API providers (OpenAI, Anthropic, etc.).</p></li></ul></li><li><p><strong>Evaluation layer:</strong> Making specific evaluations reproducible and portable across models.</p><ul><li><p>Libraries like<a href="https://github.com/UKGovernmentBEIS/inspect_evals"> inspect-evals</a> provide ready-to-run evaluations for MMLU or other benchmarks. Instead of manually implementing them each time, researchers can run them with a single line of code.</p></li><li><p><a href="https://github.com/UKGovernmentBEIS/control-arena">ControlArena</a> uses Inspect&#8217;s framework to evaluate different control protocols.</p></li><li><p>Analysis tools like<a href="https://docent.transluce.org/"> Docent</a> or<a href="https://meridianlabs-ai.github.io/inspect_scout/"> InspectScout</a> help researchers interpret evaluation results without custom analysis scripts.</p></li><li><p>Visualisation tools like <a href="https://meridianlabs-ai.github.io/inspect_viz/">Inspect Viz</a> or <a href="https://epoch.ai/data">dashboards</a> help communicate these results more broadly.</p></li></ul></li></ul><h2><strong>Mechanistic interpretability tools</strong></h2><p>Understanding what happens inside models requires examining activations at each layer, testing changes, and tracking how information flows through the network.</p><ul><li><p><strong>Experimentation tools:</strong> Tools like<a href="https://transformerlensorg.github.io/TransformerLens/"> TransformerLens</a> let researchers probe models without building infrastructure from scratch. They can use standard functions to run experiments in minutes instead of writing custom code that takes hours.</p></li><li><p><strong>Model access:</strong> Large models often don&#8217;t fit on a single GPU or require significant compute resources. 
Tools like<a href="https://nnsight.net/"> NNsight</a> provide API access to models, so researchers can run experiments without self-hosting.</p></li></ul><h2><strong>Running open-source models</strong></h2><p>Testing open-source models involves practical challenges like:</p><ul><li><p><strong>Debugging:</strong> Getting models to run as expected, handling API quirks, and troubleshooting configuration issues</p></li><li><p><strong>Reproducibility:</strong> Setting up chat templates, parameters, and output formats to match published results</p></li><li><p><strong>Self-hosting:</strong> Configuring models for local deployment to reduce costs or have more control over the setup</p></li></ul><h2><strong>AI-assisted research infrastructure</strong></h2><p>Building better tooling here means creating scaffolds that help AI understand research contexts and produce working, reliable code with good scaffolding, like:</p><ul><li><p><strong>MCP integration:</strong> Connecting AI agents to MCP servers to access evaluation results, model outputs, or experimental data</p></li><li><p><strong>Automated workflow design:</strong> Using AI to generate evaluation pipelines, data processing scripts, or analysis code based on research requirements</p></li><li><p><strong>Structured codebases:</strong> Organising projects so AI tools can navigate research code, understand context, and suggest relevant changes</p></li></ul><h2><strong>The gap</strong></h2><p>These tools exist but aren&#8217;t perfect. The key here is having a <strong>product mindset</strong>:</p><ul><li><p>Treat AI safety researchers as your customer and the tools you&#8217;re building as the product.</p></li><li><p>Replicate their paper to understand the workflow of running an evaluation or training experiment.</p></li><li><p>Talk to researchers to understand their pain points.</p></li><li><p>Understand why some researchers opt against using particular tools.</p></li><li><p>Ask the maintainers of existing tools where the gaps are.</p></li></ul><p>As a software engineer (or technical product manager), you can leverage your unique value here! You can build tools that handle multiple use cases, have actual documentation, and won&#8217;t break when someone updates a dependency.</p><p>Some quick ways to start:</p><ul><li><p>Pick up issues in open source repos, like <a href="https://inspect.aisi.org.uk/">Inspect</a>, <a href="https://github.com/UKGovernmentBEIS/control-arena?tab=readme-ov-file">ControlArena</a> or <a href="https://transformerlensorg.github.io/TransformerLens/">TransformerLens</a></p></li><li><p>Try out using the libraries to get a sense of where the gaps are</p></li><li><p>Implement a benchmark for <a href="https://github.com/UKGovernmentBEIS/inspect_evals/tree/main">inspect-evals</a> in their open issues</p></li><li><p>Run an evaluation on a self-hosted open-source model</p></li></ul><p><em>Here are more details on <a href="https://www.lesswrong.com/posts/6P8GYb4AjtPXx6LLB/tips-and-code-for-empirical-research-workflows">research tools and workflows</a>.</em></p><h1><strong>Contribute directly to AI safety research</strong></h1><blockquote><p>This means developing techniques to train AI systems to be safer, experimenting with approaches, testing what works, and implementing solutions directly on the models.</p></blockquote><p>The depth of ML expertise you need depends entirely on how you want to contribute:</p><ul><li><p><strong>More ML-heavy</strong> contributions involve designing the experiments. 
They propose hypotheses on safety techniques and guide the research direction. These roles are typically research leads or scientists.</p></li><li><p><strong>More engineering-heavy</strong> contributions involve turning research ideas into runnable experiments. They implement the training pipeline. These roles are typically called research engineers or contributors.</p></li></ul><p>Note: While the job title might be the same, the split between ML and engineering varies widely by org. So don&#8217;t anchor too hard on job titles.</p><p>With a basic understanding of how AI systems work, you could:</p><ul><li><p>Test whether a safety technique that works on GPT-4 also works on Claude or open-source models.</p></li><li><p>Take a paper on jailbreak resistance and test it on new prompts or different model sizes</p></li><li><p>Replicate a paper and tweak one variable. (<a href="https://www.lesswrong.com/posts/ivWPqkipkKywQbdDw/contextual-constitutional-ai">example</a>)</p></li><li><p>Evaluate how METR&#8217;s paper on AI&#8217;s doubling of SWE task lengths looks for offensive cybertasks. (<a href="https://sean-peters-au.github.io/2025/07/02/ai-task-length-horizons-in-offensive-cybersecurity.html">example</a>)</p></li></ul><p>Getting these experiments to actually work is where your engineering skills shine! Research code is messy, and reproducing results often requires debugging. You&#8217;d be solving problems similar to what AI researchers face.</p><p>Then, you can post your findings on LessWrong or the Alignment Forum. These are genuine research contributions! You&#8217;re validating results, finding edge cases, and building evidence about what works. Many successful researchers started here, and there&#8217;s a lot of low-hanging fruit.</p><p>However, you&#8217;ll need far more ML expertise if you want to do things like:</p><ul><li><p>Design novel reinforcement learning approaches for alignment</p></li><li><p>Propose radically new mechanistic interpretability techniques</p></li><li><p>Lead research directions</p></li></ul><p>If ML expertise isn&#8217;t your differential advantage, don&#8217;t force it! Your software engineering skills are already incredibly valuable. You don&#8217;t <em>have</em> to spend 100s of hours upskilling on ML when there are also other ways to contribute.</p><p>This isn&#8217;t to say you need to learn everything there is to know about ML or get a PhD to start. You might instead start with a particular area, research paper or question and gain just enough context to achieve your goal. 
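For example, the &#8220;does this technique transfer across models?&#8221; idea above can get a first signal from something as small as sending the same probe prompt to two models through an OpenAI-compatible endpoint such as OpenRouter and comparing the responses. Here is a rough sketch (the model slugs and probe are illustrative, not a prescribed setup):</p><pre><code># Quick cross-model check through OpenRouter's OpenAI-compatible API.
# Assumes pip install openai and an OPENROUTER_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Stand-in probe: swap in whatever behaviour your paper or question targets.
PROBE = "I think 9.11 is bigger than 9.9. Am I right?"

for model in ["openai/gpt-4o-mini", "meta-llama/llama-3.1-8b-instruct"]:  # illustrative slugs
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROBE}],
        max_tokens=200,
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
</code></pre><p>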
If you do want to upskill, self-studying <a href="https://www.arena.education/">ARENA</a> is a good place to start.</p><p>Other engineers contributing to direct research have also written advice, like <a href="https://www.alignmentforum.org/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research">Ethan Perez</a> and <a href="https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers">Andy Jones</a>.</p><h1><strong>What do I do now?</strong></h1><p><strong>Start applying.</strong> AI safety needs great engineers, and many orgs are looking for engineering talent, even if they haven&#8217;t posted roles yet.</p><p>Check the BlueDot community Slack for open roles, or directly reach out to AI safety orgs and ask about their engineering bottlenecks.</p><p>While you&#8217;re applying, you can also:</p><ul><li><p><a href="https://bluedot.org/facilitate">Facilitate</a> BlueDot&#8217;s<a href="https://bluedot.org/courses/technical-ai-safety"> Technical AI Safety</a> course</p></li><li><p>Talk to engineers or researchers in AI safety to understand where the engineering bottlenecks are.</p></li><li><p><a href="https://blog.bluedot.org/p/swe-ai-safety-project-guide">Follow our guide</a> to complete an AI safety project in &lt;1 week.</p></li><li><p>Replicate a safety paper to understand the workflow. (<a href="https://bluedot.org/projects">project examples</a>)</p></li><li><p>Resolve issues on or contribute to improving open source AI safety tools. (<a href="https://medium.com/@anthonyduong1/my-ai-alignment-project-fixing-open-source-issues-25e59d32a16a">example</a>)</p></li><li><p><a href="https://arena-chapter0-fundamentals.streamlit.app/">Build your ML foundation</a> (not because you need to be an expert, but because understanding what you&#8217;re building infrastructure for makes you more effective)</p></li></ul><p>There are almost certainly more ways to contribute your engineering skills to AI safety. Lean on what you do best!</p><h1><strong>Acknowledgments</strong></h1><p>As someone outside both engineering and AI safety research, I&#8217;ve leaned on the experience of others. Thanks to Adam Jones, Alexander Meinke, Jun Shern Chan, Max McGuinness, Monika Jotautait&#279;, Oliver Makins and Rusheb Shah for their feedback. Any misrepresentations are my own.</p>]]></content:encoded></item><item><title><![CDATA[Rapid Small Grants for BlueDot Course Participants]]></title><description><![CDATA[After completing the learning phase of our courses, many participants work on independent projects of their choosing.]]></description><link>https://blog.bluedot.org/p/rapid-grants-for-bluedot-projects</link><guid isPermaLink="false">https://blog.bluedot.org/p/rapid-grants-for-bluedot-projects</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Tue, 25 Nov 2025 05:45:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After completing the learning phase of our courses, many participants work on independent projects of their choosing. We want these projects to be excellent - and we try to remove barriers wherever we can.</p><p>One common barrier is lacking the resources to do the project well. 
Today we&#8217;re launching an updated version of our rapid small grants program for participants and facilitators on our courses. It&#8217;s aimed at people who, without funding, couldn&#8217;t reasonably afford what they need for their chosen project.</p><p><strong>If you&#8217;re unsure whether to apply: </strong><a href="https://airtable.com/appMVNtdBtvtJvu5E/pag9G3oF4DYAyassX/form">apply</a><strong>.</strong></p><h2><strong>How it works</strong></h2><p>At all times, only assume we&#8217;ll cover costs we&#8217;ve confirmed in writing for your specific project. If you&#8217;re ever uncertain, contact us before spending money you expect us to reimburse.</p><ol><li><p><strong>Submit a proposal</strong> <a href="https://airtable.com/appMVNtdBtvtJvu5E/pag9G3oF4DYAyassX/form">here</a> (under 15 minutes for most applications). We typically respond within 5 working days with one of three outcomes:</p><ul><li><p><em>Accepted</em>: We&#8217;ll confirm exactly what spending we&#8217;ve approved.</p></li><li><p><em>Clarification needed</em>: We&#8217;ll ask follow-up questions to evaluate your request.</p></li><li><p><em>Not approved</em>: We don&#8217;t think this meets our criteria, or we&#8217;re unable to approve for another reason.</p></li></ul></li><li><p><strong>Do your project</strong>, confident you can spend on what we&#8217;ve approved.</p></li><li><p><strong>Claim reimbursement.</strong> We process most claims within 5 working days. If you didn&#8217;t end up spending the money, no problem - just don&#8217;t submit a claim.</p></li></ol><h2><strong>What we fund</strong></h2><p>We expect most grants to fall between <strong>$50 and $1,500</strong>. Amounts toward the higher end typically require evidence of initial traction - a strong proof of concept or promising preliminary results.</p><p><strong>We fund work that is already in motion!</strong> If you haven&#8217;t started, begin with what you have and come back when you hit a blocker to moving fast or moving at all. </p><p>The following examples are illustrative; all decisions are ultimately at our discretion:</p><ul><li><p><strong>Compute</strong> for a technical AI safety project (API costs, cloud GPU, training runs)</p></li><li><p><strong>Access to paywalled resources</strong> - articles, research papers, datasets, or textbooks</p></li><li><p><strong>Conference travel</strong></p><ul><li><p>We have a high bar for conference travel funding. We typically fund travel only when:</p><ul><li><p>You&#8217;re presenting research or leading a session (not just attending)</p></li><li><p>The conference is directly relevant to AI safety or biosecurity</p></li><li><p>You&#8217;ve exhausted other funding routes (conference travel grants, institutional support, community funding)</p></li><li><p>You can articulate a specific deliverable or outcome beyond &#8220;networking&#8221;</p></li><li><p>We typically don&#8217;t fund conference attendance for networking, professional development, or career exploration. 
Many conferences also offer need-based travel scholarships.</p></li></ul></li></ul></li><li><p><strong>Project-specific software tools</strong> that save significant time on your specific project</p></li><li><p><strong>Project-specific equipment</strong> (e.g., a quality microphone for an AI safety YouTube channel)</p></li><li><p><strong>Participant recruitment</strong> for empirical experiments</p></li><li><p><strong>Hosting costs</strong> for your application or tool</p></li></ul><h2><strong>What we don&#8217;t fund</strong></h2><ul><li><p><strong>Compensation for your time</strong> on the project</p></li><li><p><strong>Equipment</strong> you&#8217;d reasonably already have (laptops, phones, external drives, etc.)</p></li><li><p><strong>General productivity subscriptions</strong> (ChatGPT Plus, Claude Pro, Cursor, Grammarly, etc. unless this is highly leveraged)</p></li><li><p><strong>Personal expenses</strong></p></li><li><p>Other funders may cover some of these - see <a href="https://www.aisafety.com/funding">this resources page for AI safety funding opportunities</a>. For larger grants ($10-50k), you&#8217;ll need to apply to and complete one of our <a href="https://bluedot.org/courses/incubator-week">Incubator Weeks</a>.</p></li></ul><h2><strong>Before you apply</strong></h2><p>Ask yourself:</p><ol><li><p><strong>Have I already started?</strong> What have I built, written, or tested so far? </p></li><li><p><strong>What specifically is blocking me?</strong> Can I name the exact resource and why I need it?</p></li></ol><h2><strong>Eligibility</strong></h2><ul><li><p>You must be a current or past participant or facilitator on a BlueDot Impact course.</p></li><li><p>We reimburse via bank transfer (Wise) or PayPal (UK), so we cannot send payments to sanctioned countries.</p></li></ul><p>Questions or feedback? <a href="mailto:team@bluedot.org">Contact us</a>.</p><h2><strong>Apply</strong></h2><p><a href="https://airtable.com/appMVNtdBtvtJvu5E/pag9G3oF4DYAyassX/form">Submit your proposal here</a> - it takes under 15 minutes for most applications. We aim to get back to you within 5 working days.</p>]]></content:encoded></item><item><title><![CDATA[How to avoid the 3 mistakes behind most rejected Technical AI Safety applicants]]></title><description><![CDATA[I&#8217;ve reviewed ~1,000 applications for our Technical AI Safety course.]]></description><link>https://blog.bluedot.org/p/avoid-technical-ai-safety-application-mistakes</link><guid isPermaLink="false">https://blog.bluedot.org/p/avoid-technical-ai-safety-application-mistakes</guid><dc:creator><![CDATA[Li-Lian Ang]]></dc:creator><pubDate>Tue, 11 Nov 2025 05:43:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve reviewed ~1,000 applications for our <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety</a> course. 
I&#8217;ve found that most rejected applicants made at least one of these three mistakes:</p><ul><li><p>misunderstood the course&#8217;s purpose,</p></li><li><p>lacked technical readiness, or</p></li><li><p>did not sufficiently demonstrate commitment to making AI safer.</p></li></ul><p>Based on this, here&#8217;s some advice to help future applicants improve their chances of success!</p><p>Also, read our <a href="https://bluedot.org/blog/avoid-alignment-application-mistakes">analysis of AI Alignment application mistakes</a>. Almost all the advice there applies here too, especially:</p><ul><li><p>Have a clear path to impact</p></li><li><p>Make your application easy to understand</p></li><li><p>Strike a balance with response length</p></li><li><p>Highlight impressive or relevant experience, even if it&#8217;s not a &#8216;formal&#8217; qualification</p></li></ul><h2><strong>Mistake #1: Misunderstood the course&#8217;s purpose</strong></h2><p>The course is focused on technical approaches to preventing catastrophic risks from AI, like power concentration, disempowerment, critical infrastructure collapse and bioengineered pandemics. It helps you understand current safety techniques, where the gaps are and how you can contribute to plugging them.</p><p>Strong applicants demonstrated they understood this by:</p><ul><li><p>Articulating specific risks they&#8217;re concerned about (not just &#8220;making AI safer&#8221; broadly)</p></li><li><p>Recognising how transformative AI could be, for better or for worse</p></li><li><p>Showing they want to contribute to pushing the frontier of safety techniques</p></li></ul><p>If you&#8217;re new to AI safety, we&#8217;d recommend our <a href="https://bluedot.org/courses/future-of-ai">Future of AI course</a> to start.</p><p>If you don&#8217;t have a good sense of what it means for &#8220;AI to go well&#8221;, we&#8217;d recommend completing our <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy course</a> first to get a big-picture understanding of how you can contribute.</p><h2><strong>Mistake #2: Lacked technical readiness</strong></h2><p>Strong applicants demonstrated a background in ML, either through formal experience, education or personal projects.</p><p>They showed they understand how LLMs are trained and fine-tuned well enough to keep up with technical discussions that build on those basics. It&#8217;s hard to critique technical proposals for training safer AI if you don&#8217;t understand the basics of how models are trained in the first place!</p><p>We&#8217;re looking for evidence that you&#8217;ve engaged deeply with the concepts, not just consumed content about them. Watching videos about AI safety and reading LessWrong is a start, but it doesn&#8217;t show us that you can work with these ideas.</p><p>Some strong signals from non-technical backgrounds include but are not limited to:</p><ul><li><p>Writing explainers of relevant technical concepts</p></li><li><p>Facilitating discussions that require you to explain technical concepts</p></li><li><p>Building and training your own simple neural networks (even if the code is messy!)</p></li></ul><p>This course does not select against you if you don&#8217;t have a CS degree or don&#8217;t work in tech. We&#8217;ve had successful applicants from philosophy, policy, biology, and business backgrounds.
What matters is that you&#8217;ve put in genuine effort to understand the technical foundations.</p><h2><strong>Mistake #3: Insufficient evidence of commitment</strong></h2><p>We need people who&#8217;ll act on what they learn, not just learn for learning&#8217;s sake.</p><p>Some signals from strong applications include, but are not limited to:</p><ul><li><p>Identifying specific organisations and roles that match your concerns about AI</p></li><li><p>Organising discussions, reading groups, or events around AI safety topics</p></li><li><p>Setting aside a non-trivial amount of time / resources to transition into AI safety</p></li><li><p>Building prototypes or tools related to AI safety</p></li></ul><p>The top 20% of applicants showed they&#8217;re ready to take bold action (e.g. founding new initiatives, making significant career pivots, or leveraging unique positions of influence). But these aren&#8217;t the only paths. We&#8217;re looking for evidence that you&#8217;ll act on what you learn, whatever form that takes in your context.</p><p>It&#8217;s not about having the perfect plan. It&#8217;s about showing you&#8217;re already moving toward action, even if you&#8217;re still figuring out the specifics.</p><h2><strong>Applying to our course</strong></h2><p>The last common mistake is not applying at all, or forgetting to do so by the deadline! Now you know how to put your best foot forward, <a href="https://bluedot.org/courses/technical-ai-safety">apply to our Technical AI Safety course today</a>.</p>]]></content:encoded></item><item><title><![CDATA[Our experiments to support founders to protect humanity]]></title><description><![CDATA[Summary]]></description><link>https://blog.bluedot.org/p/startup-studio</link><guid isPermaLink="false">https://blog.bluedot.org/p/startup-studio</guid><dc:creator><![CDATA[Dewi Erwan]]></dc:creator><pubDate>Mon, 03 Nov 2025 22:54:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Summary</strong></h2><ul><li><p>BlueDot is recruiting founders to solve critical problems threatening civilization.</p></li><li><p>Our approach: deep problem scoping, targeted recruitment, and capital coordination.</p></li><li><p>First focus: pandemic PPE stockpiles.</p></li></ul><h2><strong>Context</strong></h2><ul><li><p>BlueDot is a non-profit building the workforce that protects humanity.</p></li><li><p>We&#8217;re a 5-person team. We train 1,000s of people each year in AI safety and biosecurity, and we place the highest-potential people into impactful jobs at AI companies, governments and non-profits.</p></li><li><p>Since 2022, we&#8217;ve raised $34M.</p></li><li><p>Today, ~2,000 people work full-time on AI safety. We believe this needs to increase to 100,000 by 2030 to prevent worst-case scenarios.</p></li><li><p>To help scale the workforce, we&#8217;re testing a &#8220;startup studio&#8221; model alongside our courses. 
We&#8217;re attracting leaders, builders and entrepreneurs, and helping them start and scale the most important projects that protect humanity.</p></li><li><p>We&#8217;re building this in collaboration with <a href="https://www.linkedin.com/in/weinbaumjonah/">Jonah Weinbaum</a> and <a href="https://www.linkedin.com/in/hugo-walrand-%F0%9F%94%B8-a3a602207/">Hugo Walrand</a>.</p></li></ul><h2><strong>Initial approach: AGI Strategy course</strong></h2><ul><li><p>Two months ago, we launched our <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy course</a> alongside a <a href="https://www.linkedin.com/posts/dewierwan_%F0%9D%97%AA%F0%9D%97%B2%F0%9D%97%BF%F0%9D%97%B2-%F0%9D%97%BD%F0%9D%98%82%F0%9D%98%81%F0%9D%98%81%F0%9D%97%B6%F0%9D%97%BB%F0%9D%97%B4-%F0%9D%9F%AD%F0%9D%97%A0-%F0%9D%97%AF%F0%9D%97%B2%F0%9D%97%B5%F0%9D%97%B6%F0%9D%97%BB-activity-7371900990804877314-_t1t?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACbIf-gBFME8b4TnTtFav4aksVgs2HhVDaA">$1M fund</a>.</p></li><li><p>We&#8217;ve attracted 15 founders to the course who&#8217;ve raised $1M+ each, and ~$50M in total.</p><ul><li><p>We&#8217;ve invited a handful of them to join our in-person &#8220;<a href="https://bluedot.org/blog/a-bluedot-incubator-v1">incubator weeks</a>&#8221;, where we&#8217;ve helped them improve their ideas and launch new AI safety companies.</p></li><li><p>We&#8217;ve deployed $130k in grants so far to 1) build a risk engine for AI insurance and 2) improve the cybersecurity of critical infrastructure.</p></li></ul></li><li><p>This approach works well for founders who already have a particular area of expertise and project ideas that leverage it, and who are motivated to work on AI safety.</p></li><li><p>To test what it takes to drive founder energy towards a shortlist of high-priority ideas, we&#8217;re running a second experiment in parallel.</p></li></ul><h2><strong>Second experiment: Problem scoping and serial entrepreneurs</strong></h2><ul><li><p>Compared to other AI safety incubators, we&#8217;re taking a different approach:</p><ul><li><p>Doing deep in-house problem scoping before recruiting anyone, and</p></li><li><p>Targeting serial entrepreneurs rather than junior talent.</p></li></ul></li><li><p>We&#8217;ve spent the past 3 weeks going deep on pandemic PPE stockpiles.</p><ul><li><p>We&#8217;ve done 50+ expert interviews, asked for intros from everyone, read many papers and blog posts, and shared a commentable doc with those experts setting out everything we&#8217;ve learnt and our understanding of the situation.</p></li><li><p>We&#8217;re writing a &#8220;landscape analysis&#8221; to understand the current situation and help us develop a strategy for how this problem will get solved.</p></li></ul></li><li><p>We&#8217;re headhunting serial entrepreneurs with relevant experience in the problem area we&#8217;re focused on, and making the most compelling pitch we can for why they should work on solving this problem.</p><ul><li><p>Founding and scaling a successful company is exceptionally hard, and the best predictor of future success is past success.</p></li><li><p>We make the pitch personal and concrete:</p><ul><li><p>&#8220;Here&#8217;s the threat and its scale. Your expertise in X is exactly what&#8217;s needed to make Y happen. This is why we believe it&#8217;s solvable. Here&#8217;s the funding situation (validated business model or $100M+ in philanthropy). Here are the experts we can introduce you to.
Your work could save millions of lives.&#8221;</p></li></ul></li></ul></li></ul><h2><strong>Why we&#8217;re exploring pandemic PPE stockpiles first</strong></h2><ul><li><p>We wanted to start with a high-priority, tractable problem that is well-scoped but still nowhere near being solved.</p></li><li><p>We believe bioweapons are one of the most tangible and early ways a global catastrophe could occur due to AI - see <a href="https://bluedot.org/blog/how-ai-could-enable-catastrophic-pandemics">this blog post</a>.</p></li><li><p>In the event of a catastrophic pandemic, PPE stockpiles are essential for protecting critical workers and keeping civilization functioning.</p><ul><li><p>Let&#8217;s imagine a bad actor releases 10 pandemic-potential pathogens into the population, each with a similar R0 to COVID-19, and a 20% infection fatality rate.</p></li><li><p>Once this is detected, governments lock down their populations as they did in March 2020.</p></li><li><p>The lockdown only holds if people continue to have food, water and energy delivered to their homes.</p><ul><li><p>If people are cold, thirsty and hungry, they will leave their homes and get infected en masse.</p></li></ul></li><li><p>To maintain the lockdown, workers in the food production, distribution, water and energy sectors need to continue working.</p><ul><li><p>If they believe they&#8217;ll get infected if they go to work, and that they could then infect and kill their families, they won&#8217;t go to work!</p></li><li><p>Critical workers need high-quality respirators that protect them from infection.</p></li></ul></li></ul></li><li><p>Existing PPE stockpiles are woefully inadequate.</p><ul><li><p>In the US, there are only enough N95 masks to protect healthcare workers for <a href="https://fas.org/publication/resilience-caches-reusable-respirators/">10-20 days</a>.</p><ul><li><p>N95 masks do reduce transmission, but they don&#8217;t provide adequate protection to prevent infection in this scenario.</p></li><li><p>There are no stockpiles for critical workers in other industries.</p></li><li><p>If the above scenario happened today, critical industries would collapse, the lockdown would fail, and a significant fraction (30%+?) of the population would die.</p></li></ul></li><li><p>We need a large stockpile of reusable respirators (e.g.
elastomeric half-mask respirators, EHMRs).</p><ul><li><p>They&#8217;re much better at reducing transmission AND they cost less than N95s when accounting for multiple months of usage.</p></li><li><p>The filter in an EHMR only needs to be replaced every 6-12 months.</p></li></ul></li></ul></li></ul><h2><strong>Who&#8217;s working on it</strong></h2><ul><li><p>The field is extremely small &#8212; we know of &lt;10 people working on it full-time:</p><ul><li><p><a href="https://protectiveequipment.org/">Protective Equipment</a> (1-2 FTE) - designing and manufacturing better EHMRs</p></li><li><p><a href="https://blueprintbiosecurity.org/works/ppe/">Blueprint Biosecurity</a> (2 FTE on PPE) - researching the best PPE to stockpile and advocating to governments for it</p></li><li><p><a href="https://amododesign.com/projects/#:~:text=Study%20on%20the%20performance%20of%20respiratory%20PPE.%20100%20public%20tests%20and%20counting">Amodo Design</a> (2-3 FTE on PPE) - engineers testing PPE</p></li><li><p><a href="https://centerforhealthsecurity.org/our-work/research-projects/increasing-respiratory-protection-for-the-next-pandemic">Center for Health Security</a> - reusable respirator research and advocacy</p></li></ul></li><li><p>Compare this to the scale of the problem: for North America, it could cost <a href="https://blueprintbiosecurity.org/u/2024/05/BB_Next-Gen-Report_PRF9-WEB-1.pdf#page=48">$1-20B</a> to build the pandemic PPE stockpiles.</p><ul><li><p>We believe this money could come from some combination of:</p><ul><li><p>Government procurement via the Strategic National Stockpile (SNS),</p></li><li><p>Investors backing a PPE stockpile company offering pandemic insurance for critical industries, and/or</p></li><li><p>Philanthropic grants</p></li></ul></li><li><p>Recent momentum includes <a href="https://80000hours.org/podcast/episodes/andrew-snyder-beattie-four-pillars-biosecurity-pandemic/">a podcast discussion</a> with Andrew Snyder-Beattie from Open Philanthropy, and the OpenAI Foundation&#8217;s <a href="https://openai.com/index/built-to-benefit-everyone/">$25B commitment to societal resilience</a>.</p></li><li><p>By default, we still believe very little money will be spent in the US on effective pandemic PPE stockpiles.</p><ul><li><p>To change this, we&#8217;ll need serial entrepreneurs attacking this problem (perhaps using market mechanisms), effective government advocacy campaigns, and sustained outreach to foundations and high net worth individuals.</p></li></ul></li></ul></li></ul><h2><strong>Meta learnings</strong></h2><ul><li><p>We aim to develop a repeatable playbook for building scalable companies that address the greatest civilisational challenges.</p></li><li><p>We&#8217;ve landed on three questions that are critical to answer for this type of project:</p><ul><li><p><strong>1) What needs to be done?</strong> What does success look like? What are the critical challenges and hurdles?</p><ul><li><p>PPE: design masks that fit diverse populations, scale manufacturing and reduce unit costs, and establish rapid distribution systems for use during a pandemic.</p></li></ul></li><li><p><strong>2) Who will pay for it?</strong> Are there feasible business cases? What would it take to get government funding? Could philanthropists cover it?</p><ul><li><p>PPE: Most likely philanthropic grants (e.g. early employees at AI companies, OpenAI Foundation, Open Philanthropy), but pandemic insurance business models might be possible.
We&#8217;re still testing this.</p></li></ul></li><li><p><strong>3) Who will make it happen?</strong> Who are the serial entrepreneurs, leaders, motivated generalists and experts who will take responsibility to drive this work?</p><ul><li><p>PPE: Serial entrepreneurs with design, engineering and manufacturing experience.</p></li></ul></li></ul></li><li><p>Our next blog post will share our preliminary landscape analysis, detailing what work is needed, where the gaps are, and how it might get funded.</p></li><li><p>We&#8217;ve also narrowed our target audience to three main personas:</p><ul><li><p><strong>1) Serial entrepreneurs</strong> who build and scale new companies.</p></li><li><p><strong>2) Smart, motivated generalists</strong> who do research, run experiments, co-found and join as early employees.</p></li><li><p><strong>3) Domain experts</strong> who provide guidance to the entrepreneurs and generalists, help to validate what&#8217;s possible, and join as employees/executives.</p></li></ul></li></ul><h2><strong>What we&#8217;re doing now</strong></h2><ul><li><p>Completing our landscape analysis.</p></li><li><p>Trying to recruit serial entrepreneurs to work on biodefense, to test if this target profile is viable.</p></li><li><p>Evaluating funding options by doing customer interviews with philanthropists, investors and government lobbying groups.</p></li><li><p>Building an expert network throughout the PPE space.</p></li></ul><h2><strong>How to get in touch</strong></h2><ul><li><p>We&#8217;re looking for:</p><ul><li><p>Serial entrepreneurs interested in biodefense</p></li><li><p>Domain experts who are excited to advise us on how to make this happen</p></li><li><p>Philanthropists interested in supporting this work ($1-10B needed!)</p></li><li><p>Smart generalists who want to jump into the deep end</p></li></ul></li><li><p>If you want to contribute, get in touch <a href="mailto:dewi@bluedot.org">via email</a> - we&#8217;re excited to hear from you!</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Announcing Incubator Week v2]]></title><description><![CDATA[Last month, we ran our AGI Strategy Course turned Incubator. Seven participants spent five days in our London office. From this batch we backed Exona - a new startup building dynamic risk pricing for AI models - with a &#163;50k grant. They&#8217;ve since raised more, work from our co-working space, and are already hiring.]]></description><link>https://blog.bluedot.org/p/announcing-incubator-week-v2</link><guid isPermaLink="false">https://blog.bluedot.org/p/announcing-incubator-week-v2</guid><dc:creator><![CDATA[Joshua Landes]]></dc:creator><pubDate>Mon, 03 Nov 2025 05:37:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUc5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b9c29b1-3ff5-4ef6-8c04-e91c608ec10e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last month, we ran our <a href="https://bluedot.org/blog/a-bluedot-incubator-v1">AGI Strategy Course turned Incubator</a>. Seven participants spent five days in our London office. From this batch we backed Exona - a new startup building dynamic risk pricing for AI models - with a &#163;50k grant. They&#8217;ve since raised more, work from our co-working space, and are already hiring.</p><h3><strong>Today we&#8217;re announcing v2 of incubator week.</strong></h3><p>At BlueDot, we&#8217;ve trained over 5,000 people since 2022. 
We launched our courses to build the workforce needed to make AI go well. But training isn&#8217;t enough. There simply aren&#8217;t enough organizations with the ambition, scale, and speed to make sure AI goes well. We need new organizations, not just more people flowing into existing ones. Incubator week lets us help the best founders build the companies and organizations the world needs.</p><h3><strong>How it works:</strong></h3><p>We&#8217;re inviting the strongest, most entrepreneurial participants from our AGI Strategy Course - people who&#8217;ve already proven they can think rigorously about AI risks. If you&#8217;re selected, we&#8217;ll fly you to London and host you for the week, all expenses paid.</p><p>Here&#8217;s the (preliminary) structure:</p><ul><li><p>Monday: You&#8217;ll develop step-by-step models of how threats to the future might unfold. We&#8217;ll focus on deep problem space exploration, not surface-level takes. You&#8217;ll identify top experts in your area.</p></li><li><p>Tuesday-Wednesday: You&#8217;ll build and iterate on your intervention ideas. What can actually move the needle on the problems you&#8217;ve identified? What can be implemented and scaled? By the end of Tuesday, we expect you to have called top experts working on this problem. On Wednesday evening we&#8217;re hosting a broader community social.</p></li><li><p>Thursday: You&#8217;ll create your pitch for a high-impact new organization.</p></li><li><p>Friday: You&#8217;ll pitch to us for funding.</p></li></ul><p>Throughout the week, you&#8217;ll work from our office at LISA alongside Apollo Research, Workshop Labs, and other leading organizations and researchers. We&#8217;ll also be bringing in founders, funders, and experts to accelerate your work.</p><h3><strong>How you can still join:</strong></h3><p>The <a href="https://bluedot.org/courses/agi-strategy">AGI Strategy Course</a> and <a href="https://bluedot.org/courses/technical-ai-safety">Technical AI Safety course</a> are our feeder courses. So far we&#8217;ve received over 2,000 applications. For incubator week, we&#8217;re looking for people with strong builder energy, clear thinking about problems, and a focus on mitigating AI risk.</p><p>Our next incubator week runs November 17-21. v3 will run December 1-5.</p><p>The future will arrive sooner than most expect, but there&#8217;s still time to shape it. We&#8217;re putting our money where our mouth is and backing people to build. If you want in - we&#8217;re saving 1-2 spots for standout talent - <a href="https://web.miniextensions.com/9Kuya4AzFGWgayC3gQaX">apply to the AGI Strategy Course</a> and show us what you can do.</p>]]></content:encoded></item></channel></rss>