BlueDot Impact
Is the Frontier AI Governance course right for you?
3 questions to help you decide.
Apr 5 • Joshua Landes
March 2026
Quitting FinTech for AI Safety — Milos Borenovic
Milos is the Chief Product Officer and Partnerships Lead at Lucid Computing
Mar 25 • Harrison Wood
High-Stakes Activation Probes on Indonesian
This project was submitted by Ivan Wiryadi. It was an Outstanding Submission for the Technical AI Safety Project Sprint (Jan 2026). Participants worked…
Mar 25 • Ivan
Interpreting Latent Reasoning in the Depth-Recurrent Transformer
This project was submitted by Tristan von Busch. It was a Top Submission for the “Technical AI Safety Project” prize (Jan 2026). Participants worked on…
Mar 24
Rapid Small Grants for the BlueDot Technical AI Safety Project Sprint
Small grants for BlueDot participants to build their portfolio
Mar 17 • Sam Dower
Why you should vibe-code your AI safety research sprint project
Your research hours are limited. Don't spend them coding.
Mar 13 • Sam Dower
aegish: Using LLMs To Block Malicious Shell Commands Before Execution
This project was submitted by Guido Bergman. It was an Outstanding Submission for the Technical AI Safety Project Sprint (Jan 2026). Participants…
Mar 11 • Guido Bergman
Measuring Moral Sycophancy Is Harder Than It Looks: Auditing and Extending the ELEPHANT Benchmark
This project was submitted by Alexis Wang. It was an Outstanding Submission for the Technical AI Safety Project Sprint (Jan 2026). Participants worked…
Mar 11 • Alexis
Quaver | Lovable for Evals and Benchmarks
What I Learned Building 2 AI Benchmarks From Scratch (And Why I Automated the Third)
Mar 11 • Faw
Reproducing FAR.AI's Study on LLMs' Persuasion Attempts at Harmful Topics
This project was submitted by Mutalib Begmuratov. It was an Outstanding Submission for the Technical AI Safety Project Sprint (Jan 2026). Participants…
Mar 11 • Mutalib Begmuratov
MANTA: Evaluating Nonhuman Welfare Reasoning in AI Models
Evaluating Nonhuman Welfare Reasoning in Frontier AI Models
Mar 11 • Allen Lu
Multilingual Safety Alignment Is Not “Just Translate the Prompt”
For people fine-tuning reasoning LLMs for non-English users
Mar 11 • Zahraa Al Sahili