Discussion about this post

Habeebb:

This really resonates. What you're describing is fundamentally a signal problem, not necessarily a skills problem.

A lot of people have taken strong AI safety / technical courses, can code, and genuinely want to contribute — but they’re stuck because they lack credible, real-world proof-of-work in high-trust settings.

I'm currently working on a concept called Skills4Impact that tries to address exactly this gap: a structured pathway where early-career talent works on real, non-urgent backlog projects from mission-driven orgs, produces concrete artifacts (code, reports, tools), and receives formal verification and references. The framing is not "volunteering" or consulting, but a way to turn learning into a legible signal.

If this problem resonates with you (or you've seen it from the org side), I've linked a draft concept note and would really value critique, pushback, or suggestions. Link to draft: https://docs.google.com/document/d/1r7Nn5O4rEesQRwNP8InKg6LMBrEIzg5P_fYdHig9miA/edit?usp=sharing
