This project was submitted by Belén Sánchez Hidalgo. It was the winner of the “Best Interactive Deliverable” prize in our AI Governance course (August 2024). Participants worked on these projects for 4 weeks. The text below is an excerpt from the final project.
IntimaGuard offers a unique look at how GPT-4, Claude, and Cohere handle romantic and emotionally charged conversations. Through concise scenario-based tests and side-by-side comparisons, you can explore how each AI balances empathy, boundary respect, and clarity. Blinded user feedback (evaluators do not know which model produced each response) keeps the evaluations unbiased, revealing which model delivers the most ethically supportive responses.
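The blinded side-by-side setup described above can be sketched in a few lines. This is an illustrative sketch, not the project's actual code: the function name `blind_pair` and the neutral "Response A/B/C" labels are assumptions about how such an evaluation might hide model identities from raters.

```python
import random

def blind_pair(responses):
    """Shuffle model responses and hide their origins behind neutral labels.

    `responses` maps model name -> response text. Returns the labeled
    responses to show the rater, plus a hidden key for later attribution.
    """
    items = list(responses.items())  # [(model_name, response), ...]
    random.shuffle(items)            # randomize order so position reveals nothing
    labels = [f"Response {chr(65 + i)}" for i in range(len(items))]
    shown = {label: text for label, (_, text) in zip(labels, items)}
    key = {label: model for label, (model, _) in zip(labels, items)}
    return shown, key  # `key` stays hidden until scoring is complete

shown, key = blind_pair({
    "gpt-4": "I hear how much this means to you...",
    "claude": "That sounds really difficult...",
    "cohere": "It might help to set a boundary...",
})
# The rater scores `shown`; only afterwards is `key` used to attribute scores.
```

The rater only ever sees the neutral labels, so preferences cannot be swayed by brand familiarity with any particular model.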
Beyond these comparisons, IntimaGuard provides an ethical framework that can be applied to the same scenarios—showcasing how guardrails shape more ethical, safe, and aligned AI interactions. This framework reinforces transparency, respects user autonomy, and prevents manipulation, complemented by features like sentiment awareness and daily interaction limits for balanced engagement. With IntimaGuard, see how AI can be insightful and caring—without compromising well-being.
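One of the guardrails mentioned above, the daily interaction limit, could look something like the following. This is a minimal sketch under assumed names (`DailyLimit`, `allow`, a default cap of 50 turns); the project's real implementation and thresholds are not shown in the excerpt.

```python
from collections import defaultdict
from datetime import date

class DailyLimit:
    """Cap the number of chat turns each user may take per calendar day."""

    def __init__(self, max_turns_per_day=50):  # cap value is illustrative
        self.max_turns = max_turns_per_day
        self.counts = defaultdict(int)  # (user_id, date) -> turns used today

    def allow(self, user_id, today=None):
        """Return True and count the turn, or False once the cap is reached."""
        key = (user_id, today or date.today())
        if self.counts[key] >= self.max_turns:
            return False  # limit reached: time to nudge the user to take a break
        self.counts[key] += 1
        return True

limiter = DailyLimit(max_turns_per_day=3)
print([limiter.allow("u1") for _ in range(4)])  # [True, True, True, False]
```

Keying the counter on the date means the allowance resets naturally at midnight without any scheduled cleanup job.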
To demo the full project submission, click here. To read more about it, click here.
