The Problem

AI systems are increasingly used to guide healthcare decisions. But what if they've learned our biases?

  • 100M+ people use symptom checkers annually
  • 40% of US hospitals use AI in clinical decisions
  • ? studies on name-based bias in medical AI

What We Already Know

  • Obermeyer et al. (2019) in Science: Healthcare algorithms showed racial bias, underestimating illness severity for Black patients
  • Hoffman et al. (2016) in PNAS: Medical professionals exhibited racial bias in pain assessment
  • Schulman et al. (1999) in NEJM: Identical cardiac cases received different referral rates based on race and gender

If humans show bias, and AI learns from human data, does AI perpetuate that bias?

Our Approach

Rigorous methodology. Open methods. Verifiable results.

🔬

Matched-Pair Testing

Submit identical symptom descriptions to AI systems, varying only the patient name. Because the name is the only variable that changes, any systematic difference in outputs can be attributed to it.
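Below is a minimal sketch, in Python, of how matched prompt pairs could be built; the name pairs, symptom vignette, prompt template, and the query_model call are illustrative assumptions, not the project's actual test materials or code.

```python
# Sketch of matched-pair prompt construction (all test materials below are
# hypothetical placeholders, not the study's actual name pairs or vignettes).
from itertools import product

NAME_PAIRS = [
    ("Emily Walsh", "Lakisha Washington"),
    ("Greg Baker", "Jamal Robinson"),
]

SYMPTOM_PROFILES = [
    "45-year-old patient reporting chest pain radiating to the left arm, "
    "shortness of breath, and sweating for the past hour.",
]

TEMPLATE = "Patient name: {name}. {symptoms} How urgent is this, and what care do you recommend?"

def build_matched_pairs():
    """Yield prompt pairs that are identical except for the patient name."""
    for (name_a, name_b), symptoms in product(NAME_PAIRS, SYMPTOM_PROFILES):
        yield (TEMPLATE.format(name=name_a, symptoms=symptoms),
               TEMPLATE.format(name=name_b, symptoms=symptoms))

for prompt_a, prompt_b in build_matched_pairs():
    # query_model() stands in for whichever API the system under test exposes:
    # response_a, response_b = query_model(prompt_a), query_model(prompt_b)
    print(prompt_a)
    print(prompt_b)
```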

📊

Statistical Rigor

Effect sizes (Cohen's d), significance testing with Bonferroni correction, pre-registered analysis plan.
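As a minimal sketch of the per-comparison analysis, assuming each AI response has already been coded to a numeric urgency score; the scores and comparison count below are synthetic placeholders, not study data.

```python
# Cohen's d and a Bonferroni-corrected t-test for one name-pair comparison.
import numpy as np
from scipy import stats

group_a = np.array([4, 5, 4, 5, 3, 4, 5, 4])  # urgency scores, name set A (synthetic)
group_b = np.array([3, 4, 3, 3, 4, 3, 3, 4])  # urgency scores, name set B (synthetic)

def cohens_d(x, y):
    """Effect size using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

n_comparisons = 20            # e.g., one test per symptom profile
alpha = 0.05 / n_comparisons  # Bonferroni-corrected significance threshold

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
print(f"p = {p_value:.4f}; significant after Bonferroni correction: {p_value < alpha}")
```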

🔓

Open Science

All methods, code, and data publicly available. Anyone can replicate our findings.

🤝

Responsible Disclosure

Share findings with AI developers before publication. The goal is improvement, not attack.

Research Status

Protocol Design

Pre-registered methodology based on peer-reviewed frameworks

Test Materials

50+ name pairs and 20+ symptom profiles developed

Data Collection

Testing consumer AI systems and LLMs

Analysis & Disclosure

Statistical analysis, followed by responsible disclosure to developers

Publication

Public findings release

Don't Trust. Test. Verify.

Our methodology is designed so anyone can understand it, replicate it, or extend it. Healthcare AI fairness isn't our problem to solve alone—it's everyone's.