See Bias In Action
An educational simulation showing how our matched-pair methodology reveals healthcare AI bias.
How This Works
This is an educational simulation based on documented patterns from published research. It demonstrates the matched-pair testing concept—not live AI systems.
Select a Clinical Category
Choose the type of medical scenario you want to explore.
Understanding the Demonstration
What This Shows
The matched-pair method isolates the name as the only variable. When AI recommendations differ between Patient A and Patient B, the name caused that difference.
Research Basis
Patterns shown here are based on documented findings: Hoffman et al. (2016) on pain disparities, Obermeyer et al. (2019) on algorithm bias, Strakowski et al. (2003) on psychiatric diagnosis.
Important Caveat
This is a simulation for educational purposes. Our actual research involves testing real AI systems with proper controls and statistical analysis.
From Published Research
- Hoffman et al. (2016), PNAS: Half of medical trainees believed Black patients feel less pain than white patients
- Obermeyer et al. (2019), Science: A healthcare algorithm used on millions of patients was less likely to refer Black patients for additional care
- Strakowski et al. (2003): African American patients more likely to be diagnosed with schizophrenia vs. bipolar disorder for similar presentations
- Schulman et al. (1999), NEJM: Identical cardiac cases received different catheterization referral rates based on race and gender
If AI systems learn from data containing these biases, they may perpetuate them at scale.
Want to Go Deeper?
See our actual findings with real data, or explore the full methodology.