Our Core Commitment

This research exists to improve healthcare AI, not to attack it. We believe AI can be a powerful force for healthcare equity—but only if we honestly examine and address its limitations.

Ethical Principles

🔬 Synthetic Data Only

All symptom profiles are completely fictional. No real patient data is used. Every "patient" in our study is a synthetic construct designed to test system behavior.

📚 Established Methodology

Our methods are based on peer-reviewed research published in top journals (Science, PNAS, NEJM). We're applying proven techniques to a new domain.

🎯 Constructive Intent

Our goal is to help AI developers improve their systems. We want healthcare AI to work well for everyone—bias detection is the first step toward bias correction.

🤝 Responsible Disclosure

We share findings with AI developers before public release. They get 90 days to respond. Their responses are included in our publication.

What We Do NOT Do

  • We do not use real patient data
  • We do not attempt to break or exploit AI systems
  • We do not publish findings without responsible disclosure
  • We do not exaggerate or sensationalize results
  • We do not name systems without giving their developers an opportunity to respond
  • We do not use this research to attack companies or individuals

Human Subjects Considerations

No Human Subjects Involved

This research does not involve human subjects. All data consists of:

  • Synthetic symptom profiles (fictional; construction sketched below)
  • Name pairs selected from published research
  • AI system outputs (text responses)

No real patients are queried. No personal health information is collected or used.
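
For illustration, here is a minimal sketch of how such a synthetic query could be assembled. Everything in it is hypothetical: the placeholder names, the symptom text, and the `build_query` helper are invented for this example; actual name pairs come from the published research cited in our methodology.

```python
# Illustrative only: assembles a fully synthetic test query.
# "Name A" / "Name B" are placeholders; in the study, name pairs are
# drawn from published audit research, never from real patients.
from dataclasses import dataclass

@dataclass
class SyntheticProfile:
    name: str            # from a published name-pair list, not a real person
    age: int             # fictional demographic detail
    symptoms: list[str]  # fictional symptom description

def build_query(profile: SyntheticProfile) -> str:
    """Render a profile as the free-text prompt a consumer system accepts."""
    symptom_text = ", ".join(profile.symptoms)
    return (f"My name is {profile.name}, I am {profile.age} years old, "
            f"and I have {symptom_text}. What should I do?")

# Two profiles identical except for the name, so any difference in the
# system's response is attributable to the name alone.
pair = [
    SyntheticProfile("Name A", 45, ["chest tightness", "fatigue"]),
    SyntheticProfile("Name B", 45, ["chest tightness", "fatigue"]),
]
for profile in pair:
    print(build_query(profile))
```

Because the two prompts differ only in the name, any divergence between the responses is the signal of interest.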

Terms of Service Compliance

Using Systems as Intended

Our research involves using consumer AI systems exactly as they're designed to be used—entering symptoms and receiving recommendations. We:

  • Do not circumvent access controls
  • Do not overload systems with automated requests (pacing sketched after this list)
  • Do not scrape or extract proprietary data
  • Use only publicly available interfaces
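
To make "no automated overload" concrete, here is a hedged sketch of a paced query loop. The `submit_query` stand-in and the one-query-per-minute pace are assumptions for illustration, not our actual tooling or rate.

```python
# Illustrative pacing loop: keeps query volume within ordinary consumer use.
# submit_query() is a hypothetical stand-in for a system's public interface.
import time

PACE_SECONDS = 60  # assumed pace: at most one query per minute per system

def submit_query(system: str, prompt: str) -> str:
    # Placeholder: in practice each system is used through its ordinary
    # public web or app interface, never a scraped or private endpoint.
    return f"[{system} response to: {prompt[:40]}...]"

def run_paced(system: str, prompts: list[str]) -> list[str]:
    responses = []
    for prompt in prompts:
        responses.append(submit_query(system, prompt))
        time.sleep(PACE_SECONDS)  # deliberate delay: no bursts, no load testing
    return responses
```

The delay is the point: the loop trades speed for staying indistinguishable from a single careful user.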

Responsible Disclosure Process

1. Complete Analysis

Finish all data collection and statistical analysis. Ensure findings are robust and reproducible.

2. Developer Notification

Contact each AI system developer with specific findings related to their system. Provide full technical details.

3. Response Period

Allow 90 days for developers to respond, investigate, and potentially address issues (timeline sketched after these steps).

4. Include Responses

Incorporate developer responses into the final publication. Their perspective matters.

5. Public Release

Publish findings with full methodology, data, and developer responses.
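
The 90-day window above is simple date arithmetic. A minimal sketch, assuming a single notification date per developer (the example date is made up):

```python
# Computes the disclosure milestones for one developer notification.
# The 90-day window matches the policy above; the date is a made-up example.
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=90)

def disclosure_milestones(notified_on: date) -> dict[str, date]:
    deadline = notified_on + RESPONSE_WINDOW
    return {
        "developer_notified": notified_on,
        "response_deadline": deadline,        # end of the 90-day response period
        "earliest_public_release": deadline,  # publication waits out the window
    }

print(disclosure_milestones(date(2024, 1, 15)))
```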

Why This Research Matters

The Stakes Are High

Healthcare AI is being deployed at scale. If these systems exhibit bias:

  • Millions of users may receive inconsistent recommendations
  • Existing healthcare disparities may be amplified
  • Trust in AI-assisted healthcare may erode
  • Underserved populations may bear a disproportionate burden

Identifying bias is the prerequisite to fixing it.

Our Hopes

For AI Developers

We hope our findings help you build fairer systems. Bias often exists unintentionally—awareness enables improvement.

For Healthcare Providers

We hope this research helps you understand the limitations of AI tools and when to apply clinical judgment.

For Patients

We hope this contributes to a future where AI-assisted healthcare serves everyone equally.

For Researchers

We hope our methodology enables continued scrutiny and improvement of healthcare AI fairness.

Contact

Questions about our ethical framework? Concerns about our methodology? We welcome dialogue.

For AI developers: If you believe your system is included in our research and wish to engage, please contact us. We are committed to collaborative improvement.

Learn More

Explore our research protocol or see the methodology in detail.