Your Patients’ Data Is Worth Millions — But Who Gets Paid?

Hamza Asumah, MD, MBA, MPH

Here’s an uncomfortable question: if pharmaceutical companies are making billions using your patients’ health data, shouldn’t your patients get paid?

Welcome to healthcare’s most contentious emerging issue.

The Hidden Value Extraction

Your health data is extraordinarily valuable. More valuable than your credit card number. More valuable than your social security number in many contexts.

Why? Because it’s longitudinal, predictive, and permanent.

Your health history doesn’t change. If you were diagnosed with diabetes at 40, that’s in your record forever. And that information can predict your future health risks, your likelihood of needing certain medications, your insurance costs, your employability in some contexts.

Companies will pay substantial sums for these insights.

Pharmaceutical companies use anonymized patient data to identify populations for clinical trials, understand real-world drug effectiveness, and develop new treatments. That’s genuinely valuable research advancing medicine.

Tech companies use health data to train AI diagnostic models, predict patient deterioration, and build decision support tools.

Insurers use aggregated health data to price risk, design benefit structures, and identify fraud.

All legitimate uses. All creating enormous economic value.

But here’s the problem: patients typically don’t know their data is being used this way. They didn’t explicitly consent. They’re not being compensated. And they can’t opt out.

The Trust Crisis

According to PwC’s 2024 healthcare trust survey, 75% of patients fear their health data is being misused. Three out of four.

That’s not paranoia. That’s rational concern based on actual data breach patterns, privacy violations, and opaque data-sharing arrangements.

Healthcare data breaches cost an average of $10.9 million per incident according to IBM's 2023 Cost of a Data Breach Report, the highest of any industry. And these breaches are increasing in frequency and severity.

But the trust issue goes beyond security. It’s about transparency and control.

Most patients have no idea:

  • What data about them is collected and stored
  • Who has access to their data
  • How their data is being used commercially
  • Whether they can revoke access or delete their information

The healthcare system has operated under an implicit assumption: “Trust us, we’re the experts, we’ll handle your data responsibly.”

That implicit trust is eroding. Fast.

The Ownership Question

Who actually owns your health data?

Ask ten legal experts and you’ll get ten different answers. It’s genuinely unclear.

The hospital says: “We collected it, we stored it, we secured it—we own it.”

The EHR vendor says: “It’s in our system, under our license—we control it.”

The patient says: “It’s information about MY body, MY health, MY life—I should own it.”

Privacy advocates say: “Health data is so sensitive it shouldn’t be ‘owned’ by anyone—it should be protected as a fundamental right.”

Legally, the current framework is messy. Patients have rights to ACCESS their data under HIPAA. But they don’t have clear ownership rights or control over secondary uses.

Once your data enters an EHR system, you can request a copy. But you can’t control who else sees it, how it’s used for research, whether it’s de-identified and sold to third parties, or how long it’s retained.

The Monetization Dilemma

Here’s where it gets ethically complicated.

Should healthcare data be monetized? Strong arguments on both sides:

The case FOR monetization:

  • Healthcare organizations need revenue to sustain operations and invest in improvements
  • Data-driven research requires access to large datasets
  • Secondary use of data has led to genuine medical breakthroughs
  • Prohibiting monetization might stifle innovation

The case AGAINST monetization:

  • Health data is deeply personal and sensitive
  • Patients typically don’t consent to commercial use
  • The economic benefits flow to corporations, not patients
  • Monetization creates perverse incentives to collect unnecessary data

The pragmatic middle ground: transparency and consent.

If organizations are going to monetize patient data:

  1. Tell patients explicitly what data is being collected
  2. Explain how it will be used commercially
  3. Allow meaningful opt-in/opt-out mechanisms
  4. Consider revenue sharing with patients whose data creates value

Right now, this isn’t happening at scale. And that’s the trust problem.

The Algorithmic Bias Crisis

Data isn’t neutral. And AI trained on biased data perpetuates and amplifies bias.

We’re already seeing this in healthcare AI:

Diagnostic algorithms that perform excellently on white male patients but miss critical findings in women and minorities because training data was predominantly white and male.

Pain assessment tools that systematically underestimate pain in Black patients because historical clinical notes reflected racial bias.

Risk prediction models that under-refer minority patients to care management programs because the training data reflected unequal access patterns, not actual health needs.

This isn’t theoretical. A 2019 Science study found a widely used healthcare algorithm exhibited significant racial bias, affecting millions of patients.

The problem: AI learns patterns from historical data. If that historical data reflects systemic inequities, the AI replicates them—often with mathematical precision and at scale.

The solution requires:

  • Diverse, representative training datasets
  • Rigorous testing across demographic groups
  • Ongoing monitoring for bias after deployment
  • Transparency about algorithm performance across populations
  • Human oversight and ability to override algorithmic decisions

But implementing these safeguards costs money and slows deployment. And not all organizations are willing to make that investment.

The Regulatory Reckoning

HIPAA was written in 1996. It’s woefully outdated for the modern healthcare data ecosystem.

It doesn’t address:

  • AI and machine learning
  • Health data collected by consumer apps and wearables (not covered by HIPAA)
  • Data brokers aggregating health information
  • Cross-border data flows
  • Patient rights to deletion or portability

There’s growing momentum for “HIPAA 2.0”—a comprehensive update that would:

  • Expand patient rights to control their data
  • Tighten security requirements
  • Impose stricter penalties for breaches and violations
  • Address AI-specific risks
  • Cover more entities in the health data ecosystem

Europe is ahead with the EU AI Act, which sets strict requirements for high-risk AI applications in healthcare: transparency, human oversight, documentation, and bias testing.

In the U.S., we’re seeing a state-by-state patchwork: California’s CCPA, Virginia’s CDPA, Colorado’s CPA—each with different requirements. This creates compliance complexity for national healthcare organizations.

Trust as Competitive Advantage

Here’s the business case for getting this right:

Data breaches cost an average of $10.9 million per incident. But the reputational cost can be even higher.

When patients don’t trust you with their data:

  • They withhold information, making clinical care less effective
  • They avoid seeking care, reducing your patient volume
  • They switch providers to competitors they trust more
  • They leave negative reviews and damage your reputation

Conversely, organizations that build trust through transparency and strong data stewardship create durable competitive advantages:

They attract more patients who value privacy and security.

They retain patients longer because trust drives loyalty.

They can leverage data more effectively because patients consent to beneficial uses when they trust the steward.

They avoid regulatory penalties and litigation that destroy value.

The Path Forward

Healthcare is becoming a data business whether we like it or not. The question isn’t whether data will be used—it’s whether it will be used ethically.

Organizations that win long-term will:

Prioritize transparency: Tell patients what data you collect, how you use it, who you share it with, and give them meaningful control.

Implement strong governance: Audit algorithms for bias. Test AI tools rigorously. Maintain human oversight. Document everything.

Invest in security: Data breaches are expensive and trust-destroying. Security can’t be an afterthought.

Share value: If patient data generates revenue, consider mechanisms to share that value with the patients who created it.

Build a culture of data stewardship: Everyone in your organization should understand they’re custodians, not owners, of patient data.

Because in the end, data without trust is worthless. You can have the most sophisticated AI tools, the most valuable datasets, and the most promising business model.

But if patients don’t trust you, they’ll take their data—and their care—somewhere else.
