Hamza Asumah, MD, MBA, MPH
Picture this: You’re sitting across from a government health director who controls a $20 million annual budget. Your digital health platform could transform care delivery across her 150 facilities. You’ve prepared the perfect pitch, complete with beautiful slides about health equity and universal health coverage.
Then she asks the question that determines everything: “Show me evidence this actually works.”
You freeze. Your pilot ran in three facilities. You collected some user feedback. You have testimonials from enthusiastic nurses. But rigorous evidence? The kind that survives scrutiny from the Ministry of Health, the county assembly, and the treasury? That’s when you realize testimonials don’t cut it.
This moment repeats constantly in African healthtech. The companies that prepared for this question—that invested early in evidence generation—go on to raise millions, scale across countries, and influence national policy. Those that didn’t remain stuck in perpetual pilot mode, pitching the same three success stories to increasingly skeptical audiences.
The difference isn’t luck. It’s a deliberate choice to treat data as a core business function from day one, even when—especially when—resources are tight.
Why Evidence Actually Matters
Evidence isn’t academic navel-gazing. It’s the currency of every conversation that determines your company’s fate:
With governments: Policy decisions require justification. A county health director recommending your $150,000 contract needs ammunition to defend that choice in budget hearings. Without evidence, your solution competes against ambulances, vaccines, and infrastructure—all supported by robust data. With evidence, you’re not an expense; you’re an investment with documented returns.
With investors: African healthtech investors have been burned by beautiful narratives unsupported by reality. They’ve watched companies with compelling origin stories fail to achieve product-market fit. Your Series A pitch needs more than growth projections—it needs proof that your model works, that health outcomes improve, that users actually adopt your solution. The companies that secured Transform Health Fund’s average $3 million tickets all shared one characteristic: compelling evidence of impact.
With development partners: Foundations allocating grants face their own accountability pressures. They need evidence to justify funding decisions to boards and donors. Grand Challenges Canada or USAID Development Innovation Ventures won’t fund based on potential alone. They need baseline data, clear metrics, and credible measurement plans.
With strategic partners: When negotiating partnerships with pharmaceutical companies, insurance providers, or large healthcare networks, evidence is your leverage. You’re not asking for favors; you’re presenting documented value that aligns with their strategic priorities.
Companies that invested in monitoring and evaluation from the start positioned themselves differently in every conversation. They spoke with confidence backed by data. They anticipated questions before they were asked. They turned skeptics into believers by presenting irrefutable results.
The Shoestring M&E Framework
The objection is always the same: “We can’t afford proper M&E. We’re a seed-stage startup.”
This reflects a fundamental misunderstanding. You can’t afford not to do M&E. Every dollar spent generating evidence returns multiples in fundraising success, faster government adoption, and strategic positioning.
But you’re right that traditional M&E is prohibitively expensive. Academic research studies cost $50,000 to $200,000 and take years to complete. Professional evaluation firms charge $30,000 to $100,000 for comprehensive assessments. You don’t have that budget or timeline.
The solution is building evidence generation into your operational DNA from day one, using affordable tools and smart frameworks that deliver investment-grade data without academic price tags.
Level 1: Foundational Data Infrastructure ($2,000-$5,000)
Start with basic infrastructure that captures everything:
Implement comprehensive logging: Every user interaction, every clinical decision, every system event should be logged automatically. This costs almost nothing—storage is cheap—but creates a permanent record for future analysis. When an investor asks about user engagement six months from now, you’ll have historical data, not vague recollections.
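As a sketch of what this can look like in practice, here is a minimal structured event logger in Python. The event names, fields, and file path are illustrative assumptions, not a prescribed schema; a production system would usually write the same records to a database table instead of a flat file.

```python
import json
import time
import uuid

def log_event(event_type, user_id, payload, log_path="events.jsonl"):
    """Append one structured event to an append-only JSON Lines file.

    Every user interaction, clinical decision, or system event becomes a
    timestamped record you can analyze months later.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),       # Unix epoch seconds, UTC
        "event_type": event_type,       # e.g. "consultation_completed"
        "user_id": user_id,             # pseudonymized identifier, never raw PII
        "payload": payload,             # event-specific details
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative call: a nurse completes a teleconsultation
log_event(
    event_type="consultation_completed",
    user_id="nurse-0412",
    payload={"facility_id": "FAC-023", "duration_minutes": 14},
)
```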
Use free analytics tools: Google Analytics is sufficient for early-stage web platforms, and Mixpanel offers a free tier for mobile apps. These tools track user behavior, session duration, feature utilization, and dropout points. Budget $0 to $500 annually.
Build automated dashboards: Metabase (open source) or Looker Studio (formerly Google Data Studio, free) turns raw data into visual dashboards. Spend one week of developer time (roughly $800 to $2,000 in contractor cost) building dashboards that update automatically. Now you have real-time visibility into key metrics without manual reporting overhead.
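The dashboard itself lives in Metabase or Looker Studio, but the metrics behind it are simple aggregations. Here is a sketch in pandas of the kind of query a dashboard panel would surface, assuming the JSON Lines event log from the sketch above:

```python
import json
import pandas as pd

# Load the append-only event log from the previous sketch (assumed format)
with open("events.jsonl", encoding="utf-8") as f:
    events = pd.DataFrame(json.loads(line) for line in f)

# Bucket events into calendar weeks
events["week"] = (
    pd.to_datetime(events["timestamp"], unit="s").dt.to_period("W").dt.start_time
)

# Two metrics most early dashboards lead with: weekly active users and
# weekly completed consultations
weekly = events.groupby("week").agg(
    active_users=("user_id", "nunique"),
    consultations=("event_type", lambda s: (s == "consultation_completed").sum()),
)
print(weekly)
```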
Standardize data collection: Create simple forms for routine data capture. If you’re working in health facilities, ensure every patient interaction captures standard demographics, presenting complaints, diagnoses, and outcomes. Use CommCare (free tier available) or KoboToolbox (free) for digital data collection. Train facility staff on consistent data entry. Budget $1,000 for training materials and initial capacity building.
Total investment: $2,000 to $5,000 in setup costs, minimal ongoing expenses. This foundation supports everything that follows.
Level 2: Outcome Measurement ($5,000-$15,000)
Now layer on actual health outcome measurement:
Define your theory of change: What specific health outcome does your intervention improve? Be precise. “Improving healthcare” is too vague. “Reducing neonatal mortality through earlier identification of birth complications” is measurable. “Increasing medication adherence for chronic disease patients from 40% to 70%” creates clear targets.
Work backwards from that outcome. What intermediate outcomes would indicate you’re on track? If reducing neonatal mortality is your goal, intermediate outcomes might include percentage of complicated deliveries identified, time from complication identification to facility arrival, and availability of emergency obstetric care when needed.
Document this logic model. It guides your measurement strategy and demonstrates sophisticated thinking to funders.
Implement baseline and endline measurement: Before implementing your intervention, measure the current state. What’s the baseline neonatal mortality rate? Current medication adherence levels? Average time to diagnosis?
Baseline measurement requires systematic data collection. Budget $3,000 to $8,000 for this. Hire local enumerators to conduct structured surveys or facility assessments. Six enumerators working three days can collect baseline data from 50 health facilities at roughly $50 per enumerator per day ($900 total), plus $500 for transportation and $300 for data management. Add $2,000 for survey design and analysis support from a local consultant or graduate student.
Repeat this measurement after your intervention runs for six to twelve months. The comparison between baseline and endline is your core evidence of impact.
Control groups matter: If possible, implement in some facilities while measuring similar facilities without your intervention. The difference between intervention and control facilities is far more compelling than simple before/after comparisons.
This requires minimal additional cost—you’re measuring facilities regardless. Selecting appropriate controls requires thought but not money. Match facilities by size, patient volume, and geography. This methodological rigor transforms your evaluation from suggestive to convincing.
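Combining baseline/endline measurement with a control group is, in effect, a difference-in-differences comparison. A minimal sketch with made-up medication adherence figures (the numbers are illustrative only):

```python
# Medication adherence (%) at matched intervention and control facilities
baseline = {"intervention": 41.0, "control": 40.0}
endline = {"intervention": 68.0, "control": 47.0}

change_intervention = endline["intervention"] - baseline["intervention"]  # +27 points
change_control = endline["control"] - baseline["control"]                 # +7 points

# Effect attributable to the intervention, net of background trends
did_estimate = change_intervention - change_control                       # +20 points
print(f"Estimated effect: {did_estimate:.0f} percentage points")
```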
Partner with local universities: Graduate students need thesis projects. Academic faculty need research publications. Your pilot needs rigorous evaluation. This alignment creates opportunity.
Reach out to public health, health informatics, or clinical departments at universities near your implementation sites. Propose collaboration: you provide data access and implementation support; they provide evaluation design, data analysis, and academic credibility. Structure this as co-authored research with publication potential.
Budget $5,000 to $10,000 to compensate academic partners for their time, cover research assistant costs, and fund publication fees. In return you receive an evaluation worth $30,000 to $50,000 on the open market, plus co-authorship on peer-reviewed papers that validate your approach.
Total investment: $5,000 to $15,000. Output: rigorous baseline and endline measurement, control group comparisons, and academic validation.
Level 3: Cost-Effectiveness Analysis ($3,000-$8,000)
Health system decision-makers think in cost per outcome. Your evidence is incomplete until you demonstrate value for money.
Calculate cost per patient served: Sum all costs—technology development, staff time, training, ongoing support—then divide by patients reached. Be honest about attribution. If 5,000 patients passed through facilities where you implemented, but only 2,000 actually used your specific intervention, your denominator is 2,000.
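As a sketch, with illustrative cost line items and the patient counts from the example above (replace both with your own figures):

```python
def cost_per_patient(costs, patients_served):
    """Total program cost divided by the patients who actually used the intervention."""
    return sum(costs.values()) / patients_served

# Illustrative annual cost ledger
costs = {
    "technology_development": 25_000,
    "staff_time": 9_000,
    "training": 4_000,
    "ongoing_support": 2_000,
}

# Honest attribution: 5,000 patients passed through the facilities,
# but only 2,000 used the intervention, so 2,000 is the denominator.
print(f"Cost per patient served: ${cost_per_patient(costs, 2_000):,.2f}")
```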
Compare to alternatives: Research costs of alternative approaches to the same problem. If your telehealth intervention costs $12 per patient consultation versus $35 for in-person specialist visits, that $23 saving is compelling. If your diagnostic algorithm costs $8 to administer but catches diseases that would cost $500 to treat at late stages, the cost-effectiveness is overwhelming.
Calculate return on investment: This is where you win government partnerships. Quantify the health outcome improvements your intervention produces (lives saved, hospital days avoided, productivity gains), translate them into monetary terms using the government’s own costing data, and compare that value against what your intervention costs to run.
For example: Your intervention costs $40,000 annually to operate in a district with 200,000 people. It reduces hospital admissions by 800 cases annually. Government data shows hospital admissions cost an average of $180. Your intervention saves $144,000 in hospital costs, a 3.6x return on investment. This calculation takes half a day and costs nothing beyond your time.
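The same district example as a few lines of code, so the figures can be re-run the moment any input changes:

```python
annual_operating_cost = 40_000   # your intervention's annual cost in the district
admissions_avoided = 800         # measured reduction in hospital admissions
cost_per_admission = 180         # from the government's own costing data

savings = admissions_avoided * cost_per_admission   # $144,000
roi = savings / annual_operating_cost               # 3.6x

print(f"Hospital costs avoided: ${savings:,}  |  Return on investment: {roi:.1f}x")
```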
Hire a health economist consultant for two days ($1,500 to $3,000) to ensure your methodology is robust and create a formal cost-effectiveness report. This document becomes the centerpiece of government negotiations.
Total investment: $3,000 to $8,000. Output: cost-effectiveness analysis demonstrating ROI that changes government conversations from “Can we afford this?” to “Can we afford not to do this?”
Level 4: Publication and Dissemination ($2,000-$5,000)
Evidence hidden in folders is worthless. Strategic dissemination multiplies impact:
Publish in peer-reviewed journals: African Health Sciences and BMJ Global Health are accessible and respected. Open-access journals cost $1,000 to $2,500 in publication fees but ensure global accessibility. Your university partners handle the writing; you cover publication costs.
One published paper transforms your positioning. You’re no longer a vendor making claims; you’re a validated intervention cited in literature.
Create policy briefs: Distill your findings into two-page policy briefs targeting government decision-makers. Use clear language, compelling visuals, and specific policy recommendations. Design costs $300 to $500 (hire a freelance designer on Upwork); printing 500 copies costs around $200.
Distribute at Ministry of Health events, county health forums, and parliamentary hearings. Leave behind something tangible that summarizes your evidence.
Present at conferences: East African Health Summit, Africa Health Conference, and national health sector reviews all accept abstracts. Registration runs $150 to $500. Presentation slots provide credibility and networking opportunities with decision-makers.
Leverage media: Partner with health journalists to cover your results. A feature in Kenya’s Daily Nation or The Guardian’s global development section reaches audiences you couldn’t access otherwise. Work with communications professionals ($1,000 to $2,000 for press release writing and media pitching) to secure coverage.
Total investment: $2,000 to $5,000. Output: published evidence, policy engagement, and media visibility that position you as a sector leader.
Building Dashboards That Impress
When that government director or Series A investor asks for evidence, you need to present data instantly and professionally. This requires dashboards designed for decision-makers, not data analysts.
Key dashboard principles:
Lead with outcomes: The first thing visible should be your core health outcome metric. “1,247 lives saved through earlier TB diagnosis” or “43% reduction in maternal complications” immediately communicates impact.
Show trends over time: Line graphs demonstrating continuous improvement are more compelling than single data points. Even six months of data showing consistent positive trends suggests sustainability.
Include comparison points: Display your results against baseline, control facilities, and national averages on the same visualizations. When viewers see your intervention facilities performing 30% better than controls, the causal link is obvious.
Incorporate user testimonials: Alongside quantitative data, include rotating testimonials from health workers and patients. Video clips (recorded on smartphones) add emotional resonance to numbers.
Make it interactive: Allow stakeholders to filter by geography, facility type, or time period. This accommodates different interests—county officials want to see their specific facilities, investors want trends across all sites.
Brand professionally: Invest $500 to $1,000 in professional visual design. Dashboard aesthetics matter—they signal operational excellence and attention to detail.
Ensure mobile responsiveness: Decision-makers view dashboards on phones during meetings. Ensure everything displays properly on small screens.
Host dashboards publicly (with appropriate privacy protections). Your URL becomes a resource shared widely, building credibility and generating inbound interest.
From Evidence to Policy Influence
The ultimate value of evidence is policy influence. Companies that shape national health strategies and county implementation plans achieve competitive advantages that last years.
Engage in technical working groups: Most countries have digital health task forces, supply chain coordination groups, or disease-specific technical committees. Join them. Participate actively. Present your data. These forums influence national strategies and procurement frameworks.
Contribute to national strategies: When governments draft digital health strategies, supply chain master plans, or health sector strategic plans, offer input. Your evidence about what works grounds these documents in reality. When the final strategy reflects approaches you’ve validated, implementation naturally favors your solutions.
Brief parliamentary committees: Health oversight committees in national and county assemblies hold hearings on health system performance. Request opportunities to present evidence. Parliamentary interest creates political pressure for adoption.
Respond to policy consultations: Governments periodically release draft policies for public comment. Submit detailed responses supported by your evidence. This costs nothing but positions you as a thoughtful stakeholder.
Partner with advocacy organizations: Civil society groups like AMREF, IntraHealth, or local patient advocacy groups have established government relationships. Partner on advocacy campaigns supported by your data. They provide access and advocacy expertise; you provide evidence.
Transitioning from Pilot to Scale
Evidence transforms pilot successes into scaling opportunities:
Document implementation processes: Beyond outcomes, document how implementation actually worked. What training was required? How long until facilities achieved proficiency? What support was needed? What problems emerged and how were they solved?
This operational evidence addresses the “Can this scale?” question that follows every successful pilot. When you demonstrate not just that outcomes improved, but that implementation was feasible even in resource-constrained facilities, you’ve proven scalability.
Calculate economies of scale: Show how unit costs decrease with volume. If your cost per facility served is $2,000 for five facilities, $1,200 for twenty facilities, and projected at $800 for a hundred facilities, you’ve demonstrated that scale improves efficiency.
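A minimal sketch of that unit-cost calculation; the total-cost figures are back-calculated from the per-facility numbers above and would come from your own ledger in practice:

```python
# Total annual cost at each scale (illustrative, consistent with the figures above)
deployments = {5: 10_000, 20: 24_000, 100: 80_000}  # facilities -> total cost ($)

for n_facilities, total_cost in deployments.items():
    print(f"{n_facilities:>3} facilities -> ${total_cost / n_facilities:,.0f} per facility")
# 5 -> $2,000, 20 -> $1,200, 100 -> $800: fixed costs spread across more sites
```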
Map prerequisites for success: Identify minimum conditions needed for success. Do facilities need reliable electricity? Minimum staffing levels? Basic infrastructure? When you clarify prerequisites honestly, governments can target expansion to facilities meeting those conditions while developing others.
Create replication playbooks: Document your implementation process as step-by-step playbooks. “Facility Implementation Toolkit,” “County Rollout Guide,” and “Training Manual” demonstrate that scaling is operationalized, not theoretical.
These materials support government expansion decisions and become valuable assets for franchise-style scaling models.
The Ultimate Competitive Moat
In African healthtech, evidence is the ultimate competitive moat. Technology can be copied. Business models can be replicated. But evidence takes years to generate, requires methodological rigor, and creates path dependencies.
When you’ve published peer-reviewed research, presented at international conferences, and influenced national policy with your data, competitors can’t simply copy your approach. They lack your evidence base. Government officials know and trust your data. Investors have watched you deliver on evidence-backed promises.
This defensibility compounds over time. Each new implementation adds to your evidence base. Each publication strengthens your academic credibility. Each government partnership validates your approach. The gap between you and competitors widens not through technology or marketing, but through accumulated proof that your solution works.
Meanwhile, competitors without evidence remain stuck presenting the same pitch, answering the same skeptical questions, and failing to convert pilots into partnerships.
That’s why investing in evidence generation during those early, cash-strapped days isn’t a luxury—it’s the most important product development work you’ll do. Your evidence is your fundraising tool, your government relations strategy, your competitive positioning, and your path to sustainable scale.
The $10,000 to $30,000 you invest in evidence generation will return millions in funding, contracts, and strategic value. But only if you start now, instrument everything, measure rigorously, and tell your impact story with data that no one can dispute.
Your next investor meeting, government pitch, or partnership negotiation will reveal whether you made that investment. The difference between “we think this works” and “here’s published evidence proving this works” is the difference between pilot purgatory and profitable scale.
Choose evidence. Choose scale. Choose long-term competitive advantage built on irrefutable proof that you deliver what you promise.
