Two Things Can Be True: Part 2 - Elizabeth Holmes Didn’t Lie Alone, and the System Still Can’t Listen
Settlement Checks and Data in the Dumpster
There is always more. More farts. More f-bombs. More data.
Memories
The act of doing the tedious is often when I stumble into mind-expanding wormholes. This time it was taxes. In my 2024 ledger, I saw the Theranos settlement checks—two of them. Blood money, if you’ll pardon the pun. A refund for trusting a system built on lies—but not hers alone.
In 2017, Theranos agreed to pay $4.65 million to refund over 175,000 Arizona customers who had purchased its blood tests between 2013 and 2016. The average refund? About $26 per person, reflecting the actual amount paid for the tests.
Separately, Walgreens—the retail giant that partnered with Theranos to offer those tests—settled a class-action lawsuit in 2023 for $44 million. Affected consumers received double the cost of their original tests, plus a $10 base payment. A subclass of individuals who underwent the so-called “tiny” blood draws using Theranos’s Edison devices received up to $1,000 each to settle medical battery claims.
Now comes the news that Elizabeth Holmes’s husband, Billy Evans, has raised $20 million (so far) for a new blood-testing startup—Haemanthus. What’s insane is that she still controls the narrative, even from prison, and it’s ruffling the feathers of the righteously indignant. Rightfully so—someone needs to keep people honest. But Holmes has become the convenient scapegoat for a fantasy we were all sold: that healthcare could be disrupted with marketing alone, that targeted care could be delivered with a single drop of blood, that the old guard would collapse under the weight of innovation.
But she wasn’t the only one who lied.
The people who funded her, who endorsed her, who turned their eyes away from lab results they didn’t understand—those people are still in power. Those people are funding her again. And they’re still funding a system that claims to know what’s best for our health.
So yes, I got the checks. But where did my data go?
Where the data went doesn’t even matter.
The data is incidental without the narrative. After Theranos collapsed, I moved on, as people do. I worked for another health-tech company, promising a different flavor of transformation. More metrics. More specialists. Faster triage. Same economics. Same system. Different tools that never touch the root causes. And through it all, one thing became painfully clear: these systems collect data, but they do not see people.
They say they do—with the best intentions—but they don’t. And they won’t. Because the incentives are misaligned.
Why?
Theranos’s promise of revolutionizing blood testing hinged on collecting vast amounts of personal health data. But the inaccuracy of their tests exposed the deeper risk: when sensitive health information is mishandled, the damage isn’t just financial—it’s physical, emotional, and public.
This case isn’t unique. It’s emblematic.
It’s the same limitation AI faces today. Without multimodal input and long context windows, there’s no real understanding. But dammit if the tech oligarchs won’t try—designing slick hardware and marketing campaigns to convince you that you need the latest object to commune with an all-knowing AGI. (Which, to be clear, I’m not opposed to. I’m just cursed with skepticism instead of glib optimism.)
You can’t diagnose a human with lab values alone, just like you can’t understand intent from a sentence without history, tone, or gesture. The data these companies collect is shallow, fragmented, and divorced from the lived experience of the patient. It’s efficient, scalable, profitable—but not personal.
And that’s why healthcare is still broken.
We are designing for what’s easy to measure—not what matters.
Biology Doesn’t Lie—but It Does Forget
I asked my physical therapist why missing a few sessions made me feel so weak so fast, and he said it takes 3–6 weeks to build strength in atrophied muscles—but just 6 days to start losing it again. Six days. That stuck with me, not just as biology, but as metaphor: our bodies aren’t static systems—they adapt, regress, regenerate, and respond to context, memory, trauma, and care. Predictive models that treat people like machines miss this entirely. It's not just bad math—it’s a structural misunderstanding of what it means to be human.
So let’s talk about that one drop of blood. The idea that you can diagnose everything from it isn’t just bad science—it’s seductive fiction. Elizabeth Holmes didn’t invent the healthcare system’s flaws; she exploited them. She knew we crave simplicity, certainty, and control in a system that offers very little of it. But the body doesn’t play by those rules. Biology is always in motion, shaped by stress, sleep, food, memory, and more. Without real-time, continuous monitoring, that one drop is a snapshot, not a story. Pretending otherwise doesn’t just mislead investors—it distorts care, diverting resources from real, pressing health crises in favor of magical thinking.
The question I can’t help but ask investors backing health tech is simple: has healthcare gotten quantifiably better with technology?
Here’s what I usually hear:
Yes, healthcare has gotten better with technology—when applied thoughtfully.
Diagnostics: Imaging (MRI, CT, PET scans), genetic testing, and wearable sensors have dramatically improved our ability to detect disease early and monitor chronic conditions in real time.
Treatment precision: From robotic surgery to targeted cancer therapies, tech has enabled more accurate, less invasive, and more effective treatments.
Access and convenience: Telehealth, e-prescriptions, and patient portals have made care more accessible—particularly for people in remote or underserved areas.
Data interoperability (when done right): Shared electronic health records (EHRs) can prevent errors, streamline treatment, and enhance coordination between providers.
Preventive insights: Algorithms trained on large datasets can flag patterns and suggest early interventions—for some.
But no, technology hasn't made healthcare better for everyone—or even for most.
Cost hasn't gone down. In many cases, it’s gone up. Fancy diagnostics and biotech can be prohibitively expensive and are often passed on to patients.
Equity has suffered. Tech-driven models often assume digital literacy, language proficiency, and access—leaving out many who can’t or don’t use the latest devices or apps.
Care has become fragmented. More tools = more silos. Many systems still don’t talk to each other, and patients fall through the cracks.
People feel less seen. The more screens between doctors and patients, the more transactional the care can feel. Empathy can’t be templated.
Data without context = noise. Most health tech companies are built to scale, not to listen. They optimize around what can be measured, not what matters.
So, has it gotten better?
Technology can enhance healthcare—but only when it reinforces relationships, not replaces them. It needs to support caregivers, not just quantify care. And most importantly, it must be designed with—not just for—the people it aims to serve.
The False Promise of Predictive Care
Predictive care sounds good—until you realize what it's actually built on. Past behavior. Incomplete data. Statistically smoothed assumptions. If your health history isn’t rich with detail (most of us fall into this camp), if your care network doesn't speak your language, if your pain isn’t "normal" by the model's definition, then what, exactly, are we predicting?
That you’ll be underserved?
That you’ll fall through the cracks?
That’s not care. That’s data triage.
And here’s where data sovereignty becomes critical. We’re generating the raw material—our blood, our genomes, our movement, our stress levels—but we don’t own it. We don’t control how it’s stored, combined, resold, or modeled. Companies do. Governments might. Researchers sometimes. You? Rarely.
And in that disconnect lies the danger. Your health data may predict a condition you’ll never develop. It may trigger preventive action that doesn’t apply to your life. Worse, it may inform a policy that excludes people like you entirely.
The promise of predictive healthcare without patient agency is a surveillance system, not a service. And luckily, our surveillance systems are mostly used for one thing: ad targeting. Not healing. Not equity. Not even insight. Just more noise—pushed at you in the form of wellness products, supplements, insurance upcharges, or biohacked promises you didn’t ask for. It’s a feedback loop that pretends to know you, but only knows how to monetize your patterns.
The real question isn’t can they model us? It’s should they—if they won’t listen to us first?
The 23andMe Paradox
The next news flash to emerge that same week was the 23andMe acquisition. I read the headlines with a sinking familiarity: another company with massive datasets handed off to a pharmaceutical giant. Millions of us offered our DNA in good faith—for ancestry reports and medical insights. Now, that information sits on a Regeneron server, destined to be parsed by siloed specialists who will use it to develop generalized drugs for “targeted care.”
In May 2025, Regeneron Pharmaceuticals announced its acquisition of 23andMe for $256 million, following 23andMe's Chapter 11 bankruptcy filing. This acquisition includes access to the genetic data of over 15 million users, raising significant privacy concerns. Regeneron has pledged to uphold 23andMe's existing privacy policies and comply with legal safeguards concerning sensitive information.
However, the acquisition has sparked public and regulatory scrutiny. Senator Ron Wyden labeled the situation as "one of the biggest threats to Americans' personal data in decades," urging 23andMe customers to delete their data to protect it from potential misuse.
What exactly are they targeting?
I got the warnings to delete my data from their portal. Thought to myself, Well, shit, I better listen to the lemmings. But I got distracted by pickup duty and forgot. Then I found out the deal was done—and Regeneron is now my new genomic data daycare.
Maybe I rationalized it. They don’t know my lifestyle, my diet, my daily stress, my trauma history, my race, gender identity, or my medical mistrust, I thought. It’s probably harmless. But they do know a genotype. And while that’s not care—it’s more triangulation—they can still target me with some other marketing noise that would leave me none the wiser.
It’s not that they don’t have the tools. It’s that they don’t ask the right questions.
Or maybe they don’t think they need to.
There’s that recurring omnipotent power theme again. Oh, the patriarchy.
Systemic Myopia
This is the paradox we’re stuck in: we say we want personalized medicine, but we keep funding scalable, general-purpose solutions. We treat healthcare like software—find a pain point, build a point solution, ship it. But the human body isn’t a use case. It’s a context.
Healthcare professionals are trained in silos. Tech is built in silos. Data is analyzed in silos. No wonder the output is so often alienating. The more data we collect, the less we seem to understand the person it came from.
The Healthcare Paradigm: Treating Symptoms vs. Personalized Care
The Theranos saga highlights a broader issue in healthcare: the tendency to focus on treating symptoms rather than providing personalized, patient-centered care. This approach often overlooks individual patient needs, leading to generalized treatments that may not be effective for everyone. The allure of rapid, one-size-fits-all solutions, as promised by Theranos, can detract from the nuanced understanding required for effective healthcare delivery.
The Paradox of Innovation and Systemic Change
While technological advancements hold the promise of improving healthcare, they can also lead to deeper systemic issues if not implemented thoughtfully. The Theranos case demonstrates how innovation, when driven by profit motives without adequate oversight, can exacerbate existing problems rather than solve them. True systemic change requires a commitment to equity, transparency, and a deep understanding of the communities served.
Garry Tan’s Call for Founders with a Design Background
Another bubble that emerged was Garry Tan’s supposed call to action: he’s looking for design founders. The truth is, founders of an idea and designers aren’t all that different if they’re just chasing revenue. And when you’re beholden to a board pressuring you for profitability and a 10x ROI, you behave exactly like the incumbents you claim to disrupt. Shake it like a Polaroid picture all you want—if the incentives are misaligned, you get the same behavior. Same complicity. The theater just looks newer.
We need to think differently—and dasein in the possibilities here. What if we rebranded “design thinking” into functional principles, modeled after the body’s systems? What if we hired not based on pedigree, but on a person’s natural propensity for inquiry? I’ve heard the phrases “they need more teeth” or “they lack backbone”—a corporate dialect for control and conformity. But what does that mean when someone thrives outside the theater of performative leadership? These statements are more often indicators of someone else's insecurity, projected as critique. And it hit me—maybe I drank the Kool-Aid by taking the job, but I never swallowed.
What good are designers who end up designing themselves into a box?
Reflections on the Healthcare Paradigm
My observation about the healthcare industry’s tendency to treat symptoms rather than provide personalized care is not unique. I know people are frustrated even as they are complicit. These are the two things that are true. We are all both of those things, because there aren’t other choices for most of us. The current system often prioritizes scalable solutions over individualized treatment, leading to a paradox where the pursuit of innovation inadvertently creates deeper systemic issues.
The people in power, often removed from the day-to-day realities of patient care, may implement solutions that fail to address the nuanced needs of individuals. This disconnect can result in a cycle where systemic change is promised but not effectively realized, perpetuating inequities in healthcare delivery.
And those inequities are not accidental. Health disparities are not unfortunate outcomes—they are designed into the structure. Structural racism, economic inequality, inaccessible language, and geographic exclusion are embedded in the algorithms, incentives, and institutions. If your community has never been a data priority, don’t be surprised when it isn’t a policy priority either.
Meanwhile, the caregivers themselves—the nurses, the aides, the support staff—are burning out. Not just from overwork, but from the moral injury of watching care degrade into a billing code. When healing becomes transactional, the soul of medicine erodes. You can’t build a human-centered system by alienating the humans who hold it together.
A Forbes article recently broke down AI adoption into philosophical terms that can be broadly applied to all operations, including healthcare:
“Teleology: What is the AI actually trying to accomplish?
Ontology: How does the system define its world?
Epistemology: What knowledge informs these definitions?”
It’s elegant—until you realize most of our healthcare models don’t even pass these filters. We’re training AI on ontologies that erase the individual. We’re building systems with teleological confusion. We’re accelerating, not aiming. And when AI needs defined buckets to function, we force people into them.
We didn’t fix the problem. We just made it faster. Like Wonder Woman spinning so fast she opens a wormhole. The only question is whether we’re traveling through time, or just dizzy.
What I Learned From the Check
When I held that check in my hand, I didn’t feel vindicated. I felt implicated. I had believed. Not just in her, but in the premise—that we could hack care. That the system would self-correct. That if we fed it enough data, it would finally see us.
It doesn’t. It sees a trend line.
A Call for Equitable Systemic Change
We must reimagine healthcare not as a product, but as a relationship—one that honors memory, context, nuance. One that listens before it predicts. That asks before it acts.
To move forward, we need systems designed around care, not control. Around inquiry, not compliance. Around equity, not efficiency. If we are the raw material, we must also be the authors.
Remember, the real question isn’t: Can they model us?
It’s: Should they—if they won’t listen first?
On Sunny Balwani
I once asked Sunny why the lab director said a null result on an estrogen test should only occur in 1% of the population. I asked, “1% of what population?” He told me I asked too many questions. I said, “If I don’t ask, it’s a disservice to the end users.” He replied, “I am your end user.”
The flags were there. I stayed anyway—until I had to shut down the LA office.
The problems weren’t isolated to Theranos. They’re structural. And they still exist.
If there’s going to be meaningful innovation in healthcare, it won’t come from VC funding, because the incentives haven’t changed. It will be bottom-up—led by people who ask too many questions, who refuse to accept misaligned incentives as a given, who are tired of designing themselves into boxes.
Conclusion
What Elizabeth Holmes did was dishonest. But what the system is doing now is negligent. Because we know better—and we still build worse.