Health economist Jane Sarasohn-Kahn breaks down the recent report from Bertelsmann Stiftung and explores the implications of digital tech on health equity and ethics.
By Jane Sarasohn-Kahn, MA, MHSA
A recent study, Tech Giants in Healthcare, from Bertelsmann Stiftung, an independent foundation in Germany, explores how global technology companies are entering the healthcare sector. The study provides a comprehensive landscape of 16 of the world’s largest technology companies and looks at the advantages and risks associated with technology companies playing a role in healthcare.
The first half of the nearly 200-page report discusses the work these so-called “tech giants” do in the healthcare ecosystem, focusing first on the five corporate titans behind the acronym “GAFAM”—that is, Google (parent company Alphabet), Apple, Meta (formerly Facebook, supplying the “F”), Amazon and Microsoft. The study also examines the healthcare activities of Samsung, Huawei, Alibaba, Sony, Intel, Tencent, Siemens, IBM, SAP, Philips and Nvidia (listed in order of 2020/21 revenue).
The authors identified these companies on the basis of their scale and their commitments to and investments in healthcare: per the companies’ own accounts, they are “pursuing their own economic interests” while, ultimately, leveraging digital innovations to improve healthcare for individual and public health.
The report explores four areas of digital health: health-related products for patients and users; health-related products for healthcare professionals; science, research and development; and healthcare system projects for hospitals, insurance plans and other industry segments. Specific applications addressed include wearables and apps, virtual systems and avatars, augmented and virtual reality, cloud computing, blockchain, robotics, and mobility and logistics, among others.
That summarizes the first half of the study, which is well worth your read.
But it’s the second half of this research paper that resonated so strongly in the context of the current state of U.S. healthcare, health policy and politics, economic downturn, and social schisms: the “Ethical Analysis.”
Lessons on Digital Health Ethics From Dr. Topol
“Health technologies affect fundamental values such as life, health, liberty and justice,” the introduction to the Ethical Analysis begins. “A clinical decision support system that uses AI could help improve the evidence basis for a treatment decision in a case of cancer, and thus significantly prolong the patient’s life. However, if used unreflectively or with bad underlying data, the same tool could also lead to treatment decisions that endanger the patient’s life.” The introduction links this caution to The Topol Review.
The UK’s National Health Service (NHS) commissioned Dr. Eric Topol, the founder and director of The Scripps Research Translational Institute in San Diego, California, to advise the world’s largest publicly funded health system on how best to prepare and organize to make the most of digital technologies. Dr. Topol and his team called out ethical considerations early in the Review, identifying several priorities for the NHS to attend to: patient safety, data governance, equality and fairness, and respect for human dignity.
The Eight Ethical Frames
For their Ethical Analysis, the Tech Giants in Healthcare authors examined eight values and principles “enshrined” in the German national Data Ethics Commission report, which synthesized rights and freedoms from several European Union (EU) documents on human rights. The eight pillars, each discussed below, are:
- Human dignity
- Freedom and self-determination
- The right to health
- Privacy
- Security
- Sustainability
- Democracy
- Justice and solidarity
While these principles emanate from Europe’s socio-political structure, each is relevant to the current environment and challenges facing U.S. healthcare. Taken together, they can provide a template for an ethics-by-design model that bolsters health equity and access for all U.S. health citizens.
Human dignity, the report notes, is “the first, most fundamental principle,” based on EU documents as well as the UN Declaration on Human Rights. For equity-by-design principles, technology should serve humans rather than humans being subservient to technology.
Freedom and Self-determination
Freedom and self-determination are intertwined. In the digital world, this translates to digital self-determination that enables people to decide on the disclosure and use of their personal data, guided by their own values for how to manage it. While EU health citizens are covered by the General Data Protection Regulation (GDPR), which protects people’s digital self-determination across all of their personal information, U.S. health citizens lack that comprehensive privacy blanket at the national level. Instead, states are taking their own approaches to covering residents, notably California, Colorado, Connecticut, Virginia and Utah (with legislation pending in five other states as of July 2022).
The World Health Organization’s Constitution, first adopted in 1946, states that all people have the right to health—essential for attaining peace and security—without distinction of race, religion, political belief, economic or social condition. With this proviso, WHO was calling out the social determinants of health and their direct role in shaping people’s overall well-being.
The right to privacy, the Ethical Analysis explains, is intertwined with human dignity and self-determination. Simply put, this is the right to control who can access which pieces of personal information a person chooses to share. “According to the conception of privacy stemming from the so-called Age of Enlightenment still valid today, every person simply has the right ‘to be let alone,’” the report notes, invoking the phrase Warren and Brandeis made famous in 1890 in their landmark Harvard Law Review article, “The Right to Privacy.” 132 years later, the essay feels incredibly current.
Security encompasses physical, emotional and environmental safety. It also pertains to the protection of privacy in the collection and use of data, as well as to human/machine interaction and system resilience against attacks and misuse (i.e., cybersecurity).
Digital developments can also serve sustainability by contributing toward ecological, economic and social sustainability goals, helping to make natural resource use more efficient and equitable to protect future generations.
Democracy depends on open societal debate. “Digital technologies are of systemic relevance to the flourishing of democracy,” the report asserts. “They make it possible to shape new forms of political participation, but they also foster the emergence of threats such as manipulation and radicalization.” In ethics-by-design, digital health developers should be clear-eyed about safeguarding people’s fundamental rights and constitutional protections, while eliminating digital divides between different population groups.
Justice and Solidarity
Finally, the principle of justice and solidarity holds that “digitalization [should] foster participation in society and thereby promote social cohesion.” In legal terms, the report explains, the principle of equality prohibits treating equal entities unequally. Solidarity can bring a sense of belonging, or create a group that stands up for a common goal (such as a public health goal to beat a pandemic).
For digital healthcare, technology can help strengthen solidarity, but can also undermine and weaken it, the authors warn.
From Solidarity to Community Via Digital Tech
“The COVID-19 pandemic’s societal, economic, environmental, and political ripple effects deeply impacted—and continue to impact—consumers’ mental health,” the 2022 Health Care Insights Study from CVS Health observed.
CVS Health’s consumer research, published in July 2022, found that over the prior year most Americans experienced stress from the political climate, evolving public health measures during the pandemic, societal issues like civil rights and climate change, and concerns about personal and neighborhood safety.
People drew a direct connection between their social and personal lives and their overall health and well-being. Nearly 1 in 2 people in the study said they were not as socially connected as they were prior to the pandemic.
Technology can surely help connect people, as many learned through Zoom and FaceTime meetups during COVID-19 pandemic lockdowns and quarantine periods. As an example, this article from AARP coached older people onto videoconference platforms.
Wunderman Thompson’s report on Inclusion’s Next Wave talks about “meta-inclusion,” noting that the metaverse is primed for ultra-connective engagement. In that discussion, Christina Mallon, director of inclusive design at Microsoft, notes that as much as this is an opportunity, “it really falls on the companies that want to partake in the metaverse that they do the research and put the time in to ensure that it’s accessible.”
The Next Digital Health Ethical Challenge: The Metaverse
Accenture’s annual 2022 Digital Health Technology Vision Report is all about the “metaverse continuum” for healthcare. Accenture found that virtually every healthcare executive in its study believed that “continuous advances in technology” were becoming more reliable than economic, political or social trends for informing their organizations’ long-term strategy.
Citi joined Accenture’s bullish forecast earlier this year with its Metaverse and Money report, discussing various use cases for healthcare and citing clinical trials, training, augmented surgical procedures, pain management and other promising applications.
The National Law Review detailed ethical and legal issues prompted by these optimistic expectations for the metaverse in healthcare in a May 2022 assessment by a legal team from Sheppard, Mullin, Richter & Hampton LLP.
“Underlying the metaverse is a potential massive collection of user data. As users ‘exist’ in the metaverse as avatars performing activities, various types of data, including some that may be deemed ‘personal’ or ‘sensitive’ (by law), may be generated,” the legal team warned.
They set out three legal considerations regarding the metaverse market for digital health:
- Data privacy and security, recognizing the comprehensive laws in the five states noted above, along with the FTC’s increasing interest in health information that is outside of the Department of Health and Human Services’ purview. The lawyers also caution that healthcare providers that are subject to HIPAA may find themselves navigating new issues in an evolving technology space.
- Medical device regulations, the province of the FDA, which has been clarifying its role for applications and technologies that could be classified as medical devices in the metaverse. The FDA is expanding its guidance for medical devices that embed AI, machine learning, and extended reality in healthcare.
- Healthcare laws, including questions of how HIPAA applies, how healthcare providers are licensed, and how fraud and abuse laws (such as the Stark Law) would operate in the metaverse.
The metaverse might be the next-new-thing in healthcare in the third decade of the 21st century, but we can turn back to Warren and Brandeis on the right to privacy from 1890. They presciently wrote that, “Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society.”
With the emergence of promising digital health tools and applications come new life-flows and consumer demands—increasingly data-driven. This calls for collaboration between private- and public-sector stakeholders, together forging a more equitable future for all. Equity-by-design should be our template and True North if we wish to live and operate in a society that’s fair to all health citizens.