Can we use technology to fight COVID-19 without surrendering our privacy?
By Ben Ruxin, MBA’21 & CASI student leader
As the world rushes to address the challenges of Covid-19, many leaders from both the public and private sectors are calling for the use of surveillance technologies to enable effective response to the pandemic. Apple and Google have announced a partnership to build “privacy-preserving” contact tracing technology. Some countries, including Singapore, have pushed their citizens to download contact tracing apps to their smartphones, to help public health officials track down disease clusters before they become outbreaks. In the midst of this early adoption, however, privacy experts and civil liberties advocates are raising concerns about exactly how this technology will be built and governed—especially after the pandemic has ended.
On May 19, 2020, the Corporations & Society Initiative and the Center for Internet & Society convened a panel of experts to talk about the trade-offs associated with surveillance technology in the context of Covid-19. Our panelists included:
- Inder Singh, CEO of Kinsa
- Dr. Doug Fridsma, former President and CEO of the American Medical Informatics Association
- Jon Callas, Senior Technology Fellow at the ACLU
- Gretchen Greene, Senior Advisor at the Hastings Center
Al Gidari, Director of Privacy at the Center for Internet & Society, moderated the discussion.
Below is our summary of the key topics and ideas that they discussed, along with some of the open questions that they are still trying to answer.
There are a few possible applications of data collection technology that could help constrain the spread of Covid-19—and they don’t have to be perfect to work.
Public health experts agree that fast, accurate detection of the virus in a community is essential to keeping an outbreak from spiraling out of control. Many experts—including our panelists—see tremendous potential in applying technology to contact tracing, which today is a painstaking, labor-intensive manual process. The scale of data collection and sharing that smartphones and Bluetooth enable could dramatically speed up the identification of potential Covid-19 cases. Even with limited adoption, contact tracing apps can help communities reduce their R0, the critical measure of how many other people each infected person goes on to infect.
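To see why the panelists argue these apps "don't have to be perfect to work," consider a back-of-the-envelope sketch of the effective reproduction number under partial adoption. All values here—the baseline R0, adoption rates, and quarantine efficacy—are illustrative assumptions, not figures from the panel, and the model is deliberately crude (it assumes both parties in a contact must run the app for a transmission chain to be interrupted):

```python
# Rough sketch: how partial contact-tracing adoption might lower the
# effective reproduction number. All parameter values are illustrative.

def effective_r(r0: float, adoption: float, tracing_efficacy: float) -> float:
    """Both the infected person and their contact must run the app, so the
    fraction of transmissions that can be interrupted scales roughly with
    adoption squared, discounted by how often tracing actually prevents
    onward spread."""
    blocked = adoption ** 2 * tracing_efficacy
    return r0 * (1 - blocked)

if __name__ == "__main__":
    r0 = 2.5  # assumed baseline reproduction number
    for adoption in (0.2, 0.4, 0.6, 0.8):
        r_eff = effective_r(r0, adoption, tracing_efficacy=0.8)
        print(f"{adoption:.0%} adoption -> R_eff ~ {r_eff:.2f}")
```

Even modest reductions matter: any push of R below its baseline slows exponential growth, and combined with other measures (masks, distancing) an imperfect app can still help tip a community under the critical threshold of R = 1.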
Contact tracing, a hot button topic of the pandemic, is not the only public health application that could benefit from the use of technology. Allocation of health resources is also a critical component of pandemic response, and being able to predict when and where the next outbreak will occur can help officials better distribute healthcare workers and equipment where they’re needed most.
Inder Singh talked about the power of early detection and shared some of the work his team at Kinsa is doing to curb the spread of infectious illness through earlier detection and response. Data from Kinsa’s 1.5 million smart thermometers can currently predict flu incidence 12 to 20 weeks in advance on a city-by-city basis, while the CDC can still only predict flu incidence at a multi-state level two to three weeks in advance. The company is already working to make similar predictions about Covid-19 outbreaks by looking for unusual fever patterns in its thermometer data. Given researchers’ recent findings that shutting down key outbreak locations even two weeks earlier could have saved tens of thousands of lives, Singh’s goal of early detection looks like a critical way to mitigate the damage caused by the pandemic.
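The outsized value of those two weeks falls out of simple exponential arithmetic. The sketch below is illustrative only—the three-day doubling time is an assumption for the sake of the example, not a figure cited by the panelists or the researchers:

```python
# Illustrative sketch of why acting earlier matters so much during
# exponential growth. The doubling time is an assumed value.

def cases_after(initial_cases: float, days: float, doubling_time_days: float) -> float:
    """Unchecked exponential growth: cases double every doubling_time_days."""
    return initial_cases * 2 ** (days / doubling_time_days)

if __name__ == "__main__":
    doubling = 3.0  # assumed doubling time in days
    now = cases_after(100, days=0, doubling_time_days=doubling)
    later = cases_after(100, days=14, doubling_time_days=doubling)
    print(f"Waiting two weeks means a ~{later / now:.0f}x larger outbreak at intervention time")
```

Under these assumptions, intervening two weeks later means facing an outbreak roughly 25 times larger, which is why early-warning signals like anomalous fever data can be so valuable.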
Data collection for pandemic response carries big risks, including reducing data privacy, building false confidence, and promoting structural inequities within our healthcare system.
While many of our panelists expressed hope for the promise of technology to facilitate the public health response to Covid-19, they were also emphatic about the risks that data collection and “surveillance” technology pose. The obvious risk is that sensitive data—personally identifiable individual health and location data—could be stolen or misused by bad actors and institutions with limited oversight.
In addition to these data privacy concerns, our panelists raised several other risks that come with leveraging technology for public health. One of the biggest is that data might be misinterpreted or used to create the wrong incentives. Jon Callas warned that where technology becomes a gatekeeper for the ability to work, people may be tempted to “cheat” it—for example, by leaving their contact-tracing phone at home when going out to meet friends. Additionally, the false sense of security that faulty or inaccurate technology can create may lead to an increase in preventable infections.
The other critical issue that came up during our panelists’ conversation was equity. Not every community in the United States has access to technology—19% of Americans don’t own a smartphone, and 27% of Americans don’t have access to the internet at home. Marginalized communities are already disproportionately impacted by Covid-19, due to the existing structural inequities in American society. A technology-forward pandemic response might risk leaving certain communities behind.
Greene, however, drew a powerful comparison between using contact tracing applications and wearing a face mask—both are as much about protecting the people around you as about protecting yourself from the disease. If technology can help flatten the overall curve of the disease, then even those without direct access to it stand to benefit. Fridsma and Singh also emphasized the importance of meeting people where they are, rather than making communities—particularly marginalized communities—come to the technology or the healthcare system.
The pandemic may make more surveillance tolerable, but panelists agreed it must be opt-in and temporary.
Changing circumstances require changes to how societies evaluate privacy. As Greene pointed out, “society changed forever after 9/11,” particularly with regard to security and privacy. Coronavirus represents a similarly transformative moment, and the panelists agreed that two principles must underlie any new use of surveillance technology as a public health tool: 1) these tools must be opt-in, and 2) they must be rolled back after the crisis passes.
These technologies must be opt-in for both cultural and efficacy reasons. Culturally, Americans’ values of freedom and autonomy will produce strong pushback against national surveillance requirements. As Singh put it bluntly, “If you go top down in a culture like the US, it’s just not going to work.” That backlash would in turn undermine the efficacy of a mandated solution, pushing people to subvert the government instead of voluntarily participating in public health measures.
Many security measures enacted after 9/11 have never been rolled back; in fact, just this March the Senate voted to reauthorize the Patriot Act. To avoid repeating that history, Greene stressed the need for a framework to roll back increased surveillance once the coronavirus crisis is contained.
Overall, panelists agreed that technology is a tool to augment public health, but not to replace it—and also not to replace the social safety net.
In a crisis like the one we are facing today, there is temptation to search for silver bullets. However, this is the wrong way to think about technology solutions. Technology instead should be applied as part of a much deeper and more comprehensive solution to the problems that coronavirus has laid bare, from fighting the virus, to bringing essential services like food and healthcare to those in need. Callas summed this sentiment up, pointing out that, “Technology isn’t a substitute for having a good social system.”