Dr. Shobita Parthasarathy is a professor and director of the Science, Technology, and Public Policy program at the University of Michigan, Ann Arbor. She has also written two books: Building Genetic Medicine: Breast Cancer, Technology, and the Comparative Politics of Health Care (MIT Press 2007; paperback 2012), which influenced the 2013 U.S. Supreme Court case challenging the patentability of human genes; and Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017) which won the Robert K. Merton Prize from the American Sociological Association. She is also the co-host of The Received Wisdom podcast with Jack Stilgoe, talking to experts from around the world about governing science and technology to make the world a better place.
Her research is wide-ranging, spanning women and gender, the comparative politics of science and technology, and policy innovation for a more socially just tech ecosphere.
Recently, Carnegie Mellon University announced it was thinking of allowing its police to deploy facial recognition technology to combat crime. Why is this such a bad thing?
In 2020 my research team published a report about the use of facial recognition technology in K-12 schools. At the time there were one or two schools considering it, but that number has been climbing rapidly amid school shootings and other problems. There has been a lot of literature about why facial recognition is not accurate, or at least less accurate, for anyone who isn't a white, middle-aged man.
One of the things we noticed is that it is also less accurate for younger people who are still growing and changing; you can imagine the same for transgender or disabled people. The technology is trained on actual data, and that data tends to be homogeneous, so it's less accurate on other populations.
The issue is that it's not just about the AI; the technology exists in a socio-technical system. What matters is how facial recognition technology is used, when it's deployed, what its implications are, what its history is, and so on.
Do you think it takes away from the mission of an educational institution?
One of the things we've seen with facial recognition technology is that it tends to be used disproportionately against already marginalized communities and to pathologize their behaviors: they increasingly become the object of scrutiny, and then their behaviors become criminalized. Universities are places where students are learning how to become adults, become independent, figure out who they are, and exercise their First Amendment rights.
I can easily imagine students engaging in behavior that is simultaneously about learning to be independent and somehow a challenge to the university, and the university then making a policy disallowing that behavior. Using this technology is about surveillance, but it also becomes about interfering with the development of young people.
Is it just a matter of needing better data? Or should we abolish the use of facial recognition altogether?
I think there are more than two sides. There is the camp that says the technology just needs to be used within an appropriate regulatory framework. There is certainly another camp that thinks we just need better, more inclusive data, that there is a technical fix that would make it acceptable to deploy. The third camp reflects an overwhelming sense that this is a good technology that will help reduce crime, which will in turn improve economic growth. One instance that comes to mind is Project Green Light in Detroit.
It's also a procurement issue, not necessarily a political one. If a pharmaceutical rep comes by to sell a medicine that promises relief or a cure, hospitals will buy it. Like office supplies, it's a line item in a budget. And the people making those purchases don't necessarily know about the debate surrounding the technology; they are listening to the developers and wanting to keep their communities safe.
Changing gears, what are the dangers involving our data privacy in this post-Roe world?
The loss of the right to privacy over our bodies also can mean the loss of privacy over our data.
There has been a lot written lately about figuring out whether someone is considering an abortion through period-tracking apps, but that's a red herring. You don't need access to that kind of app data to track it. As much as people try to say no to cookies, use safer browsers, and have security measures in place, we put our data out there every single day for companies that do not have a strong track record of maintaining our privacy.
Those companies then sell the data to other companies, so a subpoena could land anywhere along that 'supply chain.' In other words, there are lots of ways our data can be used against us even without period-tracking apps. There have already been cases where search histories for abortion drugs or clinics have led to women being prosecuted for seeking or getting an abortion in certain Southern states.
There are also what are called geofencing warrants, which law enforcement officials can use to track the location of those who are suspected of seeking an abortion across state lines. The question becomes: to what extent is that breaking state law?
What do you think of this line of thinking from the New York Times 'On Tech' newsletter? "On the other hand, it is an example of people bypassing elected officials and instead looking to powerful tech companies to address their anxieties about law, policy and accountability."
You could have been quoting me there! I don't know why this is; maybe because it's easy to hate or blame industry, and because of the way our political economy is set up. It's definitely handing over a lot of power to companies and almost erasing the state and elected officials as regulators.
I'm old enough to remember when Google started saying it wasn't evil, but I don't care whether a company is good or evil. It's not Google's job to provide public benefit, regardless of what it says, and we shouldn't be looking to it for that. Even if companies behave badly, they are acting naturally as companies; we should approach that with a heavy dose of skepticism and look toward the institutions that are tasked with providing public goods and maintaining the public interest.
It's a consistent problem in tech policy to focus on the 'bad apples'; that hands over power to industry to say any given incident was just an isolated situation.
What is your ideal version of a tech ecosystem?
This is a tough one to keep short! I think it has to be deeply democratic and democratically accountable. So many of the decisions now are being made by a small handful of extremely powerful companies, and there's really no proper public engagement. The second piece is that there are a lot of humanists and social scientists with a deep understanding of tech and the internet who are almost always passed over in conversations about what regulation and the internet should look like. Because tech is essentially running the show now, companies can create an ethics system that they then control.
The third thing is that the needs of marginalized communities have to be at the center of what technology and the internet should look like. Tech companies design for who they assume their user is and then add on appendages for everyone else; if marginalized communities are not explicitly at the center, their needs will be forgotten, because they are not the assumed user or the 'norm.'