From the Field

Charting an Ethical Course for Technology’s Future

A philosopher, a policy expert, and a computer scientist walked into a classroom, and what emerged was a distinctive approach to teaching the ethical and public policy concerns around technology. The course—Ethics, Public Policy and Technological Change—has moved beyond the Stanford University campus to classes for Silicon Valley professionals, resources for other universities, and a new book released this month, System Error: Where Big Tech Went Wrong and How We Can Reboot. FutureEd associate director Phyllis Jordan spoke with the three Stanford professors: ethics scholar and FutureEd research advisor Rob Reich; computer science professor Mehran Sahami; and political scientist Jeremy M. Weinstein.

You’ve hit on some of the biggest tech issues of the day in your course and your book, examining facial recognition, social media algorithms, and autonomous cars through different lenses. What is it you hope to accomplish by bringing your expertise together?

Rob Reich: We wanted to try to bring about a cultural shift on campus, and then perhaps even more broadly in Silicon Valley, by ensuring that the enormous flow of students who come through Stanford University and major in computer science—the largest major on campus for both men and women—are equipped with the social and ethical frameworks for thinking about the important decisions they make. This is not only for those who will fill the hallways of tech companies but also for people who go off into policy work or civil society organizations, so that they have some of the technical skills that allow them to understand the transformations wrought by Silicon Valley.

Jeremy Weinstein: The three perspectives, when brought together, speak to the essential elements of governing a society being transformed by technology. The first element you have to understand is the technology itself: that it’s not value-neutral, that encoded in the design of technologies is the prioritization and choice of some values over others. And then you need to recognize that, given that technology isn’t value-neutral, it matters what you value, and that it’s important to engage in conversations about what you value and which values are in tension with one another. And then the social science and policy component is to recognize that your values may be different from other people’s values. And when you begin to grapple with that question, it raises the obvious issue: how do we reconcile these disagreements that we have about our values? And that’s what brings us to democracy. That’s what brings us to a discussion of the politics of the technology revolution.

How did you conceive this? Did each of you in your own discipline see the need for this course?

Mehran Sahami: There’s a bit of a history: Stanford has had a class in technology and ethics for about 30 years. But there has been a transformation in roughly the last 15 years, a shift in student dynamics and major choices. The number of computer science majors has increased about fourfold at Stanford, a trend we see nationwide. And there are a lot of students who are drawn to a particular field, some by intellectual curiosity but some also by potential financial reward. And when you would ask why someone is going into the field, they would say things like, “I want to do a startup company.” And you would say, “Well, why? What do you want to do? What’s the problem you want to solve?” And there wasn’t one. Their only interest was starting a company for the sake of starting a company. They need to understand that if they’re going to get into technology that’s going to have an impact on people’s lives, they need a deeper understanding of what that impact will be.

Weinstein: What we’ve also seen in the last five or 10 years is that the bloom is off the rose of Silicon Valley. Whereas technologists once possessed a kind of techno-utopianism, a sense that we’re going to solve all the world’s problems, we’ve in fact found that technology has created a number of problems for us. We saw the impact on the 2016 election with misinformation. We’re still seeing that now with COVID. We’re seeing the impacts of algorithms that are making potentially race- or gender-biased decisions on things like who gets health care or who is granted bail in the criminal justice system. These are issues that we need to have a broader conversation about.

How do you teach these issues?

Reich: We have custom-written case studies, for which we hired journalists who conducted interviews and created materials for us. To explore algorithmic decision-making, we use the well-documented problems with algorithms deployed in the criminal justice system that help decide who gets released on bail and who doesn’t. We invite students to explore a recidivism-prediction algorithm and to weigh what it would mean to make the predictions simultaneously fair and as accurate as possible. For our unit on data collection and privacy, we let students see the data Stanford actually collects about them—a staggering amount—and consider the implications. For autonomous systems, we ask students to develop a regulatory framework for a hypothetical shift to driverless campus shuttles. And for the unit on free speech and private platforms, we have students experiment with an algorithm for a large social network. We also bring in speakers from Silicon Valley businesses working with these technologies.
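The course’s actual recidivism exercise isn’t reproduced here, but a minimal sketch with invented numbers can make the fairness-versus-accuracy tension Reich describes concrete: a single risk-score threshold that maximizes overall accuracy can still flag one group’s non-reoffenders far more often than the other’s, and equalizing those false-positive rates costs accuracy. All records, group labels, and thresholds below are hypothetical.

```python
# Illustrative sketch only (not from the course): invented risk scores and
# outcomes for two hypothetical groups, A and B. Each record is
# (group, risk_score, reoffended), where reoffended is 1 or 0.
records = [
    ("A", 0.55, 0), ("A", 0.65, 0), ("A", 0.75, 0),
    ("A", 0.60, 1), ("A", 0.70, 1), ("A", 0.80, 1), ("A", 0.90, 1),
    ("B", 0.15, 0), ("B", 0.25, 0), ("B", 0.35, 0),
    ("B", 0.60, 1), ("B", 0.70, 1),
]

def evaluate(threshold):
    """Flag anyone at or above the threshold as 'high risk'; return overall
    accuracy and each group's false-positive rate (non-reoffenders flagged)."""
    correct = 0
    false_pos = {"A": 0, "B": 0}
    non_reoffenders = {"A": 0, "B": 0}
    for group, score, reoffended in records:
        flagged = score >= threshold
        if flagged == bool(reoffended):
            correct += 1
        if not reoffended:
            non_reoffenders[group] += 1
            if flagged:
                false_pos[group] += 1
    accuracy = correct / len(records)
    fpr = {g: false_pos[g] / non_reoffenders[g] for g in false_pos}
    return accuracy, fpr

# In this toy data, the most accurate threshold is also the most unequal one;
# equalizing the groups' false-positive rates requires giving up accuracy.
for t in (0.58, 0.68, 0.80):
    acc, fpr = evaluate(t)
    print(f"threshold={t:.2f}  accuracy={acc:.2f}  "
          f"FPR(A)={fpr['A']:.2f}  FPR(B)={fpr['B']:.2f}")
```

Run on these made-up numbers, overall accuracy falls from about 0.83 to 0.67 as the threshold is raised to the point where the two groups’ false-positive rates match, which is the kind of trade-off the assignment asks students to weigh.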

Are most of the students in your course computer science majors? Or do you get a decent showing from the humanities?

Sahami: It’s cross-listed in philosophy, political science, public policy, communication, and Ethics in Society, which is an honors program at Stanford. I’d say well over half come from computer science. And the rest are either from a social science or humanities background, or first-year or second-year students, so not yet declared.

What sort of assignments do you give to that mix of students, and how do different sorts of students react to them?

Reich: The integration of these three different lenses yielded, from the students’ point of view, a single class in which students are asked to do a technical assignment, a policy memo, and a philosophy paper. And it’s the only class, to the best of our knowledge, in which something like that is true.

There’s a familiar phrase that every student on Stanford’s campus knows: the divide between the “techies” and the “fuzzies.” The techies are the ones who do the engineering majors. There are problem sets and coding assignments. They work together in groups, and they’re up all night. The stereotype is that they work really hard, and there are answers that are either right or wrong. The grading is objective. And the fuzzies are the ones who do the social sciences or the humanities, where if you write an essay, it’s subjective. You toss it off the night before, and it’s not that difficult. And one of the things we encountered in this class, which we didn’t quite anticipate, was that the computer science students were, in general, scared of writing a philosophy paper and of collaborating on a policy memo. They had to write more pages than they’d likely ever been asked to write before in a computer science class. They had to read things they weren’t used to reading.

And then, of course, the inverse is what happened with the students from the philosophy or political science departments. They had to take Computer Science 106A, which is the introductory computer science class, as the prerequisite for our course. So they had to have a little bit of programming background. But then on the student evaluations we got people saying those technical assignments were so difficult. And the computer science students tended to say those technical assignments were a total breeze.

Sahami: The majority of students actually said that the assignment they learned the most from was the philosophy paper. So you are actually getting this cross-fertilization where they’re learning from the other disciplines, which is exactly what we want.

Weinstein: What challenges students with technical backgrounds is that these are questions for which there is not a right answer. It’s not about developing a technical skill and being able to work through a set of exercises that get you the right or the wrong answer. In fact, there are many answers. Some answers are better than others. And part of the enterprise is arriving at your own view of what the best possible answer is. That is core to the disciplines in the humanities and the social sciences. It’s far less familiar in the world of engineering.

And you’re taking this beyond the college classroom now. What sort of outreach are you doing to Silicon Valley businesses?

Reich: In partnership with a venture capital firm called Bloomberg Beta, we tried out classes with professionals in the tech community.

Weinstein: We were overwhelmed and encouraged by the response. We were tapping into an unmet demand from people in critical roles inside companies, whether they are designing technologies, managing products, or working on the regulatory side, to really engage in safe spaces with the dilemmas they feel they confront every day. And that speaks in part to the challenges of engaging with those tensions within companies themselves, where you’re concerned about your own trajectory, your ability to get promoted. People needed an environment in which they could engage with peers who occupy similar roles, where they could question choices they’d seen their company leadership make without fearing retribution for taking those positions.

Are you seeing other universities pick up either the curriculum or the approach?

Sahami: We’ve advertised the fact that the materials we’ve developed are Creative Commons-licensed. There are case studies we’ve developed for the class, for example, that we’ve made available to anyone. People have asked us for discussion-guide questions. We took videos of expert panels and have made them available online.

Weinstein: There’s a broader community in computer science and in computer science education that is taking seriously these intersections of ethics and technology. Sometimes the phrase that’s used for this is “responsible computer science.” And so even in developing our own course, we’re looking to some of the leading thinkers who are pushing for fairness, accountability, and transparency around machine learning.

Reich: And there’s a complementary effort afoot here at Stanford, and at Harvard, MIT, and the University of Toronto, to insert ethics modules into the core courses of the computer science major. It’s called “Embedded EthiCS,” and the goal is to make a meaningful encounter with ethics unavoidable as a computer scientist. We hope to hold a national conference of all the players in the effort and produce materials for anyone to use.

We read a lot about diversity and inclusion in the tech world, that it’s often too male, too white. What sort of lessons do you teach students about that?

Reich: We’ve woven that issue into the entire course, and there’s also a module about it. We focus attention on several different dimensions. First, there’s concern about a diverse and inclusive pathway into the tech profession and the statistics about how skewed the male-female ratio in the tech world is. The minute amount of venture capital investment in founders or startups from people of color is also a huge problem. Who gets to participate in the decision-making in Silicon Valley, and who is benefitting from the enormous power and wealth creation? But there’s a second issue that is less about fairness and equality of opportunity and more about what a philosopher would call epistemology. What kinds of products, what kinds of questions about technology would we have with a more diverse and inclusive set of founders and technologists? As Mehran often says, there’s a glut of startups in Silicon Valley whose sole purpose is to solve the problem of delivering lunch to technology companies. That’s not exactly a technological solution to a deep social problem.

Sahami: And we really raise up the voices of a much more diverse set of people who’ve been working in this area for a long time. So in the class and in the book, we talk about things like the work of Latanya Sweeney, who’s a professor at Harvard, on data privacy. She showed how simple it could be to confound data anonymization strategies and re-identify actual individuals from de-identified data sets. We talk about the work that Joy Buolamwini and Timnit Gebru have done in documenting the racial and gender bias in the accuracy of facial-recognition tools. And we mention that it’s no coincidence that it’s often people of color and women who have been at the forefront of raising the flag on these issues, because they’re the ones most affected by them.

Did the pandemic make these tech issues more urgent?

Reich: Big Tech was powerful before the pandemic, and it is emerging from the pandemic even more powerful, with skyrocketing market valuations. The reason is simple: during the pandemic we’ve relied upon digital tools and products to help us remain connected at school, at work, and with our loved ones. Economically speaking, Big Tech has won the pandemic. The valuations of these companies are through the roof because we’ve become even more dependent on them than we were before. So the task now is not to adopt some utopian or dystopian attitude but rather to bring many more voices to the table, inside and outside the tech companies, so that we can steer the technological future in a direction that harnesses the great benefits and minimizes the potential harms. And in the process, we can reinvigorate our democratic institutions themselves.
