Making Decisions When Values Conflict or Are Prioritized Differently, with Paul Root Wolpe

May 10, 2022 • 89 min listen

In this Artificial Intelligence & Equality podcast, Carnegie-Uehiro Fellow Wendell Wallach sits down with Emory University's Professor Paul Root Wolpe for a thought-provoking conversation about the truth of ethical decision-making, the challenge of regulating new technologies whose impact is uncertain, the intrinsically fragmenting nature of social media and AI, and the dilemmas of neuroscience and neuromarketing.

WENDELL WALLACH: One theme that has become very important for the Carnegie Council for Ethics in International Affairs and for the Artificial Intelligence and Equality Initiative (AIEI) is re-envisioning ethics and empowering ethics for the information age. That theme cuts across many of our podcasts.

Our guest today, Paul Root Wolpe, is certainly a leader in that respect, and therefore I'm deeply pleased to have him as a guest. Paul Root Wolpe is the Raymond F. Schinazi Distinguished Research Chair in Jewish Bioethics at Emory University and a professor in the departments of medicine, pediatrics, psychiatry, neuroscience and biological behavior, and sociology at Emory. In addition, he directs the highly regarded Emory University Center for Ethics. Paul has written about a broad range of subjects, is perhaps best known for his work in neuroscience, and is one of the founders of neuroethics. Interestingly, he spent 15 years as senior bioethicist for the National Aeronautics and Space Administration (NASA). Hopefully, we'll return to discuss that later.

Paul, while many scholars approach ethics from philosophy or religious studies, you started out as a sociologist, and you have said that your approach as a sociologist has perhaps helped form your unique perspective on bioethics and ethics more generally. Can you please share with our listeners what you mean by that?

PAUL ROOT WOLPE: First of all, thanks so much for inviting me onto the podcast. It's a real pleasure for me to be here.

Yes, I'm a sociologist. I spend my life around ethics. I know almost all the ethics and bioethics centers in the United States and many abroad. There are only a handful of social scientists who really dive deeply into ethics as their area of interest, and I have found over and over again that the perspective I bring in is different than that of my colleagues who are philosophers and religious studies scholars. I think there are a few ways that it's different.

One is that even though we all know this, the dynamics of it become interesting. That is, ethics emerges from social conversations; it changes and kind of roils through societies over time. If we tried to articulate what the ethics of the United States is, we could generally articulate some things that we think are ethical principles, we could generally articulate how they might have changed since the 1950s and then again since the 1990s perhaps. We might understand that there are subgroups in our country that have different ethical priorities—and that includes both ethnic and religious subgroups but also regional subgroups and political subgroups—but at the end of the day it's really kind of a mystery how these ethical ideas change and move and convince societies and then regress in societies. What I'm interested in is how ethical ideas become prominent, how arguments in the public square win or lose the day.

I think that's a different way of asking the question about ethics because my ultimate goal is descriptive rather than prescriptive. While my philosophy and religious studies colleagues are often talking about "What is the right ethical decision?" my question is, "What are the decisions people are actually making and how and why are they making those particular decisions in the real world rather than some theoretical model of what the right decision should be?"

That I think is a really quick synopsis of why I think my social science perspective is a little different, and it results in some ideas about ethics that I also think bring a slightly different lens to the way we think about social issues.

WENDELL WALLACH: Give us a few examples of those.

PAUL ROOT WOLPE: Before I give specific examples let me say just a word about the ideas behind it to show what I'm trying to suggest by what I just said.

One of the truths of ethics that we all know but that when we think about it more deeply begins to generate some new ways of thinking about ethics is we tend to think about ethics as a judgment in any particular case of what a right and a wrong course of action is, and certainly there's truth to that. When we have an ethical dilemma, there are a lot of wrong decisions we could make.

But I think there are two mistakes we make. One is thinking that there is one right decision, when in fact there is almost always a spectrum of right decisions. That's why three ethicists can each be arguing for a different decision, but very rarely will any of them be arguing for one of those very wrong decisions; they are arguing within the spectrum of right.

That's because the truth of ethics is that ethical dilemmas are not about right vs. wrong. Sometimes they are—I mean that's how we teach children not to steal the candy bar or hit your sister; we try to give them some basic ethical principles—but when you are a mature person, the ethical dilemmas you face that aren't simple are always about desirable values in conflict.

I have taught medical students during my whole career, and so we give them those kinds of vignettes all the time.

Do I privilege patient autonomy? This patient wants me to do this thing. I don't think it's the right thing for them to do. I have a responsibility to do what's in the best interest of my patient, so do I honor their autonomy or do I honor my responsibility? They are both positive values, they're both things that we want to honor, they're both great things—we love patient autonomy, we love doctors' responsibility to do what's in the best interest—but they can't both be honored right now.

That's what an ethical dilemma really is. There's no right or wrong there, it's not about right or wrong. It's about two things we value that can't both be honored, sometimes three things we value that can't all be honored.

When you begin to think about ethics as conflicting values, all kinds of new ways of understanding it come out that are very different than if you think of it as: What are the right decisions versus the wrong decisions? What happens then is you begin to understand how societies generate ethical points of view: they do it by arguing over values and over which values they think should be predominant right now.

You understand why it is that different cultures can come to a different ethical decision about the same ethical dilemma. It's not because, as some people try to say, ethics are relative. Ethics aren't relative, and I can talk a little bit more about why that's true. It is because of the weighting of values, the weight we give to different values. We're all talking about the same values—human ethical values are universal. Duty, freedom, individual responsibility, community responsibility, honesty—they're in every human culture, they're fundamental to the human condition—but how we weight them can be very different. Let me give a couple of somewhat trivial examples.

If you look at the way the West weights individual liberty versus community good, in the United States we weight individual liberty more highly than community good. We like community good, we think it's an important value, we honor it all the time, but when those two things come in conflict we tend to privilege individual liberty. In some Southeast Asian countries, for example, when those two things come in conflict they say, "No, we think community good is more important than individual liberty." Neither side is right or wrong. They are two different models of weighting values to make ethical decisions, and the two different cultures make them differently.

We do this all the time. You know, a friend comes up to you with their newborn baby and they hold it up in your face, and you think to yourself, My goodness, that's one of the homeliest babies I've ever seen. The mother says, "Isn't she beautiful?" You say, "Yes, just gorgeous." What are we doing there? We're weighting honesty versus compassion in a sense.

We do this all the time. We don't think it explicitly—we're very skilled at it because we have to do it all the time, so it's not a conscious thought—but we are constantly having to take values in conflict and decide how we are going to honor them.

I think it's an important insight because it explains why ethical decisions change over time, it explains why different cultures and subcultures make different kinds of ethical decisions, but it's also very important to understand it also explains why ethics is not arbitrary. Societies don't just arbitrarily decide on values. Values are very deep and rooted in societies and they change over time relatively slowly, and violating those values is considered by society a great transgression.

We can describe what the values are of a society in general, what kinds of things are going to violate them, and even though they may change in time and place to some degree, that's not arbitrariness. That is the normal evolution of human societies as they think about the nature of values and as their priorities change.

WENDELL WALLACH: Let's go into that a little bit because an awful lot of ethical conversation is largely about conflict between values, different prioritization of values, inherent tensions that arise with them, and perhaps tradeoffs that take place as we consider different responses to ethical challenges. That's kind of the sociological analysis of what's going on. But how do you see that? What does that mean as we actually try to resolve these tensions and tradeoffs?

PAUL ROOT WOLPE: It's interesting because it means different things depending on who's trying to resolve them. Let me give you two quick examples.

One of the faculty members in my Center just left the Center. Karen Rommelfanger was the head of our Neuroethics Program. One of the things she does is bring all seven national brain initiatives—in the West there's the United States, Canada, Australia, and the European Union; and in the East there's Japan, South Korea, and China—together in South Korea to try to come up with a global set of ethical principles for neuroscience research.

What's fascinating about watching that conversation is that there are lots of values in common and some kinds of issues there's just no conflict about, but there are also deep cultural differences—and not only between East and West so to speak, but even between South Korea and China, or between Australia or the European Union and the United States—and you can see differences in the nature of those values.

There's no way to argue "my values are better than your values." All we can do is try to find common threads within those differing values that we can all honor. You look for the underlying principles that we can all agree upon and then you try to build from them without getting too stuck in the values that are in conflict. This is what basically political negotiation is about; it's not just about scientific issues or technological issues, but about political issues as well.

That's one point, which is that ethics is a negotiation. In the kind of ethics that I do and that people like me do it's about trying to understand the things that people value the most and then trying to find ways to have conversations that allow compromise so that the greatest shared values can be expressed. It's not easy to do, and you do get to points where values are incompatible.

If you look at the abortion debate in the United States, you've got two fundamental values incompatible in one sense. The pro-life group cannot compromise on their values because they think it's a fundamental value. The pro-choice group that wants women to have control of their bodies, that's a fundamental value. It's a very difficult set of values to find some commonality around, though there are some.

Sometimes—not in that case perhaps—it's only the extreme ends of value conflict that can't compromise and you can find a lot of compromise in the middle. That's true about a lot of technological ethics questions too. In neuroscience, artificial intelligence, and machine learning there's a pretty big middle ground there that we can work with and find ways to come up with compromise ethical systems that people can work within.

WENDELL WALLACH: Before we move to the technological issues, which have captivated both of us now for years, let's stay with these process concerns.

In many respects democracy at least came into being as a popular form of political organization in the Western Enlightenment Era because what the Enlightenment did was in a certain sense pull countries away from a kind of monosubmersion in a Christian/Aristotelian scholasticism, which dominated all of Europe, and democracy was seen as a political vehicle for resolving tensions when you had value conflicts, which were then going to be inevitable.

Now we are also concerned about the vulnerability of democracies, and I wonder how you see that in terms of the search for compromise. I grew up in a world where democracy was all about compromise, and now of course we're in a world where compromise somehow has become the enemy.

PAUL ROOT WOLPE: Right.

It's a really complicated problem of course. Democracies have always argued and have always had divisions. You and I grew up either in or in the wake of the 1960s, which was a very divisive time in our country. But there was a difference then: It was divisive but it wasn't polarized. I think those are different political states of being. That is, people were sometimes intractable about the division around issues, they were very committed to their differences, but there were also a lot of areas of commonality and compromise—if you look at Congress at that time, there was a lot of bipartisan work—because the idea of being divided was not itself a value.

The difference now is the idea of being divided is itself a value, that is, if you are on one side of this debate, the attitude is often that what is wrong is not the opinions, positions, and values of the other side of the debate—yes, that's all wrong—but what is wrong is being a person on that side of the debate. So it doesn't have to do with debating ideas, it has to do with personal identification in a camp and that's the value to be guarded and about which we won't compromise.

That's what I mean by polarization. When you turn the other camp into a group about which there can be no compromise because their very intrinsic nature is wrong—not their opinions, not their attitudes, not their values because those are all wrong too, being liberal or being far right is itself wrong—then you can't have political compromise, and then you end up with this kind of polarization that we have here. We don't even talk about centrist positions anymore because we have defined the political debate about the extremes and you have to pick one camp or the other.

It's very, very dangerous, and everyone recognizes how dangerous it is. And by the way, it is 100 percent ethically unsupportable, that is, ethics itself is about debating and weighing values and trying to come to decisions about how we should behave when values are in conflict in the way that maximizes the good for everybody, and in a polarized system you can't even get that started because it isn't about values in conflict, it's about people in conflict.

I wish I as an ethicist had the brilliant answer to the current condition of the American polity, but I'm afraid I don't know what the magic spell is that will break this polarization any more than anyone else does.

WENDELL WALLACH: Moving to the technology, of course part of our concern is that at least some of the digital technologies are inherently fragmenting and therefore do empower those who find that fragmenting, creating communities that live in their own cocoons or their own self-reinforcing viewpoints becomes very easy with social media and some of the other digital technologies.

I guess that brings us around to something that happened this past week, which is that the board of directors of Twitter voted to accept Elon Musk's bid to buy Twitter. Those who are most conservative among us are cheering that decision because they see that it allows them to continue with approaches that are at the least fragmenting, that feed into particular biases or allow them to engage in misrepresentation, sometimes outright lies, about what is accurate within those platforms.

Do you have any insights or anything you'd like to throw into that conversation in terms of whether or how we might regulate these technologies?

PAUL ROOT WOLPE: Let's talk about Twitter itself for a second before we talk about Elon Musk and Twitter.

Twitter is a fascinating platform because old curmudgeons like me—and this was said a lot at the beginning of Twitter, but I think it has really turned out to be true—have said you can't have a deep, profound conversation where you try to work out the subtleties and the differences of positions in 140 or 280 characters; you just can't do it.

So what Twitter becomes is a quip mechanism, where people can quip—and quips are fun, I like to quip myself sometimes—but to have serious political conversations by pithy sayings is not the way democracy was ever supposed to work.

What Twitter does intrinsically, with or without Elon Musk, is that by its very nature it reinforces people's already existing opinions because, like anything else, a devastating cut at the other guy's opinion is emotionally satisfying even if it's not intellectually useful. So Twitter just becomes this emotionally satisfying platform where we watch people we like devastate people we don't like.

I think the very nature of Twitter tends to increase polarization. Even before we get to hate speech, even before we get to all the problems that Twitter has with allowing misinformation, I think it is intrinsically polarizing in that way because it doesn't allow deep and profound or subtle conversation.

WENDELL WALLACH: Before you go ahead, let me just throw in: This of course belies the claim that technologies are neutral and suggests that in some sense certain technologies have politics built into them.

PAUL ROOT WOLPE: Right.

WENDELL WALLACH: We are in effect saying that Twitter by its very structure is political, and part of its political nature is to serve those who are most interested in fragmenting and undermining the kind of reflection and dialogue that is central to ethical reflection. It's almost saying that Twitter is a somewhat unethical kind of technology. Though I don't think either you or I would want to say for that reason it should be outlawed—and certainly, if we believe in free speech, we don't want to outlaw any kind of speech in a rash way—but it does cry out for certain forms of regulation.

PAUL ROOT WOLPE: Absolutely. And there are ways to mitigate that, and people have done it themselves. If you think about the beginning of Twitter, if you remember back then, one of the things I noticed over time was that there were far more links in tweets as time went on. At the beginning, if you could get a sheet with 10,000 early tweets, you would not see a lot of links in them; it really was a quip mechanism. But as people began to use it and tried to get more sophisticated and more important questions out there and realized the limitations of the form, they tried to modify and expand the form by making their point and then linking to another platform—YouTube, Facebook, or just an online platform that allowed the expansion of their point outside of Twitter. They tried to overcome the inherent limitation of the technology by linking it to another technology. And then, of course, another way they've tried to do it is through linked tweets, where you write ten tweets in a row so that you can overcome the limitations.

So people saw that and they tried to get around the technology in those ways, but as you point out, there's only so much getting around the fundamental nature of a technology. You're still stuck within the paradigm of that technology.

There are things that can be done regulatorily to try to blunt or mitigate that in some ways, and I think the objection to Elon Musk's purchase of Twitter is the fear that he doesn't really see the problem that so many of us see and that the kinds of solutions we would like to see are anathema to his basic nature, to his libertarian political views, and to his belief that everybody is as discriminating about the information they get as he thinks he is.

I think it is a real problem. The power of Twitter as a platform in our society and the way it was magnified, for example, during President Trump's administration, cries out for at least some degree of trying to temper its worst tendencies.

WENDELL WALLACH: Of course whether we can temper those kinds of things, or whether there's the political will to, is not at all clear. All sides seem to now say that they want some forms of regulation of social media, but it's not clear they will be able to agree on what that regulation should be, particularly if they see that the regulations being proposed put their political interests at a disadvantage.

I think, unfortunately, we're getting steeped in that, probably also because we're caught up in a cult of innovation where tampering with innovation is seen as bad because it undermines productivity. But that again gets used by the corporations as a way of feeding into the fragmentation that the politics allows for and undermining the introduction of good regulations.

PAUL ROOT WOLPE: It is a real problem. It isn't just a political ploy of course. I mean premature regulation is just as problematic as lack of regulation, and the sort of dance that you have to do between innovation and regulation is a very complex and difficult one.

Earlier in my career I did a lot of work with assisted reproductive technologies. I was working with the American Society for Reproductive Medicine (ASRM), which is one of the really big organizations that work on these things, and I remember a whole forum where the people in that organization from fertility clinics were arguing: Every day there's a new technique, and if you start to try to regulate, the regulation is always going to be behind the innovation, and it may stop new innovations that seem to violate the old regulations but actually are advances that you couldn't have foreseen in the old regulations. And they were right.

Then the regulatory people would say: "Well, because of that argument, there is no regulation at all"—I used to say American fertility practice is less regulated than bowling alleys and liquor stores, and it was true—"and that's equally problematic."

We all know regulation tends to be conservative. It tends to be always a little bit behind the times. It tends to be underdetermined, that is, it can't anticipate all the possible variations and iterations of the things it is regulating, so it usually constrains rather than enables. So it is a real problem.

In a system where regulation isn't flexible—and in our system regulation tends not to be flexible; it can be, but it very often isn't—trying to get it changed in changing times is sometimes very difficult. You have to have really judicious legislators, and I think we all know as we look around wherever we are—people listening to this who are in the United States, and probably anywhere else they're sitting right now—judiciousness is not often legislators' most obvious quality.

WENDELL WALLACH: And not only not judicious, but very seldom do they understand the technologies that they are trying to regulate, and furthermore, they are now under pressure to regulate a vast array of emerging technologies.

PAUL ROOT WOLPE: Right, I think that's exactly true.

I remember, when I was at the University of Pennsylvania and we were very into the ethics of genetics, we were asked to go talk to the state association of judges about genetic technologies because they were beginning to see more and more cases in front of them that involved genetic technology. The questions that they asked us were more frightening than anything else. So fundamental was their lack of understanding of anything having to do with genetics that a quick sort of Genetics 101 course was not going to in any way raise them to the level of sophistication needed to understand the subtleties of those arguments at all. They were getting genetic patent cases, they were getting suits against genetic laboratories over mistakes.

And that's the judicial branch, never mind the legislative branch, where I don't think they ever did get any training whatsoever as they began to try to make these regulations. So yes, I think that's a fundamental problem of the political system, especially around technology.

WENDELL WALLACH: In the judicial aspect you were among those who were underscoring how problematic neuroscience was, that lawyers were increasingly introducing neuroimagery and genomic studies that suggested that "maybe my genes made me do it," and of course the judges had no idea even how to handle the introduction of those concerns into the court, let alone how they should rule on their introduction.

PAUL ROOT WOLPE: And it's not just the judges of course. One of the things that really became obvious is that you could have ten psychiatrists say something about the mental condition of a defendant and then have just one person who, in addition to saying it, held up a brain scan that nobody could read—the jury didn't understand a thing about it—but because of the very fact that that person had a brain scan in hand, the jury would dismiss the first ten psychiatrists and believe that one person.

We did some work and we wrote a paper that became part of the recommendations of a professional society about the forensic testimony of their members for that very reason, that is, it was so easy to sway juries using various kinds of neuroimaging props that we had to come out and say, "If you're a professional neuroscientist or you're a physician, the use of these props as a tool in trying to win the argument for your side is itself an unethical professional act."

It just shows how powerful those kinds of things can be. The promise of technology is very convincing even if the actual technology you're using doesn't really say what you're claiming it says.

WENDELL WALLACH: Right, that's such a big issue.

A lot of attention is being paid to the problem of AI bias. You with your sociological analysis have pointed out that there are three kinds of AI biases, and maybe you can tell us what those three distinctions are.

PAUL ROOT WOLPE: There are three, and my claim is that we talk about two of them much more than the third, and the third is in some ways perhaps even more important than the first two. The two we always talk about are ones that everyone who listens to these kinds of podcasts is probably aware of.

The first is bias in AI, that is, the kind of bias that programmers worry about, that businesses that create AI products worry about. Most machine learning and AI are to some degree decision-making technologies. Every decision we make, even the most trivial, is made on the basis of some set of values. As artificial means of decision-making become more and more powerful, we have to interrogate how they are making those decisions—whether the issues are data-driven bias, which we've talked a lot about, or rule-driven bias, whatever it might be. That's the first kind, the bias inherent and intrinsic in the technology.

The second is the bias of the results of the technology, that is, we use that technology to make decisions, we use that technology in ways that affect people's lives, and there are biases in how it is applied and to whom we apply it. And when you mix that with the intrinsic bias, we know many examples that we like to talk about of really skewed decisions—mortgage programs that redline neighborhoods, recidivism programs that overestimate African American recidivism and underestimate white recidivism—and we can go on and on. They are legion.
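
[To make the second kind of bias concrete, here is a minimal, hypothetical sketch—not anything Wolpe or his colleagues built—of how such a skew might be audited in code. The records, group labels, and predictions below are invented purely for illustration.]

```python
# Minimal sketch (hypothetical data): auditing the second kind of bias described
# above -- a risk-scoring tool whose errors fall unevenly across groups.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, True),  ("B", True,  True),  ("B", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return 0.0
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in sorted({r[0] for r in records}):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")

# A large gap between the two printed rates is the numerical signature of the
# "overestimate one group's recidivism, underestimate the other's" problem.
```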

But there is a third kind of bias that we are not really looking at that I think in some ways is more powerful than the others, and a lot of people are trying to solve the first two. This is a more fundamental question. It is: What kinds of AI are we creating, what kinds of AI innovation are we funding, and what kinds of problems are we asking AI to solve?

There the question is: How much of the AI we are creating is being dedicated to trying to work on, look at, or solve the problems of structural inequality that already exist in our society, and how much are we creating AI to help businesses do their HR and their inventory and those kinds of things? Where are our priorities in using this powerful technology to solve problems in our society, and are we really taking full advantage of its talents to apply them to try to help us solve some of the thorny problems of inequality in human society?

Now, some are, no question about it. I've seen some wonderful examples of using AI to solve all kinds of different social problems—and not just problems of human social inequality, but problems of ecological disasters, problems with our animal friends and habitat destruction—yes, AI is being used in those ways.

But the conversation around what we should use AI for is a fundamental conversation about social value, and we are not having it with the kind of depth that I think the power of this technology deserves.

WENDELL WALLACH: Paul, you have also raised the question of what kind of responsibility do we want to give AI. This of course is something that gets discussed in so many different contexts, but again with your sociologist's cap, you have broken down the kinds of responsibilities into a number of different categories. Perhaps you can share that also with our listeners.

PAUL ROOT WOLPE: One of the big questions of AI, as we all know, is: How much autonomy and decision-making power do we give to AI? This is being debated across many different types of AI platforms.

One of the areas where I think it is really, really crucial, for example, is in lethal autonomous weapons systems (LAWS), where we have to decide whether we are actually going to allow AI-based weapons to go into a war theater and make a decision, without human agency intervening at any point, about which human beings to kill. There are arguments pro and con about that, but the world as a moral agent seems to be arguing that we can't ever let that happen. I think that's a very wise argument to make, that we should never fully remove human beings from the decision to kill another human being.

WENDELL WALLACH: [Audio glitch] has been convinced that we should not do that, but the security analysts in nearly every major nation have stood in the way of there being any kind of a treaty that would restrict the development of autonomous weapons systems.

PAUL ROOT WOLPE: And there you go with the regulation-versus-innovation tension right there.

We need to have this conversation, and in order to have it we have to create conceptual categories for how we think about the ethical, autonomous status of AI, so that we can analyze it categorically.

I think we only think of two basic categories instead of four, and here is the way I have broken it down. I don't think this is the perfect breakdown, but I think it is helpful if people haven't thought about this in this particular way.

If we are going to allow AI to make decisions that we think of as ethical decisions or decisions of profound ethical responsibility, the first level is what I call the "service animal level." Arleen Salles in Europe has written about this, I think interestingly. That is, what kind of ethical decision-making do we let police dogs make, do we let other kinds of service animals make? They do make decisions. We train them, then we let them out, we release them, and then we hope that they will do the right thing. We don't hear a lot about it, but sometimes they don't do the right thing, and then we try to retrain them. We give them a limited autonomy under human supervision; that's how we can think of service animals.

If we make that analogy to machine learning and AI, the first level might be limited, highly defined decision-making that is always subsequently—case by case or in batches—subject to human review, to refine and refine and refine.

That's a very restricted AI decision-making capability, and though there's a lot that's emotionally satisfying about that, in the sense that it would never let AI get too far out of hand, it's also extremely restrictive and would not allow us to apply AI to some of the areas that we really would like to apply it to, because those applications would have to violate that constraint.

But that's probably the perfect model for some AI, and part of what I'm arguing for is there isn't just one ethical standard for all of AI, we have to discriminate which AI needs what kind.

The second one is a further iteration: we let AI make ethical decisions, and it isn't constant supervision or constant review. We give it a range of ethical decisions it can make, and we as human beings get alerted when a more complex problem arises. Here we are defining the parameters of AI's ethical decision-making, and there's a point at which it kicks the decision to the humans. It says, "Okay, this is too complex, or this has an element that triggers my exclusion criteria."

I think that's always going to be true to some degree, even number 3 and number 4, but this could be the fundamental principle of some group of AI.
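
[As a rough illustration of this "second level," here is a minimal sketch of an act-or-escalate pattern. The thresholds, case fields, and names are hypothetical, not any real system's API or Wolpe's own proposal.]

```python
# A minimal sketch of the "second level" described above: the system decides on
# its own within defined parameters and kicks the case to a human when an
# exclusion criterion fires.

from dataclasses import dataclass

@dataclass
class Case:
    confidence: float      # how sure the model is of its recommended action
    stakes: str            # "low", "medium", or "high"
    recommended_action: str

CONFIDENCE_FLOOR = 0.90            # below this, escalate
EXCLUDED_STAKES = {"high"}         # decisions we never let the system make alone

def decide(case: Case) -> str:
    # Exclusion criteria: anything outside the defined parameters goes to a person.
    if case.stakes in EXCLUDED_STAKES:
        return "ESCALATE: stakes outside the system's mandate"
    if case.confidence < CONFIDENCE_FLOOR:
        return "ESCALATE: model not confident enough to act alone"
    # Inside the parameters: the system acts, and the decision is logged for review.
    return f"ACT: {case.recommended_action} (logged for periodic human review)"

print(decide(Case(confidence=0.97, stakes="low", recommended_action="approve refund")))
print(decide(Case(confidence=0.55, stakes="low", recommended_action="approve refund")))
print(decide(Case(confidence=0.99, stakes="high", recommended_action="deny claim")))
```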

The third and fourth we've talked about a lot. One is rules-based AI decision-making, where smart people sit down and to the best of their ability create sometimes incredibly complex decision trees in which they try to include all possible parameters that the AI might encounter, plus a learning element with some set of reward parameters that allows the AI to learn and to know when it has made the right kinds of decisions.

The fourth is not rules-based but data-driven, and we all know the bias problems with data-driven ethical decision-making. That being said, one of the things that has always fascinated me and that I would love to see someone do in a really sophisticated way is a data-driven, ethics-based machine learning or AI platform that looks at thousands, millions, billions of human ethical decisions, even contradictory ethical decisions, and tries to help us understand the underlying nature of ethical decision-making in ways that perhaps we human beings ourselves can't discern. But in any case, that data-driven approach is the fourth.

I think we need to define carefully these kinds of phases and stages of decision-making sophistication that we are going to allow AI to have and understand that different kinds of AI are going to need different kinds of constraints.

I think the lethal autonomous weapons system example is the perfect one: Are we going to decide that LAWS should be number two—that is, we are going to let it make certain kinds of decisions and then, when it steps outside of that decision-making parameter, it calls a human and says, "I'm about to fire. Press the button and give me permission to fire," where the human always makes the final decision about whether it's going to kill another human—or are we going to make it a number three or number four and say, "It understands enough that in certain circumstances, or even in many circumstances, we are going to let it take the autonomous right to fire and kill another human being"?

I think it's in some ways the starkest example of where we are going to draw that line, but that line is going to end up being important to draw in many, many different kinds of AI products.

WENDELL WALLACH: Of course, self-driving vehicles is the other key example in that area.

PAUL ROOT WOLPE: Right. But the problem with autonomous vehicles is we all understand that the kinds of decisions that we are most worried about them making are not decisions where a human being can step in and make the decision. So when that vehicle is about to have an accident and needs to decide "do I hit the car, do I hit the wall, or do I hit the woman with the baby stroller?" it can't stop and say, "Okay, Wendell, you're the driver of this car, press the button and tell me which decision to make." While the idea with lethal autonomous weapons systems, even though we could imagine situations where that was the case, we can say, "always in those situations you don't fire," so we lose some of our weapons systems and they get destroyed because we've decided we're not going to just let them fire whenever they want.

The fascinating thing about autonomous vehicles is if we are going to have them, we can't make the decisions for them, we can't even do an equivalent of what happens with the LAWS systems, where we say, "Okay, we'll allow it to be destroyed rather than give it autonomy to kill another human being." These cars have to make those decisions, there's no other choice, or else we can't have those cars. That's our choice. That's really fascinating in a slightly different way I think.

WENDELL WALLACH: I don't think we want to really go into great depth around either driving vehicles here or autonomous weapons systems, although some of the debate on the United Nations Convention on Certain Conventional Weapons has been whether there are a good number of situations where you have to let the machine make the decision and whether we can set acceptable parameters as to when that is and is not appropriate.

PAUL ROOT WOLPE: Right.

WENDELL WALLACH: Let's turn to some other topics. Paul, as you know, I and some of our other guests have focused on the development of moral machines, how we can make computational systems sensitive to human ethical considerations and factor them into their choices and actions.

One of those areas on which you have commented has been the importance of the emotive aspects of ethics and whether we can bring that into machine ethical decision-making.

Machines are the perfect Stoics; they are what Stoics have wanted for thousands of years: dispassionate reasoning. You are among the critics of dispassionate reasoning at the same time as you are certainly one of the advocates for more deliberative reflection in ethical decision-making.

PAUL ROOT WOLPE: Yes. I'm a bit of a heretic about something, at least among my philosophical colleagues—again, I'm generalizing here, and there are clearly lots of philosophers who wouldn't accept the way I'm about to characterize philosophers, because philosophers, like any group, are not monolithic.

If you look at philosophy as a field, the great arguments around philosophy have often been about creating the best system to make an ethical decision, and fundamental to that argument is you have to use one system. You don't get to say, "Okay, I'm going to be a deontologist in this particular ethical decision and a consequentialist in that one, and a virtue ethicist in this third" and think that you have done something philosophically justifiable because philosophy would say, "No, you don't just get to pick and choose."

I as a sociologist say exactly the opposite: Ethics should be a toolbox. That is, we have a lot of different ways to make ethical decisions, and I think different situations and different kinds of environments call for different kinds of tools to make ethical decisions. That's a bit heretical in philosophy, but I think it fits very well in social science because I think that's exactly what people almost always do.

There are very few pure deontologists in the world. Kant may have been one, but that isn't how most of us make our decisions. Most of us make some intuitive decisions, some emotional decisions, some rational decisions, and of course the very famous Trolley Problem was designed to show exactly that, that people can make rational decisions in one kind of situation and then an emotional decision in another.

So the question that comes—and again this isn't new, but I don't know that it gets enough thought among people who are designing AI systems and thinking about AI systems—is: What kinds of ethical decision-making strategies outside of the purely rational ones are we going to try to figure out how to put into AI platforms?

We don't know how to make AI platforms emotional; maybe we will come to that someday and maybe we won't. We don't know what a virtue ethics would look like, what a proper character would look like, for an AI platform. Right now we really only have one approach—we have multiple approaches within this approach, but we only have one approach—and that is, as you say, the kind of Stoic approach, a rational ethical decision-making approach.

I have spent a long time outside of AI having nothing to do with machine learning critiquing the idea that any of those rational approaches can ultimately make the kinds of ethical decisions that most of us think are morally right. We see when you try to take consequentialism to its extreme, when you even try to take deontological systems to their extreme, you either come up with bad decisions or you sneak in pieces of other systems to try to temper the worst tendencies of the system you are using but try not to call them pieces of other systems.

The point I'm trying to make here is: How do we create an interesting, robust conversation around bringing in the emotional, intuitive part of what makes human beings human beings if we are going to use AI to make decisions that have a profound impact on human beings?

As a very quick example of that, I have done some work with the Dalai Lama, who has spent the last few years of his life going around the world pushing an idea that he has written about in a couple of books, one called Beyond Religion, in which he says: "Forget religion for a second. Most human systems, whether religious systems or philosophical systems, say a good place to start in your relationships with other people is a sense of compassion, a sense of 'I need to understand what hurts you, what you care about, and I need to take that into consideration in my relationship with you.'" He says: If we can all agree that that's a good place to start, can we build an ethic together based first on this idea of compassion? That's the Dalai Lama's argument, which he considers to be a non-religious argument. That's why he calls it "secular ethics"—not instead of religion, but religion and non-religion together.

If we take that seriously as an argument—and there are reasons to take it seriously; you may not agree with it, but it's a serious argument about ethics—we could never do that with machine learning at all. At this point in our understanding of AI, we can't teach AI compassion because that's an emotional state. It is an attempt for me not just to understand intellectually that you are in pain but to try to feel an emotional connection to your pain, one that gives me an almost physiological sense of it—that's why I talk about mirror neurons and how we feel those kinds of things.

Let's assume for a minute that we, after many years of conversation, decide that the Dalai Lama's approach is a great one, maybe the best one of all that we have come up with as human beings—I'm not arguing that it is; I'm trying to make a point—let's imagine that, after a lot of debate, we decided that. We would be years, decades, centuries perhaps—who knows?—away from ever being able to use that as our model for machine learning.

My point is I think we need to interrogate this idea a lot more than we do. I think a lot of the time we just kind of default to the rational system of moral decision-making with machines and say, "Okay, we'll perfect it, we'll figure it out, we'll study more data, we'll make more rules."

Maybe the answer will never be just getting more and more sophisticated, rational decision-making in machines, but some creative, innovative way to get the essence of these intuitive emotional systems into machines.

WENDELL WALLACH: Some people have argued that the rational side of compassion is the Golden Rule and that something like the Golden Rule exists in nearly every religious tradition and even in philosophical traditions with Kant's categorical imperative. But that still leaves open the question of whether the Golden Rule even works as a rational system alone, whether you need to have compassion in order to apply a Golden Rule in many contexts.

PAUL ROOT WOLPE: You can't say to a machine: "What is hateful to you do not do to another machine" because what does "hateful to me" mean as a machine? I think the Golden Rule is based, at least in part, on a sense of compassion and human connection that a machine can't always translate.

WENDELL WALLACH: And I think many of us would argue it's not just machine-to-machine, it's also in our relationship to machines, that at least as long as they have no somatic feelings we do have a right to do to them things that we might not do to another or would not want to condone doing to another.

PAUL ROOT WOLPE: Right. But what's so wonderful is the work of the MIT Media Lab and Sherry Turkle and people like that, showing how we are actually loath to do that to machines, because we often project onto machines this powerful sense of compassion that we have. I won't mention who it is because it will embarrass her, but someone very close to me in my life feels guilty if she doesn't do what the GPS tells her to do; if she decides to take a left when the GPS tells her to take a right, she almost apologizes to the GPS.

We as human beings, with this powerful sense of compassion that we have and this sense of connection we have with other human beings, we tend to project that onto machines. The problem is they don't project it back onto us, and then we get into really interesting questions.

I was just talking the other day with a group about things like the robotic seal pups or the dog robots that are being used to comfort people with Alzheimer's and other conditions, and whether there is a fundamental ethical problem with the assumption those people have that they are giving love and receiving love when we know that they're only giving love and receiving a simulacrum of love.

I think those questions of the transfer of emotions in both directions between human beings and machines are going to become a bigger and bigger issue.

WENDELL WALLACH: A great point.

As we transition to some other topics that you have delved into and that I think our listeners would love to hear you talk about a little bit, let me take one topic that combines both neuroethics and AI.

You've recently written about neuromarketing and AI, and I wonder if you could share some of your thoughts on that.

PAUL ROOT WOLPE: Neuromarketing, boy what a fascinating idea!

A fundamental thing I believe about neuroscience that makes it different—and this is just about neuroscience, it's nothing to do with AI yet—is throughout all of human history until recently, without a single exception ever, any time in the long history of our species, if you wanted to get information from me, you could only get it through the peripheral nervous system. Whether it's language, whether it's skin tone, blushing, whether it's galvanic skin response, whatever it is, you could not get information directly from the brain of any meaningful sort.

People tried it with craniometry and with phrenology to see if shapes of the brains and sizes of the brains would give them some information about people, and we all know of course where that went. If they worked, they'd still be around.

So the answer is no, you can't get any information about me directly from my brain until maybe 20, 30 years ago. Then, when something fundamentally changes in human history that way, that profoundly, we have to start asking questions about what its implications are.

What does it mean that I might be able to stick you in a functional magnetic resonance imaging (fMRI) machine—the saving grace of this is that if you decide not to cooperate, there's not much I can do about it in an fMRI—but if I could get you to cooperate, I might be able to learn something about you directly from watching your brain activate that I could not learn from you unless you told me? There have been some studies that show that that's true.

When that became possible, marketers immediately jumped on it and said, "Well, maybe we can get information from people's brains that we can't get from their mouths about what their preferences are in the world and we can use that for commercial purposes."

I was extremely skeptical about it and wrote negatively about it at the beginning, and I think at the beginning a lot of it was really, really nonsensical. You would do these really complex fMRI studies that had Ns of five and then claim you could tell something about whether a person liked chocolate chip cookies or not. It was like "Ask them," that seemed to be a lot easier than spending millions of dollars to look at their brain.

But as we got more sophisticated around fMRI, it turned out that you could perhaps learn something different about the way people think from looking at brain reactions than you could by asking them in any way.

Again, I think the Trolley Problem, though it's not a neuromarketing problem, is a great example of that. The question of the Trolley Problem had been argued over and over and over again by philosophers, and then when Josh Greene and Jon Cohen first did their famous studies, people said, "Oh, by looking directly at how the brain responds to this, there might be information there that we hadn't gotten in years and years of just asking people about it." When I saw this, I thought, Maybe there is generalized information we can get about consumer behavior.

But the promise of neuromarketing at the beginning was about individual choice, and I think that that's where neuromarketing made its great mistake. What neuromarketing then became and what it is now is not "I'm going to tell you whether Wendell Wallach likes chocolate chip cookies or oatmeal raisin," it is "I'm going to look at 1,000 Wendell Wallachs and I'm going to tell you something about why people prefer Republicans or Democrats, why they prefer Coke or Pepsi, or why they tend to like shiny displays for this kind of product and subtle displays for that kind of product." So it became a way to say some very generalized things about people.

My bottom line about it is I still think there's a lot of hype around it and that a lot of the things that it comes out with and says "Look at what we've discovered" are things that we either already knew or could have easily found out by asking people.

If you are going to bypass people's opinions and go right to their brains, you have to have a really sophisticated question that takes advantage of the unique qualities of the brain versus being able to ask people, and I am sure many, many neuromarketers are not that sophisticated about it and they are just trying to get clients and say something. But I think there is some potential there for some interesting findings.

WENDELL WALLACH: What do you see as the role of AI in this?

PAUL ROOT WOLPE: The role of AI in neuroscience research itself is changing right now. It's very interesting. AI was always part of neuroscience in one sense.

How does something like fMRI work? You take these images of blood activation in the brain, assuming that greater blood oxygenation in an area means greater activation of that area of the brain. But then what you end up with are thousands and thousands of these pixels that have to be analyzed, iterated, and reiterated, then colorized, and all that, and all of that was done not by human beings with little colored markers but by programs that looked through it. So AI of some kind was always part of neuroscience.
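
[For readers who want a feel for what those "programs that looked through it" do, here is a toy sketch: compare oxygenation signals between a task and a rest condition, voxel by voxel, and keep the voxels whose difference stands out. Real fMRI pipelines are far more elaborate; the array sizes, signal values, and threshold here are arbitrary, invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000                                 # a flattened grid of brain "pixels" (voxels)

rest = rng.normal(100.0, 5.0, n_voxels)           # baseline BOLD-like signal
task = rest + rng.normal(0.0, 5.0, n_voxels)      # same brain during the task, plus noise
task[:200] += 25.0                                # pretend one small region truly activates

diff = task - rest
z = (diff - diff.mean()) / diff.std()             # standardize the task-minus-rest differences
active = z > 3.0                                  # keep only strongly activated voxels

print(f"{active.sum()} of {n_voxels} voxels pass the z > 3 threshold")
print(f"{active[:200].sum()} of them fall in the region we made truly active")
```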

As AI gets more sophisticated there is an opportunity—and, by the way, as the imaging technology has gotten more sophisticated you can get smaller and smaller units to analyze—for AI to begin to see and understand things that we as human beings would not have recognized or seen.

I mean, what is it that AI is so great at? It is great at taking massive amounts of data and seeing patterns that human beings can't see—that's one of the great advantages of AI—and that is what neuroscience data is: millions and millions of data points that human beings then try to aggregate into meaningful data.

So I actually think—I was much more a skeptic early on—that the technology has caught up to my skepticism to some degree and that all of the weaknesses of the early studies, not only in neuroscience but in many technologies that made claims far beyond our technological power to carry them out, may now be coming around.

One example that I wrote a lot about was using brain imaging for lie detection. That became a big thing for a while. It never worked. I mean, it worked better than other kinds of lie detection, but there are all kinds of problems with it that we don't have time to get into now.

However, there may come a time when the sophistication of AI at interpreting brain imaging, or perhaps other kinds of human behavior—there is lie detection that looks at faces, Paul Ekman's microexpressions—means that, using the incredible power of AI's discernment, we might actually come up with a lie-detection technology that does well enough to be useful for military, forensic, perhaps even judicial purposes.

WENDELL WALLACH: One of the problems for neuroethics has been that so much of it is focused around the acceptability of speculative possibilities that may or may not be realized technologically.

I wondered if there are one or two areas beyond neuromarketing where you see breakthroughs taking place right now that demand our ethical consideration and perhaps some action.

PAUL ROOT WOLPE: I think the examples I've given are some of them.

I think that one place where we have made extraordinary progress over time—and this is another place where I was a skeptic until the technology just blew past my skepticism—is in the apprehension of subjective thought. I said earlier that throughout all of human history the way you got information from me was through my language or skin response or something, but if there was a piece of information that I wanted to keep in my brain, you couldn't get it out no matter what you did, unless I finally let it out because you were torturing me or something; you couldn't get it out except through the peripheral nervous system.

WENDELL WALLACH: This is largely happening experimentally, though.

PAUL ROOT WOLPE: It's all experimental at this point.

WENDELL WALLACH: It's happening in contexts where you have human subjects who have given their consent to the research. The cost of the research is extremely high. Do you foresee breakthroughs in technology where that might take place without it being restricted to human subjects research, which has its own ethical parameters?

PAUL ROOT WOLPE: Yes. It's a great question. Of course, people have tried to do it. Some people looked at near-infrared light and other ways of being able to find reflections of blood flow in the frontal cortex without using imaging machines. People have tried exactly that, and for the most part they have failed.

The answer is no to brain imaging in that kind of public way. It still takes a very large machine, putting you in it, and having you do what I ask you to do. If you're singing songs to yourself or if I ask you a question and you say in your head "Yes, no, yes, no, yes, no" when you're answering me, you can foil the machine so easily in some ways that they're not really that useful.

That all being said, the place where things might become useful in lie detection is in some combination of technologies—using what we've learned from fMRI, microexpressions, the ways in which eyes move when people talk, putting them all together with a really powerful machine-learning algorithm, and then analyzing faces and being able to say with surprising accuracy whether someone is telling the truth or not. That is very plausible—I don't know that anyone has actually done that yet, but I think it's very plausible. That's not necessarily ten or fifteen years in the future; that may be two or three years if someone really cracks that code.
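
[To show the shape of the "fusion" idea Wolpe is speculating about—pooling features from several channels and letting one classifier look at them together—here is a minimal sketch on synthetic data. Everything here is invented: the feature names, the planted signal, and the labels. It illustrates the architecture of the idea, not evidence that machine lie detection works.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600

# Hypothetical per-subject features from three channels.
imaging = rng.normal(size=(n, 4))        # e.g., summary activations from a scan
microexp = rng.normal(size=(n, 3))       # e.g., scores for a few facial action units
gaze = rng.normal(size=(n, 2))           # e.g., fixation and saccade statistics

labels = rng.integers(0, 2, size=n)      # 1 = "deceptive" in this toy setup
imaging[labels == 1, 0] += 1.0           # plant a weak signal in one imaging feature
microexp[labels == 1, 1] += 0.8          # ...and in one microexpression feature

X = np.hstack([imaging, microexp, gaze]) # simple feature-level fusion
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```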

But remember that all the time these technologies are getting better. One last thing about that. The measures that we have to really understand brain function in this way are surrogate measures; that is, we use blood flow to understand brain activation. We don't use brain activation to understand brain activation because we don't know how to detect neurons firing in groups the way we can detect oxygenation of the blood indicating that those neurons are firing. We can't just stick someone in a machine and say, "This group of neurons is firing." We don't have that technology.

We do have that technology with EEG, but the problem with EEG is a spatial problem. We have a temporal problem with brain imaging and a spatial problem with EEG. If we ever crack that and we could directly look at brain activation in all of its areas simultaneously the way we can with fMRI and blood oxygenation, then I think we're in a new place and I think that we will understand and be able to make predictions about people's thoughts and things like that in ways that we are just imagining now.

As I've written about lie detection, it is really very scary to think about. If we ever get to the point where we have a technology that can actually apprehend people's subjective thought, or a technology that is a really, really effective lie detector where you can't lie, imagine what that would mean. Structured lying is a normal, and in some cases even admirable, part of human interaction. If we take away that possibility, I think we profoundly diminish the nature of human life.

WENDELL WALLACH: One of the interesting possibilities that has arisen for me while listening to you is that, at the very least, findings that generalize from the neuroscientific evidence might also show up as correlates in other kinds of information we have, and AI might be able to track those correlations.

PAUL ROOT WOLPE: Absolutely.

WENDELL WALLACH: Then you use the correlated features and presume that the generalizable features that came up in the neuroscientific research are therefore applicable.

PAUL ROOT WOLPE: Right.

One of the things I have always joked about is that a fundamental law of the universe that we don't know might be that no level of sophistication can ever understand itself; that is, the human brain can never fully understand the workings of the human brain because it can never be complex enough to understand its own workings. You would need to make it more complex, but then it couldn't understand that new level of complexity. That may just be a made-up law of the universe.

But let's imagine for a minute that it is a real law. There may come a point at which AI can understand how our brains work but we can't. That's not just sort of a science fiction scenario, that's actually a logical conclusion from the premise that we may not have sophisticated enough brains to ever understand exactly how our brains actually work. Imagine what that would mean.

WENDELL WALLACH: Of course, that's one of the things that the believers in superintelligence presume will be possible, that the AI will be so much smarter than we can ever be in certain respects.

PAUL ROOT WOLPE: I'm talking about a nuance of that claim. I'm talking about our own understanding of the brain since we're talking about neuroscience. That would be an interesting moment of course, and part of the question there is: Would AI ever be able to explain the function of our brains to us in a way that we could understand if it understood it better than we do?

The development and the sophistication of neuroscience research right now I think is far beyond in some ways where most people think it is. Neuroscience is soon going to be able to tell us things and understand things about the working of our brain that I think are going to shock people.

WENDELL WALLACH: Fascinating.

I think our listeners would find me remiss if I didn't ask you about the 15 years you spent as NASA's senior bioethicist. Tell us a little bit about that and the kinds of issues on which they turned to you for input.

PAUL ROOT WOLPE: That was a fascinating period of my life. I was the first person to have that role, and it came about in an interesting way. It came about because NASA realized that it was out of compliance with federal regulations: any institution that does human subjects experimentation needs to have what we call a multiple project assurance (MPA), in which it basically says to the federal government, as every university that gets federal funds must, "Even though the federal regulations only control how we do human subjects research for those studies that they pay for, we promise we will take those standards and generalize them to any human subjects research we do." That's the promise that every university makes to the federal government, and that's the promise NASA had to make to the federal government, but then it realized it was out of compliance with it.

In a brief sense, that's why they first called me in: to help them bring themselves into compliance. That was the reason. But what quickly became apparent was that they had fascinating and very different kinds of bioethical problems. Again, I was the bioethicist for NASA, not the ethicist for NASA, so I didn't work on things like fair commerce on the moon or anything like that. I worked on human beings.

There were four cells in the diagram that I worked on, a two-by-two of space-based versus terrestrial and research versus clinical care. So I didn't only help them with the fun stuff, astronauts and space. I also consulted with them about their 90,000 employees and occupational health and all that.

The fun stuff that everybody wants to hear about is space-based. There were issues of space-based clinical care and issues of space-based research. In the bioethics biz, clinical ethics is slightly different from research ethics; they come under different standards, and the same thing was true in space, so clinical ethics questions were different from research ethics questions. Let's quickly take just a few examples of each to show people what kinds of questions come up.

For clinical ethics there were questions of how we treat people in space. The truth of the matter is we don't know a lot about how the body in microgravity metabolizes drugs. All kinds of physiological changes happen in microgravity. There are changes in the immune system.

There are a lot of changes in the circulatory system. For example, most people don't think about the question of how we get blood from our feet back up to our hearts. Gravity is pulling that blood down, and your heart is only the size of your fist; it doesn't have the strength to pump blood from the arteries down through millions of capillaries, back into the veins, and back up to the heart on its own. It's a tiny little pump. We have micropumps in our vasculature that help push that blood back up to the heart. They work great when you're on Earth, but when you've got no gravity for them to work against they're just shooting all that blood up into your head and into the upper part of your body with nothing dragging it back down.

So astronauts get fluid loading in the tops of their bodies. They get a sense of never wanting to drink because the way your brain knows that you're thirsty is by testing the fluids, and if all the fluids are going up into your brain it thinks, "Oh, we're nice and satiated all the time," so they have to force themselves to drink.

There are all those kinds of questions, and one of them is about drug metabolism in that circumstance. That leads to ethical questions about how you should treat people, whom you should treat, and what you should do if someone is injured on a space station.

But most interestingly, what I was working on in the last few years that I was there was: What happens when we really try these long-duration space flights, going to Mars for example, which could be 18 months? What kind of formulary do you put together? You can't put every drug that a human being could possibly need in 18 months onto a ship, so how do we make decisions about how to equip the ship? You can't have an X-ray machine on the ship. You could probably have ultrasound. How do we think about that? How do we train the crew? How many doctors should be on that crew? Should we have at least two in case one of the doctors gets killed or sick?

If we have five drugs that treat five diseases well and one drug that treats all five of those diseases poorly but does treat them, do we choose that one drug over the five? Who gets to give drugs to people? There are all those kinds of questions.
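To make that packing trade-off concrete, here is a toy Python sketch of the kind of decision being described: choosing a kit under a strict mass budget when one broad-spectrum option competes with several more effective single-purpose drugs. Every drug name, mass, and efficacy score below is invented purely for illustration; NASA's actual formulary planning is, of course, far more involved.

```python
# Toy formulary trade-off: several effective single-purpose drugs versus one
# broad-spectrum drug that treats everything less well, under a mass budget.
# All names, masses, and efficacy scores are invented for illustration.
from itertools import chain, combinations

MASS_BUDGET_KG = 2.0
DISEASES = ["A", "B", "C", "D", "E"]

# name: (mass in kg, {disease: efficacy score between 0 and 1})
DRUGS = {
    "specific_A": (0.8, {"A": 0.9}),
    "specific_B": (0.8, {"B": 0.9}),
    "specific_C": (0.8, {"C": 0.9}),
    "specific_D": (0.8, {"D": 0.9}),
    "specific_E": (0.8, {"E": 0.9}),
    "broad_spectrum": (1.5, {d: 0.5 for d in DISEASES}),
}

def coverage_score(kit):
    """Sum, over diseases, of the best efficacy any packed drug offers."""
    return sum(max((DRUGS[d][1].get(disease, 0.0) for d in kit), default=0.0)
               for disease in DISEASES)

def total_mass(kit):
    return sum(DRUGS[d][0] for d in kit)

# Brute-force every subset of the small candidate list and keep the best
# kit that fits within the mass budget.
all_kits = chain.from_iterable(combinations(DRUGS, r) for r in range(len(DRUGS) + 1))
best = max((k for k in all_kits if total_mass(k) <= MASS_BUDGET_KG), key=coverage_score)
print(best, coverage_score(best))
```

Run as written, the search picks the single broad-spectrum drug: only two of the specific drugs would fit in the invented budget, and covering five diseases moderately well scores higher than covering two of them very well. Change the numbers and the answer flips, which is exactly the trade-off the planners face.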

There are questions of communication. You're halfway to Mars and your spouse is diagnosed with cancer or the Twin Towers get hit with airplanes; what information is proper and improper to give astronauts, and at what time?

What about private information? Say we're all on our way to Mars. All the astronauts on the space station get private time with their physicians back on Earth; that's an unmonitored conversation, they get to talk privately with their physicians. It's their right, as it is all of our rights. What happens if I discover something medical that might threaten the mission but I don't want to tell my crewmates about it? What's my obligation? What's the doctor's obligation?

What happens if someone goes crazy? That actually happened at one of the research stations in Antarctica, which is the closest analog we have to a spaceship because they're completely isolated. Someone went crazy and attacked his compatriots at the Antarctic research station with an axe, and they ended up putting him in a straitjacket and tying him to a chair for six weeks or a couple of months, I can't remember exactly how long it was, until they could get him out of there.

What happens if someone gets a brain injury? There are all these clinical kinds of questions that we have to solve.

Then there are research questions, because astronauts are normal research subjects. They have all the rights that other research subjects have. They have the right to refuse to do any kind of research. You have an N of three in some of these research projects, and in some of them you have an N of one. An astronaut can just say, "I won't do it." The company that competitively bid to get this thing up onto the space station spent millions of dollars inventing, I don't know, a heart monitor that can work in microgravity or something, and then the astronaut just says, "I'm not going to do it," which is their right.

These are all the kinds of questions that we had to deal with. Some of them were pretty powerful.

I'll just end by saying this. I was there when the Columbia disaster happened and NASA as an organization went into absolute crisis mode trying to figure out what happened. It was a powerful moment where they made use of all of their resources, and where ethical questions came up that were unlike any that you might see anywhere else.

It was a really fascinating and, I think, instructive and powerful position to have for a while, and I was very grateful to have learned from it.

WENDELL WALLACH: Wow, this is fascinating. Maybe we're going to have to have you back to go into it in greater depth.

But we have a final question that we like to put to many of our guests and I would like to ask you: What makes you hopeful or inspires you for the future?

PAUL ROOT WOLPE: There are a lot of things that make me hopeful—I'm an optimist by nature—but I'll pick one. I have spent my career at universities, so I spend a lot of time around young people. I find in them a sense of mission and drivenness and a desire to help the world, and many of them now have these models of social entrepreneurship to go into. Very often when I talk to them about their motivation, yes, they'd like to make some money, who doesn't like to make some money? But there is this real social mission that many of them have. They want to use the tools of modern entrepreneurship—AI, our incredible technological sophistication, our social media platforms—to try to do some repair of problems in our social, political, and economic life.

That sense of optimism and that desire to use the best tools we have to help other people really lives in our younger generations and it gives me hope.

WENDELL WALLACH: Wonderful. This has been a truly intriguing conversation, Paul. Thank you ever so much for sharing your time, your deep insights, and your expertise with us.

Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast.

For the latest content on ethics in international affairs be sure to follow us on social media at @carnegiecouncil.

My name is Wendell Wallach, and I hope we have earned the privilege of your time. Thank you.
