Transcript: Ezra Klein Interviews Ted Chiang

The March 30 episode of “The Ezra Klein Show.”

March 30, 2021

Every Tuesday and Friday, Ezra Klein invites you into a conversation about something that matters, like today’s episode with Ted Chiang. Listen wherever you get your podcasts.

Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.

Why Sci-Fi Legend Ted Chiang Fears Capitalism, Not A.I.

The award-winning author and Ezra Klein discuss A.I. suffering, free will, Superman’s failures and more.

[MUSIC PLAYING]

ezra klein

I’m Ezra Klein, and this is “The Ezra Klein Show.”

For years, I have kept a list of dream guests for the show. And as long as that list has been around, Ted Chiang has been on top of it. He’s a science fiction writer, but that’s underselling him. He writes perfect short stories — perfect.

And he writes them slowly. He’s published only two collections: “Stories of Your Life and Others” in 2002 and, more recently, “Exhalation” in 2019. And the stories in these books, they’ve won every major science fiction award you can win multiple times over — four Hugos, four Nebulas, four Locus Awards. If you’ve seen the film “Arrival,” which is great — and if you haven’t, what is wrong with you — that is based on a story from the ’02 collection, “Story of Your Life.”

I’ve just, I’ve always wondered about what kind of mind would create Chiang’s stories. They have this crazy economy in them, like not a word out of place, perfect precision. They’re built around really complicated scientific ideas, really heavy religious ideas. I actually think, in a way that is not often recognized, Chiang is one of the great living writers of religious fiction, even though he’s an atheist and a sci-fi legend. But somehow, the stories, at least in my opinion, they’re never difficult. They’re very humane and propulsive. They keep moving. They’re cerebral, they’re gentle.

But man, the economy of them is severe. That’s not always the case for science fiction, which I find, anyway, can be wordy, like spilling over with explanation and exposition. Not these. So I was thrilled — I was thrilled — when Chiang agreed to come on the show. One of the joys of doing these conversations is, I get to listen to people’s minds working in real time. You can watch or hear them think and speak and muse.

But Chiang’s rhythm is really distinct. Most people who come on the show — and this goes for me, too — speak like we’re painting in watercolor, like a lot of brush strokes, a lot of color. If you get something wrong or you have a false start, you just draw right over it or you start a new sheet. But listening to Chiang speak, I understood his stories better. He speaks like he’s carving marble. Like, every stroke has to be considered so carefully, never delivering a strike, or I guess, a word, before every alternative has been considered and rejected. It’s really cool to listen to.

Chiang doesn’t like to talk about himself. And more than he doesn’t like to, he won’t. Believe me, I’ve tried a couple of times. It didn’t make it into the final show here. But he will talk about ideas. And so we do. We talk about the difference between magic and technology, between science fiction and fantasy, the problems with superheroes, the nature of free will, whether humanity will make A.I. suffer, what would happen if we found parrots on Mars. There are so many cool ideas in this show, just as there always are in his fiction. Many of them, of course, come from his fiction. So relax into this one. It’s worth it. As always, my email is [email protected]. Here’s Ted Chiang.

So you sent me this wonderful speech questioning the old Arthur C. Clarke line, “Any sufficiently advanced technology is indistinguishable from magic.” What don’t you like about that line?

ted chiang

So, when people quote the Arthur C. Clarke line, they’re mostly talking about marvelous phenomena, that technology allows us to do things that are incredible and things that, in the past, would have been described as magic, simply because they were marvelous and inexplicable. But one of the defining aspects of technology is that eventually, it becomes cheaper, it becomes available to everybody. So things that were, at one point, restricted to the very few are suddenly available to everybody. Things like television — when television was first invented, yeah, that must have seemed amazing, but now television is not amazing because everyone has one. Radio is not amazing. Computers are not amazing. Everyone has one.

Magic is something which, by its nature, never becomes widely available to everyone. Magic is something that resides in the person and often is an indication that the universe sort of recognizes different classes of people, that there are magic wielders and there are non-magic wielders. That is not how we understand the universe to work nowadays. That reflects a kind of premodern understanding of how the universe worked. But since the Enlightenment, we have moved away from that point of view. And a lot of people miss that way of looking at the world, because we want to believe that things happen to us for a reason, that the things that happen to you are, in some way, tied to the things you did.

ezra klein

I’ve heard you refer to the pivotal moment in the history of science as the emergence of the discipline of chemistry over that of alchemy. Why was that important?

ted chiang

So, one of the things that people often associate with alchemy is the attempt to transmute lead into gold. And of course, this is appealing because it seems like a way to create wealth, but for the alchemists, for a lot of Renaissance alchemists, the goal was not so much a way to create wealth as it was a kind of spiritual purification, that they were trying to transform some aspect of their soul from base to noble. And that transformation would be accompanied by some physical analog, which was transmuting lead into gold. And so, yeah, you would get gold, which is cool, but you would also have purified your soul. That was, in a lot of ways, the primary goal.

And that is an example, I think, of this general idea that the intentions or the spiritual nature of the practitioner was an essential element in chemical reactions, that you needed to be pure of heart or you needed to concentrate really hard in order for the reaction to work. And it turns out that is not true. Chemical reactions work completely independently of what the practitioner wants or feels or whether they are virtuous or malign. So, the parts of alchemy which ignored that spiritual component, those eventually became chemistry. And the parts of alchemy which relied on the spiritual components of the practitioner were all proven to be false. And so, in some ways, the transition from alchemy to chemistry is a recognition of the fundamentally impersonal nature of the universe.

ezra klein

And this helps illuminate something else, and it’s recurrent in your books for me, which is that you have a lot of characters who go back to the period when religious inquiry and scientific inquiry were intertwined, when scientific inquiry was a way of more fully understanding the splendor of God. And sometimes they come into contact with characters who take this more impersonal scientific approach, like the story where you have a scientist trying to animate golems for scientific reasons and a Kabbalist who’s trying to find those names to meditate on in pursuit of religious ecstasy. What do you think is the difference between the scientist trying to understand the universe and the religious seeker trying to understand God through the workings of the universe?

ted chiang

So I think maybe the difference is not so much between the scientific investigator and the religious investigator as it is between the engineer and the scientist. Because roughly speaking, engineers are interested in the practical application of things. They are interested in, how do we solve a particular problem? How do we use this information to solve a problem? Scientists, on the other hand, are interested in understanding how the universe works and in appreciating the beauty and elegance of the universe.

So I think there is a strong similarity between what the scientists are interested in and maybe what the religious people are interested in. They have, I think, more in common than the scientist has with the engineer. Because, yeah, there is a very pragmatic way that engineers have of trying to apply this information to solve a problem. And that is extremely useful, but I think it is far removed from what the appeal or goal of pure science is. Many Renaissance scientists were profoundly religious. And they saw no conflict in that at all. For them, understanding how the universe worked was getting to know God better by understanding his creation more clearly. And I feel like the wonder that comes with understanding how the universe works is very closely related to religious awe.

When scientists discover something new about the universe, I imagine that what they feel is almost identical to what deeply religious people feel when they feel like they are in the presence of God. I wish that we could get back a little of that attitude, instead of thinking of religion and science as being fundamentally diametrically opposed. And the idea that science drains all the wonder out of the universe, I don’t think that’s true. I think science adds wonder to the universe. So there’s one aspect of that earlier attitude, when scientists could be religious, that I would like us to retain.

ezra klein

You have this comparison of what science fiction and fantasy are good for. And you write that science fiction helps us to think through the implications of ideas and that fantasy is good at taking metaphors and making them literal. But what struck me reading that is that your work often takes scientific ideas and uses them as metaphor. So is there such a difference between the two?

ted chiang

So when it comes to fiction about the speculative or the fantastic, one way to think about these kinds of stories is to ask, are they interested in the speculative element literally or metaphorically or both? For example, at one end of the spectrum, you’ve got Kafka: in “The Metamorphosis,” Gregor Samsa turns into an insect. That is pretty much entirely a metaphor. It’s a stand-in for alienation. At the other end of the spectrum, you’ve got someone like Kim Stanley Robinson. And when he writes about terraforming Mars, Mars is not standing in for anything else. He is writing very literally about Mars.

Now, most speculative or fantastic fiction falls somewhere in between those two. And most of it is interested in both the literal and the metaphorical at the same time, but to varying degrees. So, in the context of magic, when fantasy fiction includes people who can wield magic, magic stands in for the idea that certain individuals are special. Magic is a way for fantasy to say that you are not just a cog in the machine, that you are more than someone who pushes paper in an office or tightens bolts on an assembly line. Magic is a way of externalizing the idea that you are special.

[MUSIC PLAYING]

ezra klein

We’ve talked a lot about magic. As it happens, I spent last night rewatching, to be honest, “Dr. Strange,” the movie. What do you think about the centrality of superheroes in our culture now?

ted chiang

I understand the appeal of superhero stories, but I think they are problematic on a couple of levels. One is that they are fundamentally anti-egalitarian, because they are always about this class of people who stand above everyone else. They have special powers. And even if they have special responsibilities, they are special. They are different. So that anti-egalitarianism, I think, yeah, that is definitely an issue.

But another aspect in which they can be problematic is, how is it that these special individuals are using their power? Because one of the things that I’m always interested in, when thinking about stories, is, is a story about reinforcing the status quo, or is it about overturning the status quo? And most of the most popular superhero stories, they are always about maintaining the status quo. Superheroes, they supposedly stand for justice. They further the cause of justice. But they always stick to a very limited idea of what constitutes a crime, basically the government’s idea of what constitutes a crime.

Superheroes pretty much never do anything about injustices perpetrated by the state. And in the developed world, certainly, you can, I think, make a good case that injustices committed by the state are far more serious than those caused by conventional criminality. The existing status quo involves things like vast wealth inequality and systemic racism and police brutality. And if you are really committed to justice, those are probably not things that you want to reinforce. Those are not things you want to preserve.

But that’s what superheroes always do. They’re always trying to keep things the way they are. And superhero stories like to sort of present the world as being under a constant threat of attack. If the superheroes weren’t there, the world would fall into chaos. And this is actually kind of the same tactic used by TV shows like “24.” It’s a way to sort of implicitly justify the use of violence against anyone we label a threat to the existing order. And it makes people defer to authority.

This is not, I think, intrinsic to the idea of superheroes in and of itself. Anti-egalitarianism, that probably is intrinsic to the idea of superheroes. But the idea of reinforcing the status quo, that is not. You could tell superhero stories where superheroes are constantly fighting the power. They’re constantly tearing down the status quo. But we very rarely see that.

ezra klein

So there have been a couple of tries at that, though always off to the side of the main stories. There was one series called “The Authority,” particularly during the war on terror, where they hand over a George W. Bush stand-in to the aliens to end a war. There’s another in the DC universe called “Injustice,” where Superman, after a tragic loss in a particular dimension, basically creates a fascist super-regime, trying to impose justice on a world that has too little of it.

And the problem with those stories always seems to be that they never know where to go with them. It always collapses immediately into fascism. They don’t know how to tell stories about governance. If you were to write a superhero story that was about overturning the status quo as opposed to reinforcing it, how would you do that? Not the first issue, where they take out the White House, but the sixth issue: what would that be about?

ted chiang

That’s a good question. It’s not clear where you go with a story like that. But one of the attractive things about the story where you reinforce the status quo is that you can tell endless sequels. Because the end of the story leaves you pretty much where you were at the beginning of the story. And so, yeah, you can tell the same story over and over again. Stories about overturning the status quo, it’s very difficult to tell a sequel to. Maybe you can tell a sequel, but it will look radically different than the first story. So that makes it hard to sustain them the way that large media companies want.

Some of that also has to do with the fact that the way Marvel and DC work, they sort of have this continuity that they want to stick with. Let’s say we got rid of the whole idea of there being ongoing continuity, so that every Superman story was a standalone Superman story. They’d be like stories about Paul Bunyan. Paul Bunyan stories don’t fit into a chronology. No one asks, well, did this happen before that one? They’re pretty much disconnected.

So, in that way, you could tell individual stories where superheroes fight the government. The more powerful your superhero is, the more problems that might create. Like Superman, because he is so powerful, that poses difficulties. But someone like Batman, he is someone who could fight the government on an ongoing basis. The real problem, again, from a media company standpoint, is that basically, you are writing a series — you’re publishing a title — about what we would consider a terrorist, someone who repeatedly attacks government facilities, someone who is, say, fighting the police or breaking people out of prison. That’s someone who we would very likely label a terrorist. And that’s not something that any big media company is going to really feel comfortable doing.

ezra klein

Is this the way superheroes end up being magic and not technology? Because as you’ve been saying this, I’ve been thinking about that distinction. And one issue here is that if you’re going to overturn the status quo with that much power, then, obviously, you end up ruling. And for rule to be legitimate, it needs to be representative. But if you are the only one with the power, it’s not going to truly be representative. And so you end up there pretty quickly, even with superheroes like Iron Man or Batman, who are technologically powered; there’s still something they’re able to do that nobody else can do. The reason it’s very hard to have these stories go anywhere over time, putting aside sort of the one-off idea you’re offering here, is that you eventually need to build something representative. You have to build something that doesn’t need you to be a superhero, right? A system that is not about you, the great savior. And that’s just a little bit intrinsically difficult for the medium. It is very, very difficult to move from a story about how much power this one person has to a story about how you moved into egalitarian, representative governing dynamics in a way that made marginal improvements across a lot of issues reasonably regularly over a long period of time.

ted chiang

Yes. So superheroes basically are magic. Even if they are ostensibly technological in their powers, their technology never behaves the way actual technology does. We have fleets of very expensive fighter jets. There’s no reason that there’s only one Iron Man. So pretty much all superheroes behave in a sort of magic way, because the abilities are embodied in a single individual. Those abilities never really spread.

As for the question of the difficulty of telling a story about what real change looks like, what a better system of governance looks like, that is a legitimately difficult question. Because think about actual heroes in the world, people who effected great change, say, Martin Luther King Jr. He did not have superhero powers. He was a regular man, but he effected enormous change. The types of stories that you can tell about someone like that, they’re not as dramatic as a story about a superhero. Victory is not as clear. You don’t have the tidy ending where everything gets wrapped up. Because when you start trying to tell a story like that, then you get into all the issues that you mentioned. Legitimate political change doesn’t come from one person, even a superpowered just person, making decrees. Legitimate political change would have to come from a broad base of popular support, things like that. We don’t know what a comic book about that would look like. It might not be that interesting or popular. Or at least, we don’t know how to tell stories like that.

ezra klein

So you have a few different stories around this question of, if the future’s already happened, we could potentially know it. And what would knowing the future do to a person? So in one of your stories, it inspires people to act to bring about that future, to play their assigned role. In another, it leads many to stop acting altogether, to fall into this almost coma-like existence. What do you think it would do to you?

ted chiang

I don’t know. I don’t think anyone can know, because I don’t think the human mind is really compatible with having detailed knowledge of one’s future. I should clarify that I believe in free will. And we can talk about that in a minute. I don’t want my stories to be taken as an argument that human beings lack free will. I believe that human beings do have free will, if you think about the question correctly. However, what some of my stories address is the idea that, OK, given that Einstein seems to have proven that the future is fixed, if you could get knowledge of the future, what would that do to you? This is not a situation that any of us need to worry about, because we’re never going to get information from the future. But for me, it is a very philosophically interesting question as to how a mind would cope with that. Could a mind cope with that? I don’t think that there are any really good solutions to that situation in terms of trying to reconcile logical consistency with our experience of volition. I think that would be very, very difficult.

ezra klein

Let me ask you a question that I think about fairly often, partly, I think, because I’m culturally Jewish. If you could know with certainty the date of your death, would you want to know it?

ted chiang

Yeah, I probably would. I probably would.

ezra klein

Really? Oh, I would not, under any circumstances, want to know.

ted chiang

I mean, it seems like it might be useful so that you could make some preparations. It might be good to get your affairs in order. We’re not talking about a lot of detailed information because I think the more information you have, yeah, the more that it’s going to mess with you. The more information you have, the closer we get to this situation that I sometimes write about, where, yeah, if you have perfect knowledge of what’s going to happen to you, yeah, that, I think, is kind of incompatible with human volition. But very limited pieces of information could be helpful.

ezra klein

I think the rationally correct response is yes. I mean, if I knew I was going to die 10 years from now as opposed to 50 years from now, I would live the 10 years differently, or I think I would. At the same time, the problem is that if I knew that, I would be overwhelmed by anxiety for many of those 10 years. So, however I wanted to live them, it might be hard for me to approach them in that way, which may simply be a psychological failing on my part. But it’s that collision, between information that would be good to have and a mind that does not feel built to handle it, that I always find fascinating about that question.

ted chiang

I don’t think anyone would claim that it would be easy to have this information, that it would be fun or pleasant. But we do have examples of people who have some idea that they will die in the near term, and it’s no cakewalk. But I think largely, people are able to cope with it. And it doesn’t seem like it’s permanently debilitating. It is difficult in the short-term, but I think people are mostly able to cope. [MUSIC PLAYING]

ezra klein

One of the things I really like about the way your stories play with free will is that, in the free will conversation, people are very focused on what different models of the universe say about freedom. And your work often focuses on how they would change the idea of will, right? You have some amount of freedom, but within the context of your will: there are things you want to do and don’t want to do, things you can do and can’t do. And if you know different things about the universe, or you have different approaches, maybe your will changes. So let me ask you another question about free will, one that I think about a lot. In any given challenging moment, let’s say I have a choice to respond with anger or forgiveness. But also, let’s say I was exposed as a child to a lot of lead, and so I have much poorer impulse control than someone who wasn’t. And so I respond with anger. Is that a choice made freely, compared to somebody who didn’t have that lead exposure? What does free will, to you, have to say about that question of the capacities of our will?

ted chiang

So I think that free will is not an all-or-nothing idea. It’s a spectrum. Even the same individual in different situations may sort of be under different levels of constraint or coercion. And those will limit that person’s free will. And clearly, different people will also be under different levels of constraint or coercion, or have different ranges of options available to them. So free will is something you have in varying degrees. So, yes, someone who has had childhood exposure to lead and thus has poor impulse control, they are, say, less free than someone who did not have that. But they still have more free will than, say, a dog, more free will than an infant. And they can probably take actions to adjust their behavior, in order to try and counter these effects that they are aware of on their impulse control. And so in the much more pragmatic, real-world context, that is why, yes, I believe that we do have free will. Because we are able to use the information we have and change our actions based on that. We don’t have some perfect theoretical absolute version of free will. But we are able to think about and deliberate over our actions and make adjustments. That’s what free will actually is.

ezra klein

Let me flip this now. We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?

ted chiang

Well, in terms of at what point does that happen, it’s unclear, but it’s a very long way from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can break that into three different questions. One is, can we do so? The second is, will we do so? And the third one is, should we do so? I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines, and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.

As for the question of, will we do so: if you had asked me, like, 10 or 15 years ago, I would have said we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program. And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want, basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents. However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do.

So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering. Suffering precedes moral agency on sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will inevitably be creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.

ezra klein

But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise an almost-inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives that don’t hurt anybody, and you can copy them at almost no marginal cost, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?

ted chiang

I think that it will be much easier to inflict suffering on them than to give them happy, fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. Given the way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention.

Because it’s hard enough to give legal protections to human beings, who are absolutely moral agents. We have relatively few legal protections for animals, who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet there’s vast animal suffering. But there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, is that it will inevitably fall lower on the ladder of consideration. So we will treat it worse than we treat animals. And we treat animals pretty badly.

ezra klein

I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals. And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that, that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe to be to serve us for food or whatever else it may be, that we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly, and that given our history, there’s no real reason to think we won’t. That’s grim. [LAUGHS]

ted chiang

It is grim, but I think that it is by far the more likely scenario. The scenario that, say, Yuval Noah Harari is describing, where A.I.’s treat us like pets, that idea assumes that it’ll be easy to create A.I.’s who are vastly smarter than us, that basically, we will go from software which is not a moral agent and not intelligent at all straight to software which is superintelligent and also has volition. Whereas I think that we’ll proceed in the other direction. Right now, software is simpler than an amoeba. Eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom.

A lot of people seem to think that, oh, no, we’ll immediately jump way above humans on whatever ladder they have. I don’t think that is the case. And so in the scenario that I am describing, we’re going to be the ones inflicting the suffering. Because again, look at animals. Look at how we treat animals.

ezra klein

So I hear you, that you don’t think we’re going to invent superintelligent self-replicating A.I. anytime soon. But a lot of people do. A lot of science fiction authors do. A lot of technologists do. A lot of moral philosophers do. And they’re worried that if we do, it’s going to kill us all. What do you think that question reflects? Is that a question that is emergent from the technology? Or is that something deeper about how humanity thinks about itself and has treated other beings?

ted chiang

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxieties about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, if we lived in a world that was a lot like Denmark, or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there. Now if the entire world is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. Most of the things that we worry about under the mode of capitalism that the U.S. practices, like technology putting people out of work or making people’s lives harder, happen because corporations see technology as a way to increase their profits and reduce their costs. It’s not intrinsic to the technology. It’s not that technology fundamentally is about putting people out of work. It’s capitalism that wants to reduce costs, and it reduces costs by laying people off.

It’s not that all technology suddenly becomes benign in this world. But in a world where we have really strong social safety nets, you could maybe actually evaluate the pros and cons of a technology as a technology, as opposed to seeing it through how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense? And so, I feel like that is kind of the unexamined assumption in a lot of discussions about the inevitability of technological change and technologically induced unemployment. Those are fundamentally about capitalism and the fact that we are sort of unable to question capitalism. We take it as an assumption that it will always exist and that we will never escape it. That’s sort of the background radiation that we are all having to live with. But yeah, I’d like us to be able to separate an evaluation of the merits and drawbacks of technology from the framework of capitalism.

ezra klein

I feel like that’s both right and wrong. When you think about the canonical example of A.I. killing us all, the paperclip maximizer, where you tell an A.I. to make as many paper clips as it can, and then it converts the entire world into paper clip materials and that’s the end of us, that does seem like some capitalist wanted to make paper clips and invented A.I. But some of the conversation — and this is the part that I sometimes have more trouble shaking — comes from a simple extrapolation of our own history.

You have this extraordinarily moving story, written from the perspective of a parrot, speaking to humanity about why it is looking so desperately for intelligent life on other planets while not just ignoring, but often exterminating, intelligent life here on its own planet. And I wish I had the quote in front of me, but you have this very, very moving moment towards the end of it — it’s a part of it that I always remember — where the parrot, who seems possessed of a very expansive moral imagination, says, it’s not that humanity is bad. It’s not that they meant to do this. It’s just that they didn’t really notice they were doing it.

ted chiang

They weren’t paying attention.

ezra klein

They weren’t paying attention. Sorry, that was it. That’s it. And I think that’s a fear about A.I. And the thing that always gives that fear some potency to me is that A.I. is going to be made by us, and we have been, and remain, so bad at paying attention to the things that are in the way of our goals, not always from malice, but sometimes through inattention. I don’t know that you need capitalism to be worried about something that much more analytically powerful than us, something that would just have goals that we would simply be in the way of. I think the fear is that we are just a byproduct, not an enemy like in Skynet, but the parrot in this conversation.

ted chiang

And again, I think that’s an indication of how completely some people have internalized either capitalism or a certain way of looking at the world which also underpins capitalism: a way of looking at the world as an optimization problem or a maximization problem. So that might be the underlying common element between capitalism and a lot of these A.I. doomsday scenarios, this insistence on seeing the world as an optimization problem.

ezra klein

Let me ask you about the parrot. If we found parrots on Mars that had four feet and six wings, but otherwise were exactly the same, how would we treat them differently from parrots here?

ted chiang

So if we found extraterrestrial life that had cognitive capabilities similar to parrots on Earth, we would probably treat them very respectfully because we could afford to. Because it wouldn’t cost us anything to treat them with respect, because there’d just be a handful of them on Mars. Treating animals on Earth with a comparable amount of respect would be very, very costly for us. If we wanted to treat every animal on Earth with respect, we would pretty much have to restructure our entire way of life. And that’s not something we’re willing to do. I believe you’re a vegetarian?

ezra klein

Yeah, a vegan.

ted chiang

And so you are aware of just how much animal cruelty is built into modern civilization. I myself, I still eat meat. And I can offer no defense for that beyond my own preferences. The wider you expand your circle of compassion, the harder it can be to balance your own individual interests with the interests of everyone else in that circle. But the wider the circle gets, the more entities whose interests you have to balance with yours. And yeah, it can be difficult.

ezra klein

This is wonderful. I think it’s probably a good place to end. So I’d like to ask you for a couple of book recommendations before we wrap up. Let me start here. What’s a religious text you love?

ted chiang

I can’t really point to a conventional religious text as an atheist. But I guess I would say Annie Dillard’s book, “Pilgrim at Tinker Creek.” That is a book about feeling the wonders of nature and experiencing those as a way of sort of being close to the divine. Reading that book gave me maybe the closest that I’m likely to get to understanding a kind of religious ecstasy.

ezra klein

What is the book you recommend on intelligence, artificial or otherwise?

ted chiang

So there’s this computer programmer named Steve Grand. And he wrote a book called “Creation,” which is partly about artificial intelligence, but I guess partly about artificial life. That book, I thought, made the most convincing case for how we might actually create, in software, something that merits being called a living thing. I feel like the ideas in that book are the ones that are most promising. So I guess I’d recommend that. Recently, I read this paper called “On the Measure of Intelligence” by François Chollet. And it offers a really useful definition of intelligence, one that can be meaningfully applied to both humans and A.I.’s. And it’s one that everyone needs to be thinking about.

ezra klein

I am sold. I’m going to read that paper. What’s a favorite book of short stories of yours?

ted chiang

I’m a big fan of George Saunders’s first collection, “CivilWarLand in Bad Decline.” Another one is usually called a novel, but I think of it as a collection of short stories. And that’s “A Visit from the Goon Squad” by Jennifer Egan. The stories are linked, but I think the linkages are loose enough that, to me, it makes sense to see it as a collection of short stories rather than a novel.

ezra klein

I love both of those, and I’ll note for listeners that George Saunders was on the show a couple of weeks back. And it’s a great episode, and people should check it out. What book or film to you has the most impressive world building?

ted chiang

There’s this Japanese anime film from the late ‘80s called “Royal Space Force: Wings of Honneamise.” It’s the story of sort of a space program in this country, which is not Japan. It’s not in our world. But it’s a country that is at a somewhat mid-20th-century level of technological development. And they are trying to get into space. And I just loved all the details of the physical culture of this imagined nation. The coins are not flat metal disks; they’re metal rods. And the televisions are not rectangles; their cathode ray tubes are completely circular, so they’re watching TV on circular screens. All these little details, just the way their newspapers fold. I was just really impressed by the way that the animators for that film invented an entirely new physical culture for this movie. The movie is not about those things, but they really fleshed out this alternate world just as the backdrop for the story that they wanted to tell.

ezra klein

What’s a novel you’ve read recently that you would recommend?

ted chiang

So I recently read this novel called “On Fragile Waves,” by E. Lily Yu. It is a kind of magic-realist novel about a family of Afghan refugees who are seeking asylum in Australia. And I found it powerfully affecting. So I think, yeah, everyone should check that out.

ezra klein

So I saw today, when I was doing some final research here, that you play video games. Are there some video games you would recommend?

ted chiang

All right, so one fairly big-budget game that I’ll recommend is a game called “Control.” It’s a really beautiful game. I took, I think, probably more screenshots while playing that game than any other game I’ve played. You play as a character who is visiting the Federal Bureau of Control, which is this government agency that is sort of hiding in plain sight. Their headquarters are in this nondescript federal building, which is also known as the Oldest House, because it’s actually a locus of sort of supernatural incursions into our reality. That was a very cool game. I liked it a lot. And then a smaller indie game that I really enjoyed was “Return of the Obra Dinn,” which is this super cool puzzle game made by one guy, Lucas Pope. I just thought it was an extraordinary achievement. I think it’s an amazingly inventive game. And it blows my mind that it was made by one person. He even wrote the music for it. It’s this puzzle game where you have to identify the circumstances around the deaths of each of the crew members of this merchant ship. And it is unlike any other game I’ve ever played.

ezra klein

Ted Chiang, what a pleasure. Thank you very much.

ted chiang

Thanks for having me. [MUSIC PLAYING]

ezra klein

Thank you for listening to the show. Of course, thank you to Ted Chiang for being here. If you want to support the show, there are two things I would ask of you, and each of them only takes a second. Either send this episode to somebody you know who you think would enjoy it (you can just text it), or go on whatever podcast app you are using and leave us a review. It really, weirdly, does help. “The Ezra Klein Show” is a production of New York Times Opinion. It is produced by Roge Karma and Jeff Geld, fact-checked by Michelle Harris. Original music by Isaac Jones and mixing by Jeff Geld.

[MUSIC PLAYING]

EZRA KLEIN: I’m Ezra Klein, and this is “The Ezra Klein Show.” For years, I have kept a list of dream guests for the show. And as long as that list has been around, Ted Chiang has been on top of it. He’s a science fiction writer, but that’s underselling him. He writes perfect short stories — perfect.

And he writes them slowly. He’s published only two collections, the “Stories of Your Life and Others” in 2002, and then, “Exhalation” more recently in 2019. And the stories in these books, they’ve won every major science fiction award you can win multiple times over — four Hugo’s, four Nebula’s, four Locus Awards. If you’ve seen the film “Arrival,” which is great — and if you haven’t, what is wrong with you — that is based on a story from the ’02 collection, the “Story of Your Life.”

I’ve just, I’ve always wondered about what kind of mind would create Chiang’s stories. They have this crazy economy in them, like not a word out of place, perfect precision. They’re built around really complicated scientific ideas, really heavy religious ideas. I actually think in a way that is not often recognized, Chiang is one of the great living writers of religious fiction, even though he’s an atheist and a sci-fi legend. But somehow, the stories, at least in my opinion, they’re never difficult. They’re very humane and propulsive. They keep moving. They’re cerebral, they’re gentle.

But man, the economy of them is severe. That’s not always the case for science fiction, which I find, anyway, can be wordy, like spilling over with explanation and exposition. Not these. So I was thrilled — I was thrilled — when Chiang agreed to join on the show. But one of the joys of doing these conversations is, I get to listen to people’s minds working in real-time. You can watch or hear them think and speak and muse.

But Chiang’s rhythm is really distinct. Most people come on the show — and this goes for me, too — speak like we’re painting in watercolor, like a lot of brush strokes, a lot of color. If you get something wrong or you have a false start, you just draw right over it or you start a new sheet. But listening to Chiang speak, I understood his stories better. He speaks like he’s carving marble. Like, every stroke has to be considered so carefully, never delivering a strike, or I guess, a word, before every alternative has been considered and rejected. It’s really cool to listen to.

Chiang doesn’t like to talk about himself. And more than he doesn’t like to, he won’t. Believe me, I’ve tried a couple of times. It didn’t make it into the final show here. But he will talk about ideas. And so we do. We talk about the difference between magic and technology, between science fiction and fantasy, the problems with superheroes and nature of free will, whether humanity will make A.I. suffer, what would happen if we found parrots on Mars. There’s so many cool ideas in this show, just as there always are in his fiction. Many of them, of course, come from his fiction. So relax into this one. It’s worth it. As always, my email is [email protected]. Here’s Ted Chiang.

So you sent me this wonderful speech questioning the old Arthur C. Clarke line, “Any sufficiently advanced technology is indistinguishable from magic,” what don’t you like about that line?

TED CHIANG: So, when people quote the Arthur C. Clarke line, they’re mostly talking about marvelous phenomena, that technology allows us to do things that are incredible and things that, in the past, would have been described as magic, simply because they were marvelous and inexplicable. But one of the defining aspects of technology is that eventually, it becomes cheaper, it becomes available to everybody. So things that were, at one point, restricted to the very few are suddenly available to everybody. Things like television — when television was first invented, yeah, that must have seemed amazing, but now television is not amazing because everyone has one. Radio is not amazing. Computers are not amazing. Everyone has one.

Magic is something which, by its nature, never becomes widely available to everyone. Magic is something that resides in the person and often is an indication that the universe sort of recognizes different classes of people, that there are magic wielders and there are non-magic wielders. That is not how we understand the universe to work nowadays. That reflects a kind of premodern understanding of how the universe worked. But since the Enlightenment, we have moved away from that point of view. And a lot of people miss that way of looking at the world, because we want to believe that things happen to us for a reason, that the things that happen to you are, in some way, tied to the things you did.

EZRA KLEIN: I’ve heard you refer to the pivotal moment in the history of science as the emergence of the discipline of chemistry over that of alchemy. Why was that important?

TED CHIANG: So, one of the things that people often associate with alchemy is the attempt to transmute lead into gold. And of course, this is appealing because it seems like a way to create wealth, but for the alchemists, for a lot of Renaissance alchemists, the goal was not so much a way to create wealth as it was a kind of spiritual purification, that they were trying to transform some aspect of their soul from base to noble. And that transformation would be accompanied by some physical analog, which was transmuting lead into gold. And so, yeah, you would get gold, which is cool, but you would also have purified your soul. That was, in a lot of ways, the primary goal.

And that is an example, I think, of this general idea that the intentions or the spiritual nature of the practitioner was an essential element in chemical reactions, that you needed to be pure of heart or you needed to concentrate really hard in order for the reaction to work. And it turns out that is not true. Chemical reactions work completely independently of what the practitioner wants or feels or whether they are virtuous or malign. So, the parts of alchemy which ignored that spiritual component, those eventually became chemistry. And the parts of alchemy which relied on the spiritual components of the practitioner were all proven to be false. And so, in some ways, the transition from alchemy to chemistry is a recognition of the fundamentally impersonal nature of the universe.

EZRA KLEIN: And this helps illuminate something else, and it’s recurrent in your books for me, which is that you have a lot of characters who go back to the period when religious inquiry and scientific inquiry were intertwined, when scientific inquiry was a way of more fully understanding the splendor of God. And sometimes they come into contact with characters who are more of this more impersonal scientific approach, like the story where you have a scientist trying to animate golems for scientific reasons and a Kabbalist who’s trying to find those names to meditate on religious ecstasy. What do you think is the difference between the scientist trying to understand the universe and the religious seeker trying to understand God through the workings of the universe?

TED CHIANG: So I think maybe the difference is not so much between the scientific investigator and the religious investigator such it is between the engineer and the scientist. Because roughly speaking, engineers are interested in the practical application of things. They are interested in, how do we solve a particular problem? How do we use this information to solve a problem? And they are interested in understanding how the universe works and in appreciating the beauty and elegance of the universe.

So I think there is a strong similarity between what the scientists are interested in and maybe what the religious people are interested in. They have, I think, more in common than the scientist has with the engineer. Because, yeah, there is a very pragmatic way that engineers have of trying to apply this information to solve a problem. And that is extremely useful, but I think it is far removed from what the appeal or goal of pure science is. Many Renaissance scientists, they were profoundly religious. And they saw no conflict in that at all. And for them, understanding how the universe worked was getting to know God better by understanding his creation more clearly and feel like that the wonder that comes with understanding how the universe works is very closely related to religious awe.

I think that when scientists discover something new about the universe, I imagine that what they feel is almost identical to what deeply religious people feel when they feel like they are in the presence of God. I wish that we could get back a little of that attitude, instead of thinking of religion and science as being fundamentally diametrically opposed. And the idea that science drains all the wonder out of the universe, I don’t think that’s true. I think science adds wonder to the universe. And so, I feel like one aspect of that earlier attitude of when scientists could be religious, there’s one aspect of that, which I think I would like it if we could retain.

EZRA KLEIN: You have this comparison of what science fiction and fantasy are good for. And you write that science fiction helps us to think through the implications of ideas and that fantasy is good at taking metaphors and making them literal. But what struck me reading that is it often seems to me that your work, it takes scientific ideas and uses them as metaphor. So is there such a difference between the two?

TED CHIANG: So when it comes to fiction about the speculative or the fantastic, one way to think about these kind of stories is to ask, are they interested in the speculative element literally or metaphorically or both? For example, at one end of the spectrum, you’ve got Kafka and in “The Metamorphosis,” Gregor Samsa turning into an insect. That is pretty much entirely a metaphor. It’s a stand-in for alienation. At the other end of the spectrum, you’ve got someone like Kim Stanley Robinson. And when he writes about terraforming Mars, Mars is not standing in for anything else. He is writing very literally about Mars.

Now, most speculative or fantastic fiction falls somewhere in between those two. And most of it is interested in both the literal and the metaphorical at the same time, but to varying degrees. So, in the context of magic, when fantasy fiction includes people who can wield magic, magic stands in for the idea that certain individuals are special. Magic is a way for fantasy to say that you are not just a cog in the machine, that you are more than someone who pushes paper in an office or tightens bolts on an assembly line. Magic is a way of externalizing the idea that you are special.

[MUSIC PLAYING]

EZRA KLEIN: We’ve talked a lot about magic. As it happens, I spent last night rewatching, to be honest, “Dr. Strange,” the movie. What do you think about the centrality of superheroes in our culture now?

TED CHIANG: I understand the appeal of superhero stories, but I think they are problematic on a couple of levels. One is that they are fundamentally anti-egalitarian because they are always about this class of people who stand above everyone else. They have special powers. And even if they have special responsibilities, they are special. They are different. So that anti-egalitarianism, I think, yeah, that is definitely an issue.

But another aspect in which they can be problematic is, how is it that these special individuals are using their power? Because one of the things that I’m always interested in, when thinking about stories, is, is a story about reinforcing the status quo, or is it about overturning the status quo? And most of the most popular superhero stories, they are always about maintaining the status quo. Superheroes, they supposedly stand for justice. They further the cause of justice. But they always stick to your very limited idea of what constitutes a crime, basically the government idea of what constitutes a crime.

Superheroes pretty much never do anything about injustices perpetrated by the state. And in the developed world, certainly, you can, I think, make a good case that injustices committed by the state are far more serious than those caused by crime, by conventional criminality. The existing status quo involves things like vast wealth inequality and systemic racism and police brutality. And if you are really committed to justice, those are probably not things that you want to reinforce. Those are not things you want to preserve.

But that’s what superheroes always do. They’re always trying to keep things the way they are. And superhero stories, they like to sort of present the world as being under a constant threat of attack. If the superheroes weren’t there, the world would fall into chaos. And this is actually kind of the same tactic used by TV shows like “24.” It’s a way to sort of implicitly justify the use of violence against anyone that we label a threat to the existing order. And it makes people defer to authority.

This is not like, I think, intrinsic to the idea of superheroes in and of itself. Anti-egalitarianism, that probably is intrinsic to the idea of superheroes. But the idea of reinforcing the status quo, that is not. You could tell superhero stories where superheroes are constantly fighting the power. They’re constantly tearing down the status quo. But we very rarely see that.

EZRA KLEIN: So there have been a couple of tries at that, obviously off to the side of the main stories. There was one series called “The Authority,” particularly during the war on terror, where they hand over a George W. Bush stand-in to the aliens to end a war. There’s another in the DC universe called “Injustice,” where Superman, after a tragic loss, basically creates a fascist super-regime in a particular dimension, trying to impose justice on a world that has too little of it.

And the problem with those stories always seems to be that they never know where to go with them. It always collapses immediately into fascism. They don’t know how to tell stories about governance. If you were to write a superhero story that was about overturning the status quo as opposed to reinforcing it, how would you do that? Not the first issue, where they take out the White House, but what would the sixth issue be about?

TED CHIANG: That’s a good question. It’s not clear where you go with a story like that. But one of the attractive things about the story where you reinforce the status quo is that you can tell endless sequels. Because the end of the story leaves you pretty much where you were at the beginning of the story. And so, yeah, you can tell the same story over and over again. With stories about overturning the status quo, it’s very difficult to tell a sequel. Maybe you can tell a sequel, but it will look radically different from the first story. So that makes it hard to sustain them the way that large media companies want.

Some of that also has to do with the fact that the way Marvel and DC work is that they sort of have this continuity that they want to stick with. But let’s say we got rid of the whole idea of there being ongoing continuity, so that every Superman story was a standalone Superman story. They’d be like stories about Paul Bunyan. Paul Bunyan stories, they don’t fit into a chronology. No one asks, well, did this one happen before that one? They’re pretty much disconnected.

So, in that way, you could tell individual stories where superheroes fight the government. And the more powerful your superhero is, the more problems that might create. Like Superman, because he is so powerful, that poses difficulties. But someone like Batman, he is someone who could fight the government on an ongoing basis. The real problem, again from a media company standpoint, is that basically, you are writing a series, you’re publishing a title, about what we would consider a terrorist: someone who repeatedly attacks government facilities, someone who is, say, fighting the police or breaking people out of prison. That’s someone who we would very likely label a terrorist. And that’s not something that any big media company is going to really feel comfortable doing.

EZRA KLEIN: Is this the way superheroes end up being magic and not technology? Because as you’ve been saying this, I’ve been thinking about that distinction. And one issue here is that if you’re going to overturn the status quo with that much power, then, obviously, you end up ruling. And for rule to be legitimate, it needs to be representative. But if you are the only one with the power, it’s not going to truly be representative.

And so even with superheroes like Iron Man or Batman, who are technologically powered, there’s something they’re able to do that nobody else can do. The reason it’s very hard to have these stories go anywhere over time, putting aside the one-off idea you’re offering here, is that you eventually need to build something representative. You have to build something that doesn’t need you, the superhero, right? A system that is not about you, the great savior. And that’s just a little bit intrinsically difficult for the medium. It is very, very difficult to move from a story about how much power this one person has to a story about how you move into egalitarian, representative governing dynamics in a way that makes marginal improvements across a lot of issues reasonably regularly over a long period of time.

TED CHIANG: Yes. So superheroes basically are magic. Even if they are ostensibly technological in their powers, their technology never behaves the way actual technology does. We have fleets of very expensive fighter jets. There’s no reason that there’s only one Iron Man. So pretty much all superheroes behave in a sort of magic way because the abilities are embodied in a single individual. Those abilities never really spread. As for the question of the difficulty of telling a story about what does real change look like, what does a better system of governance look like, that is a legitimately difficult question.

Because when you think about actual heroes in the world, people who effected great change, say, Martin Luther King, Jr., he did not have superhero powers. He was a regular man, but he effected enormous change. The types of stories that you can tell about someone like that, they’re not as dramatic as a story about a superhero. Victory is not as clear. You don’t have the tidy ending where everything gets wrapped up.

Because when you start trying to tell a story like that, then you get into all the issues that you mentioned. Legitimate political change doesn’t come from one person, even a superpowered just person making decrees. Legitimate political change would have to come from a broad base of popular support, things like that. We don’t know what a comic book about that would look like. It might not be that interesting or popular. Or at least, we don’t know how to tell stories like that.

EZRA KLEIN: So you have a few different stories around this question of, if the future’s already happened, we could potentially know it. And what would knowing the future do to a person? So in one of your stories, it inspires people to act to bring about that future, to play their assigned role. In another, it leads many to stop acting altogether, to fall into this almost coma-like existence. What do you think it would do to you?

TED CHIANG: I don’t know. I don’t think anyone can know because I don’t think the human mind is really compatible with having detailed knowledge of one’s future. I should clarify that I believe in free will. And we can talk about that in a minute. I don’t want my stories to be taken as an argument that human beings lack free will. I believe that human beings do have free will, if you think about the question correctly.

However, what some of my stories address is the idea that, OK, given that Einstein seems to have proven that the future is fixed, if you could get knowledge of the future, what would that do to you? This is not a situation that any of us need to worry about because we’re never going to get information from the future. But for me, it is a very philosophically interesting question: how would a mind cope with that? Could a mind cope with that? I don’t think that there are any really good solutions to that situation in terms of trying to reconcile logical consistency with our experience of volition. I think that would be very, very difficult.

EZRA KLEIN: Let me ask you a question that I think about fairly often, I think partly because I’m Jewish culturally. If I could tell you, if you could know with certainty the date of your death, would you want to know it?

TED CHIANG: Yeah, I probably would. I probably would.

EZRA KLEIN: Really? Oh, I would not, under any circumstances, want to know.

TED CHIANG: I mean, it seems like it might be useful so that you could make some preparations. It might be good to get your affairs in order. We’re not talking about a lot of detailed information because I think the more information you have, yeah, the more that it’s going to mess with you. The more information you have, the closer we get to this situation that I sometimes write about, where, yeah, if you have perfect knowledge of what’s going to happen to you, yeah, that, I think, is kind of incompatible with human volition. But very limited pieces of information could be helpful.

EZRA KLEIN: I think the rationally correct response is yes. I mean, if I knew I was going to die 10 years from now as opposed to 50 years from now, I would live the 10 years differently, or I think I would. At the same time, I think if I knew that, the problem is I would be overwhelmed by anxiety for many of those 10 years. So, however I wanted to live them, it might be hard for me to approach them in that way, which may be simply a psychological failing on my part. But it’s that collision between the information would be good, and the mind does not feel built to handle the information that I always find fascinating about that question.

TED CHIANG: I don’t think anyone would claim that it would be easy to have this information, that it would be fun or pleasant. But we do have examples of people who have some idea that they will die in the near term, and it’s no cakewalk. But I think largely, people are able to cope with it. And it doesn’t seem like it’s permanently debilitating. It is difficult in the short-term, but I think people are mostly able to cope.

[MUSIC PLAYING]

EZRA KLEIN: One of the things I really like about the way your stories play with free will is that, in the free will conversation, people are very focused on what different models of the universe say about freedom. And your work often focuses on how they would change the idea of will, right? You have some amount of freedom, but within the context of your will, right? There are things you want to do and don’t want to do, things you can do and can’t do. And if you know different things about the universe, or you have different approaches, maybe your will changes.

And so let me ask you another question about free will, which is one that I think about a lot. In any given challenging moment, let’s say I have a choice to respond with anger or forgiveness, but also, let’s say I was exposed as a child to a lot of lead. And so I have much poorer impulse control than someone who wasn’t. And so I respond with anger. Is that a choice made freely, compared to somebody who didn’t have that lead exposure? What does free will, to you, have to say about that question of the capacities of our will?

TED CHIANG: So I think that free will is not an all-or-nothing idea. It’s a spectrum. Even the same individual in different situations may sort of be under different levels of constraint or coercion. And those will limit that person’s free will. And clearly, different people, they will also be under different levels of constraint or coercion, or have different ranges of options available to them. So free will is something you have in varying degrees. So, yes, someone who has had childhood exposure to lead and thus has poor impulse control, they are, say, less free than someone who did not have that.

But they still have more free will than, say, a dog, more free will than an infant. And they can probably take actions to adjust their behavior, in order to try and counter these effects on their impulse control that they are aware of. And so in a much more pragmatic, real-world context, that is why, yes, I believe that we do have free will. Because we are able to use the information we have and change our actions based on that. We don’t have some perfect theoretical absolute version of free will. But we are able to think about and deliberate over our actions and make adjustments. That’s what free will actually is.

EZRA KLEIN: Let me flip this now. We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?

TED CHIANG: Well, in terms of at what point that happens, it’s unclear, but it’s a very long way from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can think about that as three different questions. One is, can we do so? Second is, will we do so? And the third one is, should we do so?

I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.

As for the question of, will we do so, if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the same order of magnitude as the Apollo program. And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want, basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents.

However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering.

Suffering precedes moral agency on sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will inevitably be creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.

EZRA KLEIN: But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise an almost inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives, that don’t hurt anybody, and that you can copy at almost no marginal cost, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?

TED CHIANG: I think that it will be much easier to inflict suffering on them than to give them happy, fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. Given the way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention.

Because it’s hard enough to give legal protections to human beings, who absolutely are moral agents. We have relatively few legal protections for animals, who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet, there’s vast animal suffering. And there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, is that it will inevitably fall lower on the ladder of consideration. So we will treat it worse than we treat animals. And we treat animals pretty badly.

EZRA KLEIN: I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals. And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that, that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe is to serve us for food or whatever else it may be, we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly. And given our history, there’s no real reason to think we won’t. That’s grim. [LAUGHS]

TED CHIANG: It is grim, but I think that it is by far the more likely scenario. I think the scenario that, say, Yuval Noah Harari is describing, where A.I.’s treat us like pets, that idea assumes that it’ll be easy to create A.I.’s who are vastly smarter than us. That basically, we will go from software which is not a moral agent and not intelligent at all, and the next thing that happens will be software which is superintelligent and also has volition.

Whereas I think that we’ll proceed in the other direction, that right now, software is simpler than an amoeba. And eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom.

A lot of people seem to think that, oh, no, we’ll immediately jump way above humans on whatever ladder they have in mind. I don’t think that is the case. And so in the scenario that I am describing, we’re going to be the ones inflicting the suffering. Because again, look at animals, look at how we treat animals.

EZRA KLEIN: So I hear you, that you don’t think we’re going to invent superintelligent self-replicating A.I. anytime soon. But a lot of people do. A lot of science fiction authors do. A lot of technologists do. A lot of moral philosophers do. And they’re worried that if we do, it’s going to kill us all. What do you think that question reflects? Is that a question that is emergent from the technology? Or is that something deeper about how humanity thinks about itself and has treated other beings?

TED CHIANG: I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology? How much would we fear it if we lived in a world that was a lot like Denmark, or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.

Now, if the entire world is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. Most of the things that we worry about under the mode of capitalism that the U.S. practices, that a technology is going to put people out of work, that it is going to make people’s lives harder, those happen because corporations will see it as a way to increase their profits and reduce their costs. It’s not intrinsic to that technology. It’s not that technology fundamentally is about putting people out of work.

It’s capitalism that wants to reduce costs, and reduces costs by laying people off. It’s not that all technology suddenly becomes benign in this world. But in a world where we have really strong social safety nets, you could maybe actually evaluate sort of the pros and cons of a technology as a technology, as opposed to seeing it through the lens of how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense?

And so, I feel like that is kind of the unexamined assumption in a lot of discussions about the inevitability of technological change and technologically induced unemployment. Those discussions are fundamentally about capitalism and the fact that we are sort of unable to question capitalism. We take it as an assumption that it will always exist and that we will never escape it. And that’s sort of the background radiation that we are all having to live with. But yeah, I’d like us to be able to separate an evaluation of the merits and drawbacks of technology from the framework of capitalism.

EZRA KLEIN: I feel like that’s both right and wrong. So when you think about the canonical example of A.I. killing us all, like the paperclip maximizer, where you tell an A.I. to make as many paper clips as it can, and then it converts the entire world into paperclip materials and that’s the end of us, that seems like some capitalist wanted to make paper clips and invented A.I.

But some of the conversation, and this is the part that I sometimes have more trouble shaking, comes from a simple extrapolation of our own history. You have this extraordinarily moving story written from the perspective of a parrot, speaking to humanity, talking to it about why it is looking so desperately for intelligent life on other planets while not just ignoring, but often exterminating, intelligent life here on its own planet. And I wish I had the quote in front of me, but there is this very, very moving moment towards the end of it, a part that I always remember, where the parrot, who seems possessed of a very expansive moral imagination, says, it’s not that humanity is bad. It’s not that they meant to do this. It’s just that they didn’t really notice they were doing it.

TED CHIANG: They weren’t paying attention.

EZRA KLEIN: They weren’t paying attention. Sorry, that was it. That’s it. And I think that’s a fear about A.I. And the thing that always gives that fear some potency for me is this: A.I. is going to be made by us, and we have been, and remain, so bad at paying attention to the things that are in the way of our goals, not always out of malice, but sometimes out of inattention. I don’t know that you need capitalism to be worried about something more analytically powerful than us, operating on that scale, that would just have goals that we would simply be in the way of. I think the fear is that we are just a byproduct, not an enemy like in Skynet, but the parrot in this conversation.

TED CHIANG: And again, I think that’s an indication of how completely some people have internalized either capitalism or a certain way of looking at the world, which also sort of underpins capitalism: a way of looking at the world as an optimization problem or a maximization problem. Yeah, so that might be the underlying common element between capitalism and a lot of these A.I. doomsday scenarios, this insistence on seeing the world as an optimization problem.

EZRA KLEIN: Let me ask you about the parrot. If we found parrots on Mars, but they had four feet and six wings, but otherwise were exactly the same, how would we treat them differently from parrots here?

TED CHIANG: So if we found extraterrestrial life that had cognitive capabilities similar to parrots on Earth, we would probably treat them very respectfully because we could afford to. Because it wouldn’t cost us anything to treat them with respect, because there’d just be a handful of them on Mars. Treating animals on Earth with a comparable amount of respect would be very, very costly for us. If we wanted to treat every animal on Earth with respect, we would pretty much have to restructure our entire way of life. And that’s not something we’re willing to do. I believe you’re a vegetarian?

EZRA KLEIN: Yeah, a vegan.

TED CHIANG: And so you are aware of just how much animal cruelty is built into modern civilization. I myself, I still eat meat. And I can offer no defense for that beyond my own preferences. The wider you expand your circle of compassion, the harder it can be to balance your own individual interests with the interests of everyone else in that circle. But the wider the circle gets, the more entities whose interests you have to balance with yours. And yeah, it can be difficult.

EZRA KLEIN: This is wonderful. I think it’s probably a good place to end. So I’d like to ask you a couple of book recommendations before we wrap up. So let me start here. What’s a religious text you love?

TED CHIANG: I can’t really point to a conventional religious text as an atheist. But I guess I would say Annie Dillard’s book, “Pilgrim at Tinker Creek.” That is a book about feeling the wonders of nature and experiencing those as a way of sort of being close to the divine. Reading that book gave me maybe the closest that I’m likely to get to understanding a kind of religious ecstasy.

EZRA KLEIN: What is the book you recommend on intelligence, artificial or otherwise?

TED CHIANG: So there’s this computer programmer named Steve Grand. And he wrote a book called “Creation,” which is partly about artificial intelligence, but I guess partly about artificial life. That book made what I thought was the most convincing case for how we might actually create something in software that merits being called a living thing. The ideas in that book are the ones that seem most promising to me. So I guess I’d recommend that.

Recently, I read this paper called “On the Measure of Intelligence” by Francois Chollet. And it offers a really, I think, useful definition of intelligence, one that can be meaningfully applied to both humans and A.I.’s. And it’s one that everyone needs to be thinking about.

EZRA KLEIN: I am sold. I’m going to read that paper. What’s a favorite book of short stories of yours?

TED CHIANG: I’m a big fan of George Saunders’s first collection, “CivilWarLand in Bad Decline.” Another one would usually be called a novel, but I think of it as a collection of short stories. And that’s “A Visit from the Goon Squad” by Jennifer Egan. The stories are sort of linked, but I think the linkages are loose enough that, to me, it makes sense to see it as a collection of short stories, rather than a novel.

EZRA KLEIN: I love both of those, and I’ll note for listeners that George Saunders was on the show a couple of weeks back. And it’s a great episode, and people should check it out. What book or film to you has the most impressive world building?

TED CHIANG: There’s this Japanese anime film from the late ’80s called “Royal Space Force: Wings of Honneamise.” It’s the story of a space program in this country, which is not Japan. It’s not in our world. But it’s a country that is at a somewhat mid-20th-century level of technological development. And they are trying to get into space. And I just loved all the details of the physical culture of this imagined nation. The coins are not flat metal disks; they’re metal rods. And the televisions, they’re not rectangles. Their cathode ray tubes are completely circular, so they’re watching TV on circular screens.

All these little details, just the way their newspapers fold, I just really was impressed by the way that the animators for that film, they invented an entirely new physical culture for this movie. The movie is not about those things, but they really fleshed out this alternate world just as the backdrop for the story that they wanted to tell.

EZRA KLEIN: What’s a novel you’ve read recently that you would recommend?

TED CHIANG: So I recently read this novel called “On Fragile Waves,” by E. Lily Yu. It is a kind of magic realist novel about a family of Afghan refugees who are seeking asylum in Australia. And I found it powerfully affecting. So I think, yeah, everyone should check that out.

EZRA KLEIN: So I saw today, when I was doing some final research here, that you play video games. Are there some video games you would recommend?

TED CHIANG: All right, so one fairly big-budget game that I’ll recommend is a game called “Control.” It’s a really beautiful game. I took, I think, probably more screenshots while playing that game than any other game I’ve played. You play as a character who is visiting the Federal Bureau of Control, which is this government agency that is sort of hiding in plain sight. Their headquarters are in this nondescript federal building, which is also known as the Oldest House, because it’s actually a locus of sort of supernatural incursions into our reality. That was a very cool game. I liked it a lot.

And then a smaller indie game that I really enjoyed was “Return of the Obra Dinn,” which is this super cool puzzle game made by one guy, Lucas Pope. I just thought it was an extraordinary achievement. I think it’s an amazingly inventive game. And it blows my mind that it was made by one person. He even wrote the music for it. It’s this puzzle game where you have to identify the circumstances around the deaths of each of the crew members of this merchant ship. And it is unlike any other game I’ve ever played.

EZRA KLEIN: Ted Chiang, what a pleasure. Thank you very much.

TED CHIANG: Thanks for having me.

[MUSIC PLAYING]

EZRA KLEIN: Thank you for listening to the show. Of course, thank you to Ted Chiang for being here. If you want to support the show, there are two things I would ask of you. Each of them only takes a second. Either send this episode to somebody you know who you think would enjoy it, you can just text it, or go on whatever podcast app you are using and leave us a review. It really, weirdly, does help. “The Ezra Klein Show” is a production of New York Times Opinion. It is produced by Roge Karma and Jeff Geld, fact-checked by Michelle Harris. Original music by Isaac Jones and mixing by Jeff Geld.
