Transcript

Episode 10: Eugenic Thinking & The Race to Build AI

KEY TOPICS

  • Introduction to the main idea of the TESCREAL paper: the cultural push to develop artificial general intelligence is undergirded by eugenic thinking

  • Dr. Timnit Gebru discusses her intellectual journey of tackling bias and discrimination in technology and becoming a vocal critic of Big Tech

  • The core ideas of each of the philosophies in the TESCREAL bundle (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism)

  • Concrete examples of how TESCREALism is playing out in the United States today

  • Why is it important to interrogate “the why” in our efforts to build artificial general intelligence?

  • How does the TESCREAL framework serve as a jumping off point for taking a critical eye towards genetics and genomics research?

  • What is your greatest fear about the future of eugenic thinking in American culture?

  • Thought experiment: how could knowing our likely date of death and cause of death from birth change our relationship to mortality?

 

Interview

Susanna Smith

Hi everyone. This is Genetic Frontiers, a podcast about the promise, power, and perils of genetic information. Find us wherever podcasts are found, and go to geneticfrontiers.org to join the conversation about how genetic discoveries are propelling new personalized medical treatments but also posing ethical dilemmas and emotional quandaries. I'm your host, Susanna Smith.

This season we’re focusing on Genetics in American Politics & Culture. We talk with historians, journalists, technologists and philosophers about the alluring but dangerous pursuit of improving the human species through genetics. We discuss how ideas about people’s genetic worth and worthiness are driving American politics and policy today.

We'll be talking today about the widespread influence of eugenics in American culture, including in Silicon Valley. Joining me are Dr. Timnit Gebru and Dr. Émile P. Torres to talk about what they've called a second wave of eugenic ideologies that arose from the 1950s to the 2010s and help explain how we went from pre-World War II eugenic beliefs to present day—a eugenicist in the White House. 

Let me introduce our guests. Dr. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute (DAIR). Previously, she worked as the co-lead of the Ethical AI research team at Google but was infamously fired in 2020 for writing a paper and speaking out about her concerns about the dangers of large language models and discrimination at Google. Timnit has received many accolades, including being named one of Nature's Ten people who helped shape science and one of TIME’s 100 most influential people. She is currently writing The View From Somewhere, a memoir and manifesto arguing for a technological future that serves our communities instead of one that is used for surveillance, warfare, and the centralization of power by Silicon Valley.

Dr. Émile P. Torres is a philosopher of human extinction, the author of a number of books, including most recently Human Extinction: A History of the Science and Ethics of Annihilation, and is currently a visiting scholar at the Inamori International Center for Ethics and Excellence at Case Western Reserve University. Émile began their career writing about ideologies they now critique and have published numerous articles about topics like human enhancement, machine superintelligence, and the future of humanity. And since 2019, Émile has been a vocal critic of a group of ideologies that they believe are both philosophically flawed and potentially dangerous but are driving many technologists.

Together, Timnit and Émile published a paper, in which they argue that the corporate and cultural push to develop artificial general intelligence is undergirded by a worldview that is:

“rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of ‘safety’ and ‘benefiting humanity’ to evade accountability.”

That's an excerpt from their paper, which we're going to discuss at length in this episode. I see Timnit and Émile's critique of Silicon Valley also as a critique of American culture and a reminder of how pervasive our history of eugenic thinking remains today. Their work caught my eye because it could have easily been reframed and aimed at the world of personalized medicine and the push towards large-scale genomic studies, full genome sequencing, and gene therapy in some cases.

In both artificial intelligence and genomic research, the technologies are built on big data sets, which require the contributions and work of large numbers of people. But few people working in those fields are asking probing questions like, “Why are we building these technologies? To what end?” Ostensibly the answer is to help people, and often that answer goes unexamined.

I'm thrilled that Timnit and Émile are here today to help me think more about this. So let's jump right in. So Timnit I'd like to start with you. And for our listeners who may not be as familiar with your research, I want to give one concrete example from your work, which is that you showed that facial recognition software has inherent biases. Namely, it was most accurate at recognizing the faces of light-skinned men and least accurate at identifying the faces of dark-skinned women. This has many dangerous and damning implications but could also be approached as an engineering problem.

Could you talk a little bit about your intellectual journey from tackling bias and discrimination in technology to becoming a vocal critic of big tech?

Timnit Gebru
Yes, thank you for that question. Initially, when I was working on the issues of face recognition with my collaborator, Dr. Joy Buolamwini, we didn't see it as an engineering problem to be fixed. In fact, we also warned that these kinds of issues could lead to people being falsely arrested, for example. But we looked at two facets. One is that if you don't just look at the issue of face recognition and, for example, you look at these algorithms that might be used to try to determine whether someone has a melanoma, for instance, right? They make more mistakes on darker-skinned people than on lighter-skinned people, and that can have repercussions: misdiagnosis, which could have severe repercussions. And then in the case of face recognition, even being identified correctly also has severe repercussions, because we also don't want to advocate for over-policing. So we kind of worked on it from a nuanced way, from a nuanced angle. As in, we weren't just looking at telling people to make these systems accurate for everyone, because we're not pro over-policing and using face recognition on everyone. So we had to kind of differentiate between these two use cases.

And then later on, I was fired from Google because I wrote about large language models, which are kind of the underlying technology behind a lot of systems such as ChatGPT and other systems that weren't on the market when I was at Google. And so that was a whole big public thing. The biggest thing for me was to ask myself, “Why are people so interested in developing these models?” And that was kind of around the time that I also started becoming familiar with Émile's writings and work on longtermism. And it took me back to my experiences in graduate school and other experiences in Silicon Valley. I started to see how people were discussing a so-called artificial general intelligence, because they were talking about large language models as stepping stones to AGI, which is artificial general intelligence. And then I started asking, “Why are these people so interested in creating such a system?” And that's kind of how I started thinking about the eugenic origins of these systems. And so I approached Émile and I said, hey, you know, there's some eugenics-y stuff here, you know, and so are you interested in co-authoring a paper on this? And I don't think we imagined that the eugenic links would be so direct, but to me that's kind of how the evolution of my thinking happened.

Susanna Smith

Thank you for sharing that Timnit and giving us a sense of how this collaboration came together. So let's dive into your paper and the work that you guys have done together. 

So in your joint paper about the TESCREAL bundle, you outline a group of ideologies which you argue comprise a second wave of eugenic thinking. Émile, could you briefly explain the core idea of each of these ideologies for our listeners?

Émile Torres

Sure, so at the heart of the paper is this acronym, which is TESCREAL. And TESCREAL, it's a bit clunky. But it's less clunky than the various, you know, polysyllabic terms which correspond to the different ideologies that the acronym represents. And so the T through the L refers to Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

And one way to think about the TESCREAL bundle, at least this is what I would argue, is that the backbone is really transhumanism. So all of the other ideologies are either versions of transhumanism with different sorts of emphases, or they're ideologies that emerged directly out of the modern transhumanist movement. The modern transhumanist movement dates back to the late 1980s / early 1990s, although transhumanism as an idea was developed all the way back in the early 20th century, the 1920s in particular. And transhumanism is this idea that we ought to develop advanced technologies in order to radically re-engineer the human organism. Why would we want to do that? Well, because the idea is that the human condition right now is in many senses suboptimal. You might even say wretched. You know, we are condemned to die. Our cognitive abilities are limited in scope. And so if we use these technologies to re-engineer ourselves, then we could transcend these limitations. And ultimately what that means is creating a new, quote unquote, superior post-human species. So that is transhumanism.

Extropianism was just the first organized transhumanist movement, with a very strong emphasis on libertarian politics. Singularitarianism just emphasizes that there's this hypothetical future moment when we develop artificial general intelligence, which quickly leads to artificial superintelligence, which results in a rupturing of the fabric of human history in some completely unprecedented way. And what lies on the other side is quite possibly some sort of utopia. Cosmism sort of emphasizes not just re-engineering the human organism but re-engineering the universe itself. Actually developing what Cosmists refer to as “future scientific magic,” which would enable us to actually intervene upon the fabric, to use that term again, of space and time, to redesign the universe in a way that we find optimal. And then rationalism is basically just this idea that, okay, if we're going to create this post-human species, living in some kind of techno-utopian world, that techno-utopian civilization spread out across the galaxies, then we're going to need to do a lot of really smart things, quote unquote.

And so let's take a step back and try to figure out how to maximize our rationality, maximize our intelligence, thereby enabling us, facilitating the realization of this techno-utopia. And effective altruism is kind of a similar view as rationalism, but there's an emphasis more on the ethical aspect rather than the, as it were, decision-theoretic aspect. And then longtermism in many ways is kind of the galaxy brain that sits atop the backbone of transhumanism, because it really ties together all of these different, like, thematic threads. It's about re-engineering humanity. It's about going out, colonizing space, and creating the sprawling multi-galactic civilization full of trillions and trillions of future supposedly happy people, nearly all of whom would be living in vast computer simulations. So they'd be digital beings living in these simulated worlds.

So this is a brief overview of the TESCREAL bundle. These ideologies overlap very significantly in terms of their futurological aims, and, you know, the communities that correspond to each of the letters in the acronym also overlap very significantly. And historically, one sort of grew out of the others. So that's the TESCREAL bundle.

Susanna Smith

Thank you, Émile. So I'd love to hear from both of you on this next question, which is, how do we see these ideologies playing out right now in American culture? What are some concrete examples?

Timnit Gebru

I'll start with some of the things we outlined in our paper. I also had written an op-ed about effective altruism before. So from my perspective, being in the field of AI, the idea that so-called artificial general intelligence, some superintelligent thing, would bring us utopia and/or at the same time potentially render humanity extinct was not an idea that was so popular within the field of AI. It wasn't. Academics were ridiculing people who said these kinds of things. However, money from various billionaires in this bundle, including Elon Musk, Peter Thiel, Jaan Tallinn, all sorts of names we can name, spearheaded an actual movement to put this idea of working on AGI into the mainstream. And companies like DeepMind, whose founder and CEO now has the Nobel Prize in chemistry, were created with money from these billionaires. And then in 2015, OpenAI was founded with the mission to create so-called beneficial AGI. And so at that time, OpenAI was not as much in the mainstream, but still it was kind of getting into the mainstream. But now everybody knows OpenAI's name because of what they created: ChatGPT. And they're valued at more than $100 billion. And then there is another company, Anthropic: just like OpenAI told us that they were going to save us from Google's AI, Anthropic tells us they're going to save us from OpenAI's AI. They’ve raised more than $7 billion in just one year. Everything that is synonymous with AI is now what's called generative AI, or, you know, these kinds of systems that take audio, video, text, or images as input and can output similar things.

And so now artists are organizing because their works are being stolen to train these systems. To try to replace them, basically, without a cent to them. You saw the Hollywood WGA strike; AI was a big topic there. You have everyone, from government, from the military, to the UN, talking about so-called AGI systems, and that's just from the perspective of AGI, which is what we wrote our paper about. But if you look at, for example, the fact that Sam Bankman-Fried is now in federal prison for perpetrating one of the biggest frauds in American history. He says that he was encouraged to get rich, basically, as part of the earn-to-give program of effective altruism: trying to amass as much money as possible and then use that money to advance the causes of effective altruism. I can say much more, but you look at, you know, Elon Musk's and other billionaires’ quest to go to Mars, to space; that can also be linked to this movement.

And so even though we started our paper and worked on our paper with the lens of artificial general intelligence, mostly because, you know, I'm in the field of AI and that's what I write about and that's what I care about (I'm not kind of writing about how Scientology is impacting Hollywood, for example), you can see that this powerful group of people, including the richest man in the world, are adherents of a bundle of ideologies that are very problematic.

Susanna Smith

And there are so many examples, Timnit. Thank you for sharing a few. Émile, are there any favorites you'd like to add?

Émile Torres

Sure, I mean, one way to think about the situation is that I have referred to longtermism and these other TESCREAL ideologies as arguably the most influential ideologies in the world today that few people have heard about. And one way to think about this situation is, as Timnit was saying, the development of the large language models that power ChatGPT and similar chatbots out there, these are seen as stepping stones on the path to AGI. Why are people interested in AGI in the first place? What are the origins of the current race to build AGI involving companies like DeepMind, Anthropic, OpenAI, xAI, Elon Musk's company, and so on? The origins go back directly to the TESCREAL movement. And we could go into details about that, but there are lots of things to say about this issue. And so the development of ChatGPT is because people are trying to build AGI. ChatGPT, these LLMs, are already profoundly influencing our world, arguably in many inimical, unfortunate ways.

In other words, if not for the TESCREAL ideologies, we probably wouldn't be pursuing AGI right now. The pursuit of AGI has resulted in the development of these increasingly large language models, and these models are influencing the world that we live in right now. And in that way, these ideologies have already shaped the world that we live in and are poised to continue to shape the world that we live in over the coming years, maybe the coming decades.

Susanna Smith

Thank you, Émile. So I want to turn a little bit to your TESCREAL paper again. And talk about a couple of things it accomplishes. So of course, it lays out the TESCREAL framework, which for me really filled in some intellectual and chronological gaps in terms of how we got from eugenic thinking pre-World War II to the present day. But another thing you do in this paper is you pose a really big question which is why are we trying to build artificial general intelligence in the first place? So can you talk a little bit about why asking the “why” question is so important?

Timnit Gebru

I can start. So when I got fired from Google in 2020, I co-wrote a paper called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” And this paper was asking the question of why there was a race to try and build larger and larger language models. People, industries, whole companies were just obsessed with this notion of creating large language models, and they wanted their system to be larger than the existing one. And there was this race. It was just like a mine-is-bigger-than-yours kind of thing. And so they didn't seem to be interested in what the language models could do versus not.

And so just a quick technical description: language models are built to basically construct the most likely sequences of words, based on the data sets that they're trained on. So initially, when DeepMind and OpenAI were founded, they thought that what's called reinforcement learning, a separate set of techniques, was going to be a stepping stone towards AGI. But then OpenAI started talking about how large language models were going to be stepping stones towards AGI. Once they started saying that, everybody else kind of started getting in the race to build larger and larger language models. And so what are the consequences of building larger and larger language models?
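
To make that description concrete before turning to the consequences, here is a minimal sketch of the idea Timnit describes: a toy bigram model that counts which word follows which in a small training text and then extends a prompt with the most likely next word at each step. The corpus, function names, and output are invented purely for illustration; real large language models use huge neural networks trained on vast datasets, but the basic objective of predicting likely next words from the training data is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus (invented for illustration only).
corpus = "the model predicts the next word the model predicts likely words".split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the training data."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(prompt, steps=5):
    """Greedily extend a prompt with the most likely next word at each step."""
    words = prompt.split()
    for _ in range(steps):
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", steps=4))  # continues the prompt with high-frequency word sequences
```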

Well, first of all, the larger your models are, the more computationally intensive they are. And now we know that many of these companies, their carbon footprint is overtaking or might have already overtaken that of the airline industry. They're even building nuclear power systems to be able to power these data centers because the energy cost is so high. So that is accelerating that environmental catastrophe.

And two, all of this data that they're scraping is done without consent or compensation to the people who are the suppliers of that data. And in fact, many small organizations have even shut down their websites because OpenAI's scraping has caused their traffic to be so high that their server bills are really high. And, you know, in addition to that, so there's all of this stealing of data. Some people have called it the greatest theft in history, or one of them. And there's also worker exploitation, in spite of all this talk about what Sam Altman says is “literally magic intelligence in the sky.” A lot of our work shows the worker exploitation that fuels these systems. So, there are hundreds of millions of workers around the world, an estimated, if I remember correctly, 300 million or more workers around the world, who sit down and label all this data or moderate some of the outputs of some of these systems to make sure that they're safe, similar to content moderators for social media platforms. And these people are getting more and more exploited and having more and more PTSD. And for what? So we asked the question: why?

Because these systems haven't been shown to do anything magical. Why are we moving away from creating tools, what I call “task-specific tools,” right? If you want something that you can sit on, you build a chair. You differentiate between couches and chairs and different kinds of tables, et cetera. You're not trying to build a thing that can do everything for everyone. And so in this case, they're claiming to essentially be building an everything machine or a god, which is inherently, incredibly unsafe because of all the reasons that I described. 

So this was why we had to ask the question of what is fueling this? You know, of course, there's the promise of capital. If you have AGI, you're going to tell people that, oh, we can make as much money as possible using this AGI, quote unquote AGI, without hiring anybody. So you give that promise, too. But there's also the ideology of the TESCREAL bundle that we discussed that is fueling the systems. 

To me, that was why it was so important to ask the question of why is this happening? How is this propaganda machine basically selling this so-called AGI? Why is so much money going into this in spite of the documented harms? And so I think that's why it is really important to ask the question of why.

Susanna Smith

Thank you, Timnit. Émile, would you like to add anything to that?

Émile Torres

Yeah, sure. So I think a succinct way of answering the question is that people in the TESCREAL movement believe that building AGI is the surest way to realize the techno-utopian vision at the heart of the bundle. And so for these individuals, who tend to have backgrounds in computer science and tend to be very quantitative in their way of thinking, everything is an engineering problem. That includes paradise, right? And so literally there are people in the community who refer to the creation of utopia as “paradise engineering.” And what makes a good engineer? Well, it's quote unquote intelligence. So the reasoning goes, if you create an AI system that is super intelligent, it will be a super-engineer. If that super-engineer is controllable and does what you intend it to do, then you can delegate it the task of creating paradise. And so maybe there are alternative paths to creating paradise through the development of different kinds of technologies like advanced nanotechnology, so-called molecular nanotechnology, maybe like advanced, genetic engineering and so on. But if you create a technology that is an agent itself, that is an intelligence itself then you could give it the task of advancing all these other fields of technology and actually enabling us to radically re-engineer the human organism to create a new post-human species and then beyond that, to go out and colonize space. 

So like Sam Altman, for example, has said that without AGI, he thinks that space colonization is impossible, which gestures at this idea that AGI is the surest way, is the best guarantee for actually fulfilling these techno-utopian fantasies. So a lot of it is just, it is driven by utopian thinking. And for reasons that Timnit gestured at, I think utopian thinking can be very dangerous. So as Timnit was saying, I mean, there's worker exploitation, there's the environmental impact, there's massive intellectual property theft and so on. But if your goal is utopia and these are various unfortunate negative consequences, you can still kind of justify those consequences. 

One way to think about this is, if the ends can at least sometimes justify the means, and the ends are a literal utopia in which we live forever, we ourselves become superintelligent, we spread throughout the universe, get to explore distant galaxies and so on, then what exactly is off the table for realizing those ends? For realizing what some longtermists refer to as, quote, “our vast and glorious future”? Others, Eliezer Yudkowsky, for example, have described the future as our, quote, “glorious transhuman future.” So all of the harms that are being caused in the present right now are easy to justify from this particular techno-utopian perspective. And so I think that is one reason why understanding the TESCREAL bundle and the utopianism that is inherent in this cluster of ideologies is really important for understanding the origins of what has launched, sustained, and accelerated the race to build AGI.

Susanna Smith

Yeah, and this is one of the areas where I see a lot of parallels with genomics research. And if you sort of extend utopian thinking into human genetics, you get into some very dangerous territory. And I think it's completely misguided. And I think a lot of people don't want to talk about the fact that genetic mutations are natural occurrences in human beings, and most, maybe all, of us carry different types of genetic variants. And there are situations in which having genetic medicine could help a specific person. I'm potentially a very clear case example, as a person with a 50% risk of developing an untreatable, progressive genetic disease. But still, I have questions about where we are moving, big picture. And I see a lot of these parallels between the fields of AI and genetic and genomic research, including the research and development paths, the chronology of how the tech was developed from the 1950s to the 1970s with rapid acceleration from the 1990s to the present day. And in both cases, these technologies are being built on big data sets, which, as you both discussed, require enormous computing power and contributions from large numbers of people, both doing the work and contributing the data. But who really profits off these technologies? How are the people who are contributing being compensated? Can they exercise control over how their data is used? And the answer a lot of times is they aren't being fairly compensated, and they aren't able to exercise control over their data.

So if we look to the TESCREAL framework that you guys have outlined, how does that serve as a jumping off point for taking a critical eye towards genetics and genomics research?

Timnit Gebru

I have a few things to say, but because I'm not as familiar with genomic research, and I know that Émile might be, I'll let Émile answer part of that. But I didn't know that some of these people in the TESCREAL bundle also worked on quote unquote bioethics, and that was terrifying to me, hearing that, right?

Because to me, it seems like they're trying to not just create the quote unquote ideal human in this case. But as we note in our paper, they're trying to create an entirely new species, basically, that doesn't have the properties of humans like death, or maybe, you know, you live to be 500 years or something like that.

Some of the parallels that I see, well, I don't know if you were thinking about this when you decided to have the word “frontier” in your podcast, but “frontier” is a word that originated in colonization; it's a word that was used to describe the boundaries of European colonization. Right now in AI development, there is a changing naming convention. So we had names that just meant specific things that other people outside of the field might not know. Like we call things pre-trained models or base models, which does not ring any sort of bells for people who are not in the field, probably because it's just a specific term. And then they moved on to calling those same things foundation models. And so now you know what foundation means. It's very close to foundational. So they're trying to create the foundation of everything.

And now they've moved on from foundation to frontier models. They call them frontier models. And I don't know if they understand how, inadvertently, they are really telling us their colonizing mindset, because they're using the same exact words that colonizers were using. For me, the parallels are in the way in which this discussion about genetics was used during colonization to talk about how certain groups of people are inferior and basically deserve to be colonized, and how the colonizers are doing us a favor. You know, Nick Bostrom said that blacks are more stupid than whites. And he went on and on and on. He’s one of the most central figures of the TESCREAL bundle, even though they're all trying to disassociate from him now after our work. That's one of the biggest parallels that I see. It's kind of trying to get at the same goal using these two different fields.

Émile Torres

I would totally agree with that. One common thread that ties the two together, like the current race to build AGI, some work within the field of genetic engineering and so on, is eugenics, right? It's just this idea that we can use technology or some kind of scientific, quote unquote, rational method to radically, quote unquote, improve the human species. First-wave eugenicists advocated for the perfection of the human species while also preventing different human populations from degenerating, which was a major concern at the turn of the 20th century, so very early 20th century, late 1890s. Following the work of Charles Darwin, people were very concerned about degeneration. And so the eugenicists wanted to prevent degeneration while also perfecting the human species using various methods relating to, basically, selective breeding. So if you can change the population-level patterns of reproduction such that people who are less, quote unquote, fit have fewer children, have fewer offspring, and those deemed to be more fit have more children, larger families, then the human population as a whole could become healthier, smarter, and so on.

And transhumanists came along. Again, I mentioned the 1920s; this is when Julian Huxley, one of the most prominent eugenicists of the twentieth century, published his 1927 book “Religion Without Revelation,” where he basically developed the transhumanist ideology, although he didn't use that term yet. He did thirty years later in his 1957 book, “New Bottles for New Wine,” which is what popularized the term “transhumanism.” But nonetheless, in the 1920s, he was talking about transhumanism.

I have described transhumanism as eugenics on steroids. And the reason for that is the transhumanists of the first wave said, “Why stop at just perfecting the human species? Why not go beyond the human species? Why not create something new and even better?” And early on, there was a strong focus on genetics. You know, the key to figuring out how to perfect the human species and even transcend the human species was sort of like understanding how genetics works. The discovery of recessive genes threw a lot of eugenicists off. It confounded their simplistic plan for radically improving the human species. But then with the second wave of eugenics, which transhumanism is very much a part of, and that was after World War II, really like in the 1970s, there was a shift to focusing on advanced technologies that go beyond the genetic engineering domain. 

So this would include nanotechnology, which I mentioned earlier, artificial intelligence and so on. And maybe there are ways we don't even need to modify our genes, but perhaps there are ways to radically re-engineer the human organism by sending nanobots into our bloodstream that alter our physiology, morphology, and so on, maybe there are ways of connecting ourselves as cyborgs with advanced AI. 

What I'm trying to get at is that the methods of achieving this end have changed over time. The end itself has also changed. Initially, it was just about creating the most perfect version of the human species possible. And then later, with the rise of early transhumanism and then modern transhumanism, it was really about creating this radically, quote unquote, superior post-human species that could ultimately rule the world, become completely digital, and then go out and colonize space and establish this sprawling utopian world among the heavens.

Susanna Smith

Thanks, Émile. So what I'm hearing is we all need to be very wary of utopia. 

But I want to back up now to a point you made earlier, Timnit, about the naming of this podcast, Genetic Frontiers, and the colonialist overtones of the word “frontiers.” I think it's really helpful to think about it that way, because it's perhaps not that far from how genetic information is being used in the world today.

So, for example, genetic test kits are being sold for relatively cheap as fun consumer products. People go out and buy them and then their entire genome lives with these corporations, who are mining this data for profit. And I do think there's a sort of tiering of corporations and people who are collecting these big data sets for profit, and people are contributing essentially free or cheap labor, a sort of neocolonialism mediated by direct-to-consumer genetic test kits. So the tone of the word “frontiers” is perhaps not that far from how I see genetic information being used today, and the potential dangers in the future, if that makes any sense.

Timnit Gebru

Yeah, that makes sense.

Susanna Smith

So I am hoping both of you will answer this question because I'm super curious about what you would say. What is your greatest fear about the future of eugenic thinking in American culture?

Timnit Gebru

Oh, man. Um. Well…

Émile Torres

Yes [laughs] go ahead Timnit.

Timnit Gebru

Oh, man. Okay, so… One of the first things I would say about the present is that it is so embedded in so many things from the tech world to the healthcare system to education and how we discuss education, standardized testing, IQ tests. Just even talking about IQ, I mean, how normalized it is, is really, I think, for me, my fear. And then my fear is that it just becomes even more normalized. I mean this bundle is so powerful and so good at doing the in-group, out-group kind of conversation. Within their group, you'll see they have a different tone. They use different words. They don't mince words as much. So you know exactly what they're talking about. However, they have a whole strategy of how to spread their gospel basically and that strategy is based on how to talk to different groups. 

I mean, they do all sorts of message testing, by race, by age, by political orientation, et cetera, in many different countries. They are now in the UN. There's an AI safety institute in the US that's led by a TESCREAList person, whom we mention in a footnote in our paper. So they've basically entered the mainstream. And I think, just like we wrote in our paper, just as the race scientists were able to legitimize themselves through prestigious institutions, academic institutions, and all sorts of other institutions, that's what's happened here with respect to the TESCREAL bundle. So not just the future but the current, the present: the biggest fear I have is how it continues to be normalized and how people just don't see the signs of eugenic thinking and identify it as dangerous, because they have managed to seep into every single facet of society and establish themselves and legitimize themselves. And my fear is that that will just continue to happen. More and more of that will continue to happen without the red flags that people know to identify.

Susanna Smith

Yeah, I agree with all of that. And I think one thing I would add: I think there's been a real intentional forgetting of the past. It's happened largely culturally but also in sort of niche fields. We did a podcast on the historic origins of eugenic thinking and beliefs and training, explicit training, for genetic counselors, and how the people who started the field of genetic counseling were overtly eugenic, but that isn't taught in a lot of genetic counseling programs. And that applies to the entire field of medicine, so it applies going forward, but we're forgetting our past.

But Émile, would you like to add to that?

Émile Torres

Yeah, absolutely. So I very much agree with everything you said, Susanna and Timnit, maybe one thing to mention is that in the paper that Timnit and I published, we quote two French scholars who use this phrase, “the eternal return of eugenics.” So in terms of forgetting the past and not learning lessons, it is just extraordinary how this idea won't die. 

It goes back to the very origins of the Western intellectual tradition. It goes back to ancient Roman law. Plato and Aristotle advocated for forms of eugenics. Throughout the twentieth century there was continuity in the eugenics movement, including post-World War II. There were some countries that were occupied by Nazi Germany that implemented eugenics programs after the occupation. Japan implemented a eugenics program after World War II. The reason it is potentially dangerous and elusive is that it can take different forms over time. And so you have to be perceptive to properly identify eugenics, because the version that is pervasive now in Silicon Valley is different than the overt, racist, racial-hygiene kind of version that you found in Nazi Germany in the 1930s and 1940s.

So this is an idea that just won't go away.

And the other thing I would mention briefly is that, in addition to everything Timnit was saying, I am worried about eugenics in the form of TESCREALism, the TESCREAL ideologies, for two main reasons. One has to do with the pursuit of utopia. As I was mentioning before, building off of Timnit's remarks, utopia can justify all sorts of harms in the present. It can justify worker exploitation and the environmental impact and intellectual property theft and so on. After all, these are just, like, small harms. Maybe they're necessary to build a much, much, much better future. So that is one category, the pursuit of utopia. The other has to do with the realization of utopia.

So if you grant TESCREALists many of their claims: okay, maybe AGI is possible, and if we build AGI, we'll get superintelligence. Maybe it'll then enable us to re-engineer ourselves to become post-human. We'll colonize space, build these vast computer simulations, and create an enormous future population. Okay, assuming all of that is possible, what exactly does this utopia look like? Some would argue that utopia is an inherently exclusionary idea. Somebody's always left out of utopia, right? If it's a Christian utopia, if it's heaven, and there are non-believers there, then it's not heaven. You know, something's gone wrong. You could say this about just about every version of utopia that's ever been put forward. So the question, then, is who exactly is left out of the utopia of TESCREALism? And I would argue that most of humanity is.

One of the most striking facts about the TESCREAL literature is that there is almost zero reference to what the future could or should look like from the perspective of, let's say, communities, cultures, people who are not super-privileged white men at universities like Oxford and in Silicon Valley. A colleague of mine, Monika Bielskyte, likes to say you cannot “design for,” you have to “design with.” And the future that these TESCREALists have designed has been designed just by super-privileged white men, for the most part. And so I think that marginalized communities will be marginalized even more if utopia is realized, if not entirely eliminated. And you could go even a step further and emphasize that there's every reason to believe that if TESCREALists succeed in creating utopia, then our species itself will probably either be marginalized very significantly or eliminated entirely. And so oftentimes when we think of pro-extinctionist ideologies, that is, ideologies that advocate for the elimination of our species, many of us will think of, like, radical environmentalists, who say we've been terrible, very destructive to the biosphere, and therefore we should go extinct. Or philosophical pessimists who say, oh, life is very, very bad, and therefore it would be better if there were no humans at all.

But there is a version of pro-extinctionism which is ubiquitous in Silicon Valley, and it takes the form of TESCREALism. Because the utopia that they envision is one that is run by post-humans. And in a world run by post-humans, is there any place for our species? No, not really. And so this is ultimately a pro-extinctionist ideology. It's an ideology that's pushing for the creation of a world in which our species is completely disempowered, if it exists at all. So that is another reason that I worry about these ideologies. You've got the pursuits, the harms caused by the pursuit of utopia, and then what happens if we actually realize the TESCREAL utopia. I think that would be catastrophic for most humans around the world, if not for our species itself.

Susanna Smith

Well, that's terrifying.

Timnit Gebru

Yeah.

Susanna Smith

Yeah.

Émile Torres

Well, a lot of people don't realize there is this pro-extinctionist ideology, which is being advocated by many, many powerful people. I mean, Musk likes to say he's a humanist as opposed to an extinctionist. No, that's not true. He's a transhumanist, who calls longtermism, quote, “a close match for my philosophy.” These are ultimately pro-extinctionist ideologies. Larry Page is another example, the co-founder of Google. He has explicitly argued that the creation of digital intelligences, quote unquote, and these intelligences ultimately replacing our species, biology-based life, is the natural next step of cosmic evolution. And so he is advocating for a kind of pro-extinctionist ideology. Not like the radical environmentalists who say, oh, we should just disappear and then that's that, we should have no successors. These TESCREALists want human extinction, the disappearance of our species, to coincide with the emergence of this new, glorious, superior post-human species, which might take the form of digital sentient beings. But this is pro-extinction. People don't understand just how influential pro-extinctionism is in the world today.

Susanna Smith

Yeah, wow. 

So, Émile, you specialize in the philosophy of human extinction, which I find fascinating. And this question relates more to our individual relationship with death than it does to the extinction of all humans. But I'm curious about your take on this thought experiment. I see a day in the not-too-distant future where genetic and genomic data will be able to predict, from birth or in utero, every baby's most likely statistical cause of death and their natural lifespan.

How do you think knowing our likely date of death and cause of death from birth would fundamentally change our relationship to mortality?

Émile Torres

This is an absolutely fascinating question. I'm not exactly sure. I suspect that if, maybe, somebody knows that they're going to have a very long life, that might have implications for their willingness to take risks. So this is an issue that has come up every now and then in the context of the possibility of radical life extension. If it were possible to extend your life by centuries, maybe millennia, then, you know, you would still be vulnerable to the sorts of things that are responsible for the deaths of young people, like accidents, for example. So what this means is, okay, if I can live for 2,000 years, and if I were to go out today, walk around the neighborhood, and expose myself to a very low probability of getting hit by a car, would I do it? Because the typical definition of a risk is the probability of some adverse outcome multiplied by its consequences. Which means that if the probability is low but the consequences are huge, for example the loss of thousands of years of future life, then suddenly the risk becomes much, much greater. So if you, for example, knew that you could potentially live to 120, I don't know, maybe that would have some implications for the amount of risk you're willing to take. But also, if you were to know that there's a good chance that you will get cancer, for example, at age 45, then maybe you'd be less focused on pursuing a career where you can save up a bunch of money for your retirement, right? That's just irrelevant then. So maybe then you take more risk and you live life in a way that is fuller than somebody who thinks that their life is going to be 100 years, 120 years or something. So I don't know. It is a really fascinating question, and it seems like it's possible we will soon live in a world where this thought experiment becomes an actual experiment.
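
As a rough illustration of the risk arithmetic Émile describes, the sketch below computes risk as the probability of an adverse outcome multiplied by its consequence, measured here in expected years of future life lost. All numbers are invented for the example; the point is only that the same tiny accident probability yields a forty-times-larger risk for someone with 2,000 years of expected life remaining than for someone with 50.

```python
# Risk = probability of an adverse outcome x magnitude of its consequence.
# Here the consequence is years of expected future life lost.
# The probability and lifespans are hypothetical, chosen only to show the scaling.

def risk(probability, years_of_life_lost):
    """Expected loss, in years of life, from a single risky activity."""
    return probability * years_of_life_lost

p_fatal_accident = 1e-6  # hypothetical chance that one walk ends in a fatal accident

for remaining_years in (50, 2000):
    print(f"{remaining_years:>5} years of life left -> expected loss "
          f"{risk(p_fatal_accident, remaining_years):.6f} years")

# 50 years remaining gives an expected loss of 0.000050 years;
# 2,000 years remaining gives 0.002000 years, forty times larger.
```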

Susanna Smith

Yeah, I think we will. I think we do. One of the topics, just so you know, that really fascinates me, and that I find problematic, is the push for newborn sequencing, full genome sequencing for newborns. There are clinical reasons to do it, especially among very sick newborns. They've shown that when they do full genome sequencing on babies in the NICU who don't have a diagnosis, around 50% of them turn out to have genetic mutations. But there's also a push, and they've done studies where the control group is healthy newborns. And so then they're identifying all these potentially disease-causing mutations in newborn babies who are well. I think it's supercharged, I guess, is my opinion. And I mean, I'm living in a world, I guess, to some degree where, yeah, I could go find out if I'm likely to die like my mom. But I don't. I don't go find out. So yeah.

Émile Torres

That's really interesting because I know… First of all, I hope I didn’t talk over you.

Susanna Smith

No, go ahead.

Émile Torres

I've known some people who have opted not to get tested to determine their genetic vulnerability to early-onset Alzheimer's. They just don't want to know. I myself, I don't have that same aversion. I would like to know. I'd get tested to find out if there's a good chance I'm going to have cancer, you know, because there was a period of my life in the past where I smoked. And so if I knew that I'm genetically predisposed to get cancer, I'd be interested to know that. Same with Alzheimer's. I think it would have implications for how I live my life right now. Maybe I'd spend less time working or writing some, like, obscure philosophy papers and more time doing music, you know, pursuing some kind of art.

Susanna Smith

Thank you, Timnit and Émile, for joining me today on Genetic Frontiers. For anyone listening who would like to learn more about Timnit's work, go to the DAIR Institute website, which is dair-institute.org. To learn more about Émile's work, go to the website Xriskology, which is X-R-I-S-K-O-L-O-G-Y.com. 

Genetic Frontiers is co-produced by Brandy Mello and by me, Susanna Smith. Music is by Edward Giordano, and design is by Abhinav Chauhan and Julie Weinstein. Thank you for listening to this episode of Genetic Frontiers. Connect with us at geneticfrontiers.org or on Instagram and LinkedIn at Genetic Frontiers to continue the conversation. If you enjoyed this episode and would like to support our independent production, please make a donation to Genetic Frontiers through our Patreon account.