The Fight Over a “Dangerous” Ideology Shaping the AI Debate

Longtermism, the philosophy of choice in Silicon Valley, has helped frame the AI debate around the prospect of human extinction. But the philosophy’s critics are growing more vocal, arguing that it is dangerous and that the fixation on extinction distracts from the real problems with AI, such as data theft and biased algorithms. Author Emile Torres, a former longtermist turned critic of the movement, told AFP that its worldview rests on the same ideas once used to justify genocide and mass murder.

Even so, the movement and associated philosophies such as effective altruism and transhumanism hold significant sway at universities from Oxford to Stanford and across the tech industry.

Venture capitalists with ties to the movement, such as Peter Thiel and Marc Andreessen, have invested in life-extension companies and other pet projects.

Elon Musk and OpenAI’s Sam Altman have signed open letters warning that AI could wipe out humanity, even though they have a financial stake in arguing that only their technology can save us.

Ultimately, critics say, this fringe movement is wielding outsized influence over public debates about the future of humanity.

“Really dangerous”

Longtermists hold that we are obliged to try to produce the best outcomes for the greatest number of people.

In this they resemble many 19th-century liberals, but longtermists have a much longer time horizon in mind.

When they gaze into the far future, they see untold billions of humans drifting through space and colonizing new worlds.

They argue that we owe each of these future humans the same obligations we owe to everyone alive today.

And because there are so many of them, those future people carry far more moral weight than today’s population.

According to Torres, author of “Human Extinction: A History of the Science and Ethics of Annihilation,” this way of thinking makes the worldview “really dangerous.”

“Anytime you have a utopian vision of the future marked by nearly infinite amounts of value, and you combine that with a sort of utilitarian mode of moral thinking where the ends can justify the means, it’s going to be dangerous,” said Torres.

If a superintelligent machine capable of wiping out humanity is about to come into being, longtermists are obliged to oppose it no matter the consequences.

In March, a user of Twitter, the platform now known as X, asked longtermist ideologue Eliezer Yudkowsky how many people could die to prevent this from happening. He replied that there only needed to be enough people “to form a viable reproductive population.”

“So long as that’s true, there’s still a chance of reaching the stars someday,” he wrote, in a post he later deleted.

Eugenics allegations

Longtermism grew out of work done in the 1990s and 2000s by Swedish philosopher Nick Bostrom on existential risk and transhumanism, the idea that humans can be enhanced by technology.

Academic Timnit Gebru has pointed out that transhumanism has been linked to eugenics from the beginning.

British biologist Julian Huxley, who coined the term “transhumanism,” was also president of the British Eugenics Society in the 1950s and 1960s.

Gebru said on X last year that “longtermism is eugenics under a different name.”

Bostrom has long faced accusations of supporting eugenics after he listed as an existential risk “dysgenic pressures,” essentially the idea that less intelligent people reproduce faster than their smarter peers.

The philosopher, who runs the Future of Humanity Institute at Oxford University, apologized in January after admitting he had written racist posts on an online message board in the 1990s.

“Do I believe in eugenics? No, not as the term is commonly understood,” he said in his apology, noting that it had been used to justify “some of the most heinous crimes of the previous century.”

A more exciting story

Despite these controversies, longtermists such as Yudkowsky, a high school dropout and polyamory advocate best known for writing Harry Potter fan fiction, continue to be celebrated.

Altman has credited Yudkowsky with helping OpenAI secure funding, and in February he suggested Yudkowsky deserved a Nobel Peace Prize.

But Gebru, Torres and others are trying to refocus attention on harms such as the theft of artists’ work, algorithmic bias, and the concentration of wealth in the hands of a few companies.

Torres, who uses the pronoun they, said that while there are true believers like Yudkowsky, much of the talk about extinction is driven by money.

Talking about human extinction, a true apocalypse in which everyone dies, is far more exciting and interesting, they said, than talking about Kenyan workers earning $1.32 an hour, or artists and writers being exploited.
