
Eugenics, Transhumanism, and Artificial Intelligence

If we were to succeed at creating an ethical decision-making AI, whose ethics would it abide by?

In his article for Digital Journal, Saratendu Sethi argues that building a sustainable global supply chain requires the humanization of AI. This technological revolution, he says, includes “truly autonomous and self-correcting supply chains” that will replace the flawed, capital-driven decision making of humans. Sethi defines this utilitarian mission of serving the “greater good” as building what he calls a “sustainable, ethical and responsible world that puts equity for all at the center.”

His motive of helping everyone while protecting the environment is commendable, yet the larger question remains: whose ethic will drive the logic used by this AI? When resources are limited, how will this AI decide who gets food and who gets medicine? Based on my own study, Sethi’s proposal for an ethical decision-making AI sounds eerily similar to the work of Pierre Teilhard de Chardin, and there is a lesson we can learn here.

Pierre Teilhard de Chardin

Eric Steinhart considers Pierre Teilhard de Chardin the first transhumanist thinker to give “serious consideration to the future of human evolution” through the use of genetic engineering and technologies to achieve a new cosmic intelligence that would transcend humanity itself.[1] Teilhard crafted a new theology that was wholly dependent on Darwin’s evolutionary narrative that the cosmos, birthed in chaos, was steadily evolving toward eternal perfection. This perfection could only be achieved by tethering humanity’s present, imperfect state to the future hope of a perfect cosmic singularity. It must not be overlooked, however, that Teilhard’s method of transforming humanity was grounded in his commitment to eugenics.

The eugenics movement of the early twentieth century employed techniques of population control such as oral contraception, social isolation, forced sterilization, welfare, abortion, and family planning to achieve the goal of genetic fitness for a superior human race. But for Teilhard, “eugenics applied to individuals leads to eugenics applied to society.”[2] Therefore, Teilhard, like Sethi, reasoned that the future of human existence must be guided by the rational organization of the world’s limited resources.

Teilhard believed that industrialization had forced an overpopulation of the earth. With humanity on the brink of famine and suffocation, he argued, steps had to be taken to ensure that the best of humanity survived. The challenge to build a better mankind, wrote Teilhard, demanded the use of “individual eugenics (breeding and education designed to produce only the best individual types) and racial eugenics (the grouping or intermixing of different ethnic types being not left to chance but effected as a controlled process in the proportions most beneficial to humanity as a whole).”[3] While these methods involved both technological and psychological challenges, Teilhard believed that science, alongside his theology, would guide human evolution toward a better future. Yet the moral duty of Teilhard’s transhumanism required a repudiation of the traditional Christian ethic that valued every individual human as created in the image of God. Society, he believed, must be willing to sacrifice the individual for the greater good of humanity and the sustainability of nature.

I have no way of knowing whether Sethi and Teilhard share the same ethic or anthropology, but they undoubtedly share the utilitarian goal of a ‘sustainable’ future. The possibility that they disagree on such basic questions about morality and human dignity, however, illustrates the problem with Sethi’s “greater good” AI.

  • Whose ethic will drive the AI’s decision-making algorithms?
  • When resources are limited, will atheist technicians decide who gets food and who does not?
  • In the next pandemic when supplies run short, will agnostic philosophers decide who gets vaccinated and who does not? Will the AI control human populations based upon projections of biological survivability, statistical calculations of racial equity, or the nebulous criteria of sustainability?
  • Will the “humanized AI” allow itself to be guided by religious moral codes, and if so, which ones will it choose? Or will religion, and maybe human existence itself, be rendered obsolete by these autonomous and self-correcting algorithms?

I realize that, set against Sethi’s utopian vision of a supply chain AI, these questions sound alarmist. But this brief look at Teilhard’s willingness to sacrifice the weak for the greater good illustrates the importance of addressing these moral concerns. A clearly defined ethic that protects every human, and that places humans above animals, is fundamental to assessing the practicality of any AI tasked with the mission of replacing human intelligence, human morality, and human compassion.


[1] Eric Steinhart, “Teilhard de Chardin and Transhumanism,” Journal of Evolution & Technology 20, no. 1 (2008): 1. See also David Grumett, “Transhumanism and Enhancement: Insights from Pierre Teilhard de Chardin,” in Transhumanism and Transcendence: Christian Hope in an Age of Technological Enhancement, ed. Ronald Cole-Turner (Washington, DC: Georgetown University Press, 2011), 38.

[2] Pierre Teilhard de Chardin, The Phenomenon of Man, trans. Bernard Wall (New York: Harper Perennial, 1959), 282.

[3] Pierre Teilhard de Chardin, The Future of Man, trans. Norman Denny (New York: Harper & Row, 1964), 231–32.


Here are all five short essays in the series by J. R. Miller:

With transhumanism, what happens to human rights? The transhumanist accepts suffering for the individual if suffering can advance the evolution of the species toward immortality and singularity. If humans can redefine what it means to be human, what prevents us from eliminating anyone opposed to this grand vision? (January 1, 2022)

Eugenics, transhumanism, and artificial intelligence If we were to succeed at creating an ethical decision-making AI, whose ethics would it abide by? The utilitarian goal of a “sustainable future” must be guided by a higher ethic in order to avoid grave mistakes of the past. (January 13, 2022)

The deadly dream of Human+ Look at the price tag… Some are prepared to sacrifice actual humans now for the hope of future immortality. Without a fixed and final definition of human personhood, there is no foundation for a fixed and final ethic of “human” rights. (January 20, 2022)

Can Christian ethics save transhumanism? J. R. Miller looks at the idea that the mission to self-evolve through technology is “the definitive Christian commitment.” In Miller’s view, Christian transhumanists do not provide a stable and persistent definition of human personhood and thus cannot ground human rights. (February 27, 2022)

and

Why the imago Dei (Image of God) shuts the door on transhumanism. As the belief that technology promises us a glorious post-human future advances among scholars who profess Christianity, we must ask some hard questions. The mission to self-evolve beyond humanity raises the question: how is humanity “saved” through technological advancement designed to eliminate humanity? (March 20, 2022)


Joe Miller

Dr. J. R. Miller is the President and co-founder of the Center for Cultural Apologetics. He earned a BS in architectural engineering from Pennsylvania State University, an MDiv from Oral Roberts University, an MASR from Southern California Seminary, a DMin from Biola University, and a ThM and PhD in ethics from Midwestern Baptist Theological Seminary. He has taught in higher education for more than a decade and has worked in pastoral ministry for over 20 years. Dr. Miller has authored multiple books and journal articles on leadership, church history, biblical theology, and ethics.
