In his article for the Digital Journal, Saratendu Sethi argues that building a sustainable global supply chain requires the humanization of AI. This technological revolution, he says, includes “truly autonomous and self-correcting supply chains” that will replace the flawed capital-driven decision making of humans. Sethi defines this utilitarian mission of serving the “greater good” through what he calls a “sustainable, ethical and responsible world that puts equity for all at the center.”
His motive of helping everyone while protecting the environment is commendable, yet the larger question remains: whose ethic will drive the logic used by this AI? When resources are limited, how will this AI decide who gets food and who gets medicine? Based on my own study, Sethi’s proposal for an ethical decision-making AI sounds eerily similar to the work of Pierre Teilhard de Chardin, and there is a lesson we can learn here.
Eric Steinhart considers Pierre Teilhard de Chardin the first transhumanist thinker to give “serious consideration to the future of human evolution” through the use of genetic engineering and technologies to achieve a new cosmic-intelligence that would transcend humanity itself. Teilhard crafted a new theology that was wholly dependent on Darwin’s evolutionary narrative that the cosmos — birthed in chaos — was steadily evolving toward eternal perfection. This perfection could only be achieved by tethering the current state of imperfect anthropology to the future hope of a perfect cosmic singularity. It must not be overlooked, however, that Teilhard’s method of transforming humanity was grounded in his commitment to eugenics.
The eugenics movement in the early twentieth century employed techniques of population control such as oral contraception, social isolation, forced sterilization, welfare, abortion, and family planning to achieve the goal of genetic fitness for a superior human race. But for Teilhard, “eugenics applied to individuals leads to eugenics applied to society.” Therefore Teilhard, like Sethi, reasoned that the future of human existence must be guided by the reasonable organization of the world’s limited resources.
Teilhard believed that industrialization had forced an overpopulation of the earth. With humanity on the brink of famine and suffocation, he argued, steps must be taken to ensure that the best of humanity survived. The challenge to build a better mankind, wrote Teilhard, demanded the use of “individual eugenics (breeding and education designed to produce only the best individual types) and racial eugenics (the grouping or intermixing of different ethnic types being not left to chance but effected as a controlled process in the proportions most beneficial to humanity as a whole).” While these methods involved both technological and psychological challenges, Teilhard believed that science, alongside his theology, would guide human evolution toward a better future. Yet the moral duty of Teilhard’s transhumanism required a repudiation of the traditional Christian ethic that valued every individual human as created in the image of God. Society, he believed, must be willing to sacrifice the individual for the greater good of humanity and the sustainability of nature.
I have no way of knowing if Sethi and Teilhard share the same ethic or anthropology, but without a doubt they share the utilitarian goal of a ‘sustainable’ future. This potential for disagreement on such basic questions about morality and human dignity, however, illustrates the problem with Sethi’s “greater good” AI.
- Whose ethic will drive the AI’s decision-making algorithms?
- When resources are limited, will atheist technicians decide who gets food and who does not?
- In the next pandemic when supplies run short, will agnostic philosophers decide who gets vaccinated and who does not? Will the AI control human populations based upon projections of biological survivability, statistical calculations of racial equity, or the nebulous criteria of sustainability?
- Will the “humanized AI” allow itself to be guided by religious moral codes, and if so, which ones will it choose? Or will religion, and maybe human existence itself, be rendered obsolete by these autonomous and self-correcting algorithms?
I realize these questions sound alarmist when set against Sethi’s utopian supply chain AI. But this brief look at Teilhard’s willingness to sacrifice the weak for the greater good illustrates the importance of addressing these moral concerns. A clearly defined ethic that protects every human, and protects humans above animals, is fundamental to measuring the practicality of any AI tasked with the mission of replacing human intelligence, human morality, and human compassion.
Eric Steinhart, “Teilhard de Chardin and Transhumanism,” Journal of Evolution & Technology 20, no. 1 (2008): 1. See also David Grumett, “Transhumanism and Enhancement: Insights from Pierre Teilhard de Chardin,” in Transhumanism and Transcendence: Christian Hope in an Age of Technological Enhancement, ed. Ronald Cole-Turner (Washington, DC: Georgetown University Press, 2011), 38.