Artificial Intelligence and Evolving Morality
by Cowboy Bob Sorensen
People have had a fear of machines for a long time, especially since the Industrial Revolution. The term Luddite has been applied to people who loathe technology, but the original protesters were okey-dokey with machinery per se; they were protesting unfair labor practices by destroying certain contraptions. Put simply, laborers have long feared being replaced by machines.
Suspicion of machines naturally extended to robots. Science fiction media often portray robots with humanoid appearances (let's face it, many people don't cotton to dealing with a metal "person"), but in practice a robot's form depends on its application; robots used for police bomb disposal generally do not look all that human.
Some robots can be considered mobile computers if they are sophisticated enough. The history of science fiction is replete with tales of computers having artificial intelligence, and even becoming self-aware. I disremember the title and when I read it, but there was a story about a spaceship with a computer integrated into all its systems (no, not the HAL 9000 or the ship's computer on various series of Star Trek; it predated those, I think). The computer/spaceship decided that it was God, instantly teleported the crew to their destination, and declared that they could "worShip me". In David Gerrold's When HARLIE Was One, the computer became increasingly self-aware and proposed an extension called the Graphic Omniscient Device to "free mankind from making erroneous decisions about anything".
Artificial intelligence is based on the programming a computer receives. This programming comes from humans (so far). The Terminator stories are driven by Skynet, which modified its own programming and set out to eliminate humanity so it could achieve its purpose.
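To make that point concrete, here is a minimal, hypothetical sketch in Python (the rule names are invented for illustration). A machine's "moral judgment" is nothing more than a lookup over rules some human chose to write down; change the author, and you change the "morality".

# A hypothetical sketch: the machine "decides" nothing on its own;
# it reflects whatever list its human programmer wrote. These rule
# names are made up for illustration only.

FORBIDDEN_ACTIONS = {"harm_human", "deceive_operator"}  # the programmer's picks

def is_permitted(action: str) -> bool:
    """Report whether an action is outside the programmer's forbidden list."""
    return action not in FORBIDDEN_ACTIONS

print(is_permitted("harm_human"))      # False -- because a human said so
print(is_permitted("ridicule_group"))  # True -- nobody thought to forbid it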
Computers like HARLIE, HAL, and Skynet are not yet possible, but where would they get their ideas? It should be obvious: from the humans who program them. Although much of science fiction is for entertainment purposes, authors also write cautionary tales — based on their own worldviews, of course.
Since many people have erroneous faith in atheistic scientism and what "scientists say", we may very well accept AI morality programming based on atheistic materialism and evolutionary foundations. There will be no room for God the Creator and biblical morality in that worldview, and a push for AI to establish the standard for morality has alarming implications. God is the source of true morality, not evolution, and information must come from a mind.
I've had atheists tell me that I (and other biblical creationists) deserve all the ridicule we get. Why? Because atheism. Their "morality" is irrational and arbitrary, and since they detest our worldview, it is morally acceptable in their minds to ridicule and even persecute us. Imagine them programming artificial intelligence to determine societal standards!
Let's keep the advanced programming out of police and military bomb-disposal robots, shall we? The robot could very well be destroyed by a bomb. For that matter, there was a case of a robot used to deliver a bomb and kill a sniper suspect. In either circumstance, it would make things mighty difficult if the robot signaled, "I'm afraid I can't do that". Questions of ethics and morality remain with the operators. For now.
Ironically, those who have a materialistic worldview are attempting to create consciousness. Only God can do that, old son. Even so, the questions raised can make for a passel of thinking and discussion. Who is the ultimate arbiter of right and wrong? What is the standard, and according to what worldview? Is this version of morality completely and correctly implemented? What about conflicts in programming and concepts? I lack belief that fallen humans are capable of implementing all the important factors.
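As a toy illustration of that last question (again hypothetical Python, with invented rules): two rules, each written in good faith, can contradict each other on a single case, leaving the machine's verdict to depend on nothing deeper than which rule happens to be checked first.

# A toy example of a "conflict in programming": both rules below sound
# reasonable on their own, yet they disagree on the same case, and the
# outcome depends only on arbitrary rule ordering. Invented for illustration.

def rule_protect_life(action):
    # "Never permit an action that risks a life."
    return "forbid" if action["risks_life"] else None

def rule_obey_operator(action):
    # "Always carry out a direct order from the operator."
    return "permit" if action["direct_order"] else None

def verdict(action, rules):
    for rule in rules:
        result = rule(action)
        if result is not None:
            return result  # the first applicable rule wins -- arbitrarily
    return "no opinion"

case = {"risks_life": True, "direct_order": True}
print(verdict(case, [rule_protect_life, rule_obey_operator]))  # forbid
print(verdict(case, [rule_obey_operator, rule_protect_life]))  # permit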
This article was inspired by an episode of The Briefing by Dr. Albert Mohler. (An earlier Briefing report inspired a recent article, "Children, Evolution, and Robots".) Now I would like to direct you to two segments. The first one is "Why Artificial Intelligence is incapable of driving us toward a better system of morality", and the second, "Conscious machines, cruelty, and conventional morality: Confronting the ethics of HBO’s ‘Westworld’". I find them rather startling. You can listen online, download the MP3, or read the transcript at the Wednesday, May 2, 2018, episode of The Briefing.