The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom. – Isaac Asimov
Sifting through my roboethics research, I discover, in a terrifying yet fascinating manner, that all the stuff of sci-fi movies and books is now reality: Minority Report’s projected touch-screen, Battlestar Galactica’s unmanned stealth jet, boids (swarm robots), morphing robots that could be prototypes for Transformers, and chips implanted in the human brain that can direct movements and programs on a computer screen by thought (a potential future sees a wireless hookup of the chip where, if we don’t know something, we have only to look it up online using our minds). My concern: are humans capable of using such technology for good, or will technology get the better of us? The stark image of two conflicting futures comes to mind: Terminator vs. WALL-E. Consider that 80% of all AI research in the US right now is funded by the US military, and over 40 countries are now developing military robots with lethal autonomous capacities. Guess this is where John Connor should arrive.
Humans are visionaries, complete with the creative imagination and willpower to make our dreams become realities. We are something like children who daydream about having a pet dinosaur but never grasp the nightmare of that reality until the T-rex from Jurassic Park is breathing down our necks. I do believe that robots can be programmed to follow ethical principles, but my intuition is that they will never be sentient beings: their use plan would be compromised by programming them to feel pain or other human emotions, thereby creating the notion that they are enslaved by us or deserve rights. Telescopic progress – the notion that the intervals between technological advances are shortening and soon we will reach a time when technological breakthroughs happen every day – presents a new challenge to Weberian despair.
I’ll admit that maybe Kant was wrong that our moral nature would improve as our rational capacities developed. I’ll stand by Weber and foresee a doomsday where society must sacrifice its values to the increasing rationalization of the world through scientific control. This overstretching of Asimov is remedied by ethical debate after the technology is produced; however, can we say that such moral prescriptions are merely a byproduct? This defeats a priori cognitions, and I fundamentally believe that our creative capacity and moral nature are linked. I would argue that for every small step in moral progress, scientific advances increase exponentially. After all, it is the regulation – the setting of clearly defined freedoms, rules, principles, and methods for scientific research – that allowed science to flourish (others may disagree). A certain amount of moral conservatism toward each new revelation of science is also healthy for debate regarding the use plan of any given new technology. Perhaps our creative rational capacity is overstimulated and overdeveloped at present, but that should not affect our ability to choose humanity over wars and states.