Contributions


In 1942, Isaac Asimov (1920-1992) published his Three Laws in the short story "Runaround" (Street & Smith Publications, Inc.):
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a further law, which takes precedence over the other three (and is therefore usually known as the Zeroth Law), to address a more sinister prospect:
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
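Read as a decision procedure, the laws form a strict priority hierarchy: a lower law binds only insofar as it does not conflict with the laws above it. The following toy Python sketch illustrates that precedence; the Action fields and the choose procedure are our own invention for illustration, not anything found in Asimov's texts.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool = False   # Zeroth Law concern
        harms_human: bool = False      # First Law concern
        disobeys_order: bool = False   # Second Law concern
        endangers_robot: bool = False  # Third Law concern

    def violations(a: Action) -> tuple:
        # Violation vector ordered from highest- to lowest-priority law.
        # Python compares tuples lexicographically, with False < True.
        return (a.harms_humanity, a.harms_human,
                a.disobeys_order, a.endangers_robot)

    def choose(candidates: list) -> Action:
        # The smallest violation vector wins, so a robot disobeys an
        # order (Second Law) rather than injure a human (First Law),
        # and sacrifices itself (Third Law) rather than do either.
        return min(candidates, key=violations)

    # An order whose execution would injure someone must be refused:
    obey_harmful = Action(harms_human=True)
    refuse = Action(disobeys_order=True)
    assert choose([obey_harmful, refuse]) is refuse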

It is interesting to read how Asimov arrived at his Three Laws, starting from his positive view of the role robots could play in society:
"In the 1920's science fiction was becoming a popular art form for the first time (…) and one of the stock plots (…) was that of the invention of a robot.
Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot - robots were created and destroyed their creator (…) I quickly grew tired of this dull hundred-times-told tale (…) Knowledge has its dangers, yes, but is the response to be a retreat from knowledge?
I began, in 1940, to write robot stories of my own - but robot stories of a new variety. My robots were machines designed by engineers, not pseudo-men created by blasphemers".

Maybe Asimov was not the first to conceive of well-engineered, non-threatening robots, but he pursued the theme with such enormous imagination and persistence that most of the ideas that have emerged in this branch of science fiction are identifiable with his stories.

From Asimov's time to the present day, and especially after the atomic bombings of Hiroshima and Nagasaki, more and more scientists have warned against the dangers of the unrestricted use of technology.

Among them is Nobel laureate and nuclear physicist Joseph Rotblat, Chairman of the Pugwash Conferences on Science and World Affairs, who repeatedly warned against "thinking computers, robots endowed with artificial intelligence and which can also replicate themselves. This uncontrolled self-replication is one of the dangers in the new technologies".
(50th Pugwash Conference on Science and World Affairs, "Eliminating the Causes of War", Queens' College, Cambridge, 3-8 August 2000, Background Paper, Working Group 6: Misuse of Science)

Bill Joy (USA, co-founder and Chief Scientist of Sun Microsystems, co-chair of the Presidential commission on the future of IT research, and co-author of The Java Language Specification) wrote an article titled "Why the Future Doesn't Need Us", published in the April 2000 issue of Wired magazine. The subtitle of the article was: "Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species". It was an alarmed response to remarks by Ray Kurzweil on the future implications of scientific and technological development.

In his article, Joy paints a very bleak picture of humanity's future if we continue to study and develop self-replicating technologies such as robotics, genetic engineering, and nanotechnology. He discusses the dangers of such technologies and the irresponsibility of the people studying them: "From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things".
(Kurzweil was referring to "The New Luddite Challenge", a passage written by Ted Kaczynski and taken from the infamous Unabomber Manifesto, which briefly summarizes the author's indictment of technological progress.)

Especially in the United States - on the Internet, in chat rooms, forums and public conferences - the debate over the Joy/Kurzweil exchange was, and still is, acute and strong.

Ray Kurzweil (USA, inventor of the first reading machine for the blind and author of The Age of Intelligent Machines) wrote an opinion piece in response to Bill Joy's Wired article.
In his article The Age of Intelligent Machines: "Our Concept of Ourselves", he argues that it is possible to overreact to a vision of robotic Armageddon, and that the potential benefits make it impossible to turn our backs on artificial intelligence.

In his book The End of the World: The Science and Ethics of Human Extinction, John Leslie, professor of philosophy at the University of Guelph in Canada, describes ways in which intelligent machines might cause the extinction of mankind.
He says that super-clever machines might argue to themselves that they are superior to humans. They might eventually be put in charge of managing resources and decide that the most efficient course of action is for humans to be removed. He also believes it would be possible for machines to override in-built safeguards.
"If you have a very intelligent system it could unprogram itself," he says. "We have to be careful about getting into a situation where they take over against our will or with our blessing."

The Seattle Times asked several other well-respected members of the technology community for their responses to Bill Joy's article. The responses were then published in the March 19, 2000 issue of the paper.
One of them, Nathan Myhrvold (USA, Chief Technology Officer at Microsoft Corporation), wrote an essay presenting an interesting counter-argument to Joy's article. He stated that history has shown that people who have tried to predict the impact of new technologies have often been highly inaccurate. He believes that by the time machines become this intelligent (if they ever do), the human race will have had time to adjust.

Hans Moravec (USA, Robotics Institute, Carnegie Mellon University) believes that machines will inherit the earth - and he welcomes the prospect. Moravec argues that most significant human evolution has taken place at the cultural level, and that replacing biological humans with machines capable of far greater learning and cultural development is therefore the next logical step in evolution.

Marvin Minsky, the artificial intelligence pioneer who co-founded the AI Lab at MIT and serves on the board of advisors of the Foresight Institute, agrees that extinction at the mechanical hands of a robot race may be just around the corner, but says that developments in the field of artificial intelligence call for considered debate.
"Our possible futures include glorious prospects and dreadful disasters," says Minsky. "Some of these are imminent, and others, of course, lie much further off."
Minsky notes that there are more immediate threats to think about and combat, such as global warming, ocean pollution, war and world overpopulation. However, he says, the possibilities of artificial intelligence should not be completely ignored.
"In a nutshell, I argue that humans today do not appear to be competent to solve many problems that we're starting to face. So, one solution is to make ourselves smarter -- perhaps by changing into machines. And of course there are dangers in doing this, just as there are in most other fields -- but these must be weighed against the dangers of not doing anything at all."
Minsky adds a warning for those who question whether machines may ever become intelligent enough to better us. "As for those who have the hubris to say that we'll 'never' understand intelligence well enough to create or improve it, well, most everyone said the same things about 'life' -- until only a half dozen decades ago."

Gianmarco Veruggio (Italy, founder of the CNR-Robotlab) explored the integration of networks and robotics, imagining a world covered by a tangled skein of waves, wires, minds (some artificial, some biological) and actuators. He underlined that the junction of two of the most advanced technological fields, networks and robotics, could widen their potentialities and applications, opening up a new frontier for AI. He envisaged that an intelligence deeply different from ours may emerge at the boundary of a future not very different from a science-fiction novel.
In 2001, in an article that appeared in Technology Review (Italian edition), Veruggio wrote: "On the basis of the recent developments of wireless technology and of the world wide web, we mean by e-Robotics a new way to conceptualize, design and build a robot. This is no longer only the traditional physical machine endowed with intelligence, but a cluster of machines, not necessarily physically interconnected, but linked through information (…) And when the net is not only a network of computers, but of robots, and it has eyes, ears and hands, it will itself be a robot. Maybe, the ultimate robot!"

Bill Joy gave the human race a 50-50 chance of surviving the next 100 years. In an April 5, 2001 conversation with Wired magazine's then editor-in-chief Katrina Heron, the chief scientist of Sun Microsystems sounded a clear and urgent warning against the unchecked consequences of the rapid advancement and convergence of genetics, nanotechnology, and robotics, and cautioned against the hubris of scientists who see themselves as tool-builders and therefore absolved of responsibility for the catastrophic consequences of the use of their tools.

In a talk at the Foresight Institute, quoting his Ethics for Machines (2000), J. Storrs Hall (USA, Research Fellow of the Institute for Molecular Manufacturing) takes the side of the machines, under the title "Why Machines Need Ethics":
"(…) The clear trend in ethics is for a growing inclusivity in those things considered to have rights -- races of people, animals, ecosystems. There is no hint, for example, that plants are conscious, either individually or as species, but that does not, in and of itself, preclude a possible moral duty to them, at least species of them.
There has always been a vein of Frankenphobia in science fiction and futuristic thought, either direct, as in Shelley, or referred to, as in Asimov. It is clear, in my view, that such a fear is eminently justified against the prospect of building machines more powerful than we are, without consciences. Indeed, on the face of it, building superhuman sociopaths is a blatantly stupid thing to do.
Suppose, instead, we can build (or become) machines that can not only run faster, jump higher, dive deeper, and come up drier than we can, but have moral senses similarly more capable? Beings that can see right and wrong through the political garbage dump of our legal system; corporations one would like to have as a friend (or would let one's daughter marry); governments less likely to lie than your neighbor is.
I could argue at length (but will not, here) that a society including superethical machines would not only be better for people to live in, but stronger and more dynamic than ours is today. What is more, not only ethical evolution but most of the classical ethical theories, if warped to admit the possibility, (and of course the religions!) seem to allow the conclusion that having creatures both wiser *and morally superior* to humans might just be a good idea".
"The inescapable conclusion is that not only should we give consciences to our machines where we can, but if we can indeed create machines that exceed us in the moral as well as the intellectual dimensions, we are bound to do so. It is our duty. If we have any duty to the future at all, to give our children sound bodies and educated minds, to preserve history, the arts, science, and knowledge, the Earth's biosphere, "to secure the blessings of liberty for ourselves and our posterity" -- to promote any of the things we value --those things are better cared for by, *more valued by*, our moral superiors whom we have this opportunity to bring into being. It is the height of arrogance to assume that we are the final word in goodness. Our machines will be better than us, and we will be better for having created them".


José Maria Galvan (Vatican State, Professor of Theology, Pontifical University of the Holy Cross) has intervened several times in the debate on technoethics. Speaking at the workshop "Humanoids: A Techno-ontological Approach", held at Waseda University in 2001, he said: "The humanoid then, is the most sophisticated thinking machine able to assist human beings in manifesting themselves, and this is ethically very good, as it supposes a radical increment of human symbolic capacity; humanoids will develop a lot of activities in order to increase the human quality of life and human intersubjectivity. But humanoids can never substitute a specific human action, which has its genesis in free will. Everything that an anthropoid can perform is an extension of the human brain's capacity to support human relationships. When you look at the Sistine Chapel you come into dialogue with Michelangelo. When you shake the hand of a humanoid you are in contact with its creator, the engineer".

Bill Hibbard (USA, Senior Scientist, Space Science and Engineering Center, University of Wisconsin) answered Kurzweil's optimism and warnings with the thesis that "Super-Intelligent Machines Must Love All Humans":
"So in place of laws constraining the behaviour of intelligent machines, we need to give them emotions that can guide their learning of behaviours. They should want us to be happy and prosper, which is the emotion we call love. We can design intelligent machines so their primary, innate emotion is unconditional love for all humans. First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language. Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy. Machines can learn algorithms for approximately predicting the future, as for example investors currently use learning machines to predict future security prices. So we can program intelligent machines to learn algorithms for predicting future human happiness, and use those predictions as emotional values. We can also program them to learn how to predict human quality of life measures, such as health and wealth, and use those as emotional values".

David Bruemmer (USA, Idaho National Engineering and Environmental Laboratory) writes:
"(…)So why should we have intelligent, emotion exhibiting humanoids? Emotion is often considered a debilitating, irrational characteristic. Why not keep humanoids, like calculators, merely as useful gadgetry? If we do want humanoids to be truly reliable and useful, they must be able to adapt and develop. Since it is impossible to hard-code high-utility, general-purpose behaviour, humanoids must play some role as arbiters of their own development. One of the most profound questions for the future of Humanoid Robotics is, "How we can motivate such development?" Speaking in purely utilitarian terms, emotion is the implementation of a motivational system that propels us to work, improve, reproduce and survive. In reality, many of our human "weaknesses" actually serve powerful biological purposes. Thus, if we want useful, human-like robots, we will have to give them some motivational system. We may choose to call this system "emotion" or we may reserve that term for ourselves and assert that humanoids are merely simulating emotion using algorithms whose output controls facial degrees of freedom, tone of voice, body posture, and other physical manifestations of emotion".
"The real danger is not that humanoids will make us mad with power, or that humanoids will themselves become super intelligent and take over the world. The consequences of their introduction will be subtler. Inexorably, we will interact more with machines and less with each other. Already, the average American worker spends astonishingly large percentages of his/her life interfacing with machines. Many return home only to log in anew. Human relationships are a lot of trouble, forged from dirty diapers, lost tempers and late nights. Machines, on the other hand, can be turned on and off. Already, many of us prefer to forge and maintain relationships via e-mail, chat rooms and instant messenger rather than in person. Despite promises that the Internet will take us anywhere, we find ourselves - hour after hour -- glued to our chairs. We are supposedly living in a world with no borders. Yet, at the very time we should be coming closer together, it seems we are growing further apart. Humanoids may accelerate this trend".

Paolo Dario (Italy, Director of the Arts-Lab, The Sant'Anna School of University Studies and Doctoral Research, Humanoid Project) intervened on an issue that is "recurrent in the history of the evolution of humankind for the engineers who designed machines (…) and who dreamed of developing human-like machines (humanoids) aimed at replicating human functions". He names the issue with an unusual but eloquent locution: the "new and artificial humanism". "Today, the mission of the robotics engineer is to design and build robots able to co-operate with humans. It is an activity deeply different from that of the traditional engineer, who built industrial robots designed for specialized and technical end-users (…) In this new enterprise, our European humanist culture is an experienced and solid base from which to face these problems in an original and satisfying way".

There are also scientists who advocate "rights" for robots:

Should robots have rights?

People who pose this question say that, although right now nobody would think a robot should have rights, as robots become more and more human-like, people's opinions are likely to change. We might have to start paying robots for their services, or perhaps give them the right to vote, to get married, and to be protected as humans are.

Jo Bell (USA, director, Animal Liberation):
"Asimov's Robot series grappled with this sort of question. As we've incorporated others races and people - women, the disabled - into the category of those who can feel and think, then I think if we had machines of that kind, then we would have to extend some sort of rights to them".

Simon Longstaff (Sydney, Australia, director, St James Ethics Centre):
"It depends on how you define the conditions for personhood. Some use preferences as criteria, saying that a severely disabled baby, unable to make preferences, shouldn't enjoy human rights yet higher forms of animal life, capable of making preferences, are eligible for rights.
The other way to look on it is that humans are a certain class of being, and that the disabled baby, in a truly excellent form, is of the same class of being that would produce an Einstein or a Tchaikovsky. A chimp has never produced anything like General Relativity, or a symphony.
Machines would never have to contend with transcending instinct and desire, which is what humans have to do. I imagine a hungry lion on a veldt about to spring on a gazelle. The lion, as far as we know, doesn't think, "Well, I am hungry, but the gazelle is beautiful and has children to feed." It acts on instinct. Altruism is what makes us human, and I don't know that you can program for altruism".

Ray Jarvis (Australia, director, Intelligent Robotics Research Centre, Monash University):
"I think that we would recognise machine rights if we were looking at it from a human point of view. I think that humans, naturally, would be empathetic to a machine that had self awareness. If the machine had the capacity to feel pain, if it had a psychological awareness that it was a slave, then we would want to extend rights to the machine.
The question is how far should you go? To what would you extend rights? Some people believe we should extend rights to rocks, and to objects like a chair".

Edited by Fiorella Operto


Draft 17th Jan '04
