April 19, 2018 YHouse Cognition Lunch Salon at IAS

Presenter: Brian P. McLaughlin (Distinguished Professor of Philosophy and Cognitive Science; Director, Rutgers Cognitive Science Center)

Title: On the Matter of Robot Minds

Abstract: (By the presenter) “A number of AI researchers are predicting that there will be sentient robots with human-level intelligence or greater within the next thirty or so years. If this prediction is correct, we face enormously difficult moral and social issues. Status as a moral agent or moral patient depends only on mental abilities. Sentient robots would have moral rights, and so should have legal rights to protect them. Moreover, the sale of robots with intelligence even approaching human-level intelligence would be slavery. There is a tsunami of humanoid robots soon to enter our lives. I argue, however, that the prediction that sentient robots with human-level intelligence will soon be here is based, in part, on a false behaviorist assumption about mentality. Although the tsunami will bring a flood of difficult moral and social issues in its wake, robot rights are not among them. The robots will be devoid of mentality. They could be damaged or destroyed, but neither harmed nor wronged.”

 

Present:

Brian McLaughlin, Susan Schneider, Ed Turner, Will Storer (CTI), Neil Acherson (CTI), Josh Malden (CTI), Jak Kornfilt, Chris Brown, Bruce Molloy, Michael Solomon

We introduced ourselves. 

Brian began by noting that in 1999 in Seattle, Washington, a group was organized calling itself the ASPCR, the American Society for the Prevention of Cruelty to Robots. Its mission was to ensure the rights of artificially created sentient beings. Brian agreed that a sentient robot would have moral status: it would be a moral patient, and so would have moral rights. If a robot could feel pain or suffer, then it would deserve to be treated “humanely.” A moral agent would be morally accountable and responsible. A robot would be a machine, but so are we (made of biological material rather than silicon). A robot would be artefactual, and we are not. But he said that we can imagine a wet lab sometime in the future constructing a physical duplicate of a human sperm and egg out of molecules, having the sperm fertilize the egg, and then implanting the fertilized egg in a woman’s womb. The being born nine months later would be an artefact, but it would have the same mental capacities as a human baby, and so the same moral status.

He proposed that status as moral agent or moral patient depends only on mental abilities, and thus that there can be no difference in general moral status without a difference in mental abilities. He is not concerned that robots are silicon based; that would matter only if it affected mental abilities, and likewise for artifactuality. He referred to Jeremy Bentham’s often quoted question regarding animal rights: “Can they suffer?” The ASPCR was named to mirror the ASPCA (American Society for the Prevention of Cruelty to Animals).

     Currently, there are no sentient robots. Perhaps a pan-experientialist might say that a bowl of gelatin has moral status, but he disagreed, as the gelatin would not be sentient. He quoted an article from the November 17, 2003 New York Times predicting that robots with an internal mental life could exist, and that such robots with abilities beyond human capability will exist by 2050. If such a prediction is accurate, or even if it takes 100 years, then the mission of the ASPCR becomes urgent, as changing public sentiment is a slow process. Furthermore, the sale or ownership of such robotic entities would be slavery and so would not be morally permissible. In fact, he stands opposed to slavery of robots even approaching human capability, let alone equaling or surpassing it. However, he does not share the ASPCR’s expectation of sentient robots. The issue of robotic mental ability is inextricably linked to the issue of moral rights. Concerning the promise of sentient agents with human-level intelligence by 2050, he noted that there are promises that cannot be kept and also promises that should not be allowed to be kept. There will certainly be robotic applications for driving, for data processing, for sex toys, and many others that we can now predict. But these robots will be devoid of mentality.

     To get across the idea that intelligence is a functional notion, he quoted Forrest Gump’s mother: “Intelligence is what intelligence does.” A robot that can use fluent natural language may be possible, but not for a long time, if ever, in his opinion. Intelligence is not sentience. A robot might reason but not feel. Feeling is a subjective experience. While there is no inherent reason a robot could not have subjective experience, nomologically (that is, given the laws of nature) it may not be possible. This may be analogous to a machine that transfers information faster than the speed of light: such a machine can be imagined, but nature’s laws prohibit it.

     He referred to Daniel Dennett’s Nomological Behaviorism. This is the thesis that if two agents are alike with regard to behavior, then they are alike with regard to sentience. If that is true, then there may well be sentient robots. But he thinks not. A single case of behavioral duplicates not sharing the same sentience would disprove the thesis.

     He considered a newborn baby. It can feel pain, and so is sentient, in his opinion. Consider a three-toed sloth: it too is sentient. We cannot now build a behavioral duplicate of a three-toed sloth or of a neonate, but perhaps we could make such behavioral duplicates in time. If that could be done, and Nomological Behaviorism is true, then such duplicates would have the same mental abilities as a neonate or sloth, respectively, and so would have the same moral standing. But he thinks Nomological Behaviorism is false: two beings can have the same dispositions to behave, and yet one be sentient and the other not.

     He had intended to say more about his conception of cognition, but elected to stop at this point for questions and discussion.

Q: Ed said that was a lucid and clear presentation, but wondered: if you cannot tell by behavior, how would you tell whether a robot is sentient?

A: That is a crucial question. John Stuart Mill in the 19th century wrote of phenomenal consciousness. How do we know that others have phenomenal consciousness? Mill answered: 1) by their external behavior, and 2) by their internal structure being similar to our own. But someone with Alzheimer’s, or someone in a locked-in state, may still feel pain and therefore, he believes, be entitled to moral standing.

Q: Michael took issue with the assertion that moral standing depends only on sentience. While sentience might be sufficient, it might not be necessary for moral standing. An infant without sentience but with the potential for sentience, or a person with dementia who has lost sentience, may still deserve moral standing. What about gametes (egg or sperm) with the potential to become sentient, or even a rain forest, which is not sentient but may deserve moral standing?

A: Brian said that sentience requires a brain. A fetus is sentient, but a zygote is not, in his opinion. He asked: could a robot have human intelligence and be a fluent speaker of a natural language yet not be sentient? He thought that possible. To have moral standing, an entity must have intentions and awareness of its intentions. He referred to a Star Trek episode in which a scientist wanted to build a duplicate of the android officer Data. But that required taking the original Data apart, and Data refused. He was told he could not refuse; a hearing was held, and it was decided that he could not be taken apart.

Q: Chris offered that the emphasis on behavior is misleading. Behavior alone might not reflect internal dialogue.

A: Brian discussed functionalism and two kinds of functional isomorphs, superficial and scientific. Superficial functional isomorphism is not sufficient for consciousness.

Q: Jak asked: where is the neural correlate? Whether the substrate is silicon or something else, there may still be emergent states, which may be random or may have specific attractors.

A: There are many projects looking at neural correlates of consciousness. When we will get results is not clear.

Q: Susan said she was recently on a panel with Christof Koch and others at South by Southwest on the neural correlates of consciousness. Koch felt that systems with short-term memory may have consciousness, but he also spoke of “hot zones,” regions where sensory experiences are associated with correlates of consciousness; this is where intelligence and sentience come apart.

A: Brian referred to butterflies and vision. He felt that butterflies have vision and even use colors to navigate, but they do not really see in the sense of having a visual experience. He described the dorsal and ventral streams of vision in mammals: conscious perception is associated with the ventral stream, while the dorsal stream guides action and navigation.

Q: Susan said the neural correlate argument is limited. The brain stem may be needed as an enabling condition, but may not be the site of consciousness.

Ed said this could be said of the heart just as easily. Circulation is necessary but is not the site of thought.

At this time we adjourned to continue the discussion after returning our lunch trays.

(For an extended discussion of some of the issues raised in this talk, see Brian McLaughlin, “A Naturalist-Phenomenal Realist Response to Block’s Harder Problem,” Philosophical Issues 13 (2003): 163–204, a 39-page article responding to a problem Ned Block has posed concerning phenomenal consciousness; available online at www.nyu.edu under papers > McLaughlin.)