Sunday, September 22, 2019

The consciousness con


Can an AI be conscious?
Can we make a conscious AI?
HOW can we make an AI conscious?
SHOULD we let an AI become conscious?
SHOULD we limit the emergence of conscious AI?


Let's go back to 1950 when Turing asked a slightly different question:

Can a computer think?


What is well known is that he picked apart a few definitions of 'think' (like thinking being something people do in their heads) and basically decided that approach was pointless, as such definitions can simply rule the answer out by fiat. So he talked about a parlour game instead, and we ended up with the Turing Test, the Loebner Prize and CAPTCHA. If you talk to a human and a computer/robot and can't tell the difference, that is, your guessing is at the 50:50 chance level, then if you accept that the person is thinking you've got to admit the computer is too.

What is less well known is that he suggested effectively giving the computer sensory-motor capabilities, like a robot (or spaceship, or even a car or a drone), and getting it to learn. He also predicted that computers would be able to fool people 30% of the time, in a five-minute conversation, by the year 2000.
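To put a number on that 'chance level' criterion, here is a small sketch of an exact binomial test asking whether a judge's identification rate over a batch of five-minute conversations is distinguishable from coin-flipping. The judging session and counts are hypothetical illustrations, not anything Turing computed:

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k correct identifications out of n at guessing rate p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binomial_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of outcomes at least
    as unlikely as observing k correct out of n, if the judge were guessing."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk * (1 + 1e-9))

# Hypothetical judging session: 100 five-minute conversations.
# Turing's prediction implies the judge is fooled ~30% of the time,
# i.e. identifies the machine correctly ~70 times in 100.
print(binomial_p_two_sided(70, 100))  # ~8e-5: far better than coin-flipping
print(binomial_p_two_sided(53, 100))  # ~0.62: indistinguishable from chance
```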

Arguably Weizenbaum's Doctor/Eliza program did that by 1970, and certainly Loebner Prize winners have done so periodically since the 1990s (and not just in five-minute conversations). In fact, Loebner went one step beyond Turing's explicit test, consistent with Turing's proposed training regime, in wanting an audiovisual or sensorimotor component for his $100K + Gold Medal prize. People are too easily fooled. The charitable assumption is that people are just like us, and that is a necessary assumption to avoid information overload. If a person has a disability, or is a foreigner or a child, that actually changes the level of our assumptions: we might slow down or dumb down or some such.
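Weizenbaum's trick was surprisingly shallow: match a keyword, reflect some of the user's own words back, and otherwise fall back on content-free prompts. A minimal Eliza-flavoured sketch in Python - the patterns below are invented for illustration, not Weizenbaum's actual script:

```python
import re

# A few invented Doctor/Eliza-style rules: keyword patterns with reflective
# templates, plus a content-free fallback.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please go on."

def doctor(utterance: str) -> str:
    # First rule whose pattern appears in the input wins; no understanding needed.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(doctor("I feel trapped at work"))          # Why do you feel trapped at work?
print(doctor("Maybe because of the deadlines"))  # Is that the real reason?
print(doctor("The weather is nice"))             # Please go on.
```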

Harnad popularized the sensory-motor grounding idea in the 1980s and enshrined a similar (robotic) requirement in his Total Turing Test, along with a look-alike (android) variant. Searle, on the other hand, changed the question in 1980 to:

Can a computer have a mind?


Whereas we are quite happy to talk about a computer having to "think over" some relatively mundane problem (we really just mean the computer or network is slow), having a mind makes it more subjective again - although even now we might talk about a computer or a shopping trolley "having a mind of its own". The problem here is that we are very accustomed to using metaphor, and in fact most words have a range of meanings, from the most literal and physical (like "in" in a 2D or 3D spatial sense) to increasingly abstract (like "in an hour", "in mind", "in the process of doing that" or "in order to do that").

Searle's definition of AI was so different from that of AI researchers that it became known as strong AI, while good old-fashioned AI (GOFAI) was relabelled weak AI.

Searle's thought experiment is basically about hand-simulating (or later mentally simulating) the program for an AI that speaks Chinese (which Searle does not), with messages written in Chinese passed in and out of a locked room - and he concludes, when he runs this as a thought experiment, that there is nobody else at home, no Chinese-speaking mind. The problem with thought experiments is that maybe you don't think them through enough.

So what if I ask Searle whether he is hungry, and on receiving an affirmative, what he'd like to eat? Are we simulating a Chinese stomach too - and leaving Searle's to starve? Or does he get to order and eat the food after doing all that work of looking up grammar rules and dictionaries and relating it to himself - in which case the room becomes a Chinese teaching/learning environment. He'll eventually recognize the dinner-order questions and be able to write out the order for his preferred dish of the day, or even just copy it from or mark it on the menu he's given.
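To make that intuition concrete, here is a minimal sketch of the kind of pure symbol lookup Searle imagines doing by hand. The rule book is a made-up miniature, not anything from Searle's paper - and note that nothing in it connects the 'hungry' symbols to an actual appetite:

```python
# A minimal "Chinese Room" sketch: pure symbol lookup with no understanding.
RULE_BOOK = {
    "你好": "你好！",              # greeting in -> greeting out
    "你饿吗？": "是的，我饿了。",    # "Are you hungry?" -> "Yes, I'm hungry."
    "你想吃什么？": "我想吃今日特餐。",  # "What do you want to eat?" -> "The dish of the day."
}

def room(message: str) -> str:
    """Pass symbols out according to the rules; nothing in here models
    hunger, food or meaning - only the shapes of the characters."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

for msg in ["你好", "你饿吗？", "你想吃什么？"]:
    print(msg, "->", room(msg))
```

The table happily emits "yes, I'm hungry" with no hunger anywhere in the system - unless, as above, the operator starts connecting the symbols to his own appetite, at which point the room has become a language classroom.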

So what is different about asking questions about mind and consciousness? Definitions of mind tend to be in terms of awareness, consciousness and thought. But Searle and Turing both also talked about aesthetics, feelings, emotions, and the subjective way we are aware of and experience things like love and pain.

Definitions of consciousness tend to talk about being awake and aware - but this applies to cats and dogs, and arguably to autonomous drones and driverless cars.

But this misses the part that Turing and Searle focus on: the language aspect. Being conscious also means communicating with the external world, or interacting with it in a sensorimotor sense, as Turing and Harnad emphasize. And of course you can also talk to yourself - this is the stream of consciousness. By talking about a stream we are focussing on the serial nature of conscious attention, which is quite different from the way the nervous system processes all our sensory and other neural inputs in parallel, and produces our muscle and other neural outputs in parallel.

Another key element that tends to get overlooked is this idea of a focus of attention, but it is one of the ideas at the heart of the recent explosion in the capabilities of neural networks, particularly in relation to auditory and visual processing, speech and language. And all of these concepts should hopefully map to things we can look for in the brain with EEG or fMRI or other brain imaging techniques.
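For the concrete version of that focus, here is a minimal sketch of scaled dot-product attention, the mechanism at the core of transformer networks, in Python with NumPy (the sizes and random inputs are arbitrary placeholders). The softmax weights are literally the focus: a normalized distribution over positions, a serial-ish selection computed over inputs that arrive in parallel:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: the weights say where to 'focus'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # a distribution over positions: the focus
    return weights @ V, weights         # weighted mix of values, plus the focus

# Toy example: 4 positions, 8-dimensional representations (arbitrary sizes).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, focus = attention(Q, K, V)
print(focus.round(2))  # each row sums to 1: one spotlight per step over parallel inputs
```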

It may be helpful to sum up in terms of three levels (or alternate definitions) of consciousness that we can see even in a pet or a baby, as sketched in code after the list:
  • awake - brain/cpu operating, not currently in sleep mode
  • aware - sensors are providing information about our world
  • awail - sensors trigger an alarm designed to elicit a remedy
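As a toy illustration of how these levels might cash out in an autonomous drone or robot, here is a sketch with all names and thresholds invented for the purpose:

```python
from enum import Enum

class Consciousness(Enum):
    # Toy encoding of the three-level taxonomy above, plus the off state.
    ASLEEP = "asleep"  # brain/cpu not operating, or in sleep mode
    AWAKE = "awake"    # operating, but no sensory information coming in
    AWARE = "aware"    # sensors are providing information about the world
    AWAIL = "awail"    # a sensor reading has tripped an alarm needing a remedy

def assess(powered_on: bool, sensor_readings: dict, alarm_threshold: float = 0.9):
    if not powered_on:
        return Consciousness.ASLEEP
    if not sensor_readings:
        return Consciousness.AWAKE
    if any(level >= alarm_threshold for level in sensor_readings.values()):
        return Consciousness.AWAIL  # e.g. a baby crying, a drone's battery alarm
    return Consciousness.AWARE

print(assess(True, {"proximity": 0.95}).name)  # AWAIL
print(assess(True, {"proximity": 0.20}).name)  # AWARE
```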

So what is the part of consciousness that we think computers DON'T have?

My books

My Casindra Lost stories feature an emergent AI, 'Al', and a captain who is reluctantly crewed with him on a rather long journey to another galaxy - just the two of them, and some cats... Another AI, 'Alice', emerges more gradually in the Moraturi arc.

Casindra Lost
Kindle ebook (mobi) edition ASIN: B07ZB3VCW9 — tiny.cc/AmazonCL
Kindle paperback edition ISBN-13: 978-1696380911 justified Iowan OS
Kindle enlarged print edn ISBN-13: 978-1708810108 justified Times NR 16
Kindle large print edition ISBN-13: 978-1708299453 ragged Trebuchet 18

Moraturi Lost
Kindle ebook (mobi) edition ASIN: B0834Z8PP8 – tiny.cc/AmazonML
Kindle paperback edition ISBN-13: 978-1679850080 justified Iowan OS 

Moraturi Ring
Kindle ebook (mobi) edition ASIN: B087PJY7G3 – tiny.cc/AmazonMR
Kindle paperback edition ISBN-13: 979-8640426106 justified Iowan OS 


Author/Series pages and Awards


Amazon Series and Author pages:
WorldCon2020 presentation (COVID-style):
New York City Book Awards 2021 (Gold and Silver): 
