Thursday, May 19, 2011

Wallach in H+ Magazine

An interview with Wendell Wallach, conducted by Ben Goertzel, has been published online in H+ Magazine. Goertzel asks Wallach a number of questions about the likelihood of developing artificial agents with moral decision-making capabilities and consciousness.
Ben:
What are your thoughts about consciousness? What is it? Let’s say we build an intelligent computer program that is as smart as a human, or smarter. Would it necessarily be conscious? Could it possibly be conscious? Would its degree and/or type of consciousness depend on its internal structures and dynamics, as well as its behaviors?

Wendell:
There is still a touch of the mystic in my take on consciousness. I have been meditating for 43 years, and I perceive consciousness as having attributes that are ignored in some of the existing theories for building conscious machines. While I dismiss supernatural theories of consciousness and applaud the development of a science of consciousness, that science is still rather young. The human mind/body is more entangled in our world than models of the self-contained machine would suggest. Consciousness is an expression of relationship. In the attempt to capture some of that relational dynamic, philosophers have created concepts such as embodied cognition, intersubjectivity, and enkinaesthesia. There may even be aspects of consciousness that are peculiar to being carbon-based organic creatures.

We already have computers that are smarter than humans in some respects (e.g., mathematics and data-mining), but are certainly not conscious. Future (ro)bots that are smarter than humans may demonstrate functional abilities associated with consciousness. After all, even an amoeba is aware of its environment in a minimal way. But other higher-order capabilities such as being self-aware, feeling empathy, or experiencing transcendent states of mind depend upon being more fully conscious.

I suspect that without somatic emotions or conscious awareness, (ro)bots will fail to interact satisfactorily with humans in complex situations. In other words, without emotional and moral intelligence they will be dumber in some respects. However, if certain abilities can be said to require consciousness, then having those abilities is a demonstration that the agent has a form of consciousness. The degree and/or type of consciousness would depend on its internal structure and dynamics, not merely on the (ro)bot's demonstrating behavior equivalent to that of a human.

The full interview is available here.