# Affective Computing: good or bad idea?



## Everness

The goal of “affective computing,” I just learned, is to create technology that can express emotion, interpret and respond to the emotions of their human handlers, and even establish a sense of intimacy with their human companions. Researchers at the MIT Media Lab have already created a computerized virtual person named “Laura.” “Laura” plays the role of an exercise trainer. She helps real-life subjects increase their physical activity levels.

Here is a typical response of people interacting with “Laura”: “I like talking to Laura, especially those little conversations, about school, weather, interests, etc. She is very caring... I found myself looking forward to these fresh chats that pop up every now and then. They make Laura so much more like a real person.” http://www.boston.com/news/globe/editorial_opinion/oped/articles/2006/10/10/virtual_companionship/

How should we react to this new technology? The author of the above article is rather critical of this “development.” He limits his responses to two: to laugh off such technological pretensions as sadly pathological, or to be truly frightened. I suggest a third, more optimistic one: virtual companionship is better than no companionship. Some people will finally be able to answer “not anymore” to Elvis’ immortal question: “Are you lonesome tonight?”


----------



## whattheflock

It has been argued that some of us relate to other people not so much for whatever peculiarities or traits those people display, but for what we might perceive of us in them. That is, the traits that remind you of yourself are the ones that permit you this interaction with others, or even make it desirable.
If we have no problem anthropomorphizing pets and even plants, I don't think there's much problem doing it to a machine. As it is, whenever something goes wrong with our domestic technological appliances (like computers) don't we blame them for it already?


----------



## Everness

whattheflock said:


> It has been argued that some of us relate to other people not so much for whatever peculiarities or traits those people display, but for what we might perceive of us in them.



I think you are onto something. It was Berkeley who argued that it is impossible for something to exist without being perceived (or, as he says, esse est percipi, Latin for "To be is to be perceived"). Taking Berkeley's main argument to its logical extreme, all relationships are virtual. So Locke was wrong!


----------



## curly

I have a feeling that it could lead to legal problems if nothing else. If a robot displays emotion, people might slowly feel that, more than imitating a person, they ARE a person in their own right. Also, I think it is possible that people could fall in love with emotional robots, which would raise the question of whether they should be allowed to marry; after all, we can't marry our pets.


----------



## maxiogee

Check out the film S1m0ne, with Al Pacino. The whole concept of what makes us human is being challenged by the concept of intelligent machines. It's an interesting philosophical challenge.


----------



## Kajjo

curly said:


> I have a feeling that it could lead to legal problems if nothing else. If a robot displays emotion, people might slowly feel that, more than imitating a person, they ARE a person in their own right. Also, I think it is possible that people could fall in love with emotional robots, which would raise the question of whether they should be allowed to marry; after all, we can't marry our pets.


Well, there is no discussion about marrying pets, so why should there be a question about computers?

On the other hand, you raised a good point regarding "protecting robots". I can easily imagine all this wild political correctness going even wilder and protecting robotic feelings. People are so stupid that this could actually happen. Maybe we would not be allowed to switch them off -- or we would have to give them a day off. Horrible scenario.

Kajjo


----------



## cuchuflete

Everness said:


> The goal of “affective computing,” I just learned, is to create technology that can express emotion, interpret and respond to the emotions of their human handlers, and even establish a sense of intimacy with their human companions.



Nope.  It's not a computer program that does, or can, establish a sense of intimacy.  It is the human "companion" that takes automated computer responses, and chooses to interpret them as if they were human.  That is self-deception, not intimacy.

A computer program can certainly facilitate that self-deception, but it's the person who buys into the illusion.
The machine may help the human do what the human wants to do, but it cannot "express emotion". It can create a false sense in the willing human that emotion has been expressed.


----------



## maxiogee

cuchuflete said:


> The machine may help the human do what the human wants to do, but it cannot "express emotion".


There are those of us humans who cannot do that either, and I don't mean "who won't do it", but people who, for one reason or another, cannot express emotions - sometimes because they cannot themselves identify them. There are their counterparts too, those who cannot interpret emotion in others (I have been diagnosed as being mildly/somewhat of that persuasion) and this hinders communication.



> It can create a false sense in the willing human that emotion has been expressed.


When humans interact with an interface, and do not know if they are interacting with a human or a computer, what does that say about the state of computer 'intelligence' — and, of course, what does it say about our abilities to distinguish _reality_ from _artifice_?


----------



## .   1

This new technology creeps me out.
I often encounter it when I am trying to contact large companies and am placed on a computer loop that tries to sound human but just comes off as weird.
Computers that sound like computers are fine, but computers trying to impersonate humanity are simply off-putting to me, and I definitely do not like the experience.

.,,


----------



## Everness

cuchuflete said:


> Nope.  It's not a computer program that does, or can, establish a sense of intimacy.  It is the human "companion" that takes automated computer responses, and chooses to interpret them as if they were human.  That is self-deception, not intimacy.
> 
> A computer program can certainly facilitate that self-deception, but it's the person who buys into the illusion.
> The machine may help the human do what the human wants to do, but it cannot "express emotion". It can create a false sense in the willing human that emotion has been expressed.



I see your point but technology is taking us to places we have never been.  The same article says, 

_Other experiments conducted at Stanford University report similarly positive results with empathetic embodied computer agents interacting with subjects, leading researchers to conclude that "embodied computer agents are indeed social actors in the truest sense of the word 'social,' capable of forming relationships with users comparable to those found in the world of human-human interactions."_

We share with God the gift of creativity. If God created us in his image and likeness, we also have the capacity to create computerized virtual persons in our own image and likeness. That's exactly what's going on at MIT and Stanford. We aren't gods but we are like them. The same thing applies to these "embodied computer agents": they aren't human but they are becoming just like us.

On the other hand, whattheflock had something very important to say.



whattheflock said:


> It has been argued that some of us relate to other people not so much for whatever peculiarities or traits those people display, but for what we might perceive of us in them. That is, the traits that remind you of yourself are the ones that permit you this interaction with others, or even make it desirable.



As I stated before, whattheflock is onto something. Idealism has a point. People and relationships are in our heads.  People are the result of massive emotional projections. They are just screens on which we project our most basic emotional needs. We fall in love and we then fall out of love. Did the person change? Yes and no. We changed. But when we change, that person that we have construed, changes too.  We take back our projections and suddenly that person doesn't meet our needs anymore.


----------



## cuchuflete

> _Other experiments conducted at Stanford University report similarly positive results with empathetic embodied computer agents interacting with subjects, leading researchers to conclude that "embodied *computer agents are indeed social actors* in the truest sense of the word 'social,' *capable of forming relationships with users* comparable to those found in the world of human-human interactions."_


Nope.  Again.   They—the researchers—are so into their topic that they are standing on their heads.   Computer agents are indeed [capable of being perceived as] social actors, and are *not* "capable of forming relationships with users".  Users are capable of forming relationships with what they, the users, perceive to be social actors.   Does a relationship result, in the mind of the user? Yes.  But it is the user who creates the relationship, not the computer.   It is a man<=>machine relationship, which the user deludes himself into perceiving as a person<=>person relationship.   Clever programming makes this easier for the person.



Everness said:

> We share with God the gift of creativity. If God created us in his image and likeness, we also have the capacity to create computerized virtual persons in our own image and likeness.



I fail to see what this has to do with the technology.  It presupposes that your reader shares your notions of "God" and bits of theology or scriptural interpretation.  I have a moustache.  I have no reason to believe that god has a moustache.  I can build a computer with facial hirsute adornments. Computers can play chess far better than I can, and can also do far more damage, but cannot hybridize flowers.  Your religious viewpoints may help you understand this technology in a particular way, and that's ok, but it is very narrow-minded to assume that others see the world, including this subject, through your God filter.  It might lead to the absurd line of reasoning that polytheists should build more computers, and atheists none at all.


----------



## Sallyb36

maxiogee said:


> Check out the film S1m0ne, with Al Pacino. The whole concept of what makes us human is being challenged by the concept of intelligent machines. It's an interesting philosophical challenge.



Or does anyone remember Blade Runner?  The potential problems are huge; we can't even decide what to do about people in certain circumstances, so how will we cope when there are machines that could be classed as people and want rights, etc.?


----------



## almostfreebird

Sallyb36 said:


> Or does anyone remember Blade Runner? The potential problems are huge; we can't even decide what to do about people in certain circumstances, so how will we cope when there are machines that could be classed as people and want rights, etc.?


 
Especially when he, or it, is in charge of the nuclear missile button.


----------



## Everness

cuchuflete said:


> _Other experiments conducted at Stanford University report similarly positive results with empathetic embodied computer agents interacting with subjects, leading researchers to conclude that "embodied *computer agents are indeed social actors* in the truest sense of the word 'social,' *capable of forming relationships with users* comparable to those found in the world of human-human interactions."_
> 
> Nope.  Again.   They—the researchers—are so into their topic that they are standing on their heads.   Computer agents are indeed [capable of being perceived as] social actors, and are *not* "capable of forming relationships with users".  Users are capable of forming relationships with what they, the users, perceive to be social actors.   Does a relationship result, in the mind of the user? Yes.  But it is the user who creates the relationship, not the computer.   It is a man<=>machine relationship, which the user deludes himself into perceiving as a person<=>person relationship.   Clever programming makes this easier for the person.



I suggest you get acquainted with the concept of functionalism. 

http://plato.stanford.edu/entries/functionalism/ 

Mental states are identified by a functional role. They are able to be manifested in various systems, even computers, so long as the system performs the appropriate functions.



cuchuflete said:


> I fail to see what this has to do with the technology.  It presupposes that your reader shares your notions of "God" and bits of theology or scriptural interpretation.  I have a moustache.  I have no reason to believe that god has a moustache.  I can build a computer with facial hirsute adornments. Computers can play chess far better than I can, and can also do far more damage, but cannot hybridize flowers.  Your religious viewpoints may help you understand this technology in a particular way, and that's ok, but it is very narrow-minded to assume that others see the world, including this subject, through your God filter.  It might lead to the absurd line of reasoning that polytheists should build more computers, and atheists none at all.



I used a Hebrew Scripture story to illustrate a simple point and not to impose my religious views on other people: if "Laura" is capable of conversing with her subjects and is able to use hand gestures, eye gaze behavior, posture shifts, head-nods, and facial expressions, it has to do with our human ability to create a computerized virtual person in our own image and likeness. "Laura," the creature, reflects the creator, us. By the way, the concept of image and likeness doesn't refer to God's physical appearance. Why? In the best Judeo-Christian tradition, God is spirit. The fact that Genesis states that we were created in God's image and likeness means that we think, feel and act like him/her. But please don't get me wrong: the fact that God doesn't have a moustache doesn't mean that you need to shave yours, ok?


----------



## .   1

Everness said:


> Mental states are identified by a functional role. They are able to be manifested in various systems, even computers, so long as the system performs the appropriate functions.


Are you saying that it is possible to have a mental state but not have a mind?

.,,


----------



## Everness

. said:


> Are you saying that it is possible to have a mental state but not have a mind?
> 
> .,,




I'm saying that mental states (beliefs, thoughts, desires, likes, dislikes, being in pain, etc.) are constituted solely by their functional role. Functionalism says that mental states are constituted by their causal relations to one another and to sensory inputs and behavioral outputs. Cartesian Dualism, and apparently you subscribe to it, says that the ultimate nature of the mental is to be found in a special mental substance.


----------



## cuchuflete

Everness said:


> I suggest you get acquainted with the concept of functionalism.
> 
> http://plato.stanford.edu/entries/functionalism/
> 
> Mental states are identified by a functional role. They are able to be manifested in various systems, even computers, so long as the system performs the appropriate functions.



Thanks. Interesting stuff.  Now, I suggest you get acquainted with the concept of functionalism.  
From the linked essay, regarding the Turing test...

“Is it theoretically possible for a finite state digital computer, provided with a large but finite table of instructions, or program, to provide responses to questions *that would fool an unknowing interrogator* into thinking it is a human being?”  My emphasis.  This is exactly the point I made earlier.
The system can perform the appropriate functions, such that the humanoid interrogator is fooled, or willingly suspends disbelief, if literary analogies float your boat, into establishing an emotive relationship with a presumed, but false, human replica.  Another quote from the essay:  "...the idea that internal states can be fully described in terms of their relations to input, output, and _one another_, and can figure in lawlike descriptions, and predictions, of a system's output, was a rich and important idea that is retained by contemporary functionalist theories."  If the internal states constitute a relationship—and I don't doubt that one can come into being—the perception, and prior creation, of said relationship is done by the humanoid component.  The machine is nothing but an input source to the humanoid.  The machine itself facilitates the relationship, but the identification of the interaction, including expectations of "correct" responses, is the invention of the non-machine component.

The “absent qualia” objection to functionalism, discussed in the essay, seems to fit the computer rather well.


----------



## .   1

Everness said:


> I'm saying that mental states (beliefs, thoughts, desires, likes, dislikes, being in pain, etc.) are constituted solely by their functional role. Functionalism says that mental states are constituted by their causal relations to one another and to sensory inputs and behavioral outputs. Cartesian Dualism, and apparently you subscribe to it, says that the ultimate nature of the mental is to be found in a special mental substance.


Are you saying that a computer program can have beliefs, thoughts, desires, likes, dislikes and experience being in pain?

.,,


----------



## whattheflock

I think a computer program can _eventually_ have thoughts and feelings (which is all we have). Maybe not yet. The way this "Laura" is described, it is just a counterfeit personality. Canned responses.
But, as has been opined before, all we are, all of us, is _software._ The human personality seems to be a fortunate accident. The concept of _emergent behavior_ seems to apply: where does the soul reside? It arises from the millions of interactions between neurons, it seems. Maybe, one day, when software can achieve such a level of complexity, a distinct personality will arise. Say, around the year 2021.


----------



## .   1

I think that computer programs will always be ruled by GIGO.  I cannot imagine a computer program that could attain self-awareness and not immediately go insane.

.,,


----------



## whattheflock

Cool. I think Isaac Asimov, Philip K. Dick, Vernor Vinge and even James Cameron (Terminator) would agree completely with _*.,,*_


----------



## maxiogee

. said:


> Are you saying that it is possible to have a mental state but not have a mind?
> 
> .,,



What is a mind? As distinct, of course, from a brain.


----------



## .   1

maxiogee said:


> What is a mind? As distinct, of course, from a brain.


A mind is a functioning brain.

.,,


----------



## ireney

If we manage to give computers hormones, a minor complex or trauma here and there, instincts and whatnot, we may be able to create an affective computer. You see, we may think with our minds, but the way we act and feel is sometimes unreasonable. Sometimes even we ourselves can't actually understand where _that_ response came from. Our "affections" don't come from simple computing. 

Simple computing and reasoning cannot account for things like "yes, I know he is a scoundrel (old-fashioned, but I just love this word) but I don't care! I'm in love". The day "Laura" is extremely cheerful and puts up with what the students did because it woke up and felt it was such a nice day, or is not so sympathetic although it tries its best because it is so melancholy, I will call "it" a "she".


----------



## maxiogee

. said:


> A mind is a functioning brain.



Ahhh, a programmed hard drive! But surely most people would define "a mind" as more than the working brain - cats, gnats, sprats and chats all have one of them - and yet we don't ascribe minds to them.


----------



## .   1

maxiogee said:


> Ahhh, a programmed hard drive! But surely most people would define "a mind" as more than the working brain - cats, gnats, sprats and chats all have one of them - and yet we don't ascribe minds to them.


Got me there.  Well spotted. I withdraw.

.,,


----------



## cuchuflete

I just noticed the thread title.

*Affective Computing: good or bad idea?*

I guess my heartfelt personal answer is that if people can eat food with artificial flavorings, dye their hair, undergo plastic surgery, etc. to feel good through falsehoods... what's one more little sham?  Sure, go ahead, enjoy an emotional high based on what isn't real.  Affective computing, the next generation of inflatable dolls.   Be the first on your block to try it. New, Improved, 87% fewer/less calories than competitive brands.


----------



## Seana

Setting aside the designers' technology, which is incomprehensible to me even as it transfixes me, to answer this question: it seems to be an incredible idea at first glance, of course, but where will it lead us... are we sooner or later supposed to become cyborgs controlled by machines? 

I am afraid that nobody and nothing will manage to stop this trend - finding *substitutes for real life*, we can only observe passively what will come. Engineers at huge corporations will still be working _'on the next generation of technological marvels to address our lonesome high-tech existence'_ in order to make the next generations of people totally addicted to them. 

BTW Isn't our WR community the best proof of that?


----------



## maxiogee

Seana said:


> are we sooner or later supposed to become cyborgs controlled by machines?



No, we are too stupid for the machines to allow us to be given cyber-implants! We may be kept around to maintain them, but not as many of us as currently exist.


----------



## Everness

Seana said:


> I am afraid that nobody and nothing will manage to stop this trend - finding *substitutes for real life*, we can only observe passively what will come. Engineers at huge corporations will still be working _'on the next generation of technological marvels to address our lonesome high-tech existence'_ in order to make the next generations of people totally addicted to them.
> 
> BTW Isn't our WR community the best proof of that?



Yes, it is. This is a virtual place. Our relationships are virtual as well as our conversations. Even if we had the opportunity to watch each other via webcams, everything would continue to be essentially virtual. Because we don't have the chance to communicate face to face at different levels with one another on a daily basis, we construe personas. It was Jung who said, "The persona is a complicated system of relations between individual consciousness and society, fittingly enough a kind of mask, designed on the one hand to make a definite impression upon others, and, on the other, to conceal the true nature of the individual." 

For some reason we don't frown upon this type of technology --a true substitute for real life-- but we take exception to the technology that created "Laura." What we fail to realize is that we are already creating "Lauras" in this virtual world of ours because we are relationship-hungry. 

_There is in the heart of every human being, a powerful longing for a meaningful relationship with at least one other person. For some, the longing is a conscious awareness; for others it remains unconscious, felt only as loneliness or an absence of meaning in life.
- Howard Clinebell_


----------

