On Emotionally Intelligent Machines

In 1954, J. Robert Oppenheimer, an American physicist considered to be the father of the atomic bomb, was called before a controversial Atomic Energy Commission security hearing to defend his work and reputation. During this hearing, Oppenheimer made a statement: “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” Today, all signs suggest this is also the way it will be with emotionally intelligent machines. If the dangers of a nuclear weapon, which could bring the end of human existence, were not enough to deter scientific innovation, it seems doubtful that the more intangible dangers of emotionally intelligent machines will be sufficient.

Emotional intelligence, as defined in 1990 by the psychologists Peter Salovey and John Mayer, consists of “the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions.” To avoid delving too deeply into the complexity of how best to measure whether a machine is emotionally intelligent, we will simply say for the time being: an emotionally intelligent machine is one endowed with these abilities at an average human level. We distinguish this from emotionally aware machines, which might have some or even all of these abilities, but at a level well below that of an average human. We also differentiate emotionally intelligent machines from artificial general intelligence (AGI): emotional intelligence is a subset of the abilities required of an AGI, pertaining in particular to the ability to understand or influence human emotions.

Most cinematic depictions of AI, such as Blade Runner (1982), Artificial Intelligence (2001), I, Robot (2004), and Ex Machina (2015), show human-like creatures who appear to think and feel like us, raising fundamental questions about what it means to be human, and at what point AI beings warrant the same respect. In reality, most of the AI we interact with on a day-to-day basis is faceless and works behind the scenes – algorithms implemented on servers in isolated data centers. However, as research in this area progresses, we are coming closer to the day when we will have developed emotionally intelligent machines.

A clip from the early AI-themed film Blade Runner (1982). Source: moviestillsdb.com

Recently, many leading tech evangelists and scientists, such as Bill Gates, Elon Musk, and Stephen Hawking, have spoken out about the general dangers of artificial intelligence. Yet for figures racing to establish themselves as pillars of the artificial intelligence community, the incentive is to prioritize innovation over discussions of safety and harm, which at the current rate of development will only seriously take place well after the technology has been realized. This is not to say that the public prudence of thought leaders such as these is not worthwhile, merely that the technology’s development will continue regardless of the outcomes of those conversations. The fundamentally human self-interest of AI researchers must be recognized. Perhaps the late mathematician G. H. Hardy put it best in his 1940 essay “A Mathematician’s Apology”:

There are many highly respectable motives which may lead men to prosecute research, but three which are much more important than the rest. The first (without which the rest must come to nothing) is intellectual curiosity, desire to know the truth. Then, professional pride, anxiety to be satisfied with one’s performance, the shame that overcomes any self-respecting craftsman when his work is unworthy of his talent. Finally, ambition, desire for reputation, and the position, even the power or the money, which it brings. It may be fine to feel, when you have done your work, that you have added to the happiness or alleviated the sufferings of others, but that will not be why you did it. So if a mathematician, or a chemist, or even a physiologist, were to tell me that the driving force in his work had been the desire to benefit humanity, then I should not believe him (nor should I think the better of him if I did). His dominant motives have been those which I have stated, and in which, surely, there is nothing of which any decent man need be ashamed.

While Hardy makes a valid point, it is important to ensure that we are not creating new technology for the wrong reasons, namely for the sole purpose of satisfying the intellectual curiosity and ambitions of researchers. In the case of emotionally intelligent machines, we believe that the benefits to humanity are tangible enough to outweigh the risks.

However far we are from developing emotionally intelligent machines, initial research has already begun to identify potential benefits. Recently, a University of Southern California research team led by Jonathan Gratch found that people have less fear of self-disclosure when interacting with an autonomous “virtual human” than with one they believe to be controlled by a human. This indicates that virtual human therapists might be able to elicit more honest information from a patient during therapy sessions. The team’s research has also demonstrated that virtual humans who engage in rapport-building elicit more self-disclosure from human interlocutors than virtual humans who make no attempt to build rapport. This rapport-building generally depends on inferring the emotional state of the interlocutor from audio, visual, or text input signals. Many applications of emotionally intelligent machines follow naturally from this. For example, we can imagine motivational coaches that help people abstain from negative habits by increasing a coachee’s level of motivation, automatic tutors that elicit the optimal emotional states for their tutees to absorb new information, and embodied companions that decrease the loneliness of isolated people.
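
To make the idea of inferring an emotional state from text concrete, here is a minimal sketch in Python. It is purely illustrative and is not drawn from the USC team’s systems: the emotion categories, keyword lists, and scoring scheme are assumptions invented for this example, whereas a real system would rely on models trained on audio, video, and text.

# Toy inference of an emotional state from a text utterance.
# The lexicon and scoring below are illustrative assumptions, not a real model.
from collections import Counter

EMOTION_LEXICON = {
    "joy":     {"glad", "happy", "great", "relieved", "excited"},
    "sadness": {"sad", "lonely", "tired", "hopeless", "down"},
    "anger":   {"angry", "furious", "annoyed", "unfair", "hate"},
    "fear":    {"worried", "scared", "anxious", "afraid", "nervous"},
}

def infer_emotion(utterance: str) -> str:
    """Return the emotion whose keywords appear most often, or 'neutral'."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    scores = Counter({emotion: sum(w in keywords for w in words)
                      for emotion, keywords in EMOTION_LEXICON.items()})
    emotion, count = scores.most_common(1)[0]
    return emotion if count > 0 else "neutral"

print(infer_emotion("I have been feeling lonely and tired lately."))  # prints "sadness"

A deployed system would, of course, replace the keyword counts with trained classifiers and fuse textual cues with prosody and facial expression.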

There is a particularly strong market for emotionally intelligent machines that can help us deal with loneliness. Indeed, there has been an unprecedented rise in loneliness across the globe. The share of Americans who reported feeling frequently or regularly lonely rose from 11–20% in the 1980s to 26–45% in 2010. In China and Japan, the number of single-person homes has skyrocketed, unsurprisingly accompanied by an increase in feelings of loneliness. According to one study, social isolation is as likely to lead to an early death as smoking 15 cigarettes a day, and recent research suggests loneliness can be up to twice as deadly as obesity. It affects young adults and the elderly alike, leading to increased depression and plummeting life satisfaction. Sadly, these numbers seem to get worse every year.

The commercial incentive to alleviate this problem is enormous, and emotionally intelligent machines could very well be the answer. We are at the very beginning of a period in which some of the technologies necessary to develop emotionally intelligent machines are becoming feasible. For example, Microsoft’s XiaoIce chatbot, dubbed the ‘girlfriend app’, uses sentiment analysis to respond to a user appropriately, depending on their emotional state. The social assistant has taken China by storm, with over 20 million users who interact with the bot more than twice a day on average. The abilities of these bots will only grow as improved technology allows them to recognize a wider variety of human emotions and respond in a more human-like manner.
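
As a rough illustration of how a chatbot might condition its replies on a detected emotional state, here is another small sketch. The response templates and selection policy are invented for this example and say nothing about how XiaoIce actually works; the emotion label is assumed to come from a classifier like the one sketched earlier.

# Toy emotion-conditioned reply selection.
# Templates and policy are illustrative assumptions, not XiaoIce's actual logic.
RESPONSES = {
    "sadness": "That sounds hard. Do you want to talk about what has been weighing on you?",
    "anger":   "I can tell this is frustrating. What happened?",
    "fear":    "That sounds stressful. What is worrying you most right now?",
    "joy":     "That is wonderful to hear! What made your day?",
    "neutral": "Tell me more.",
}

def reply(detected_emotion: str) -> str:
    """Pick a response template for an already-inferred emotional state."""
    return RESPONSES.get(detected_emotion, RESPONSES["neutral"])

print(reply("fear"))  # prints the reassuring template

Coupling this with the classifier above yields a crude but complete loop: infer the user’s emotional state, then pick a reply intended to acknowledge it.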

Although the potential benefits of emotionally intelligent machines are immense, there are unique challenges that may arise from their development. Just as the emergence of the web has resulted in new types of negative behaviors, such as video game and Internet addiction, so could the emergence of emotionally intelligent machines result in new kinds of addiction. This will become particularly problematic as the abilities of emotionally aware machines increase, and they become better at understanding and communicating with us. Why make an effort to venture outside your home to interact with real humans, when your emotional longings can be fulfilled by your computer? Not to mention the potential for external manipulation, for example to engender positive feelings about a brand or stoke anger towards a particular political candidate. Targeted advertising will doubtlessly become far more effective when carried out by machines that understand how to explicitly manipulate human emotions. This points to the importance of designing new technologies in a manner that minimizes the facilitation of these negative behaviors.

J. Robert Oppenheimer discussing the atomic bomb. Source: Science Photo Library

It is natural at this point to ask: who should be responsible for ensuring that designs conform to these principles? It is clear that inventors should not have the only vote. As Hardy’s timeless quote demonstrates, the motives of inventors are not purely altruistic. Video game designers, for example, are incentivized to develop addictive games that keep people playing longer. Rather, the main burden should be placed on the public sector, where there is less conflict of interest in implementing and enforcing regulations that ensure new technologies are designed safely. Since technological advancement generally outpaces the adoption of new regulations and laws, discussions such as this one are necessary even before the technology itself has been realized.

Despite these dangers, history provides ample evidence that new technologies have been hugely beneficial for society. The development of new types of prescription drugs has cured many of the diseases that plagued us in the past. The Internet has paved the way for Massive Open Online Courses (MOOCs), which show promise in democratizing access to knowledge and in providing those lower on the economic ladder with the resources necessary to compete with the more fortunate. The gamification of learning has also shown positive effects. If implemented safely, with consideration of possible side effects, emotionally intelligent machines could have a hugely positive impact on society.

History has also shown us that even if a technology poses incredible dangers, it is likely that we will develop it anyway. As evidenced by the case of the atomic bomb, scientific curiosity coupled with unbridled ambition, when combined with political necessity or economic indispensability, is a potent cocktail that fuels innovation despite the potential costs. It is not clear when we will have truly emotionally intelligent machines, but we can get an idea of how their creators and society will react to this achievement by looking at the past. On the evening of the Hiroshima bombing, J. Robert Oppenheimer gave a speech at Los Alamos. Ray Monk’s 2012 biography of the scientist quotes a colleague of Oppenheimer’s who was in attendance, describing the scene:

“I’ll never forget his walk; I’ll never forget the way he stepped out of the car … his walk was like High Noon … this kind of strut. He had done it.” At an assembly at Los Alamos on August 6 (the evening of the atomic bombing of Hiroshima), Oppenheimer took to the stage and clasped his hands together “like a prize-winning boxer” while the crowd cheered.

The risks associated with nuclear technology are widely recognized. Despite the glory we may find in scientific advancement, let us treat AI technology with the same kind of caution.

References

J. Hamari, J. Koivisto, and H. Sarsa (2014). “Does Gamification Work? A Literature Review of Empirical Studies on Gamification.” Proceedings of the 47th Hawaii International Conference on System Sciences, pp. 3025–3034.

G. M. Lucas, J. Gratch, A. King, and L.-P. Morency (2014). “It’s only a computer: Virtual humans increase willingness to disclose.” Computers in Human Behavior, 37, 94–100.

J. Gratch, N. Wang, J. Gerten, and E. Fast (2007). “Creating rapport with virtual agents.” Paper presented at the 7th International Conference on Intelligent Virtual Agents, Paris, France.

J. Gratch, S.-H. Kang, and N. Wang (2013). “Using social agents to explore theories of rapport and emotional resonance.” In J. Gratch & S. Marsella (Eds.), Emotion in Nature and Artifact. Oxford University Press.

