
Artificial intelligence involves human emotions. What will we miss in the future world?

  With the acceleration of artificial intelligence technology, many things that once seemed impossible are becoming possible. Artificial intelligence has become deeply embedded in human life, and it has even begun to intervene in human emotions.

  Artificial intelligence is gradually exerting its influence all around us: targeting advertising on social media, screening job seekers, determining ticket prices, controlling central heating systems through voice recognition, creating cultural output, regulating traffic flow, and so on. It has steadily taken on more and more tasks in human life.


  Musk predicted that by the end of 2017, Tesla's driverless cars would be able to travel safely throughout the United States without human intervention, and that within a decade social robots would be able to perform many household and care tasks around humans.


  It is widely believed that by 2050 we will be able to progress beyond these narrow domains and ultimately achieve Artificial General Intelligence (AGI). AGI is central to the concept of the Singularity: the idea that computers will surpass humans on any cognitive task, and that human-computer integration will become extremely common. No one can say what happens after that.


  Do you have an artificial intelligence strategy? Do you want one?


  One interesting idea is to install computer components in the human body to make data processing easier. The "neural mesh" envisioned in the field of artificial intelligence could act as an extra cortex outside the brain, connecting us to electronic devices with high speed and efficiency. It would be a major step beyond today's machine parts, such as the electronic pacemakers and titanium-alloy joints already found in "semi-robot" bodies. Artificial intelligence will also be applied to military and defense uses, where the concept of fully autonomous weapons is highly controversial: such a weapon system could search for, identify, select, and destroy targets based on an algorithm, learning from past security threats without any human involvement. It is a rather frightening concept.


  These visions of artificial intelligence dominating the future of mankind form a sci-fi dystopia reminiscent of the "Terminator" films.


  Accidental discrimination


  There may still be some way to go before humanity can be destroyed, but warnings about the ethics of artificial intelligence are already sounding the alarm. Just last month, machine learning algorithms were criticized for actively recommending bomb components to Amazon users, embodying gender inequality in job advertisements, and spreading hate messages through social media. Much of this error stems from the quality and nature of the data used in machine learning: machines draw imperfect conclusions from imperfect human data. Today, these results raise serious questions about the governance of algorithms and of the artificial intelligence mechanisms embedded in daily human life.


  Recently, a young American man with a history of mental illness was rejected for a job because of his results on an algorithmic personality test. He believes he was unfairly and illegally discriminated against, but because the company does not understand how the algorithm works, and labor law does not currently cover machine decision-making explicitly, he has no legal recourse. China's "social credit" program has raised similar concerns. Last year, the program collected data from social media (including friends' posts) to assess the quality of a person's "citizenship" and used it in decisions such as whether to grant that person a loan.


  The need for artificial intelligence ethics and law


  It is necessary to develop a clear ethical system for the operation and regulation of AI, especially when governments and businesses give priority to certain goals, such as acquiring and maintaining power. The Israeli historian Yuval Harari has discussed the trolley-problem paradox posed by driverless cars, and projects such as MIT's Moral Machine attempt to collect data on human ethical judgments.


  However, ethics is not the only area where artificial intelligence touches human well-being. Artificial intelligence already has a major emotional impact on humans. Nevertheless, emotion remains a neglected subject of artificial intelligence research.


  Browse the 3,452 peer-reviewed articles on artificial intelligence published in the Science academic database over the past two years: only 43 of them, or 1.2%, contain the word "emotion". Even fewer actually describe research on artificial intelligence and emotion. If we take the Singularity seriously, emotion ought to be considered part of the cognitive architecture of artificial machines. Yet 99% of artificial intelligence research does not seem to recognize this.


  Artificial intelligence understands human feelings


  When we talk about emotion in artificial intelligence, we mean several different things. The first is machines recognizing our emotional state and acting accordingly. The field of affective computing is rapidly evolving, using biometric sensors to measure skin responses, brain waves, facial expressions and other emotional data. Most of the time, the inferences are accurate.
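  The pipeline described above, from raw biometric readings to a coarse emotion label, can be sketched in a few lines. This is purely illustrative: the feature names, thresholds, and four-way labeling below are hypothetical assumptions, not any real product's method; deployed affective-computing systems use trained models over continuous sensor streams rather than fixed rules.

```python
# Hypothetical sketch of an affective-computing pipeline: map raw biometric
# readings (heart rate, skin conductance, a facial "smile" score) onto a
# coarse emotional state. All thresholds and feature names are illustrative.

def classify_emotion(heart_rate_bpm: float,
                     skin_conductance_us: float,
                     smile_score: float) -> str:
    """Return a coarse emotion label from three illustrative features."""
    # High physiological arousal: elevated heart rate or sweat response.
    aroused = heart_rate_bpm > 95 or skin_conductance_us > 8.0
    # Positive valence approximated by a smile score in [0, 1] from
    # facial-expression analysis.
    positive = smile_score > 0.5

    if aroused and positive:
        return "excited"
    if aroused and not positive:
        return "stressed"
    if not aroused and positive:
        return "content"
    return "calm"


if __name__ == "__main__":
    print(classify_emotion(110, 9.5, 0.8))  # elevated arousal, smiling
    print(classify_emotion(70, 3.0, 0.1))   # low arousal, neutral face
```

  The arousal/valence split used here mirrors a common simplification in emotion research; a real system would replace the hand-set thresholds with a model fitted to labeled sensor data.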


  Applications of this technology can be used for good or for ill. Companies could gauge your emotional response to a movie and sell to you in real time through your smartphone. Politicians could craft messages that appeal to a specific audience. Social robots could adjust their responses to better help patients in medical or nursing settings, and digital assistants could use a song to help lift your mood. Market forces will drive this area forward, expanding its reach and improving its capabilities.


  How do we view artificial intelligence?


  This is the second emotional dimension of artificial intelligence: the human emotional response to it, which has received far less attention. Humans seem to relate to artificial intelligence the way we relate to most technologies, attributing personality to inanimate objects, crediting appliances with intentions, and projecting emotions onto the technology we use ("It's angry with me, that's why it isn't working").


  This is known as the Media Equation. It involves a kind of doublethink: we understand rationally that machines are not conscious beings, yet we respond to them emotionally as if they had feelings. This may stem from our most basic human needs, interpersonal relationships and emotional connection, without which humans become depressed. This need drives humans to connect with other people and animals, and even with machines. Sensory experience is an important part of this bonding drive and its reward mechanism, and it is also a source of pleasure.


  Simulated socializing


  When our environment offers no connection or sense of belonging, we replicate the experience through television, movies, music, books, video games, and anything else that provides an immersive social world. This is called the Social Surrogacy Hypothesis, a theory supported by empirical evidence from social psychology, and it is beginning to be applied to artificial intelligence.


  There is evidence that humans feel real emotions toward even disembodied artificial intelligence: pleasure at a compliment from a digital assistant, anger at an algorithm that rejects a mortgage application, fear when facing a driverless car, and sadness when Twitter's automated system refuses to verify an account (I am still sad about that one).


  Robot


  Humans respond even more emotionally to physically embodied artificial intelligence, which means we respond to robots more strongly, especially if they resemble human beings. We are easily drawn to anthropomorphic robots, we express positive emotions toward them, and when we see them hurt we feel sympathy and unhappiness, even sadness if they reject us.


  Interestingly, however, if a robot is almost exactly like a human being but not quite a perfect human, our regard for it suddenly drops and we may even reject it. This is the so-called "uncanny valley", and the resulting design principle is to make robots look less human at this stage, unless one day we can make them look exactly like humans.


  Gentle touch


  Artificial intelligence now uses haptic technology, a touch-based experience, to further deepen the emotional bond between humans and robots. Perhaps the most famous example is Paro, a furry robotic seal that has proved useful in nursing facilities in a number of countries.


  Social and emotional robots have many potential uses, including caring for the elderly and helping them live independently, and helping people who are isolated or living with dementia, autism, or disability. Touch-based sensory experience is part of this, and it is increasingly being integrated into virtual reality and other technologies.


  In other areas, artificial intelligence may take over tasks such as daily household chores or teaching. A survey of 750 Korean children between the ages of 5 and 18 found that although most had no problem with courses taught by artificial intelligence robots, many expressed concern about the emotional role of artificial intelligence in the classroom: can a robot offer students advice or emotional support? Even so, more than 40% favored using artificial intelligence robots in place of teachers.


  As the Harvard psychologist Steven Pinker has said, experiences like the social surrogacy described above let us deceive ourselves. We do not really experience socializing; we fool our brains into believing we do, so that we feel better. But the copy is not as good as the real thing.


  Conclusion


  Obviously, people can experience real emotions in interactions with artificial intelligence. But beyond the driverless cars, virtual assistants, robot teachers, cleaners and playmates, will we be missing something close to ourselves?


  This scene recalls Harry Harlow's famous experiments, in which isolated infant monkeys preferred a soft cloth "mother" to a cold wire-mesh one that dispensed milk. Can we achieve everything we want technologically, only to find that the basic emotional needs of human beings go unmet, along with the pleasure of real-world sensory experience? Will the luxuries of the future be the opposite of mass-produced junk food: real sensory experience and contact with real people, not robots?


  The answer is still unknown. However, the fact that 99% of artificial intelligence research does not focus on emotion suggests that if emotion does come to play a greater role in artificial intelligence, it will be either an afterthought or a means by which emotional data gives artificial intelligence devices and their owners more power and money. The digital humanism movement may help us remember that as we move toward the Singularity and human-machine integration, we should not ignore our ancient mammalian brains and their need for emotional bonds. The OpenAI project, which aims to make the benefits of artificial intelligence available to everyone, is a step toward this goal. So let us go further and consider emotional health in the field of artificial intelligence. Who knows what it will bring us?
