
Challenges in AI: Interpreting Human Emotion

25 days ago by Andrea Amato

Can technology mimic human behaviour that we ourselves haven’t yet fully grasped?

As a result of the pandemic, we’ve grown more dependent on technology to carry out our daily tasks. With a workforce largely working from home, we rely on software to stand in for practices that used to take place in the office, such as video conferencing instead of in-person meetings. Whilst a broad range of technologies has been advancing rapidly to keep pace with our present needs, reshaping IT jobs in Malta and elsewhere, one particular umbrella of technology has become a focus of interest for many industry professionals: artificial intelligence (AI).

AI encompasses numerous capabilities and works as an umbrella term for several technologies, be it machine learning, deep learning, natural language processing (NLP), and so forth. A less discussed but increasingly familiar AI technology is emotion-recognition software. Previously, such software was created to support learning environments for children in schools. Now, it is being used to monitor remote workers as well as children learning from home.

The software works by mapping facial features to detect an individual’s emotions. In psychology, emotions have popularly been divided into six distinguishable and common states: happiness, sadness, anger, disgust, surprise, and fear. Marketed as a means of surveillance for remote jobs and workers, emotion-recognition software is attracting growing investment and, like many AI ventures, is expected to be worth multiple billions within the next few decades.
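To make that mapping concrete, here is a minimal, purely illustrative Python sketch of the kind of decision such software performs: normalised facial-geometry features in, one of the six categories out. Every feature name, rule, and weight below is invented for this example and is not drawn from any real product; commercial systems use learned models over automatically detected facial landmarks.

```python
# Purely illustrative: a toy mapping from hand-crafted facial-geometry
# features to Ekman's six basic emotion categories. The feature names,
# rules, and weights are invented for this sketch.

EKMAN_EMOTIONS = ("happiness", "sadness", "anger", "disgust", "surprise", "fear")

def classify_emotion(features):
    """Return the highest-scoring of the six categories.

    `features` maps hypothetical measurements (normalised to [0, 1])
    such as mouth_curvature, brow_raise, eye_openness, nose_wrinkle.
    """
    scores = {
        "happiness": features["mouth_curvature"],
        "sadness":   1.0 - features["mouth_curvature"],
        "anger":     1.0 - features["brow_raise"],
        "disgust":   features["nose_wrinkle"],
        "surprise":  features["brow_raise"] * features["eye_openness"],
        "fear":      features["eye_openness"] * (1.0 - features["mouth_curvature"]),
    }
    assert set(scores) == set(EKMAN_EMOTIONS)
    return max(scores, key=scores.get)

# Wide eyes plus raised brows lean towards "surprise" in this toy model.
print(classify_emotion({
    "mouth_curvature": 0.4,
    "brow_raise": 0.9,
    "eye_openness": 0.95,
    "nose_wrinkle": 0.1,
}))
```

Even this toy version makes the core assumption visible: the software never observes a feeling, only surface geometry that it maps onto a fixed set of labels.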


What do researchers think of AI detecting emotions?

Unsurprisingly, there is widespread disagreement about emotion-recognition software within scientific communities. This is largely due to the poor evidence supporting the claims such software makes to accurately interpret human emotions. According to a paper by Feldman Barrett et al. (2019), it is inaccurate to claim that emotions can be readily inferred from facial movements (such as a smile indicating happiness). In truth, human emotions are complex and nuanced, and differ across cultures. There is still a lot more to learn and explore in this space before we can apply the software practically.

It is also increasingly noted that AI software advertising ‘promising results’ in interpreting and determining emotions is creating a commercial field that concerns tech ethicists. Software whose vendors claim emotion detection can produce scores for employability, employee satisfaction, and so forth rests on a shaky scientific basis. Any software findings should be made accessible and available for employers and researchers to assess. Researchers are also calling for policy regulation to ensure such technology is used properly.

Another growing practical concern revolves around discrimination, a challenge common to recognition technologies: when a person’s inferred emotional state is used to inform important decisions, it can unfairly damage their prospects in careers, schooling, and other applications. These concerns further inform regulation practices and safety procedures for using the technology. As researchers apply rigorous scientific methods to reach their findings, regulation not only upholds human ethics but also protects their work from commercial misuse.


A case for emotion-recognition software regulation

Is the call to regulate emotion-recognition software substantiated? Absolutely, because we are already presented with governmental regulations that make scientific research accountable and accessible. In medicine, for example, treatments undergo strict clinical trials before further verification by governmental departments. In IT jobs, we seldom see the same approach. If comparable regulations were applied to technological advancements used to inform important decisions, we could reap the same benefits that scientific rigour provides.

AI consistently evolves to mimic and interpret human behaviour, often based on current scientific and psychological findings. This means AI can make bold claims about reproducing what we know to be true behaviourally in a technological domain. However, whilst we’re familiar with a lot of human behaviour, there is still plenty we do not know. Human behaviour, including emotion, is complex and nuanced, and at times tricky to generalise across cultural influences. This, however, is a limitation AI researchers are already wary of.

Contrary to commercial claims surrounding emotion AI, tech researchers are careful to delineate what emotion-recognition software can and cannot do. In particular, the software cannot interpret what people are feeling internally. Its estimations are based on what we believe outward expressions signify and on the prevailing literature. Researchers also attempt to combat this limitation by analysing other forms of communication, such as body posture and other nonverbal cues, adding a more holistic dimension to their findings. In this way, any commercial claims that rely strictly on observed emotions to reach absolute conclusions are immediately discredited.
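To illustrate that holistic approach, the sketch below shows one common pattern, late fusion: per-emotion scores from a hypothetical facial model and a hypothetical posture model are blended by a weighted average. All scores, weights, and the fusion rule are invented for demonstration, not taken from any real system.

```python
# Purely illustrative late-fusion sketch of the multimodal approach
# described above: blend per-emotion scores from two hypothetical
# models (face and posture) with a weighted average.

EKMAN_EMOTIONS = ("happiness", "sadness", "anger", "disgust", "surprise", "fear")

def fuse_modalities(face_scores, posture_scores, face_weight=0.6):
    """Weighted average of two per-emotion score dictionaries."""
    return {
        emotion: face_weight * face_scores[emotion]
                 + (1.0 - face_weight) * posture_scores[emotion]
        for emotion in EKMAN_EMOTIONS
    }

face = {"happiness": 0.70, "sadness": 0.05, "anger": 0.05,
        "disgust": 0.05, "surprise": 0.10, "fear": 0.05}
posture = {"happiness": 0.20, "sadness": 0.40, "anger": 0.10,
           "disgust": 0.10, "surprise": 0.10, "fear": 0.10}

fused = fuse_modalities(face, posture)
print(max(fused, key=fused.get))  # the modalities disagree; fusion arbitrates
```

The design point is that no single signal is trusted on its own: when the face and the posture models disagree, the fused estimate reflects both, which is exactly the hedge researchers apply against reading too much into facial movements alone.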


Why are human emotions complex?

Based on the above, we know that AI hasn’t yet cracked human emotions, largely because it’s difficult for technology to mimic a behaviour even humans haven’t fully explored. The universal-emotion claim featuring the six distinct but interpretable feeling states was originally put forward by psychologist Paul Ekman in the 1960s. In his research, Ekman concluded that these six states transfer across cultures, despite backlash from sociologists who argued that cultural factors interlink with other social factors, making the claim less reliable.

Nevertheless, the neat, distinguishable categories presented by Ekman were adopted by organisations wanting to put a technological spin on emotion. Whilst these implementations later exhibited many biases, including racial discrimination, we still find that emotion-recognition software relies on these six ‘universal’ emotions. The disadvantages we observe today with emotion-recognition software are fourfold:

  • Software with poor supporting evidence is being used to determine people’s opportunities,

  • Job applicants are judged unfairly on their emotions and their perceived fit with existing organisations,

  • Students are being compared to one another, separating those who appear happy and motivated from those who appear angry and disengaged, and

  • Facial-recognition software has already been shown to interpret Black faces as portraying more negative emotions than white faces.

The above summarises the scope of this article: to present the current challenges facing emotion AI and the IT jobs related to this space. Such software should be further regulated to prevent negative consequences and unfair treatment of individuals across its diverse applications. The software should be fair and shouldn’t segregate groups of people further, which leaves considerable room for growth in emotion AI research before it can be used in the present world of work. More research should be done before applying such technologies to in-office and remote IT jobs in Malta and globally.