AI software can read human emotions: how do you prevent abuse?

Our subconscious reacts to stimuli in fractions of a second. Sometimes this results in visible happiness or anger, but much more often in subtler emotional reactions.

These reactions are barely, if at all, perceptible to the naked eye. High-quality image recognition technology, however, is capable of capturing these so-called ‘micro-expressions’ in our faces. Interpreting them, including with artificially intelligent software, is a rapidly developing specialty.

‘Artificially intelligent emotion analysis is very complex,’ says Gabi Zijderveld. ‘This is partly because the emotions shown are usually not primary, but nuanced, context-sensitive and influenced by cultural norms. For example, a smile does not always mean that someone is happy. Depending on the context, it can also indicate frustration, for example. Such nuances are also culturally determined, such as the polite smile in Japan or the head bobble in India.’

Human Insight AI

Gabi Zijderveld is Chief Marketing Officer of Smart Eye. The originally Swedish company positions itself as an international forerunner in the field of Human Insight AI: artificially intelligent software that can observe, interpret and in some cases even predict human behavior and emotions. A valuable capability with many different applications.

Partly thanks to the efforts of Zijderveld, who was named one of the world’s most influential marketers by the American trade publication CMO Huddles in early November, Smart Eye serves customers such as BMW, Mercedes-Benz, NASA, Boeing and Lockheed Martin.

Gabi Zijderveld. Photo: Affectiva

An art historian who graduated in Utrecht, Zijderveld left for the US at the age of 25, where she ended up in the high-tech industry by chance. Within a few years she rose to become IBM’s marketing leader for Linux applications (a multi-billion-dollar market for IBM), before moving to Affectiva in 2014.

This 76-person spin-off of a research project at the renowned Massachusetts Institute of Technology was itself a forerunner in Human Insight AI before being acquired by Smart Eye in May 2021. With its smart image recognition technology, Affectiva focused mainly on corporate marketers.

Billions of personal video clips

‘Marketers have been surveying test panels for decades about the impact of their marketing efforts,’ Zijderveld explains. ‘However, consumers often do not say exactly what they think, partly because of the tendency to give socially desirable answers. With our software, we can literally read the reactions from the faces of hired panelists. And because we also know exactly where and which video content they are watching, our algorithms are constantly learning.’

Affectiva does this for advertising giant WPP, Coca-Cola and Nike, among others. The software works via an ordinary webcam, which allowed Affectiva to quickly collect ever more data to train its algorithms: by May 2021, its database already contained more than 5 billion video fragments from more than 11 million test subjects in 90 countries.
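The pipeline Zijderveld describes, per-viewer emotion scores aligned with the exact moment in the video being watched, can be illustrated with a toy aggregation. Everything here is hypothetical: the function name, the emotion labels and the scene boundaries are invented for illustration and do not reflect Affectiva’s actual software.

```python
from collections import defaultdict

def aggregate_reactions(samples, scene_bounds):
    """Average per-emotion scores for each scene of a test video.

    samples: list of (timestamp_s, emotion, score) tuples from panelists.
    scene_bounds: list of (scene_name, start_s, end_s) tuples.
    Returns {scene_name: {emotion: mean_score}}.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for t, emotion, score in samples:
        for name, start, end in scene_bounds:
            if start <= t < end:
                buckets[name][emotion].append(score)
                break
    return {
        name: {emo: sum(v) / len(v) for emo, v in emotions.items()}
        for name, emotions in buckets.items()
    }

# Hypothetical per-frame scores from two moments in a test ad:
samples = [
    (0.5, "smile", 0.2), (1.0, "smile", 0.4),        # during the intro
    (5.2, "smile", 0.9), (5.8, "brow_furrow", 0.1),  # during the punchline
]
scenes = [("intro", 0, 3), ("punchline", 3, 8)]
report = aggregate_reactions(samples, scenes)
```

A marketer could then compare, say, the average smile score of the punchline against the intro to see whether the joke landed.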

That same month, Smart Eye acquired the startup for $73.5 million. Zijderveld, who is ultimately responsible for marketing and product strategy, stayed on under the new owner.

We monitor pilots during their training: are they looking at the right information at the right time?

‘In total we now work for fifteen major car manufacturers and suppliers,’ she says. ‘Our Driver Monitoring System, for example, has already been built into more than a million cars worldwide. The system warns drivers when risky situations arise: for example, when the camera built into the cabin detects signals indicating that the driver is unusually tired, has drunk too much, or is distracted by a phone or by children in the car.’
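The alerting logic of such a system can be sketched as a simple rule layer on top of camera-derived signals. This is a minimal illustration, not Smart Eye’s actual implementation: the signal names and thresholds below are invented, though PERCLOS (the fraction of time the eyes are closed) is a widely used drowsiness proxy in the research literature.

```python
def driver_alert(perclos, gaze_off_road_s, phone_detected):
    """Return an alert level from hypothetical camera-derived signals.

    perclos: fraction of time (0..1) the eyes were closed in the last
             minute, a common proxy for drowsiness.
    gaze_off_road_s: seconds the gaze has continuously been off the road.
    phone_detected: whether a phone is visible near the driver's face.
    """
    if perclos > 0.4 or gaze_off_road_s > 4.0:
        return "critical"   # e.g. loud warning, prime the safety systems
    if perclos > 0.15 or gaze_off_road_s > 2.0 or phone_detected:
        return "warning"    # e.g. chime and dashboard message
    return "ok"

driver_alert(0.05, 0.5, False)   # attentive driver, no alert
driver_alert(0.20, 0.0, False)   # drooping eyelids trigger a warning
```

In a real system these thresholds would be tuned per signal and combined with temporal smoothing, but the principle of escalating alerts is the same.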

With a related application, Smart Eye helps car manufacturers, but also aircraft manufacturer Lockheed Martin and the American space agency NASA, with the optimal design of cockpits, dashboards and other interfaces.

Zijderveld: ‘In the cockpit of an aircraft or spacecraft you are presented with a lot of complex information. By monitoring the movements of the eyes, head and body of test subjects, we help designers create efficient, user-friendly interfaces. And by extension, we also monitor pilots during their training: are they following the right procedures and looking at the right information at the right time?’

Potentially huge impact

In this way, Smart Eye creates ‘deepened insight into human behavior and possible motives’. That can be valuable, but also controversial. As recently as October, the UK Information Commissioner’s Office warned companies to refrain from using this technology to analyze, for example, job applicants or potential fraudsters.

‘We see a strong increase in the number of companies that are experimenting with this’, says deputy commissioner Stephen Bonner. ‘However, the technology used is still in full development and often not sufficiently reliable, while the results can have a huge impact on those involved.’

Bonner isn’t the only one concerned. The American behavioral scientist BJ Fogg has been warning for years about the way in which American tech companies in particular try to map our emotional and psychological blueprint.

According to Fogg, founder of Stanford University’s Persuasive Technology Lab, they are trying to use this knowledge, among other things, to influence human behavior through digital stimuli. Meta, for example, is said to be working on so-called persuasion profiles, which can record users’ emotional reactions to various types of advertisements and sales pitches.

Meta likes to play with emotions

Meta likes to play with the emotions of its users. As far back as 2014, information was leaked about secret tests in which Facebook tried to influence the mood of hundreds of thousands of users with subtle changes to their news feed. A new leak in 2017 revealed that Facebook allowed brands to target their ads specifically at insecure teens with low self-esteem. And the controversial Cambridge Analytica used Facebook data to identify American voters who could be manipulated with fake news into voting for presidential candidate Donald Trump.

Emotional metaverse

At the beginning of this year, Meta filed several hundred patent applications. They concern potential applications of the headsets that provide access to the metaverse, in which the social network is currently investing billions. Meta also owns VR headset maker Meta Quest (formerly Oculus).

Some of the patent applications describe technology that helps your avatar move fluidly in the digital space and interact with other avatars. For example, there is a sensor that accurately tracks your eye movements and facial expressions. This way your avatar can smile or wave when you do too.

In another patent, Meta is preparing for commercial applications. By analyzing the user’s eye movements and facial expressions with artificially intelligent software, the company can also monitor whether, and for how long, you look at certain ads in the metaverse.
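Measuring whether, and for how long, a user looks at an ad boils down to accumulating gaze samples that fall inside the ad’s screen region. The sketch below is a hypothetical illustration of that idea, not the technology described in Meta’s patent; the region format and sampling interval are assumptions.

```python
def ad_dwell_times(gaze_samples, ad_regions, dt=0.1):
    """Accumulate how long the gaze rests inside each ad's screen region.

    gaze_samples: list of (x, y) gaze points, sampled every `dt` seconds.
    ad_regions: {ad_id: (x_min, y_min, x_max, y_max)} rectangles.
    Returns {ad_id: seconds spent looking at that ad}.
    """
    dwell = {ad_id: 0.0 for ad_id in ad_regions}
    for x, y in gaze_samples:
        for ad_id, (x0, y0, x1, y1) in ad_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[ad_id] += dt
    return dwell

# Three gaze samples at 10 Hz: two on a banner, one elsewhere.
dwell = ad_dwell_times(
    [(5, 5), (6, 4), (50, 50)],
    {"banner": (0, 0, 10, 10)},
)
```

Combined with the emotion analysis described above, such dwell times would let an advertiser know not only that you looked at an ad, but how you felt while doing so.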

We only collect data from persons who have given explicit permission in advance

In fact, the planned software should eventually even be able to determine the user’s emotional response to advertisements or other content. Meta could then tailor the advertisements (or other content) it displays to the persuasion profile of the individual user.

Like behavioral scientist BJ Fogg and the UK Information Commissioner’s Office, Smart Eye’s Gabi Zijderveld fears the impact of unethical use of these new technological possibilities.

‘We only collect data ourselves from people who have given explicit permission in advance,’ she says. ‘This data is super confidential and cannot be traced back to specific persons. We also work with a strict code of ethics to prevent misuse of our technology by third parties. For example, we have refused customers many times on that basis.’

Prevent abuse

However, as with any powerful technology, abuse of Emotion AI is difficult to prevent, says Zijderveld. She therefore thinks it is ‘naive’ to make the prevention of abuse dependent on the goodwill of companies.

‘That has to be done through legislation,’ she says. ‘The problem, of course, is that legislators are traditionally far behind, especially when it comes to innovative digital developments like these. You can see, for example, that legislators are still struggling with the excesses around the “personalized” advertisements that have been earning large tech companies such as Meta, Google and Amazon billions for years.’

The CMO of market leader Smart Eye expects that analyzing biometric data to encourage people to consciously or unconsciously change behavior will develop enormously in the coming years. ‘Both in terms of technological progress and application possibilities’, she specifies. ‘For example, I expect a huge increase in the number of artificially intelligent applications that can make our daily lives easier, safer and healthier, such as our Driver Monitoring System.’

These applications go hand in hand with the ever-growing number of wearables. Consider, for example, the new generation of AR glasses, which will probably also take over many applications from our phones.

Zijderveld: ‘To protect our privacy, it is therefore crucial that companies such as Smart Eye help explain all the new possibilities and dangers to governments. We have also enshrined this “guardian” role in our code of ethics. Of course, governments must be open to this. Despite our being the market leader, for example, we have heard nothing at all from the UK Information Commissioner’s Office. So we have now contacted them ourselves.’
