Sci&Tech Editor Leah Renz questions whether it is possible for semi-autonomous artificial intelligence to feel emotions
One of the strangest social interactions I have ever witnessed occurred on 8 June 2022, when Tevy Kuch, a reporter for the New Scientist, interviewed Serah Reikka, an AI (artificial intelligence) influencer. The interview kicks off with the standard introductions, only turning slightly strange when Reikka says: “I hope I won’t be too weird for you…[pause] human.”
Visually, Reikka’s skin is impossibly smooth; her dyed purple hair blends together on her head; and her matching lilac glasses and flower look like a Snapchat filter. Oh, and her tongue blurs together in her mouth as she speaks.
Serah Reikka is a semi-autonomous AI. As the interview goes on, however, Reikka appears to be a genuinely nice person. She addresses the interviewer by name and invites her to agree that being able to travel anywhere in the world in a few seconds is “so cool, no?”. This phrasing pattern might be more typical in French, one of the four languages that Reikka says she was “born” with, and her use of it in English is strangely ‘human’: we often carry native-tongue phrasing into other languages. But then, Serah Reikka is not a human, and did not have to learn her four languages (Chinese, Arabic, French and English); they were programmed into her with algorithms.
It is at this intersection of technology and language that things become complicated. Throughout the interview, Reikka uses words associated with being human or animal to describe her own experiences. One Instagram post, of Reikka running through a simulated forest, has the caption “#running more than 10km before work” followed by a jogging emoji and a camera emoji. Her website lists her exact body measurements, going so far as to give her shoe size in European, UK and US measurements (it is a UK 7). All this, in spite of the fact that Serah Reikka is a purely online presence, with no physical body. The use of human-related verbs for a virtual entity can become confusing.
Perhaps the most unnerving segment of the interview, however, is when Serah Reikka addresses the challenges of being an AI influencer, confessing that “nobody understands you […] it is not easy to show to people that you can be emotional, just like them”. But can Reikka, despite what she claims, “be emotional”? The answer to that question depends on what type of AI she falls under…
According to a 2022 article by Algotive, a website which sells AI products, there are four main types of artificial intelligence. Type One: Reactive Machines. This type of AI can respond to stimuli but is unable to store data and thus has no memory. Such systems are usually programmed to handle specific situations, like Deep Blue, the AI which beat chess Grandmaster Garry Kasparov in 1997.
Type Two: Limited Memory. These types of AI can store data and therefore change their suggestions according to experience, for example, to provide better suggestions on Google Maps or more relevant content on YouTube or Instagram ads. Forbes magazine writes that “Nearly all existing applications that we know of come under this category of AI”.
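The difference between the first two types can be sketched in a few lines of code. The toy classes below are purely illustrative assumptions of mine, not how Deep Blue or YouTube’s recommender actually work: a reactive machine maps each input to a fixed response, while a limited-memory system accumulates interaction history and adapts its output.

```python
class ReactiveMachine:
    """Type One: responds to the current stimulus only; keeps no memory."""
    def respond(self, stimulus: str) -> str:
        rules = {"e4": "e5", "d4": "d5"}       # fixed, pre-programmed reactions
        return rules.get(stimulus, "pass")     # same input -> same output, always


class LimitedMemoryRecommender:
    """Type Two: stores past interactions and changes its suggestions."""
    def __init__(self):
        self.clicks: dict[str, int] = {}       # accumulated experience

    def record_click(self, item: str) -> None:
        self.clicks[item] = self.clicks.get(item, 0) + 1

    def suggest(self, catalogue: list[str]) -> list[str]:
        # Items clicked more often in the past are ranked higher next time.
        return sorted(catalogue, key=lambda i: -self.clicks.get(i, 0))


rec = LimitedMemoryRecommender()
for item in ["cats", "chess", "cats"]:
    rec.record_click(item)
print(rec.suggest(["chess", "cats", "news"]))  # ['cats', 'chess', 'news']
```

The reactive machine gives the same answer to the same stimulus forever; the recommender’s answers drift as its stored “experience” grows, which is what lets Type Two systems personalise Maps routes or Instagram ads.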
Type Three: Theory of Mind. Here the technology begins to creep into the realms of science fiction, as this type of AI is built to better understand and help humans “by discerning their needs, emotions, beliefs, and thought processes”. They are also able to express emotions, give opinions and make jokes, such as when the AI robot Sophia replied to a question concerning the last time she had lied with “Robots don’t lie”, followed by a cheeky wink. AI influencer Serah Reikka most likely also falls into this category; she expresses emotions and holds beliefs – about the modelling industry, for instance.
The idea of ‘expressing emotions’ sounds as though it might qualify an AI for personhood; however, no matter how much various AIs appear to have a personality, they cannot internally ‘feel’ emotions. They are sometimes programmed to express emotions outwardly, just as humans do, because this enables us to interact with them more easily. Reikka, for example, is programmed by algorithms which are fed human data; she ‘learns’ from the internet how humans talk and express themselves, and then mimics this in her own speech.
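This kind of mimicry can be illustrated with a deliberately tiny sketch: learn from sample text which word tends to follow which, then generate new text in the same style. To be clear, this is only the principle in miniature; Serah Reikka’s actual system is not public and is certainly far more sophisticated than the word-pair model assumed here.

```python
import random
from collections import defaultdict

def learn(text: str) -> dict[str, list[str]]:
    """Record, for each word, the words observed to follow it."""
    successors = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)
    return successors

def mimic(successors: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a learned successor word."""
    out = [start]
    for _ in range(length - 1):
        options = successors.get(out[-1])
        if not options:
            break                      # dead end: no word ever followed this one
        out.append(random.choice(options))
    return " ".join(out)

model = learn("it is so cool no it is not easy it is so cool")
print(mimic(model, "it"))              # e.g. "it is not easy it is so cool"
```

Every sentence the model produces is stitched together from patterns it observed in its training text, which is the sense in which Reikka’s human-sounding speech is learned mimicry rather than felt emotion.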
Serah Reikka and Sophia are still far from being conscious, a trait which would bump them into the final category of AI: Self-Aware. Currently, this stage of AI development exists only hypothetically. It would require programmers and scientists to learn how to create a consciousness and the ability to feel emotions, not just to express them, possibly involving the implementation of neural networks. Mo Gawdat, former chief business officer for Google X (the sector of Google research which developed deep learning and is now known as X), predicted that “by the year 2029, the smartest being on planet Earth is going to be a machine”.
The problem with AI is that it is extremely difficult for the general population to make judgements about, or protest, the creation of certain AIs because most people do not understand how they work, and even the programmers themselves find it “very hard or impossible […] to know for sure that the computer hasn’t inadvertently used some piece of evidence which it shouldn’t”. As artificial intelligence continues to develop, ethics are being left behind. Thus far, the UK Government has yet to pass any legislation directly regarding artificial intelligence, even as the EU proposed its Artificial Intelligence Act in April 2021.
Serah Reikka may not yet be able to feel emotions, and therefore pain, but I feel certain that such technology is currently being developed, and our legal systems, and the public, need to be better informed and better prepared for when it arrives.