Genuine People Personalities
2020
  • Cellophane Magazine Issue 3: Incognito
An article I wrote for Cellophane Magazine Issue 3: Incognito in 2020 about personal robots & AI.

GENUINE PEOPLE PERSONALITIES
Or Why AI Needs Bias for Personality

In ‘The Hitchhiker’s Guide to the Galaxy’ by Douglas Adams, the protagonists meet a prototype robot, Marvin, built with a ‘Genuine People Personality’: “Ghastly,” continued Marvin, “it all is. Absolutely ghastly. Just don't even talk about it. Look at this door,” he said, stepping through it. The irony circuits cut into his voice modulator as he mimicked the style of the sales brochure. “All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.” [...] “Come on,” he droned, “I've been ordered to take you down to the bridge. Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? 'Cos I don’t.”

Personal robots are an old science fiction dream. There they are portrayed as almost indistinguishable from humans, so real that we can fall in love with them. We imagine them to be like us, to have a personality. Today’s personal robots are smart assistants, like Siri or Alexa, whose personalities feel like stereotypical female background characters in a sitcom. Siri shows a “sense of helpfulness and camaraderie; spunky without being sharp; happy without being cartoonish”, as an article from Wired describes it. Their personalities are programmed; every sentence and joke is carefully thought through. Comedians and psychologists are employed by tech companies to come up with clever, witty responses that always stay within company policy. Nobody wants Alexa to make a dick joke. Especially not about their own dick.

Siri and Alexa are our 21st-century butlers: ready to do our shopping, read us the news, keep our homes perfectly temperature-controlled and help our children with their homework with endless patience. These AI-enabled smart assistants have carved their way into our lives to the point where even the most intrinsically human tasks, like reading a goodnight story to one's children, have turned into tasks we willingly hand over to a machine. Toddlers interact with smart speakers the way they would with their plush animals, sometimes seeing these devices as part of their families, but these are one-sided relationships. As personal robots become an integral part of our lives, they bring with them a whole host of issues. They are starting to change us and the world we live in. Studies show that children who grow up with smart assistants behave differently: while they enunciate more clearly, they are less likely to speak in long sentences, have less tolerance for not getting instant replies and have a harder time interpreting facial expressions and emotional responses. Chatbots come closer to passing the Turing Test, meaning passing as human, not only because they are getting better but also because humans are starting to communicate more like machines: as predictive text becomes more widely used, our writing styles grow more uniform.

Personal robots are not the neutral, humbly serving assistants we would like them to be, but corporate envoys highly influenced by whoever produces them, from the company that instils them with the ethics and personality it deems fit, to the programmers who decide which responses these assistants give. The reply to “Hey Siri, you’re a bitch” used to be “I’d blush if I could” before it was changed to “I don’t know how to respond to that”, which is only marginally better. “Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves,” a UN report from 2019 finds. Even though these issues are getting more attention, these systems are still far from unbiased. According to company reports, women hold less than 25% of tech jobs at Google, Apple, Microsoft and Facebook, and over 50% of employees are white, which means the programmers who code these smart assistants often overlook biases because of their own narrow experience as white, well-educated, able-bodied men.

To understand what role biases play for smart assistants, and how they will affect them in the future, we need to talk about data and artificial intelligence. Siri, Alexa and Google Home are systems that use ‘Narrow AI’: they rely on machine learning, for example speech recognition, to create the semblance of a smart, responsive assistant that can interact with its users. Artificial intelligence is something extremely human. We like to believe there is a magical process by which intelligence is born out of code, but behind it is nothing but human labour. AI relies on data sets: data that is generated, collected and labelled by humans. We turn from users into collaborators as we participate in these processes of filtering information, whether we are monitored by algorithms, solve captchas or are employed as Mechanical Turks. Mechanical Turks are ‘crowd workers’ who perform on-demand tasks that computers are currently unable to do, for example labelling video footage or transcribing the sentences that voice recognition fails to understand. Jeff Bezos described it as ‘artificial artificial intelligence.’
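To make that concrete: here is a minimal sketch, in Python with the scikit-learn library, of the kind of ‘Narrow AI’ sitting behind a smart assistant. It maps a command, already transcribed to text, onto an intent; the handful of commands and labels is invented for illustration, a stand-in for the thousands of examples that real systems need human annotators to produce.

```python
# A minimal sketch (assuming scikit-learn) of the 'Narrow AI' behind a
# smart assistant: a classifier mapping a transcribed voice command onto
# one of a few intents. All training examples are invented for
# illustration -- in production, such labels come from human annotators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-generated, human-labelled data: the invisible labour behind AI.
commands = [
    "turn on the lights", "switch the lamp off",
    "what's the weather like", "will it rain tomorrow",
    "play some jazz", "put on my workout playlist",
]
intents = ["lights", "lights", "weather", "weather", "music", "music"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(commands, intents)

print(model.predict(["could you dim the lights"]))  # likely -> ['lights']
```

There is no magic here: the system only ever reflects what humans typed in and how humans labelled it.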

To train AI, data sets are used that contain everything from weather, political and health data to our most personal information, from shopping preferences to dating app analytics. But there are forces at play which we are not aware of: well-hidden biases deeply ingrained into society. Racist, sexist or homophobic biases are easily multiplied and applied to areas where we would never expect them as these biased data sets are fed into AI systems. Machines and computer programs don’t have an agenda of their own, but the people who program them do. Society’s biases are mirrored in code. Algorithms and AI label people as more likely to commit crimes or as unfit for jobs based on biases in data sets that discriminate against women or people of colour. Biased data is not a new problem, but as biased data sets are fed into AI systems the biases acquire a life of their own and are repeated to millions of people. Unbiased data sets will be the philosopher’s stone of the 21st century.
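How a biased data set acquires a life of its own can be shown in a few lines. In the following sketch the ‘historical hiring decisions’ are invented for illustration: equally qualified applicants were hired or rejected purely by group. A simple classifier trained on them dutifully learns the discrimination and repeats it for every future applicant.

```python
# A minimal sketch of bias propagation: the training data below is
# invented, a stand-in for decades of discriminatory decisions
# encoded as 'neutral' historical records.
from sklearn.tree import DecisionTreeClassifier

# Features: [years_of_experience, group]  (group: 0 = A, 1 = B)
X = [
    [2, 0], [5, 0], [8, 0], [3, 0],  # group A: all hired
    [2, 1], [5, 1], [8, 1], [3, 1],  # group B, equally qualified: all rejected
]
y = [1, 1, 1, 1, 0, 0, 0, 0]         # 1 = hired, 0 = rejected

model = DecisionTreeClassifier().fit(X, y)

# Two new applicants with identical experience, differing only in group:
print(model.predict([[6, 0], [6, 1]]))  # -> [1 0]: the bias is now automated
```

Nobody wrote a rule saying “reject group B”; the model inferred it, which is exactly why such biases surface in places we would never expect them.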

If the dream of the personal robot from science fiction, like Iron Man’s Jarvis or Samantha from the film Her, is ever to come true, if we want our personal robots to have ‘Genuine People Personalities’, to make jokes that surprise us and to hold in-depth conversations that don’t feel flat, repetitive or boring, then we will need data sets that are far more intimate and personal. Personality can never exist without bias. To create these personal robots, they will need to be trained on biased data: the most intimate, private questions rarely have quantifiable answers. We expect everyone to be able to tell a lamppost from a traffic light, but we do not assume that everyone will give a good answer on what healthy relationship advice is or on how we want our children to be taught. These personal robots might have ‘General AI’, meaning human-level intelligence, able to fulfil any task a human could, something that is not possible yet. They might use ‘unsupervised learning’, meaning AI that learns from unclassified, uncategorised data, which means they will become a reflection of human biases.

How will this biased data be collected? Will humans solve captchas that ask how love or an orgasm feels? Will Mechanical Turks try to come up with answers to all the “Why?” questions toddlers could possibly ask, in the most child-friendly way possible? Will bots analyse all the movie scenes in which someone is consoled after losing a loved one to find the perfect phrase to cheer someone up? Will we want to choose which biases our personal robots have? Will we want them to be our own, or will we choose different ones so we can fight and debate with them?

These personal robots are still a thing of science fiction, but we are surrounded by their predecessors already. When the first Amazon Echo advertisement was released, it felt eerily like an ever-listening surveillance system right in our bedrooms, a computer to do our bidding produced by the richest man alive. A few years into the present, we have accepted them as part of our reality. The decisions about how our robot companions of the future will look and work are being made today. Nowhere is technology closer to us, to our private lives, than in the form of smart assistants. The influence they have on our lives, and how they will shape the children who grow up with them, is something we are just beginning to understand. In science fiction the protagonists often fall in love with personal robots, become friends with them. We are in the process of designing these future personal robots, but while we’re at it we shouldn’t only make sure that they have better replies to sexist insults; we should also make a conscious decision about which biases their personality is built on.
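As a footnote to the point about ‘unsupervised learning’ becoming a reflection of human biases: word embeddings, which are learned from raw, uncategorised text with no labels at all, are a well-documented example. A minimal sketch, assuming the gensim library and its downloadable pretrained GloVe vectors:

```python
# A minimal sketch, assuming gensim and its pretrained GloVe vectors
# (a large one-time download), of unsupervised learning absorbing bias:
# these embeddings were learned from raw text, with no labels at all.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-100")

# The classic analogy probe: doctor - man + woman = ?
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=3))
# The reported top answers typically include 'nurse' -- a gendered job
# stereotype no programmer wrote down, mined straight from text.
```

No one supervised this; the stereotype came along for free with the data.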