Sarah Lang is creeped out by things that mimic being human. Are we living in a ‘post-real’ world?
Scarlett Johansson recently revealed that she’d turned down the offer to be the voice of a new virtual assistant on OpenAI’s updated version of ChatGPT. It’s no surprise they asked her: she’d voiced a fictional, then-futuristic AI voice assistant in the 2013 film Her. But when OpenAI released five voices the other week, the one called ‘Sky’ sounded so similar to Johansson’s that her lawyers got it pulled while the matter is looked into.
This reminded me how I feel when I use Slack, which as you might know is a back-and-forth communication platform for workplaces, particularly useful for my remote work for Capsule.
When I opened Slack a few weeks ago, up came the words: ‘Welcome back! You look nice today’. In that first second, my body physically jerked. In that second second, I felt like I was being watched. In the third second, I felt creeped out. In the fourth second, I felt highly annoyed. Slack, I know you can’t see me – and if you could, that would be a problem – so please don’t comment on my appearance.
Oh dear. I just realised I’m addressing Slack as though it’s a human. I’ll reorient myself now: I’m a human, it’s a software programme.
That said, why is a WORK platform commenting on my LOOKS? What California chauvinist coder decided this is an appropriate greeting for a woman who is preparing to communicate with her colleagues? And actually, rather than enabling my work, it brought my brain to another place. Do I in fact look nice today, I wondered? Did I put make-up on? Hmmm, I do feel quite bloated…
Come on, now. Women already feel enough pressure to look ‘nice’. Facial recognition unlocks my computer, but that’s where I’d like to leave it. Whatever happened to ‘have a good day’, ‘hi’, or no message at all? I want impersonal, useful services, not fake chumminess.
The message ‘you look nice today’ still comes up every time I open Slack. I wasn’t going to waste time and energy in an attempt to give feedback that would almost definitely change nothing. But a quick search showed me an actual email address as opposed to a bot: feedback@slack.com. So I’ve sent them an email.
Something else that annoys me is when Facebook says, on my feed: ‘We care about you Sarah, so we thought you might like to look back at these memories’. It’s the ‘care about you’ words that bother me. It’s idiotic for Facebook to pretend it personally cares about the individuals who use it. Like, you know that we know that you don’t care! (And perhaps we don’t want to see certain photos right now, particularly if someone has passed away between then and now.)
When A Brand Tries To Be A Person
Eloise Pengelly, who is doing a dissertation about parasocial relationships for her Master of Strategic Communication at University of Canterbury, has some thoughts on the topic. “There’s some interesting research about how brands personify themselves, resulting in consumers anthropomorphising the brand – as in, the audience/consumers perceive the brand or company to be like a person. This is used often in spaces like social media, where some [brands and companies] use images and personalised language to make customers feel more like they’re being engaged with in an interpersonal way.”
Context is important, Eloise says. “While brands and organisations that use communication tools, such as social media or AI, may be personifying themselves and engaging in ways that are or appear to be interpersonal, it’s important to keep in mind intent. They may not be purposefully trying to make audiences and consumers think that they’re a human; however, this can be the effect that communication style has on the public.” As in, different humans interpret things differently.
Hi, AI!
The Slack and Facebook ‘greetings’ I’ve mentioned were written by coders, not by AI. But as AI continues its ascension, we’ll get more of this pretend human-ness. We may give up on trying to tell the difference.
But wait! Someone from Slack has got back to me!
Thanks for your candid feedback. I’m Ai from Slack. Although our intention was to cheer any user’s day and put a smile to their face, I hear you. I’ll share your thoughts with our product team around how it impacted you when you saw the message.
Best,
Ai [Meaning “Love” in Japanese] | Senior Customer Experience Agent |
Is this for real or am I in an episode of Black Mirror?
Because I also just got an in-mail message on LinkedIn for a “Flexible Writing Consultant Opportunity” (from a company that’s probably spamming writers around the world), asking me to be a writer that somehow trains artificial intelligence, which is something I’d NEVER consider.
Private eye
I wrote a story recently about how AI-created, unrealistic, idealised images of women on display in advertising and in the media are exacerbating body-image issues. Concerning stuff.
Now Meta, which owns Instagram and Facebook, is using images posted on those social-media platforms to train the company’s generative AI tools, including a standalone AI image generator. “On Facebook and Instagram, there are hundreds of billions of publicly shared images and tens of billions of public videos,” Mark Zuckerberg said. “We don’t train on stuff that people share with their friends, we do train on things that are public.” Er… thanks?
Look, people have different tolerance levels for AI. But certainly I, and many other people, feel uneasy about its ascension. We feel creeped out by how AI is infiltrating our world, the speed at which it’s doing so, and not knowing what happens from here. We don’t want to live in a ‘post-real’ world. We don’t want to be made redundant as humans – not just redundant from jobs, but demoted as a species.
This may be giving in to the dark side, but I asked ChatGPT (not that it’s an authority, it’s basically just read the internet) “should AI scare people?”. Here’s its answer.
“It’s understandable that some people may feel apprehensive or uncertain about AI, especially considering the rapid advancements in the field and the portrayal of AI in popular media. However, it’s important to approach AI with a balanced perspective.”
“AI itself is not inherently something to be feared. Like any technology, its impact depends on how it’s developed, deployed, and regulated. AI has the potential to bring about numerous benefits, from improving efficiency and productivity to advancing scientific research and healthcare.”
“That said, there are legitimate concerns surrounding the ethical and societal implications of AI, such as job displacement, privacy concerns, bias in algorithms, and potential misuse of AI for malicious purposes. It’s crucial to address these concerns through responsible AI development, robust regulation, and ongoing dialogue between stakeholders.”
“Rather than succumbing to fear, it’s more productive to engage with AI in an informed manner, understanding its capabilities, limitations, and potential risks. By fostering transparency, accountability, and ethical considerations in AI development and deployment, we can harness its potential while mitigating its potential negative consequences.”
Fingers crossed.