Would you let an algorithm manage your relationships?
21 June 2016 [12:00] - Today.Az
By Tim Maughan
There are few aspects of our lives that are not influenced
by algorithms. But would you let one manage your relationships with other
humans?
“I had this everyday feeling – stress about not properly
articulating my emotions in my emails to people,” artist and writer Joanne
McNeil tells me over the phone from Boston.
“I was feeling as though I had to over-do it with enthusiasm or I would sound too
sarcastic or bleak or disinterested.”
It’s a common anxiety of modern-day life: in an age where we
increasingly communicate via email, text messages, and social media posts
instead of face-to-face, it can be hard to judge whether we are getting the tone
right. Are we being too formal? Are we being too familiar? Are we
unintentionally coming across as angry or unfriendly? Without the non-verbal
cues we take for granted when talking in person with someone – or even on the
phone – it can be hard to know whether what you’re saying is being taken the
right way.
But what if there was an app for that? What if you could
hand some of that responsibility over to an algorithm that calculated what to
write in an email? Would you trust a piece of software to communicate with your
boss or your loved one for you? Or – going even further – would you let it
advise you on what to say when you were on a date, or tell you which of your
friends you should hang out with, and which you should avoid?
It was while thinking over these issues that McNeil came up
with the idea of Emotional Labor, a plugin for Gmail that
scans your messages and inserts overly familiar, lighthearted touches to make
them seem more friendly. Full stops become multiple exclamation marks, ‘lols’
are dropped into sentences, and formal sign-offs become rows of kisses.
“It's over-the-top. It's completely over-the-top,” explains
McNeil. “I could have easily had it just swap out periods for exclamation
marks, but it swaps out one period for multiple exclamation marks. I could have
used less saccharine and treacly words but I used words like hearts and stars,
and made it really overwhelmingly crazy friendly. The idea came first from
imagining this dystopian future where email would be written for you by some
automated software that would perfectly articulate a friendly tone of voice. I
was building something that works and does the job but also makes fun of the
possibility of that kind of app existing.”
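The kind of substitution McNeil describes could be sketched as a handful of rewrite rules. This is purely a toy illustration; the patterns and word list below are assumptions, not the plugin's actual rules:

```python
import re

# Toy sketch of an Emotional Labor-style filter (illustrative only;
# these rules are assumptions, not the plugin's published behaviour).
RULES = [
    (r"\.(\s|$)", r"!!!\1"),        # one full stop -> multiple exclamation marks
    (r"\b[Rr]egards\b", "xoxoxo"),  # formal sign-off -> row of kisses
    (r"\b[Gg]ood\b", "amazing"),    # amp mild words up to saccharine ones
]

def emotionalize(text: str) -> str:
    """Apply each rewrite rule in order to the email text."""
    for pattern, repl in RULES:
        text = re.sub(pattern, repl, text)
    return text

print(emotionalize("Thanks for the draft. Regards"))
# → Thanks for the draft!!! xoxoxo
```

A real Gmail plugin would hook into the compose window rather than run as a script, but the core transformation is just this kind of pattern substitution.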
McNeil’s app seemed to hit a nerve when she released it
online, getting shared virally across social media and, of course, by email. “I
didn't expect that would happen. I didn't realise how many people also have
that stress in email communication. Not just emailing friends, but emailing a
coworker and being afraid that you're not sympathetic enough to someone who is,
say, sick from work and you want to express that you wish they're healthier
soon. There aren’t many non-cliched ways of expressing that, or expressing
concern or caring for someone. We only have really limited phrases and a lot of
them are stock phrases.”
While Emotional Labor was clearly a tongue-in-cheek art
piece, McNeil believes people might be ready for some assistance with these
kinds of tricky communication tasks. “That experience of anxiety, of not
knowing what words are right – it’s not just that we might outsource it to
algorithms. We sometimes outsource it to our friends because this is just a
constant problem. How do we come up with the right words? For example, when you
meet someone and you like that person, people often have friends ‘workshop’
their texts. If you're trying to set up a date with someone you might reach out
to a bunch of your friends and say, ‘What should I say to this person? How
should I sound flirty? How should I not be too overt with a come on but also
sound interested?’”
This idea of crowd-sourcing advice on how to handle a date
or a personal interaction is also a common theme in the work of Lauren
McCarthy, an artist, software developer, and assistant professor at UCLA’s
Design Media Arts programme. Her smartphone app Crowdpilot allows
you to surreptitiously stream a conversation – say, on a tricky first date –
from your phone to a group of other people online. Some of them might be your
friends, whilst some are just randomly selected, anonymous Crowdpilot users.
While listening in they can use the app to give you tips on what to say,
suggest topics of conversation, or tell you how to react to the other person.
There’s a wonderfully cheerful, realistic advert-like
video that explains how it might work. Although it was, like Emotional
Labor, intended as a work of art, the app was also available to download on the
App Store. “It was important to us to build a functioning app, so that it would
go beyond speculative design fiction or sci-fi,” McCarthy says. “Because it is
a real app, when you encounter it, you are faced with choices and questions.
Will you download it? Will you use it? Maybe you find it terrifying or
dystopic, but what happens if it actually improves your life?”
“While my work deals with technology, it is at its core
dealing with being a person in modern society, which happens to involve a lot
of technology. I am most interested in the performative aspects of social life
and how we navigate relationships and interactions. Adding technologies into
that, making apps or pseudo-startups or devices, offers another way to explore
this.”
“I’ve always felt like I was sort of awkward and socially
inept, and I initially wondered if I could build technologies that would help
me out with this,” she adds. “This investigation started with a hat that would
detect if I was smiling, and stab me in the back of the head if I stopped, in
order to condition my brain to smile all the time. What I realized through
doing this work was that there was always this element of failure and dystopia
– I became really critical of the potential for tech to solve all our problems
and realised it actually created a lot of new ones, too.”
Moving on from crowdsourcing relationship advice from other
humans, McCarthy started looking at how the technology itself might start
offering guidance. Along with her partner Kyle McDonald, she developed Us+,
a rather startling app that monitors your video chat conversations and gives
direct instructions on what to say and do.
“I had been doing a lot of research into linguistic analysis
and Kyle had been doing a lot of computer vision research, and we wondered what
would happen if it was applied in real time to a conversation,” she says.
The app analyses both what you say and your facial
expressions while you talk, and will give you feedback on your performance.
It’ll tell you if you’re being too self-absorbed or aggressive, suggest you be
more positive or sympathetic, and tell you if the other person looks sad or
happy. It’ll even mute you altogether if it thinks you’re talking too much.
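The feedback loop described above could be sketched with a couple of crude heuristics. The thresholds and rules here are hypothetical, not taken from Us+'s actual linguistic-analysis model:

```python
# Toy sketch of Us+-style conversation feedback. The 10% first-person
# threshold and 70% talk-time cutoff are invented for illustration.
def feedback(transcript: str, seconds_talking: float, call_length: float) -> list[str]:
    tips = []
    words = transcript.lower().split()
    # Heuristic: a high share of "I" suggests self-absorption.
    if words and words.count("i") / len(words) > 0.10:
        tips.append("You're sounding self-absorbed; ask about them.")
    # Heuristic: dominating the call triggers the mute.
    if seconds_talking / call_length > 0.70:
        tips.append("Muted: you're talking too much.")
    return tips

print(feedback("I think I did great and I loved it", 50, 60))
```

The real app adds computer vision on facial expressions; this sketch covers only the text side, but it shows how simple the underlying signals can be.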
Again, there’s a clever video that
explains how it works. It feels very much like creepy science fiction, or
something from an episode of the TV show Black Mirror, but McCarthy says they
had surprisingly positive reactions to the project when it was released. “We
meant to pose a question – do we want a future where humans have their every
word and expression and reaction monitored and augmented by technology? We were
critical of this idea, but at the same time, there were a lot of people that
got in touch interested in Us+ as a real product – a business solution,
self-help tool, or relationship improvement app. Even when we explained it was
an art project, they didn’t really care, they still wanted it.”
Another McCarthy and McDonald art project that’s also a real
world app is pplkpr. Designed to ‘optimise your social
life’ (according to its promo video), the app combines GPS data
from your smartphone with heart rate data from a smart watch, or Fitbit-type
wearable device, to work out when you are meeting people and how you are
reacting emotionally to them. An algorithm then crunches this data to report
back to you on which people you should be hanging out with more, and which you
should avoid. It knows who makes you happy and excited and will send them texts
asking them to hang out with you more, while it’ll even delete the contact
details of people that make you bored or angry.
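The bookkeeping pplkpr describes could be sketched as pairing heart-rate samples with whoever you were with and ranking contacts by the result. This is entirely hypothetical; the app's real scoring model isn't spelled out in the article:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pplkpr-style scoring: heart-rate samples are logged per
# person (in the real app, GPS works out who you were with), then
# contacts are ranked by average arousal.
samples: dict[str, list[int]] = defaultdict(list)

def log_meeting(person: str, heart_rates: list[int]) -> None:
    samples[person].extend(heart_rates)

def ranked_contacts() -> list[tuple[str, float]]:
    # Crude assumption: higher average heart rate = stronger emotion.
    return sorted(((p, mean(hr)) for p, hr in samples.items()),
                  key=lambda pair: pair[1], reverse=True)

log_meeting("Alice", [88, 95, 102])  # exciting company
log_meeting("Bob", [62, 60, 64])     # calming, or just boring
print(ranked_contacts())             # Alice ranks above Bob
```

Of course, an elevated heart rate can't distinguish excitement from anxiety, which is part of what makes the project's premise so pointed.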
“When the app was first released online, there was a huge range
of reactions and a lot of loud debate,” says McCarthy. “Some people were
outraged – they felt this represented technology going too far and the end of
humanity. Others thought it seemed practical and useful and wanted to try it
themselves.”
“We heard from VCs that wanted to fund us, doctors and
therapists that wanted to try it with their patients, researchers that wanted
to collaborate and share findings, start-ups that were working on similar ideas
and wanted to hire us, and people that felt they just really wanted and needed
it. The point was to provoke discussion and thought, and I think that can
happen regardless of whether you know it's an art project or not.”
You can still grab pplkpr from the App Store, so I installed
it and tried it for a week as I ran around to meetings here in New York. I don’t have a
Fitbit or a smart watch, so instead the app constantly nagged me via
notifications about how I felt. It seemed to me like the app was transferring
one kind of work – emotional labour – into another; it became digital labour,
the endless tasks that our so-called time-saving devices end up making us do.
All in all, I found the experience rather depressing, but I wondered
if that was perhaps me projecting my own fears, especially as I might be older
than many of its users. “Younger people often find it much less depressing and
are willing to engage and try it,” McCarthy says. “When we tried this with
undergrads we realized how much more open to different technologies and
interactions they were. They didn't come in with preconceptions or fear.
“They are not afraid. They are willing to understand
something before they judge it, and hopefully this will mean a future where we
can openly debate changes and new tools, and play an active role in building
the world we want to live in.”
What’s interesting is that while both McNeil’s and
McCarthy’s art pieces are forms of speculative design, since they were first
released an increasing number of ‘real’ similar apps have been developed,
apparently aimed at more serious users.
One particular example is Crystal Knows, an app that taps
into your LinkedIn account and advises you on how to write emails when dealing
with business clients, in a similar – if less fun – way to Emotional Labor. But
McNeil is doubtful about how seriously they’re used. “I haven't met a single
person that uses things like Crystal Knows on a regular basis. It’s too much of
an experience that feels inauthentic or like you're cheating. [These apps] are
out there and they've been talked about but it doesn't appear that anybody is
really incorporating them in their work. I think it's just because even if
you're unsure about the words you put in an email they're still your words.
They weren't offered by someone else.”
Not that she rules these apps out completely; she offers an
interesting comparison as to why they might appeal to some people even when
they know they’re offering an inauthentic experience. “I do think it’s like astrology
in this sense where, not everybody, but plenty of people know that it's fake
and yet still read it because there's something comforting about having
answers.”
And maybe that’s the real answer as to why some of us might
use apps and algorithms to advise us on tricky personal relationships: not
because we truly think they work, but because they give us a little bit of hope.
Amongst all the rationalisation of our technologically regimented lives, we
still want to believe that something might support us, allow us to off-load our
self-doubt, and give us the answers we don’t have. Whether it’s advice from our
friends, horoscopes, or a smartphone app – maybe in uncertain times we all just
want to believe in something more than ourselves.
/By BBC/