Samaritans Radar: paved with good intentions
Samaritans Radar is a Twitter app bursting with good intentions. Unfortunately it also has the potential to cause great harm.
Let’s take the happy case, because Samaritans Radar is built for a perfect world in which only happy cases exist.
You sign up for the Twitter app and it promises to send you email alerts whenever someone you follow on Twitter – your friends – looks like they’re miserable and in need of some support. It does this by matching their tweets against a predetermined list of key phrases that might indicate that someone’s depressed or suicidal. Of course it’s not 100% accurate – what is? – but it’s good enough. Once in a while you get an alert, you get in touch with your friend, and you help them get the support they need. It might just be a quick chat over a cup of tea, or it could be a 999 call that saves their life.
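To make that matching step concrete, here’s a hypothetical sketch. The Samaritans haven’t published their phrase list or their matching logic, so the phrases and the naive substring approach below are invented purely for illustration:

```python
# Hypothetical sketch of keyword-based alerting. NOT the Samaritans' actual
# phrase list or algorithm (neither is public) -- invented for illustration.
DISTRESS_PHRASES = [
    "hate myself",
    "can't go on",
    "want to end it",
]

def looks_distressed(tweet: str) -> bool:
    """Return True if the tweet contains any phrase on the watch list."""
    text = tweet.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

# A crude matcher like this fails in both directions:
print(looks_distressed("I just can't go on holiday this year"))  # True  (false positive)
print(looks_distressed("Everything feels hopeless"))             # False (false negative)
```

Even this toy version shows the shape of the problem: phrase matching has no access to context, irony, song lyrics or quoted speech, so some error rate in both directions is baked in.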
It’s innovative, helpful, social. The users will love it. People in need will benefit. The charity’s reputation will soar. The media will lap it up.
What could possibly go wrong?
Now let’s rewrite that description as a more formal specification:
Samaritans Radar is a Twitter app that monitors accounts and attempts to infer that the account operator is distressed and in need of support. Users are emailed an alert whenever this condition is detected for an account they follow.
Let’s look at this in more detail.
“A Twitter app that monitors accounts”. This is a surveillance system. Viewed neutrally and taken outside the marketing context with its upbeat copy and good intentions, this at its simplest is a piece of software that monitors people and publishes information about them. “Account operator” is worth a mention on its own: Many tweets are sent by bots rather than people. Some accounts are bot-only. Some accounts mix human and bot-written tweets. Can Samaritans Radar tell the difference? If it can’t, you’re going to get a lot of false positive alerts triggered by automated tweets.
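To see why screening out bots is hard, consider a hypothetical filter. This assumes each tweet comes with a field naming the client that posted it, roughly as the Twitter API of the time exposed; the client names and the whole approach are invented:

```python
# Hypothetical bot filter -- invented for illustration, not anything Samaritans
# Radar is known to do. Assumes each tweet is a dict with a "source" field
# naming the posting client; the client names below are made up.
KNOWN_BOT_CLIENTS = {"AutoTweeterPro", "FeedBot"}

def probably_automated(tweet: dict) -> bool:
    """Flag tweets posted via a client on the known-bot list."""
    return tweet.get("source") in KNOWN_BOT_CLIENTS

print(probably_automated({"text": "I can't go on", "source": "FeedBot"}))  # True
# A bot posting through a mainstream client slips straight through:
print(probably_automated({"text": "I can't go on", "source": "Twitter Web Client"}))  # False
```

A denylist like this only catches bots that identify themselves; anything tweeting through a mainstream client, or a mixed human-and-bot account, passes unnoticed – which is exactly the false-positive problem described above.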
The specification doesn’t use the word “friends” because in a Twitter context this is ambiguous. In the real world, friends are people you have a mutually positive and personal relationship with. On Twitter, “friends” are just the accounts you follow, regardless of whether or how well you know them personally. Almost everyone on Twitter follows people who aren’t friends and in many cases are opponents or enemies. Broadly speaking, Samaritans Radar is an app that will let almost anyone be alerted when almost anyone on Twitter might be particularly vulnerable. Is it really responsible to build something like that on a platform that is notorious for so-called “trolling” – stalking, harassment, racial abuse, rape threats, death threats – particularly against visible minorities?
The more powerful a tool, the more potential there is for abuse. We can’t stop people making guns, nor should we – they have many legitimate uses – but we can refrain from handing them out on the high street to anyone who wants one.
So the Samaritans are on a hiding to nothing with this. The more accurate their inferences of vulnerability are, the more powerful a weapon they’re handing to abusers. Conversely, if the inferences are inaccurate then the system is at best useless and at worst likely to cause upset and confusion.
Let’s break it down further and look at how the system plays out for two potential users: the good friend and the bitter enemy.
For each tweet there are two inferences (vulnerable, not vulnerable) and four possible results: true positive (the tweeter is vulnerable and you correctly identified that), true negative (the tweeter is not vulnerable and you correctly identified that), false positive (the tweeter is not vulnerable but you wrongly inferred that they were vulnerable) and false negative (the tweeter is vulnerable but you wrongly inferred that they were not vulnerable).
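The four results fit in a few lines of code. This sketch exists purely to pin down the terminology used in the scenarios that follow:

```python
# The four outcomes as a function of the system's inference versus the
# tweeter's actual state. A sketch for exposition only.
def outcome(inferred_vulnerable: bool, actually_vulnerable: bool) -> str:
    if inferred_vulnerable and actually_vulnerable:
        return "true positive"    # alert sent, correctly
    if inferred_vulnerable and not actually_vulnerable:
        return "false positive"   # alert sent, wrongly
    if not inferred_vulnerable and actually_vulnerable:
        return "false negative"   # no alert, wrongly
    return "true negative"        # no alert, correctly

print(outcome(True, False))  # false positive
```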
How might these two users react in each case and what could the consequences be?
## The good friend
### True positive

A friend is in need and the system sends an alert. Hopefully the alertee receives the email in time and is in a position to help. But hey, that’s three things that can go wrong already. Emails are delayed or don’t get delivered at all. Or the email is fine but the recipient just doesn’t check for it. Or perhaps it gets filtered as spam. And then there’s being in a position to help. Great if you are, but how does it feel to get that email when you’re not? So yes, there’s a positive outcome if all the ducks line up and the friend who needs help gets it. But also so many things that could go wrong, which could cause users, fairly or unfairly, to blame the app, themselves, or both for anything up to the suicide of a close friend or family member. Of the eight scenarios here, only two are potentially positive. This is one of the good ones.
### True negative

This is the only scenario that’s indisputably good: When Samaritans Radar correctly infers that nothing is wrong and therefore does nothing. Bear that point in mind: This only unquestionably succeeds when it does nothing.
### False positive

Boy is this one tricky. The system thinks someone’s vulnerable but it’s wrong. The true friend gets sent an alert, and as above, may or may not receive it in time and be able to do something about it. In this case, hope that the email gets lost or filtered as spam, or that the recipient only checks their emails every three weeks. Because if the true friend wades in with a helping hand when it’s obviously not required they’ll have some awkward explaining to do.
This is one of the ways in which Samaritans Radar can fuck up otherwise good relationships. Users will either have to admit to their friends that they’re using surveillance software on them, or they’ll have to lie to them by omitting to tell them why they’ve got in touch. Neither of those sounds like a great way to build trust.
### False negative
A good friend needs help but the system doesn’t detect it. Of course, no-one expects it to be perfect. Or do they? Guess what? People aren’t rational all the time. People whose friends have harmed themselves, or killed themselves, are likely to be a lot less rational than most of us at the best of times. But what are our expectations supposed to be? That the system generally works and we can be upset when it doesn’t? Or that it generally doesn’t work, in which case no-one will bother with it. Sure, it comes with no warranty. There’s no-one to sue when it doesn’t work. But “the warning signs were all over their Twitter account but Samaritans Radar didn’t spot them” makes a great, and terrible, story.
So even when Samaritans Radar is used by a genuine friend with good intentions there’s a stack of things that can go very wrong.
Now let’s put the same tool in the hands of…
## The bitter enemy
### True positive

You’re stalking 50 people online – exes, feminist campaigners, transsexuals – but it’s hard to keep tabs on all of them all the time. But this new Samaritans Radar app is great for letting you know when you can kick someone who’s already down. Well-intentioned people will hope that the bitter enemy doesn’t get the email or isn’t in a position to act on it in time because if they are then the vulnerable person has just been made a great deal more vulnerable. Not just to one bitter enemy either, but to a potentially unlimited number of them. Not only can any number of people sign up for alerts, but any one person who gets the alert can pass it on through other channels – not least by publishing it publicly to Twitter both to humiliate the subject and to encourage others to pile in. This is not pretty.
### True negative

Seems like a case where very little harm has been done. Until you create a game where you try to get Samaritans Radar to trigger alerts for the people you hate.
### False positive

An enemy sounds like they’re down because you got an alert from Samaritans Radar? Why not pile on the abuse? Publish the alert to humiliate them. If they’re not genuinely down they soon will be.
### False negative

As with the true negative, just keep up the abuse and harassment until Samaritans Radar triggers an alert and you win your points. But with an added bonus – your victim is already miserable but Radar doesn’t realise and so hasn’t alerted their true friends.
What’s notable about these scenarios is that the accuracy of the inferences isn’t that important: significant potential for harm exists in the “true” cases even if most of the “false” results can be eliminated.
Samaritans Radar has the theoretical potential to help people in need but that needs to be seen in the context of the problems that it will inevitably cause. Notably, it causes significant problems even for people who try to use it with good intentions. It’s an indispensable tool for online abusers who doubtless will find even more devious ways to use it than those I’ve suggested. It’s very hard to see how Samaritans Radar will have an overall positive effect. It’s far more likely that it will help Twitter become an even more hostile space, providing new ways for abusers to monitor and harass their victims.
The most surprising thing about Samaritans Radar is that it is a system designed and built by naive optimists. You’d think that people dedicated to helping the most distressed and vulnerable people would be anything but that. The daily work of the Samaritans involves a litany of misery caused by people with both good and bad intentions. There are just so many ways that things can go wrong. This software creates a few more.
Even if the Samaritans weren’t willing or able to do this kind of analysis, what about consent? Do Twitter users consent to having their tweets mined in this way? The simplistic answer is to say that your tweets are public and therefore anyone can use them for whatever they like. But most people’s expectations don’t go much beyond the idea that other people will just read what they’ve posted. Twitter data has almost infinite possibility for analysis, both on its own and in combination with other data. Most of those analyses are as yet undiscovered. How can you consent to an unknown? Why would you want to when so many of those unknowns will be harmful? If a tool like Samaritans Radar has so many potential harmful effects despite its good intentions, how about tools created with explicit malicious intent?
Raw data is either wholly opaque or, as with tweets, it has one obvious surface. Data analysts work to make the invisible visible, deriving information (if not necessarily knowledge) sometimes in creative and unexpected ways. But when the data is about people there are serious risks, so people with good intentions need to act with appropriate caution. When the data subjects are likely to be vulnerable people that applies even more strongly. Should we build an app that identifies empty homes that can be burgled? People susceptible to blackmail? Lost children? And if we do build these apps, do we take care to ensure that they will only be used ethically by responsible people, or do we just make them available to anyone? Yes, sometimes we make guns, but we don’t hand them out on the high street.
Related: Are your public tweets fair game? and Samaritans Radar must close