26 Nov 2014

Is Facebook really capable of monitoring terrorists?

I’ve sent Sir Malcolm Rifkind a copy of October’s issue of Wired magazine. I’m not sure whether, as the busy chairman of the Intelligence and Security Committee, he’s had time to read it; but he should, because there’s a great feature on social media moderation that’ll help clarify his thinking on what Facebook and others can and can’t (and should and shouldn’t) do with their users’ content.

Facebook’s monitoring of its users’ content is so astute that it’s even flagged pictures of women breastfeeding their babies as potentially pornographic. Surely, argue the likes of Sir Malcolm, such a company can keep an eye on users’ chat and spot plots to kill soldiers such as Lee Rigby?

It’s not that easy or straightforward. Facebook has a system that automatically spots flesh tones in pictures: a photo with a lot of flesh gets sent to a moderation team (partly based in the Philippines, as the Wired article shows). The team then decides whether the picture breaches Facebook’s rules (note: Facebook’s rules, which are not the same as laws).
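Facebook hasn’t published how its flesh-tone filter works, but the basic idea can be sketched as a simple triage heuristic. The skin-tone rule and the 30% threshold below are illustrative assumptions of mine, not Facebook’s real parameters:

```python
def flag_for_review(pixels, threshold=0.3):
    """Return True if a photo should be queued for a human moderator.

    pixels: a list of (r, g, b) tuples for the image;
    threshold: the fraction of skin-toned pixels above which
    the photo gets sent to the moderation team.
    """
    def is_skin_tone(r, g, b):
        # A crude RGB skin-tone test; real systems are far more sophisticated.
        return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

    skin = sum(1 for p in pixels if is_skin_tone(*p))
    return skin / len(pixels) >= threshold
```

The crucial point is that the machine only triages: everything it flags still lands in front of a human, and that human judgement is where the breastfeeding-versus-pornography mistakes happen.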

Now think about how that’d work for chat. Facebook could create a system that flags every reference to, say, “soldier” and “killing”. But that would then hoover up thousands of conversations related to Iraq, Afghanistan, Pakistan, Ukraine, Russia and everywhere else in the world where soldiers are being killed.
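To see why, consider the crudest possible version of such a flag — a hypothetical sketch, not anything Facebook actually runs:

```python
def flags_chat(message, keywords=("soldier", "kill")):
    # Flag any message containing every keyword as a substring,
    # so "soldiers" and "killed" match too.
    text = message.lower()
    return all(k in text for k in keywords)

# A genuine threat is flagged...
print(flags_chat("let's kill a soldier"))  # True
# ...but so is ordinary news chat about conflict zones.
print(flags_chat("Another soldier killed in Afghanistan. So sad."))  # True
```

Both messages trip the filter, yet only one of them is a threat — and no keyword list, however refined, can tell them apart on its own.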

Those thousands of conversations would then need to be looked at individually by a Facebook moderator, who would have to decide whether each one amounted to a serious and immediate threat to life. Bearing in mind that Facebook moderators sometimes have trouble telling the difference between breastfeeding and porn, I reckon they’d struggle to distinguish angry boasts from genuine threats to life: something which very highly paid lawyers have also grappled with.

As a consequence, the moderators would tend to err on the side of caution, and the result would be thousands of “false positives” being sent to law enforcement who’d then have to leaf through them all.
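The arithmetic behind that flood is worth spelling out. The figures below are illustrative assumptions, not real Facebook numbers, but they show the base-rate problem: when genuine plots are vanishingly rare, even a fairly accurate filter buries the real case under noise.

```python
daily_chats = 10_000_000       # assumed volume of chats scanned per day
real_plots = 1                 # assumed genuine threats among them
false_positive_rate = 0.001    # filter wrongly flags 0.1% of innocent chats

false_alarms = (daily_chats - real_plots) * false_positive_rate
print(round(false_alarms))  # 10000 innocent conversations flagged per day
```

On those assumptions, law enforcement would receive roughly ten thousand flagged conversations a day in the hope of finding one real plot.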

(And that’s not to mention the issue of misuse by agencies in countries across the world: perhaps it’s reasonable for UK police to ask Facebook to monitor chats for signs of murder plots; but what if law enforcement in a less-than-democratic country demanded to be tipped off about anti-government discussions?)

The answer, of course, is to apply your intelligence agency knowledge and make targeted requests to Facebook so you only get the content of people who are a real threat. Which is what would have happened in the case of Michael Adebowale, if the intelligence agencies hadn’t decided prior to his Facebook chat that he wasn’t such a serious threat.

One more quick point: as we revealed last week, the UK intelligence agencies have made strenuous and costly efforts to try to get access to every bit and byte of internet traffic washing through the UK. The Tempora programme pulled in not only the metadata of users (who messaged who, when and where) but the content of the messages as well.

If that system were doing its job as described in the Snowden documents, it’s likely the intelligence agencies would have had access to the Adebowale chat referenced by the ISC; but because he wasn’t classified as a threat, the conversation would never have been picked out for examination.

Follow @geoffwhite247 on Twitter