20 Nov 2013

What are the penalties if web companies fail to monitor their content?

For more than a decade, websites have been able to operate on a “publish first, question later” basis – but two news stories from the past week suggest those free-and-easy days are fast coming to an end.

Back in 2001 a landmark legal decision (Godfrey v Demon) ruled that, while web hosts could be sued for libel, they would not be expected to vet every piece of material as it was uploaded to their servers. (With 350 million photos uploaded to Facebook every day, you can see the logic). So long as they took action when a complaint was received, they were covered.

But there’s increasing evidence that web companies are now expected to vet their content before publication.


Google, for example, has agreed this week to skew its search results so that users find it harder to access images of child sexual abuse.

And Facebook’s UK and Ireland policy director has said the company will post more warnings before videos containing graphic violence, such as beheadings.

They’re laudable commitments. But you can’t help wondering how all this is being discussed in the technicolor-decor offices of internet companies big and small.

They watch as millions of items flood through their servers and on to their websites. Any one of those packets of data could contain infringing content – how much must they now do to monitor the flow? And what are the penalties if they fail?

Follow Geoff White on Twitter @geoffwhite247