9 Jun 2014

Computer posing as teenage boy passes ‘Turing test’

He is 13 years old, likes hamburgers and sweets, and has just passed the world-famous ‘Turing test’ for the first time – meet the computer program Eugene Goostman.

Eugene was “born” in 2001 and, in his 13th year, has put his parents on the artificial intelligence map.

He (or “it”) has just beaten five other computer programs to win the Turing Test 2014 – and has become the first to pass the test in its 65-year history, says the University of Reading.

The competition at the Royal Society was based on the question-and-answer test devised by British computer scientist and code-breaker Alan Turing to address the question “can machines think?”. He proposed that the best way to test whether a machine can think is to see whether its responses are indistinguishable from those of a human.

In its current annual incarnation, a program is considered to have passed the Turing test if it manages to convince judges it is human at least 30 per cent of the time during five-minute keyboard conversations.

Eugene, posing as a 13-year-old Ukrainian boy speaking English, managed to convince 33 per cent of the judges. According to theverge.com, Eugene helped win over the judges by professing a love of hamburgers and sweets.
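For readers who want to see the arithmetic behind those headline figures, here is a minimal Python sketch of the pass criterion described above. The judge verdicts in it are hypothetical: the article reports only the 30 per cent threshold and Eugene’s 33 per cent result, not the actual number of judgments.

PASS_THRESHOLD = 0.30  # share of judges a program must convince, per the rules described above

def passes_turing_test(verdicts):
    # verdicts: one boolean per five-minute conversation,
    # True meaning the judge believed they were talking to a human
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= PASS_THRESHOLD

# Hypothetical example: 10 of 30 judgments fooled is roughly 33 per cent, so the program passes
example_verdicts = [True] * 10 + [False] * 20
print(passes_turing_test(example_verdicts))  # True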

The winning “chatbot” program – which is specifically designed to communicate with real people – was developed in Saint Petersburg, Russia, by a team of researchers including Vladimir Veselov and Ukrainian-born Eugene Demchenko.

Anyone can “talk” to Eugene by following this link; however, when Channel 4 News attempted to ask him about his win, the website had crashed under the weight of so many people trying to do the same thing. (RT.com managed to do just that here – he replied in typical teenage-boy style: “I feel about my newfound fame in quite convenient way. Nothing original.”)

‘We spent a lot of time developing a character’

Alongside the technical development, fleshing out a character and personality was crucial, said Mr Veselov: “Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything.

“We spent a lot of time developing a character with a believable personality.” He added: “Going forward we plan to make Eugene smarter and continue working on improving what we refer to as ‘conversation logic’.”

The idea of creating a computer that can think like a human has fascinated us for decades, permeating pop culture as well as reams of conspiracy theories about computers taking over the world. 2001: A Space Odyssey’s HAL is one of the most famous fictional artificial intelligences, while here in the real world, Japan’s 13-inch talking robot, Kirobo, said its first words in space last year.

But the internet has made another kind of artificial person possible: people, including hackers, are using the web to create different or multiple personas and to pretend to be someone they’re not.

Channel 4 News has been delving into these issues with the creation of a fake persona – our Data Baby. Online, she appears to be a real person – the 27-year-old Rebecca Taylor – but she is run by a team of journalists (real people, honest) who use various platforms to manipulate and develop her personality. She has also allowed us to investigate how it is now possible to exploit the versions of ourselves that we post online, whether through the “always-on” smartphone, a Facebook account or old phone handsets.


And as we upload more of ourselves to a massive online database, via social media, emails and cloud storage, humans are also becoming easier for computer programs to copy and replicate. Even sarcasm is now under threat of being categorised, after the US government announced last week that it was looking for a program that could seek out sarcasm on social media.

Professor Kevin Warwick of the University of Reading, which organised Saturday’s event, said Eugene’s success in convincing judges he was human had big implications for society. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime,” he said in a statement.

“The Turing test is a vital tool for combating that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true… when in fact it is not.”

The dawn of the singularity?

This is not the first time a computer program has been announced as the prize winner. In 2011, “cleverbot” was believed to be human by almost 60 per cent of judges. However, because the program draws on a database of real conversations, many did not deem it to be genuinely creating its own responses.

Other competitions have defined the topics of conversation before the test is carried out, so their findings have been disputed.

And already the validity of this test is being questioned. Robert T Gonzalez and George Dvorsky wrote on io9 that posing as a Ukrainian teenager who is not a native English speaker allowed the developers to get around some of the more complicated responses. “Is it fair? Technically. But it’s not the least bit impressive, in a cognitive sense.”

But Professor Warwick – and much of the computing community – believe it is still a significant step, if not a sign of the singularity approaching.

“This event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted,” he said.

“A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.”