Can Supercomputers Dupe Humans? New Test Results Say Yes

By the DynaSis Team

One of the biggest pieces of news you might not have heard was that last week (June 10), for the first time ever, a computer program passed the “Turing Test.” The test asks whether a computer program can convince at least 30% of human questioners, over the course of a five-minute, text-based conversation, that they are chatting with a human being and not a computer.
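
For readers who like to see the arithmetic, the pass criterion described above reduces to a single comparison. The short Python sketch below is our own illustration, not the contest’s scoring code, and the judge counts are hypothetical numbers chosen only to match the reported 33% result.

# Illustrative sketch only; not the contest's actual scoring code.
# The Turing Test criterion described above reduces to checking whether
# the fraction of judges fooled in five-minute text chats meets 30%.
def passes_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """Return True if the fooled fraction meets or beats the threshold."""
    return judges_fooled / total_judges >= threshold

# Hypothetical judge counts chosen to match the reported 33% result:
# fooling 10 of 30 interrogators clears the 30% bar.
print(passes_turing_threshold(10, 30))  # prints: True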

The tech world has been talking about artificial intelligence (AI) for years—painting a portrait of a world where computers can stand in for humans in any given situation, and no one will be the wiser. The possibilities are tantalizing—and the perils are alarming.

In this particular instance, the winning computer program, called Eugene Goostman, beat out several other AI entrants and convinced 33% of interrogators that a human was crafting the responses on the other end. At the event, organized by the University of Reading and held at the Royal Society in London, some participants and observers cried “Foul!” because the program claimed to be a 13-year-old Ukrainian boy with a limited grasp of English.

Earlier claimed successes at similar tests were also flawed, because they relied on pre-established topics or questions, which the Turing Test specifically disallows. This time, the restriction related to the “speaker,” not the topics or questions. Such an approach is not prohibited by the test, but it is questionable.

So, where does this leave us? Even if this episode fails to persuade the scientific community that a computer has definitively passed Turing’s test, there’s little doubt that such a feat will be accomplished before long. When computers truly become indistinguishable from humans, the ramifications for online communication, and for cloud computing as well, could be considerable.

Already, more than 60 percent of Internet traffic originates from bots, according to data security company Incapsula. Published reports indicate that plenty of chat-room visitors have been duped into believing a bot was human, some even to the point of accepting an invitation for a date. Chat bots are also becoming increasingly common in customer service and tech support. None of us knows how many times we may have interacted with a chat bot and taken it for a real person.

So, as the line between human and computer continues to blur, and the world waits for another, more robust demonstration of true AI, what should you and your company do? Our recommendation is to watch and be vigilant.

Texting with a computer posing as a human is very different from talking to one on the phone, where vocal intonations and other nuances clue us in to “humanness.” Nevertheless, with more and more business communication happening by text and email, companies and their employees cannot be too careful.

Warn your personnel about the dangers of chatting with strangers online, especially strangers who ask for personal or corporate information. Chat bots have stolen personal information from unsuspecting victims (or sent them to websites that did) after persuading those victims that they were talking to a real person.

Furthermore, make sure your firm’s digital perimeter defenses are strong enough to block traffic from suspicious websites and prevent workers from interacting with them. Humans are the weakest link in every security chain, and the next wave of attacks may already be on the way. If you are not certain your security is up to par, fill out our inquiry form or give us a call at 678.218.1769.
