Common Understanding of Turing Test Misses the Mark, Scholar Claims in New Book

Rensselaer lecturer explores the history of AI in a new book chapter

January 15, 2021


A computer’s ability to convincingly respond to questions like a person — thereby “passing” what has come to be known as the Turing Test — is widely regarded as a practical measure of artificial intelligence. But Bram van Heuveln, a lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, contends that this common interpretation misses the important point that British mathematician Alan Turing was trying to make in his 1950 paper, “Computing Machinery and Intelligence.”

Van Heuveln makes the case for a new understanding of the Turing Test in a chapter of the book Great Philosophical Objections to Artificial Intelligence: The History and Legacy of the AI Wars, published this month by Bloomsbury.

In his chapter, van Heuveln delves into the history of Turing’s seminal paper, in which the mathematician famously proposed an experiment in which a human interrogator blindly poses questions to both a human and a computer and then tries to judge which respondent is the machine. Van Heuveln goes on to question the way that the Turing Test has since been featured in debates around the measurement and understanding of artificial intelligence.

“As Turing makes clear at the very start of his paper, computer intelligence is not something we measure, observe, or even define,” van Heuveln said. “Rather, we gradually change our thinking about ‘intelligence’ exactly because of machines starting to do increasingly complicated things that cry out for some kind of labeling, and ‘intelligence’ seems as good a word as any to describe the attribute we consider technology to possess.”

According to van Heuveln, the popular interpretation of the Turing Test has the unfortunate and almost opposite effect of suggesting that a machine becomes “intelligent” only at the precise moment it “passes” the Turing Test. Van Heuveln posits that believing such a point exists is, in fact, quite dangerous and could lead people to think that artificial human-level intelligence is preventable — or that the rise of AI is not worrisome until it reaches a certain threshold.

“Turing created his test mainly as a thought experiment to make us think twice about dismissing machines as mere cogs and wheels,” van Heuveln said. “What I think Turing didn’t quite foresee was how human thinking would evolve from not believing technology could have intellect to now where we readily and regularly ascribe intelligence to our machines.”

The book, which features chapters contributed by several co-authors, goes on to probe philosophical arguments for and against building machines with human-level intelligence and to examine new paths of understanding opened by the current resurgence of AI in areas such as deep learning and robotics.

In addition to van Heuveln, the book’s co-authors are Eric Dietrich from Binghamton University, John P. Sullins from Sonoma State University, Robin Zebrowski from Beloit College, and independent scholar Chris Fields.

Written By Jeanne Hedden Gallagher