Gaming Editor Louis Wright discusses what ChatGPT beating the Turing Test could mean for how we as humans test AI in the future.
“I propose to consider the question, ‘Can machines think?’” is the opening line, and the core idea, of Computing Machinery and Intelligence (Alan Turing, 1950). The paper considers digital computers designed to carry out human tasks, and how they might in time be developed to imitate and replicate human actions, speech, and emotion. To be certain whether or not a computer can accurately pass as human, therefore, a standard testing method is required.
‘The Imitation Game’, more commonly known as ‘The Turing Test’, is the test proposed in the paper. In its simplest form it consists of two people (labelled B and C) and a computer system (labelled A). C acts as interrogator to A and B, who each try to convince the interrogator that they are human; C must decide using only written notes from the two. If C is unable to consistently identify the human, then A (the computer) wins; otherwise it loses.
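The setup above can be pictured as a short simulation. This is a hypothetical sketch, not anything from Turing's paper: the `judge`, `machine`, and `human` functions are illustrative stand-ins for the interrogator and the two witnesses.

```python
import random

def imitation_game(judge, machine, human, rounds=5):
    """Simplified imitation game: each round, the judge reads written
    answers from two unlabelled witnesses (one machine, one human)
    and guesses which is the human."""
    machine_wins = 0
    for _ in range(rounds):
        witnesses = [("machine", machine), ("human", human)]
        random.shuffle(witnesses)  # hide which answer came from whom
        transcripts = [w("Tell me about yourself.") for _, w in witnesses]
        guess = judge(transcripts)  # index (0 or 1) the judge thinks is human
        if witnesses[guess][0] == "machine":
            machine_wins += 1  # the judge mistook the machine for the human
    # The machine passes if it fooled the judge at least half the time.
    return machine_wins >= rounds / 2
```

A judge who can always spot the machine makes it lose every round; a machine passes only when the judge's guesses become no better than chance.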
ChatGPT 4.0 has beaten the Imitation Game. While not the first to pass the Turing Test, the prize for which goes to AI program Eugene Goostman, the AI turned social phenomenon is one of the few to have successfully surmounted the challenge. But what does passing the Turing Test mean for AI, and future testing procedures?
The Turing Test is, simply put, an intelligence test: it checks whether an AI can behave with human-like intelligence. However, matching the intelligence levels of a human does not mean an AI is aware. By the current standards of AI, anything it produces is a replication of humanity, not self-aware thought.
Therefore, for better or for worse, ChatGPT passing the Turing Test does not mean that AI will rise to take over humanity. As it stands, AI is not capable of true self-reflection and awareness, and is limited in what it can accomplish. An often-repeated thought holds that the worry is not the computer that beats the Imitation Game, but the one that intentionally fails it: the idea of an AI purposefully hiding its intelligence is sinister. However, given AI's crucial lack of critical thinking, this will likely never occur. It does, though, provide an insight into the limitations of the Turing Test.
Most prominently, the Turing Test is limited in the levels of intelligence it can measure. Because the program in question only has to meaningfully fool a human, an AI can only be tested up to human-level intelligence. As AI grows ever smarter, new tests are therefore necessary to properly assess its intelligence on a scale that extends beyond the human.
AI testing, through the Turing Test and other means, focuses on two factors. Non-determinism is the ability of the software to adapt and change its outputs even when given the same inputs, while fuzziness is the assessment of answers on a scale between right and wrong rather than as a binary. By utilising these concepts, AI testing can be developed and adapted beyond the Turing Test.
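These two properties can be made concrete with a minimal sketch. Both helper names here are illustrative assumptions, not part of any standard testing library:

```python
def fuzzy_score(answer, reference_keywords):
    """Fuzziness: grade an answer on a 0-to-1 scale by how many
    reference keywords it contains, instead of a binary right/wrong."""
    words = set(answer.lower().split())
    hits = sum(1 for kw in reference_keywords if kw in words)
    return hits / len(reference_keywords)

def is_nondeterministic(model, prompt, trials=5):
    """Non-determinism: the same input may yield different outputs
    across repeated runs of an adaptive system."""
    outputs = {model(prompt) for _ in range(trials)}
    return len(outputs) > 1
```

A fixed lookup table would score `False` on the non-determinism check; an adaptive system that varies its answers scores `True`, and fuzzy scoring lets a partially correct answer earn partial credit.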
Self-testing is a prominent method for measuring the intelligence of an artificial intelligence. Because AI systems are adaptive, they can routinely run the same testing procedures while varying the parameters that probe a given system's limits and capabilities. By treating as parameters either the data passing through the system or components of the system itself, self-testing can firmly establish the scope of an artificial intelligence without capping the measurement at human capabilities.
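One way to picture such a self-testing loop is a parameter sweep: rerun the same probe inputs under each parameter setting and record how the system's behaviour shifts. This is a hypothetical sketch; the function and parameter names are illustrative.

```python
def self_test(system, parameter_grid, probe_inputs):
    """Run the same probe inputs against the system under each
    parameter setting in the grid, recording the outputs. The
    varied parameters, not a human judge, set the test's limits."""
    results = {}
    for params in parameter_grid:
        key = tuple(sorted(params.items()))  # hashable label for this setting
        results[key] = [system(x, **params) for x in probe_inputs]
    return results
```

Comparing the recorded outputs across settings shows where the system's behaviour holds up and where it degrades, with no human-level ceiling on what the sweep can measure.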
Going forward, the Turing Test can still prove useful in gauging how smart an artificial intelligence is. However, as the number of systems that pass it steadily increases, it will serve as nothing more than a simple benchmark.