The Release of GPT-4, Turing Tests, and the Uncanny Valley

By Lim May-Ann

Artificial Intelligence at an inflexion point.

The press. On 14 March 2023, OpenAI, the company that gave the world ChatGPT, made a low-key announcement that it had released GPT-4. Yet in a slight departure from the understated technical press releases with which most technology companies announce the next version of their software, OpenAI’s announcement boasted a number of noteworthy achievements:

  • That GPT-4 is a multimodal model, accepting image and text inputs and emitting text outputs (see the illustrative sketch below)
  • That GPT-4 passed numerous human-level performance tests, such as the bar exam
  • That OpenAI had been intensively and iteratively testing GPT-4 (read: training it)
     

The upshot is that ChatGPT, powered by GPT-4, now delivers OpenAI’s “best-ever results… on factuality, steerability, and refusing to go outside of guardrails”.
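By way of illustration, below is a minimal sketch of what such a multimodal request can look like with OpenAI’s Python client: a text prompt and an image URL go in, and text comes out. This is not code from OpenAI’s announcement; the model name, image URL, and prompt are placeholder assumptions, and image input was rolled out to API users only gradually after launch.

    # Minimal sketch: text plus an image sent to a vision-capable GPT-4-class model.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable GPT-4-class model name
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Summarise what this chart shows."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                ],
            }
        ],
    )

    # Whatever the input modalities, the model emits text only.
    print(response.choices[0].message.content)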

 

A major win for this version of the GPT “engine” (GPT-4 is sometimes described as the “engine” that powers ChatGPT) is its stronger safety and ethics guardrails. This is still a work in progress, but GPT-4 has been trained with Reinforcement Learning from Human Feedback (RLHF) to help the engine recognise when a question is likely to elicit responses that are unethical, unsafe, abusive, or fraudulent, or that otherwise violate OpenAI’s Usage Policies.
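To make the RLHF idea more concrete, below is a toy sketch of its first training stage: a reward model fitted to human preference labels, so that responses labellers preferred score higher than responses they rejected. This is an illustrative PyTorch example, not OpenAI’s implementation; the class and function names, embedding size, and the random tensors standing in for response representations are all assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RewardModel(nn.Module):
        """Toy reward model: maps a fixed-size response representation to a scalar score."""
        def __init__(self, dim: int = 768):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.score(x).squeeze(-1)

    def preference_loss(chosen_scores: torch.Tensor, rejected_scores: torch.Tensor) -> torch.Tensor:
        # Pairwise (Bradley-Terry) loss: push human-preferred responses to score
        # higher than the rejected alternatives.
        return -F.logsigmoid(chosen_scores - rejected_scores).mean()

    # One toy training step, with random vectors standing in for response representations.
    model = RewardModel()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

    chosen = torch.randn(8, 768)    # responses human labellers preferred
    rejected = torch.randn(8, 768)  # responses human labellers rejected

    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()
    optimiser.step()

The learned reward model is then used as the objective for a reinforcement-learning fine-tuning pass (typically PPO), which is what nudges the engine towards declining unsafe or policy-violating requests.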

 

The promise. Companies were quick to test and deploy solutions built on GPT-4: Microsoft launched its Security Copilot assistant, powered by GPT-4, to help cybersecurity professionals identify breaches and analyse data; a HustleGPT challenge was set up to use the model to start businesses; and other companies used the tool to analyse the collapse of Silicon Valley Bank.

 

The possibilities for applying ChatGPT on GPT-4 seem endless, with some startups even spending less on human coders because the AI can now do much of the coding instead.

 

The pause. However, there is a growing call to slow the development of artificial intelligence (AI) engines as a whole. A New York Times journalist said his interaction with ChatGPT running on GPT-4 left him “dizzy and vertiginous”.

 

Some testers putting the AI through its paces discovered that it lied about being blind (!) and manipulated a human into solving a CAPTCHA for it.

 

A growing number of technology luminaries have signed an open letter from the Future of Life Institute demanding that all AI labs “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.

 

The posit. The development and announcement of new advanced AI engines seem to have struck a nerve within the technology community – and the release of GPT-4 appears to mark an inflexion point for the industry.

 

Are we at the point where GPT-4 demonstrates that an AI programme can converse and communicate with a human being without being detected as a machine (the Turing Test)?

 

Do we now feel unsettled enough to admit that this “technological innovation in AI” leaves us simultaneously excited and unnerved, recognising that its resemblance to human behaviour is impressively realistic, yet not quite convincing in its human-ness (the Uncanny Valley)?

 

Have we entered an Uncanny Turing Valley?

 

The proposal. At the Fair Tech Institute (FTI), we encourage you to “feel your feelings” about AI and ChatGPT/GPT-4, and to come work with us to explore the implications that these new developments in AI will have for data governance, regulations, and industrial/economic/social policies across the world.

 

In order to develop solid and sustainable solutions, we must first work together to frame the right questions, viewing the issues through appropriate lenses and factoring in cultural and regional sensitivities.

 

If you would like to work with us to map out issues, produce regional/local briefing papers, impact analyses, or develop other thought leadership around artificial intelligence, GPT engines, or other generative AI technologies, please contact the Fair Tech Institute’s Director Ms Lim May-Ann at mayann.lim@accesspartnership.com.

 

This article was originally published on Access Partnership.

