Is Interacting With Generative AI Like Talking With Another Person?

AlixPartners

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges.

Welcome to Part 2 of Meaningful dialogue after ChatGPT. If this is the first article in the series you have read, I suggest you go back to Part 1 for context.

Gadamer says that humans all have some shared characteristics in conversation: they have a personal history that results in biases; they bring a viewpoint that can change (through a "fusion of horizons"); and they come to the conversation with a mixture of morals and emotions that he calls "virtues".

The question, then, is whether generative AI has any or all of these human attributes.

Personal history and bias?

Generative AI does have a kind of "history" represented by its training data, but it cannot trace where that data came from. In this respect, it is like a synthetic Jason Bourne (from The Bourne Identity) who knows things but doesn't know how or why. GPT-4's working memory amounts to about 50 pages of text, and it is restricted to previous conversations with the same user. In humans, this is akin to short-term memory; what generative AI lacks is the equivalent of long-term memory.
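To make the memory point concrete, here is a minimal sketch in Python (the generate() function is a hypothetical stand-in for any chat-model call, and MAX_CONTEXT_CHARS is an illustrative figure, not a real specification): the only thing the model ever "remembers" is the conversation history resent inside its limited context window on each turn.

```python
# Minimal sketch: a chat model's "memory" is just the text resent each turn.
# generate() is a hypothetical placeholder for any chat-completion call.

MAX_CONTEXT_CHARS = 100_000  # illustrative stand-in for "about 50 pages of text"

def generate(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned reply here.
    return "(model reply)"

conversation: list[str] = []  # the only "memory" the model ever sees

def ask(user_message: str) -> str:
    conversation.append(f"User: {user_message}")
    # Rebuild the prompt from recent history, trimmed to the context window.
    prompt = "\n".join(conversation)[-MAX_CONTEXT_CHARS:]
    reply = generate(prompt)
    conversation.append(f"Assistant: {reply}")
    return reply

# Anything that falls outside MAX_CONTEXT_CHARS is simply gone:
# there is no long-term memory to fall back on.
```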

Despite these limitations, it does manage to have prejudices. Commentators have pointed this out with disapproval, seemingly without considering that generative AI is trained on data from the internet, a space that amplifies societal biases more than any other.

Platform providers are promising to do better, but even if they manage to reduce the most grievous examples, Gadamer says prejudice is inevitable because objective, bias-free knowledge is a fantasy that emerged during the Enlightenment. If he is right, we will always live with bias in generative AI platforms: partly because it reflects our own, and partly because no one will ever agree on what is impartial (consider the persistent accusations made about the BBC from both left and right).

The related challenge of generative AI producing inaccurate or contradictory responses (known as hallucinations) is similar. Gadamer suggests there are no infallible sources of knowledge, so generative AI cannot be expected to be the first. Hallucinations are slowly being addressed (currently described as a "3% problem"), but they cannot be engineered out entirely. This means we can never trust generative AI blindly and will always need robust validation controls.
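By way of illustration only (the quote_is_grounded helper and the example strings below are my own assumptions, not anything described in this article), a basic validation control might simply check that a passage the model claims to quote actually appears in the source material before the answer is trusted:

```python
# Sketch of one simple validation control for model output (illustrative only).

def quote_is_grounded(model_quote: str, source_text: str) -> bool:
    """Return True only if the quoted passage really appears in the source."""
    return model_quote.strip().lower() in source_text.lower()

source = "The contract terminates on 31 March 2025 unless renewed in writing."
claimed_quote = "terminates on 31 March 2025"

if quote_is_grounded(claimed_quote, source):
    print("Quote verified against the source.")
else:
    print("Unverified quote: treat the answer as a possible hallucination.")
```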

Not having a long-term memory is a big problem for generative AI acting as a proxy for a human in a conversation. Ironically, the difficulties with bias and fallibility make it more like us.

Ability to change viewpoint?

When generative AI is trained, differences of view in the training data are noted but not evaluated. As such, generative AI has no opinions and makes no arguments of its own. This is a kind of passive relativism that could make it a very dull conversation partner (however, we will return to this point in a later article, because generative AI is an outstanding mimic).

To make matters worse, the current platforms cannot update themselves. They have to be retrained or fine-tuned by humans. In contrast, traditional AI is capable of building a representation of reality using a (Bayesian) statistical model through unsupervised learning. There are suggestions that Meta's Llama 3 and OpenAI's GPT-5 may address this limitation, but so far we have no details beyond an aspiration to enhance reasoning, planning, and memory.
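To illustrate the contrast in the paragraph above, here is a minimal worked example of Bayesian updating (a simple beta-binomial model, chosen by me purely for illustration): the statistical model revises its belief automatically as each new observation arrives, whereas a deployed generative model's weights stay fixed until a human retrains or fine-tunes it.

```python
# Minimal Bayesian updating sketch: the belief about a coin's bias is revised
# automatically as each new observation arrives (beta-binomial model).

alpha, beta = 1.0, 1.0  # uniform prior: no opinion about the coin yet

observations = [1, 1, 0, 1, 1, 0, 1]  # 1 = heads, 0 = tails

for outcome in observations:
    alpha += outcome        # heads strengthen the belief in "biased to heads"
    beta += 1 - outcome     # tails pull it back
    estimate = alpha / (alpha + beta)
    print(f"After {int(alpha + beta - 2)} flips: P(heads) is about {estimate:.2f}")

# A frozen generative model, by contrast, cannot revise itself like this:
# new information only reaches it through human-run retraining or fine-tuning.
```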

For now, this means that generative AI is unlike a person in that it has no opinions, cannot reason, and is unable to update its knowledge (so cannot fuse horizons).

Morality, emotion and virtue?

Finally, does generative AI bring its equivalent of morality and emotional engagement to the conversation?

Generative AI has been shown to be 97% accurate at providing empathetic responses to pre-prepared scenarios in a controlled test (much higher than humans), but it has no actual emotions. It also lacks any moral competence, meaning it isn't good or bad, doesn't care, and can't be held responsible. Generative AI is unable to enter a dialogue with any of the "virtues" that Gadamer considers essential to productive conversation because it lacks all the capabilities that underpin them.

Anthropomadness

In light of the above, why do we endlessly compare generative AI with humans? The answer lies in our psychology. Research has shown that we are more likely to trust technology that exhibits human characteristics.

In reality, AI is not human-like, and its purpose is often to complement us in terms of speed, accuracy, capacity, or survival. The difference is the point. Because generative AI can mimic human communication, it has blurred this distinction, which brings me to the original test for intelligent machines.

Alan Turing proposed that a machine could be declared intelligent if it was indistinguishable from a human in a text-based question-and-answer exchange. His work is remarkable in that it was published in 1950, five years before the term "Artificial Intelligence" existed and six years before the first working AI program. However, despite his prescience, I agree with John Searle that imitating human intelligence is very different from having a mind.

Which segues to my final point. There is a live debate about artificial general intelligence (AGI). The idea of AGI is that we measure AI against the benchmark of human cognitive ability. Some say we will reach this point in a few years, others decades. Elon Musk recently claimed it will happen next year or the one after.

However, the whole idea makes no sense to me. Human intelligence (measured by what the brain enables humans to do in total) also includes moral reasoning, emotional responses, survival instincts, embodiment, and consciousness. Since AGI excludes all these attributes, I am unsure what is "general" about AGI and why it is a meaningful benchmark. This seems to be another example of the obsession with human comparisons, even when they cannot be usefully made.

Conclusion

Gadamer helps us see that generative AI is not like a human in conversation. It has no knowledge of its origins, cannot reason, has no opinions, is amoral, and cannot exhibit emotions. Its greatest claims to human characteristics are bias and fallibility. Our tendency to project human-like characteristics onto technology has skewed the whole discussion on AI in unhelpful ways, including the current one on AGI. We need to step away from this illusion and evaluate AI bottom-up on its own terms.

That is my task for next week.

There is an academic version of this material available, but it is a tough read. It can be found here.

