For the record, I've never used AI to generate an answer on the forums. I'm just naturally verbose.
And yes, it does matter. We had a recent case in which someone posted (with all due transparency) a generated letter notifying staff that they had been placed at risk of redundancy. It had an excellent, compassionate tone, but when you stripped away the bells and whistles it didn't actually tell the recipient that they were being placed at risk of redundancy - a fundamental error. A human could have made the same mistake, but when we outsource our brains to these systems we run the very real risk - as many people have found when adapting to GPS maps in their cars - of assuming that the computer knows best.
As I've previously said on similar topics: the products currently being marketed as "AI" by the big technology companies are by no means intelligent. It is a very good idea to think of them as "generative systems" or "algorithmic content", and to consciously exclude the word "intelligence" from any discussion of this technology.
I am aware that people such as Robert Peston and our former Prime Minister seem to have developed the idea that algorithmic content will usher in some new kind of industrial revolution and transform our working methods. But the reality is that the current systems are a very long way from being able to do the things their (financially invested) promoters suggest, and there are a number of major obstacles to their ever being able to do those things. And even if we could (or even should) overcome those obstacles, there is an enormous risk inherent in allowing a small number of self-interested, unregulated technology companies and their billionaire investors to have unparalleled influence and control over how whole economies do business.
The Turing Test has long been understood by both programmers and philosophers of intelligence to be a very low bar - one that says more about the ability of humans to detect other humans in a short, text-based dialogue with a disembodied entity than it says about the "intelligence" of the machines taking part.
Large Language Models (LLMs) have been capable of passing Turing Tests for at least the last four years. That doesn't make them intelligent.
Douglas Hofstadter did the work back in the 70s (winning a Pulitzer Prize for his seminal work, *Gödel, Escher, Bach: An Eternal Golden Braid*), both analysing what intelligence is, devoid of human assumptions, and predicting the conditions under which it could be expected to emerge - predictions that have held true ever since. The processing power needed for Artificial General Intelligence (AGI) simply doesn't exist on the entire planet.
I've been working adjacent to LLMs since long before they hit the public consciousness, because they have been disrupting the market for commercial translation for over a decade and, as Keith rightly says, they are useful tools when used as an adjunct to human intelligence. But the idea that they are "artificial intelligence", when they are no such thing, is what led far too many companies to decide that the expense of a human translator could be jettisoned in favour of automatic translation. Whilst LLMs have vastly improved in capacity - thanks mostly to ripping off other people's intellectual property - they have acquired not one jot more intelligence in that time.
We laughed at people who thought Google Translate alone would be enough to produce a foreign-language menu, coffee maker instructions or public notices. But it is exactly the same mentality that is now leading gullible business leaders to think they can hand over their customer service responses, software programming, legal writing or HR operations to an "Artificial Intelligence".
I am sure you are right that an LLM has been capable of passing the Turing Test, but as far as I know it was only this year that one did. Perhaps this is like John Cage's 4'33": four minutes and thirty-three seconds of no music. People said anyone could have written that; he agreed, but added that no one had.
The misnomer of "artificial intelligence" seems to cause a lot of debate, and this is a good thing. Pondering the meaning of intelligence and how it manifests sounds like a worthy area of study. However, it is not unusual for us to misname things: we all refer to "fish", yet fish are not a taxonomic group.
Vast amounts of money are pouring into quantum computing from organisations and governments, and so far it has achieved very little, but the money keeps flowing because many are too scared of being left behind in case it does start to work. As an aside, there are currently two different operating models, hot and cold. The cold computers work at a fraction of a degree above absolute zero, whereas the "hot" ones work at a whole degree above absolute zero!
The reality is that I am poorly equipped to give a meaningful response to your points; I have a vague interest, but that is about it. Maybe these fancy systems will change the world, or maybe they won't. One of the consistent things about humans is just how awful we are at predicting the future.