Are we now seeing AI generated answers to questions on this forum?

Are we now seeing AI generated answers to questions on this forum and does it matter?

  • I haven't a clue.  Do you believe there are AI generated answers?   

    PS - not too sure I'd recognise one if it hit me in the face. :-)

  • Yes, AI-generated answers are now part of most forum landscapes. It only matters if you care more about who wrote the answer than how helpful it is!
  • Btw my previous answer was generated by AI but I do tend to agree with it :-)
  • For the record, I've never used AI to generate an answer on the forums. I'm just naturally verbose.

    And yes, it does matter. We had a recent case in which someone posted (with all due transparency) a generated letter notifying staff that they had been placed at risk of redundancy. It had an excellent, compassionate tone but, when you stripped away the bells and whistles, it didn't actually tell the recipient that they were being placed at risk of redundancy - a fundamental error. It's not an error a human couldn't have made, but when we outsource our brains to these systems we run the very real risk - as many people have when adapting to GPS maps in their cars - of assuming that the computer knows best.

    As I've previously said on similar topics: the products currently being marketed as "AI" by the big technology companies are by no means intelligent and it is a very good idea to think of them as "generative systems" or "algorithmic content" and to consciously exclude the word "intelligence" from any discussion involving this technology.

    I am aware that people such as Robert Peston and our former Prime Minister seem to have developed the idea that algorithmic content will usher in some new kind of industrial revolution and transform our working methods. But the reality is that the current systems are a very long way from being able to do the things their (financially invested) promoters suggest, and there are a number of major obstacles to their ever being able to do those things. And even if we could (or should) overcome those obstacles, there is an enormous risk inherent in allowing a small number of self-interested, unregulated technology companies and their billionaire investors to have unparalleled influence and control over how whole economies do business.
  • Unfortunately, many people hold AI and autonomous vehicles to much higher standards than humans, which is a common mistake. AI is a tool that serves as a useful starting point for producing early drafts. Like any tool, it requires additional skill to refine the output. In assessing AI-generated content, one should consider the credibility of the person posting it.

    While AI can err, humans are arguably prone to even more mistakes, and many individuals on forums err without the aid of AI or other tools. Yet this doesn't mean we should dismiss all human-generated content.

    AI-generated responses are valuable, but they should be reviewed and edited by humans to ensure they are accurate, contextual, and relevant. The combination of AI assistance with human expertise can lead to a powerful synergy on internet forums.

    In many cases, it's increasingly difficult to distinguish between AI and human-generated content. As recent examples have shown, AI-generated content can sometimes surpass that produced by students.

  • Yes I think we are and it irks me because I often find the answers generated by AI to be really unhelpful and sometimes downright dangerous!
  • An interesting paper published a few months ago indicated that an AI had passed the Turing Test for the first time: arxiv.org/.../2405.08007
  • The autonomous vehicle standards are rather strange; the current evidence is that humans are rather awful at driving. The UK currently has a killed/seriously injured rate of about 30,000 a year, yet if a Tesla hits another car it makes the headlines.
  • Agreed, the AI hallucinations can range from curious to utterly bizarre.
  • The Turing Test has long been understood by both programmers and philosophers of intelligence to be a very low bar that says more about the ability of humans to detect humans in the context of a short, text-based dialogue with a disembodied entity than it says about the "intelligence" of machines participating in the test.

    Large Language Models (LLMs) have been capable of passing Turing Tests for at least the last four years. That doesn't make them intelligent.

    Douglas Hofstadter did the work back in the 70s (winning a Pulitzer Prize for his seminal work, *Gödel, Escher, Bach: An Eternal Golden Braid*), both analysing what intelligence is, devoid of human assumptions, and predicting the conditions under which it could be expected to emerge - predictions that have held true ever since. The processing power needed for Artificial General Intelligence (AGI) simply doesn't exist on the entire planet.

    I've been working adjacent to LLMs since long before they hit the public consciousness, because they have been disrupting the market for commercial translation for over a decade and, as Keith rightly says, they are useful tools when used as an adjunct to human intelligence. But this idea that they are "artificial intelligence" when they are no such thing is what led far too many companies to decide that the expense of a human translator could be jettisoned in favour of automatic translation. Whilst LLMs have vastly improved in capacity thanks mostly to ripping off other people's intellectual property, they have acquired not one jot more intelligence in that time.

    We laughed at people who thought Google Translate alone would give you a foreign-language menu or coffee maker instructions or public notices. But it is exactly the same mentality that is leading gullible business leaders to think that they can hand over their customer service responses, software programming, legal writing or HR operations to an "Artificial Intelligence".