
Are we now seeing AI-generated answers to questions on this forum?

Are we now seeing AI-generated answers to questions on this forum, and does it matter?

  • I haven't a clue. Do you believe there are AI-generated answers?

    PS - not too sure I'd recognise one if it hit me in the face. :-)

  • Yes, AI-generated answers are now part of most forum landscapes. It only matters if you care more about who wrote the answer than how helpful it is!
  • In reply to Keith:

    Btw my previous answer was generated by AI but I do tend to agree with it :-)
  • For the record, I've never used AI to generate an answer on the forums. I'm just naturally verbose.

    And yes, it does matter. We had a recent case in which someone posted (with all due transparency) a generated letter notifying staff that they had been placed at risk of redundancy. It had an excellent, compassionate tone but, when you stripped away the bells and whistles, didn't actually tell the recipient that they were being placed at risk of redundancy - a fundamental error. It's a mistake a human could have made too, but when we outsource our brains to these systems, we run the very real risk of assuming - as many people have when adapting to GPS maps in their cars - that the computer knows best.

    As I've previously said on similar topics: the products currently being marketed as "AI" by the big technology companies are by no means intelligent and it is a very good idea to think of them as "generative systems" or "algorithmic content" and to consciously exclude the word "intelligence" from any discussion involving this technology.

    I am aware that people such as Robert Peston and our former Prime Minister seem to have developed the idea that algorithmic content will usher in some new kind of industrial revolution and transform our working methods. But the reality is that the current systems are a very long way from being able to do the things their (financially invested) promoters suggest, and there are a number of major obstacles to their ever being able to do those things. And even if we could (or even should) overcome those obstacles, there is an enormous risk inherent in allowing a small number of self-interested, unregulated technology companies and their billionaire investors to have unparalleled influence and control over how whole economies do business.
  • In reply to Robey:

    Unfortunately, many people hold AI and autonomous vehicles to much higher standards than humans, which is a common mistake. AI is a tool that serves as a useful starting point for generating early drafts. Like any tool, it requires additional skill to refine the output. In assessing AI-generated content, one should consider the credibility of the person posting it.

    While AI can err, humans are arguably prone to even more mistakes, and many individuals on forums err without the aid of AI or other tools. However, this doesn't mean we should dismiss all human-generated content.

    AI-generated responses are valuable, but they should be reviewed and edited by humans to ensure they are accurate, contextual, and relevant. The combination of AI assistance with human expertise can lead to a powerful synergy on internet forums.

    In many cases, it's increasingly difficult to distinguish between AI and human-generated content. As recent examples have shown, AI-generated content can sometimes surpass that produced by students.

  • Yes, I think we are, and it irks me because I often find the answers generated by AI to be really unhelpful and sometimes downright dangerous!
  • In reply to Robey:

    An interesting paper was published a few months ago which indicated that an AI had passed the Turing Test for the first time: arxiv.org/.../2405.08007
  • In reply to Keith:

    The autonomous vehicle standards are rather strange; the current evidence is that humans are rather awful at driving. The UK currently has a killed/seriously injured (KSI) rate of about 30,000 a year, yet if a Tesla hits another car it makes the headlines.
  • In reply to Sophie:

    Agreed, the AI hallucinations can range from curious to utterly bizarre.
  • In reply to Steven:

    The Turing Test has long been understood by both programmers and philosophers of intelligence to be a very low bar that says more about the ability of humans to detect humans in the context of a short, text-based dialogue with a disembodied entity than it says about the "intelligence" of machines participating in the test.

    Large Language Models (LLMs) have been capable of passing Turing Tests for at least the last four years. That doesn't make them intelligent.

    Douglas Hofstadter did the work back in the 70s (winning a Pulitzer Prize for his seminal work, *Gödel, Escher, Bach: An Eternal Golden Braid*), both analysing what intelligence is, devoid of human assumptions, and predicting the conditions under which it could be expected to emerge - predictions that have held true ever since. The extent of processing power needed for Artificial General Intelligence (AGI) simply doesn't exist on the entire planet.

    I've been working adjacent to LLMs since long before they hit the public consciousness, because they have been disrupting the market for commercial translation for over a decade and, as Keith rightly says, they are useful tools when used as an adjunct to human intelligence. But this idea that they are "artificial intelligence" when they are no such thing is what led far too many companies to decide that the expense of a human translator could be jettisoned in favour of automatic translation. Whilst LLMs have vastly improved in capacity thanks mostly to ripping off other people's intellectual property, they have acquired not one jot more intelligence in that time.

    We laughed at people who thought Google Translate alone would give you a foreign-language menu or coffee maker instructions or public notices. But it is exactly the same mentality that is leading gullible business leaders to think that they can hand over their customer service responses, software programming, legal writing or HR operations to an "Artificial Intelligence".
  • In reply to Steven:

    "The autonomous vehicle standards are rather strange"

    This, again, arises from the assumption that autonomous vehicles have human-like intelligence. They absolutely do not. When a human makes a mistake in a vehicle that leads to damage, injury or death, we can identify its source - a lack of attention, impaired judgement, lack of training or similar - which we can address through laws that enforce licensing and social standards for drivers. We can't eliminate these sources of error entirely, but we can minimise their occurrence and weigh them against the benefits that arise from the use of motor vehicles.

    But when an autonomous vehicle makes a mistake *we don't know why*. The algorithms that dictate their behaviour are so complex that even the best programmers can only predict what they will do by putting them into practice and watching how they behave. Input goes in, output comes out. What happens in the middle is a total mystery.

    We test the algorithms in virtual environments, but the real world is infinitely more unpredictable and chaotic, so autonomous vehicles have to be tested there and, inevitably, some of those outcomes will result in accidents. But because we don't know how the outcome is generated, we can't "fix" it. All we can do is try to refine the algorithm and test it some more.

    There are so few autonomous vehicles on roads - compared to driven vehicles - that every single accident is an invaluable data point with which to keep training the algorithms in the hope that they can learn and refine their outputs.

    "AI hallucinations"

    This is another popular expression that we need to banish. These algorithmic systems don't hallucinate. The idea of them hallucinating only applies if you think they are perceiving and interpreting the world around them in the same way that we do, but they don't. Their outputs fail to match our perception of reality because they are just generating outputs through an algorithm. They aren't thinking and aren't intelligent. Not only do they not perceive the world, they don't even have the intelligence to know that the world is a thing.
  • In reply to Robey:

    I am sure you are right that an LLM has been capable of passing the Turing Test, but as far as I know it was only this year that one did. Perhaps this is like John Cage's 4'33", four minutes and thirty-three seconds of no music: people said anyone could have written it; he agreed, but added that no one had.

    The misnomer of "artificial intelligence" seems to cause a lot of debate, and this is a good thing. Pondering the meaning of intelligence and how it manifests sounds like a worthy area of study. However, it is not unusual for us to misname things: we all refer to "fish", yet fish are not a taxonomic group.

    Vast amounts of money are pouring into quantum computing from organisations and governments. So far it has achieved very little, but money keeps flowing because many are too scared of being left behind in case it does start to work. As an aside, there are currently two different operating models, hot and cold: the cold computers work at a fraction of a degree above absolute zero, whereas the hot ones work at one degree above absolute zero!

    The reality is I am poorly equipped to give a meaningful response to your points; I have a vague interest but that is about it. Maybe these fancy systems will change the world, or maybe they won't. One of the consistent things about humans is just how awful we are at predicting the future.
  • In reply to Steven:

    interesting
  • In reply to Steven:

    Just as an example, here's a story from 2014 about a model that passed a Turing Test:

    www.bbc.co.uk/.../technology-27762088
  • In reply to Robey:

    Happy to be corrected. A 30% pass rate seems very low to me, but if that counts as a pass then fair enough.