Are we now seeing AI generated answers to questions on this forum?

Are we now seeing AI generated answers to questions on this forum and does it matter?

  • For the record, I've never used AI to generate an answer on the forums. I'm just naturally verbose.

    And yes, it does matter. We had a recent case in which someone posted (with all due transparency) a generated letter notifying staff that they had been placed at risk of redundancy. It had an excellent, compassionate tone but, when you stripped away the bells and whistles, it didn't actually tell the recipient that they were being placed at risk of redundancy - a fundamental error. It's not an error a human couldn't have made, but when we outsource our brains to these systems we run the very real risk - as many people have when adapting to GPS maps in their cars - of assuming that the computer knows best.

    As I've previously said on similar topics: the products currently being marketed as "AI" by the big technology companies are by no means intelligent and it is a very good idea to think of them as "generative systems" or "algorithmic content" and to consciously exclude the word "intelligence" from any discussion involving this technology.

    I am aware that people such as Robert Peston and our former Prime Minister seem to have developed the idea that algorithmic content will usher in some new kind of industrial revolution and transform our working methods. But the reality is that the current systems are a very long way from being able to do the things their (financially invested) promoters suggest, and there are a number of major obstacles to their ever being able to do those things. And even if we could (or should) overcome those obstacles, there is an enormous risk inherent in allowing a small number of self-interested, unregulated technology companies and their billionaire investors to have unparalleled influence and control over how whole economies do business.
  • Unfortunately, many people hold AI and autonomous vehicles to much higher standards than humans, which is a common mistake. AI is a tool that serves as a useful starting point for producing early drafts. Like any tool, it requires additional skill to refine the output. In assessing AI-generated content, one should consider the credibility of the person posting it.

    While AI can err, humans are also prone to mistakes (probably more of them), and many individuals on forums err without the aid of AI or other tools. That doesn't mean we should dismiss all human-generated content.

    AI-generated responses are valuable, but they should be reviewed and edited by humans to ensure they are accurate, contextual, and relevant. The combination of AI assistance with human expertise can lead to a powerful synergy on internet forums.

    In many cases, it's increasingly difficult to distinguish between AI and human-generated content. As recent examples have shown, AI-generated content can sometimes surpass that produced by students.

  • The autonomous vehicle standards are rather strange; the current evidence is that humans are rather awful at driving. The UK currently has a killed/seriously injured rate of about 30,000 a year, yet if a Tesla hits another car it makes the headlines.
  • The autonomous vehicle standards are rather strange

    This, again, arises from the assumption that autonomous vehicles have human-like intelligence. They absolutely do not. When a human makes a mistake in a vehicle that leads to damage, injury or death, we can identify its source - a lack of attention, impaired judgement, lack of training or similar - which we can address through laws that enforce licensing and social standards for drivers. We can't eliminate these sources of error entirely, but we can minimise their occurrence and weigh it against the benefits that arise from the use of motor vehicles.

    But when an autonomous vehicle makes a mistake *we don't know why*. The algorithms that dictate their behaviour are so complex that even the best programmers can only predict what they will do by putting them into practice and watching how they behave. Input goes in, output comes out. What happens in the middle is a total mystery.

    We test the algorithms in virtual environments, but the real world is infinitely more unpredictable and chaotic, so autonomous vehicles have to be tested there and, inevitably, some of those outcomes will result in accidents. But because we don't know how the outcome is generated, we can't "fix" it. All we can do is try to refine the algorithm and test it some more.

    There are so few autonomous vehicles on roads - compared to driven vehicles - that every single accident is an invaluable data point with which to keep training the algorithms in the hope that they can learn and refine their outputs.

    AI hallucinations

    This is another popular expression that we need to banish. These algorithmic systems don't hallucinate. The idea of them hallucinating only applies if you think they are perceiving and interpreting the world around them in the same way that we do, but they don't. Their outputs fail to match our perception of reality because they are simply generating outputs through an algorithm. They aren't thinking and aren't intelligent. Not only do they not perceive the world, they don't even have the intelligence to know that the world is a thing.