Are we now seeing AI generated answers to questions on this forum?

Are we now seeing AI generated answers to questions on this forum and does it matter?

  • The autonomous vehicle standards are rather strange

    This, again, arises from the assumption that autonomous vehicles have human-like intelligence. They absolutely do not. When a human makes a mistake in a vehicle that leads to damage, injury or death, we can identify its source - a lack of attention, impaired judgement, lack of training or similar - which we can address through laws that enforce licensing and social standards for drivers. We can't eliminate these sources of error entirely, but we can minimise their occurrence and weigh the remaining risk against the benefits that arise from the use of motor vehicles.

    But when an autonomous vehicle makes a mistake *we don't know why*. The algorithms that dictate their behaviour are so complex that even the best programmers can only predict what they will do by putting them into practice and watching how they behave. Input goes in, output comes out. What happens in the middle is a total mystery.

    We test the algorithms in virtual environments, but the real world is infinitely more unpredictable and chaotic, so autonomous vehicles have to be tested there and, inevitably, some of those outcomes will result in accidents. But because we don't know how the outcome is generated, we can't "fix" it. All we can do is try to refine the algorithm and test it some more.

    There are so few autonomous vehicles on roads - compared to driven vehicles - that every single accident is an invaluable data point with which to keep training the algorithms in the hope that they can learn and refine their outputs.

    AI hallucinations

    This is another popular expression that we need to banish. These algorithmic systems don't hallucinate. The idea of them hallucinating only applies if you think they are perceiving and interpreting the world around them in the same way that we do, but they don't. Their outputs fail to match our perception of reality because they are simply the product of an algorithm. They aren't thinking and aren't intelligent. Not only do they not perceive the world, they don't even have the intelligence to know that the world is a thing.
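The "input goes in, output comes out" point above can be illustrated with a toy sketch (entirely hypothetical - the weights are arbitrary and this is not any real vehicle's code): even in a tiny fixed network, nothing in the numbers "explains" a decision, and the only way to characterise its behaviour is to probe it with inputs and watch what comes out.

```python
import math

# Arbitrary, illustrative weights for a toy two-input, two-hidden-unit network.
# Reading these numbers tells you nothing about *why* any decision is made.
W1 = [[0.9, -1.2], [0.4, 0.8]]
b1 = [0.1, -0.3]
W2 = [1.5, -0.7]
b2 = 0.2

def classify(x):
    # Hidden layer: weighted sums squashed through tanh.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # Output: another weighted sum, thresholded into a decision.
    score = sum(w * h for w, h in zip(W2, hidden)) + b2
    return "brake" if score > 0 else "continue"

# The only way to learn the model's behaviour is to test it:
for x in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, -5.0)]:
    print(x, "->", classify(x))
```

Real driving systems have millions or billions of such weights rather than a handful, which is why, as the post says, their behaviour can only be predicted by putting them into practice and observing the results.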
  • I am sure you are right that an LLM has been capable of passing the Turing Test, but as far as I know, it was only this year that one did. Perhaps this is like John Cage's 4′33″, four minutes and thirty-three seconds of no music. People said anyone could have written it; he agreed, but added that no one had.

    The misnomer of artificial intelligence seems to cause a lot of debate, and this is a good thing. Pondering the meaning of intelligence and how it manifests sounds like a worthy area of study. However, it is not unusual for us to misname things: we all refer to "fish", yet fish are not a taxonomic group.

    Vast amounts of money are pouring into quantum computing from organisations and governments. So far it has achieved very little, but the money keeps flowing because many are too scared to be left behind in case it does start to work. As an aside, there are currently two different operating models, hot and cold: the cold computers work at a fraction of a degree above absolute zero, whereas the hot ones work at about one degree above absolute zero!

    The reality is that I am poorly equipped to give a meaningful response to your points; I have a vague interest, but that is about it. Maybe these fancy systems will change the world or maybe they won't. One of the consistent things about humans is just how awful we are at predicting the future.
  • Just as an example, here's a story from 2014 about a model that passed a Turing Test:

    www.bbc.co.uk/.../technology-27762088
  • Happy to be corrected. A 30% pass rate seems very low to me, but if that counts as a pass then fair enough.