There’s a lot of talk about how AI can get facts wrong. That’s fair, but in my experience it’s correct most of the time. Even when it’s slightly off, there’s usually some useful truth in the answer. Much more frustrating are voice assistants that can’t even begin to give an answer.

But how does one judge if it’s right or not?
I think the challenge is the appearance of authority, which makes people implicitly trust it more, whether or not that trust is deserved on a given topic.

Most people are making sh*t up all the time. I bet it's better than the average person. :)

For me, basic facts aren't as much of a concern as when it fabricates research.

That is great IF you already know whether the answer is correct, but if you don't and it hallucinates, that is the problem.

@KimberlyHirsh Oh yeah. Probably no one should cite AI as a reference source on anything. I like what Bing is doing with "footnotes" to actual sources, so you can do your own research if you need to.

Lack of source attribution is the fatal flaw for this application of LLMs. If you have a way to verify the answer you get, then it becomes potentially useful, but there remains an ethical taint to a tool that doesn't give credit to its sources.

What I mean is that those specific footnotes, at least in the case of ChatGPT, are often fabricated.
