There’s a lot of talk about how AI can get facts wrong. That’s fair, but in my experience it’s correct most of the time. Even when it’s slightly off, there’s usually some useful truth in the answer. Much more frustrating are voice assistants that can’t even begin to give an answer.

Matt Kaul

But how does one judge if it’s right or not?

I think the challenge is the appearance of authority, which makes people implicitly trust it more, whether or not it’s deserving on a given topic.

Chris Handy

most people are making sh*t up all the time. I bet it's better than the average person. :)

Kimberly Hirsh

For me, basic facts aren't as much a concern as when it fabricates research.

rom

That is great IF you know whether it is correct, but if you don't and it hallucinates, then that is the problem.

Manton Reece

@KimberlyHirsh Oh yeah. Probably no one should cite AI as a reference source on anything. I like what Bing is doing with "footnotes" to actual sources, if you need to do your own research.

Devon Greene

It's nice to see that attribution to primary-ish sources.

Matt Huyck

Lack of source attribution is the fatal flaw for this application of LLMs. If you have a way to verify the answer you get then it becomes potentially useful, but there remains an ethical taint to the tool not giving credit to its sources.

Kimberly Hirsh

What I mean is those specific footnotes, at least in the case of ChatGPT, are often fabricated.
