ChatGPT is that while it can look reasonably convincing and persuasive, it is often not quite right. Here’s another example. I had this exchange with ChatGPT just now: Me: “My bedroom has three people in it. I walk in to say hello. How many of us are there?” ChatGPT: “If your bedroom initially had three people in it, and then you walk in to say hello, there would still be three people in the room. You walking into the room does not change the number of people who were already there.” As you can see, it’s a perfectly constructed answer that is also totally wrong: the question asks how many of “us” there are, so the correct count is four, the three already in the room plus me. It is an answer you would never get from a human.
So what happened? As Doug Lenat and Gary Marcus explained in a recent paper (“Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc”, https://arxiv.org/pdf/2308.04445.pdf, 31 July 2023), ChatGPT’s failure here is one of deduction. “A trustworthy AI,” they write, “should be able to perform the same types of deductions as people do, as deeply as people generally reason.” And in fact, it’s not just deduction. Lenat and Marcus list 16 different “desiderata” that they believe “a general AI which is trustworthy” must have.
Deduction is one; explanation, pro and con arguments, and analogy are three more. As you can tell, Lenat and Marcus set great store by that word “trustworthy”. For ChatGPT to be truly intelligent in a human sense, you have to be able to trust its responses just as you would a human’s.