Wow, that was fast. It took only eight months to find ChatGPT and generative artificial intelligence’s Achilles’ heel. No, it isn’t naive management, though OpenAI CEO Sam Altman did declare that AI poses a “risk of extinction” for humanity and practically begged a Senate Judiciary subcommittee to regulate AI, and then when the European Union actually did pass regulations, he threatened to pull out of the region.
And no, it’s not the White House, through which top AI execs did a walk of shame, allowing President Biden to extract from them a pledge of voluntary safety guardrails for AI. A pledge! Is that stronger or weaker than a pinkie promise? Nor was it the Federal Trade Commission’s recent fishing-expedition letter to OpenAI demanding minuscule details of everything the company does. Well, Mr. Altman certainly was asking for that. Given that the FTC has been on the losing end of so many cases recently, maybe the company should let ChatGPT write the answers.
Instead, AI’s Achilles’ heel consists of good old-fashioned media issues. The first is copyright. Tools like ChatGPT, Google’s Bard and Meta’s Llama are large language models, meaning they train up to a trillion parameters on everything they can get their servers on, including Wikipedia, the cesspool of snark formerly known as Twitter, and the racy Reddit of Roaring Kitty and GameStop fame. No wonder ChatGPT is so prone to hallucination.
Social-media companies now want to be paid and have restricted the scanning of their innards. Copyrighted snippets showing up in today’s search results are considered fair use. But can OpenAI or Google or Meta legally scan copyrighted material and create brand-new content from it, or is generative AI’s output considered …