Cognitive scientist, author and entrepreneur Gary Marcus was an early and vocal critic of large language models, arguing their limitations were far greater than many people thought. Lately, he has been making the case that LLMs are far riskier, too, and must be regulated.
In September, Marcus published “Taming Silicon Valley: How We Can Ensure That AI Works for Us.” The book argues that the technological risks and moral problems raised by today’s AI are deeply intertwined. Marcus wrote it in roughly two months because, he said, there is an urgent need for greater skepticism in the public conversation about AI. “The hype has led the average person to think these things are magic, but they’re not,” said Marcus, a professor emeritus at New York University who founded Geometric Intelligence in 2014.
Geometric Intelligence was a machine learning company that developed new techniques for learning more from modest amounts of data. It was sold to Uber, where Marcus directed AI research for a time. “One of the craziest things that I mentioned in my book is that some senators in Congress tried to pass a law saying you couldn’t use AI to make nuclear weapon decisions without a human in the loop, and they couldn’t pass that,” Marcus said.
He argues for a regulatory framework that would address such challenges, and much more. There are signs that AI oversight is on the agenda of the next administration. President-elect Trump is considering naming an AI czar in the White House, Axios reported.