Last week, the White House issued an executive order for command and control of artificial intelligence, strangely invoking the Defense Production Act. President Biden apparently got nervous after watching the latest “Mission: Impossible” movie. Really.
Added to the order was a mishmash of unhelpful notions like “advancing equity” and collective bargaining. It’s been only 11 months since OpenAI’s ChatGPT was released into the wild. It feels longer, with so many new players and capabilities beyond text, including open-source large language models that run on PCs and maybe on phones.
Will generative AI take our jobs? Cause human extinction? Create massive wealth? Get regulated out of existence? If you’re confused, you aren’t alone. Sam Altman, who runs OpenAI, thinks AI “will do more and more of the work that people now do.” Oh, and he says that by 2031 AI profits will allow $13,500 a year in universal basic income for every American, for doing nothing. On the other hand, he said, “We face serious risk. We face existential risk.” Venture capitalist Vinod Khosla told the crowd at WSJ Tech Live last month, “AI will be able to do, within 10 years, 80% of 80% of the jobs that we know of today.” That was probably true for the plow, the phone and Slack, but Mr. Khosla neglects to mention the 80% more and better jobs AI will create. How do you get your arms around this thing? Sometimes analogies help.
In Silicon Valley, venture capitalists often look at software or cloud-computing opportunities and put them in two buckets: painkillers or vitamins. I’ll add a bucket for psychedelics—more on that later. Painkillers lower a company’s costs by axing lower-end jobs.