When Google's Gemini chatbot cranked out images of Black and Asian Nazi soldiers, it was seen as a warning about the power artificial intelligence can give tech titans.
Google CEO Sundar Pichai last month slammed as "completely unacceptable" errors by his company's Gemini AI app, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.
Social media users mocked and criticized Google for the historically inaccurate images, such as those showing a Black female US senator from the 1800s, when the first such senator was not elected until 1992.
"We definitely messed up on the image generation," Google co-founder Sergey Brin said at a recent AI "hackathon," adding that the company should have tested Gemini more thoroughly.
People interviewed at the popular South by Southwest arts and technology festival in Austin said the Gemini stumble highlights the inordinate power a handful of companies have over the artificial intelligence platforms that are poised to change the way people live and work.
"Essentially, it was too 'woke,'" said Joshua Weaver, a lawyer and tech entrepreneur, meaning Google had gone overboard in its effort to project inclusion and diversity.
Google quickly corrected its