Did you hear? Google has been accused of a secret vendetta against Caucasians. Elon Musk posted about this conspiracy theory on X more than 150 times this week, all in response to portraits generated by Google's new AI chatbot Gemini. Like Ben Shapiro, Musk was reacting to the diversity of its images: Female popes! African-looking Nazis! Indigenous founding fathers! Google apologized and paused the feature.
Clearly, the company did a shoddy job of over-correcting technology that had a racist skew. But no, CEO Sundar Pichai wasn't infected by a woke mind virus. Rather, he's growth-obsessed.
Google got into trouble back in 2015 when its photo-tagging tool labelled some Black people as gorillas; it shut down the feature. Then, three years ago, it fired two of its leading AI ethics researchers. These were the folks whose job was to make sure Google's tech was fair in how it depicted women and minorities.
Not overly diverse like Gemini, but equitable and balanced. When Gemini started churning out images of non-Caucasian German World War II soldiers, it was a sign not that the ethics team had become more powerful, but that it was being ignored amid Google's race against Microsoft and OpenAI to dominate generative web search. Proper investment in that work would have led to a smarter approach to diversity.
People who test artificial intelligence (AI) systems for safety are outnumbered 30-to-1 by those whose job is to make the systems bigger and more capable, according to an estimate by the Center for Humane Technology. They are often shouting into a void, told not to get in the way. Google's earlier chatbot Bard was rushed out so hastily that it made a factual error in its own marketing demo.