Last week saw the release of yet another artificial intelligence (AI) model, Gemini 1.5, Google’s much-awaited response to ChatGPT. As has now become the norm, on the day of its release, social media was saturated with gushing paeans about the features of this new model and how it represented an improvement over those that had come before. But that initial euphoria died down quickly.
Within days, reports started trickling in about the images generated by this new AI model, and how it was overcompensating so heavily to avoid the racial inaccuracies implicit in earlier models that its creations were woke to the point of ludicrousness—with some being downright offensive. In India, Gemini ran into problems of a somewhat different sort. When asked to opine on the political ideologies of our elected representatives, its answer provoked the ire of the establishment.
In short order, the government announced that the output of this AI model was in violation of Indian law and that attempts to evade liability by claiming the technology was experimental would not fly. There is little doubt that Gemini, as released, is far from perfect. Google has now acknowledged as much, pausing the generation of images of people while it works out how to improve accuracy.
Read more on livemint.com