Google's generative AI product Gemini has drawn angry responses alleging inbuilt bias, which may not be the case. The problem with inaccurate renditions of historical faces, including a 'less controversial' image of a Black Elon Musk, could be more a function of undue haste in bringing Gemini to market than of any desire to subvert history, as conspiracy theorists allege. Technology products undergo continuous refinement over their life cycle, most of which occurs after they have been released commercially.
Generative AI is still a new technology, and expectations of robustness from consumers and lawmakers may be premature. Such expectations could provoke precisely the kind of reaction AI does not need at this stage of its development, merely adding to the uncertainty around deploying a technology with the potential to transform wide swathes of society.
The market is a better assessor of bias in AI output than any externally imposed yardstick.
Google and its competitors will use this setback in the race to polish their algorithms till an acceptable version emerges. So far, there is no assurance that bias can be eliminated by machines fed on information created by humans. AI, at its core, remains an exercise in probabilistic determination.
The derived intelligence, such as it is claimed to be, is governed by statistical modelling, which always carries a margin of error. Human intervention to weed out inherent biases in the content AI trains on does not provide a fail-safe solution either. At best, humans can correct, and in Google's case, overcorrect.
The pushback against inaccurate AI will take the form of defining no-go areas till computing power can improve mathematical modelling to the point where human intervention is minimised.