reality is less idyllic. Digital platforms often reflect the world's imperfections, complete with prejudices, biases, and toxic elements. Consequently, any AI system trained on such a diverse array of digital media is likely to inherit an average representation of these flaws.
Luckily, this flaw in training AI systems was identified early, and training came to involve a human in the loop, which was expected to substantially reduce such biases and toxicity, if not remove them altogether. Involving humans in the training of these systems was meant to make them better, and this approach looked set to become the norm for the near future. Surprisingly, a study by the Swiss Federal Institute of Technology (EPFL) found that between one-third and half of the 44 gig workers involved in training an AI model were themselves using AI systems like ChatGPT to generate training data.
This finding challenges the foundational notion that human intervention can correct the errors of AI systems, particularly when these systems already generate imperfect results. If this trend holds true on a larger scale, where human trainers delegate their tasks to existing AI models, we risk perpetuating and even amplifying the biases, prejudices, and toxic elements already present in our society. Worse yet, we may fall into the trap of thinking that these AI-generated outputs are free from such flaws.
While AI models do an excellent job of understanding some things, they misunderstand a lot more, and when we use them to train new models, we only perpetuate their misunderstandings and errors. Until we have enough evidence to prove otherwise, one can assume that an LLM trained by other LLMs will carry forward all those misunderstandings and errors.