There is a rather telling story behind why Google’s ‘ChatGPT-killer’ was christened Gemini. Jeff Dean, chief scientist at Google and the former head of Google Brain, explained on X (formerly Twitter): “The Gemini effort came about because we had different teams working on language modelling, and we knew we wanted to start to work together. The twins are the folks in the legacy Brain team (many from the PaLM/PaLM-2 effort) and the legacy DeepMind team (many from the Chinchilla effort) that started to work together on the ambitious multimodal model project we called Gemini, eventually joined by many people from all across Google.” (bit.ly/46XO1sk).
Gemini is Latin for ‘twins’, and so it is an apt name. It also hints at Google’s struggle to create a Large Language Model (LLM) rivalling OpenAI’s GPT-4. It sometimes surprises people that Google ‘invented’ generative AI and LLMs, or, more precisely, the Transformer architecture that underpins this technology.
The concept behind Transformers was unveiled in the 2017 paper ‘Attention Is All You Need’, written largely by researchers at Google Brain. Google also owns the deep learning powerhouse DeepMind, making it a leader in Artificial Intelligence (AI). But it chose not to take the Transformer discovery forward, and so it was OpenAI that picked up the ball and carried it to the touchline.
The reason could be the Innovator’s Dilemma: Google feared cannibalizing its Search business or the reputational risk of LLMs. It could also have been fraternal rivalry between the two mighty AI twins, Jeff Dean’s Google Brain and Demis Hassabis’ DeepMind. Eventually, Google merged the two AI units, made Hassabis the leader, and set out to build Gemini.