With all eyes on Meta Connect 2024, Mark Zuckerberg delivered the event's biggest announcement: the launch of Llama 3.2. The release marks Meta's entry into the world of open-source multimodal models. Here's a look into the fascinating world of Llama 3.2.
Llama 3.2 is Meta's in-house open-source model, designed to help developers build more advanced AI applications. On September 25, during the Meta event held in California, Zuckerberg said, "This is our first open source, multimodal model, and it's going to enable a lot of interesting applications that require visual understanding." He added that Llama 3.2 makes it easier for developers to build advanced artificial intelligence (AI) models; for example, it can power tools that analyze and explain long documents within minutes.
So, what is Llama 3.2 and how does it work? Llama 3.2 is an 'open-source multimodal' model family. It includes two vision models, with 11 billion and 90 billion parameters, and two lightweight text-only models, with 1 billion and 3 billion parameters. The smaller models can run on MediaTek, Qualcomm and other Arm hardware.
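For readers curious to experiment, the smaller text-only models can be loaded with common open-source tooling. The sketch below is illustrative rather than part of Meta's announcement: it assumes the Hugging Face transformers library is installed and that access to the gated meta-llama/Llama-3.2-1B checkpoint on the Hugging Face Hub has been granted.

```python
# Minimal sketch (assumptions noted above): loading the 1B text-only
# Llama 3.2 model through the Hugging Face `transformers` pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B",  # assumed checkpoint name on the Hub
)

# Ask the lightweight model to summarize a short passage.
result = generator(
    "Summarize in one sentence: Llama 3.2 adds vision models (11B, 90B) "
    "and lightweight text models (1B, 3B) that can run on Arm hardware.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

Because the 1B and 3B variants are small enough for on-device use, the same kind of workload could in principle run locally on supported Qualcomm, MediaTek and other Arm-based chips rather than in the cloud.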
In addition, the model can help developers create augmented reality (AR) applications or visual search engines. The tech giant further claimed that Llama 3.2 is easy to use and provides a smooth user experience. Meta is not, however, the first tech giant to create its own multimodal platform. Last year, other AI developers such as