Groq Causes A Sensation On Social Media With Its Extraordinary Response Speed
Groq, the latest artificial intelligence model to surface (photo: X @GroqInc)

Groq, the latest artificial intelligence model to surface, is stealing attention on social media with its response speed and a new technology that may eliminate the need for GPUs.

Groq became an overnight sensation after its public benchmark tests went viral on the social media platform X, revealing computing and response speeds that outperform the popular artificial intelligence chatbot ChatGPT.

This is because the team behind Groq developed its own application-specific integrated circuit (ASIC) chip for large language models (LLMs), allowing Groq to generate around 500 tokens per second. By comparison, ChatGPT-3.5, the publicly available version of the model, can only generate about 40 tokens per second.
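For a sense of what that throughput gap means in practice, here is a minimal Python sketch of the arithmetic, assuming a hypothetical 1,000-token response (the response length is an illustrative value, not part of the reported benchmark):

```python
# Illustrative arithmetic only, based on the throughput figures reported
# above (~500 tokens/s for Groq, ~40 tokens/s for ChatGPT-3.5).
# The 1,000-token response length is an assumed example value.
RESPONSE_TOKENS = 1000

for model, tokens_per_second in [("Groq LPU", 500), ("ChatGPT-3.5", 40)]:
    seconds = RESPONSE_TOKENS / tokens_per_second
    print(f"{model}: {seconds:.1f} s for {RESPONSE_TOKENS} tokens")
```

At those rates, a response Groq finishes in about two seconds would take ChatGPT-3.5 roughly 25 seconds.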

Groq Inc, the developer of this model, claims to have created the first language processing unit (LPU), on which it runs its model, rather than the graphics processing units (GPUs) usually used to run artificial intelligence models.

However, the company behind Groq is not new. It was founded in 2016, when the Groq trademark was registered. Last November, when Elon Musk's artificial intelligence model, the similarly named Grok, spelled with a "k", was gaining attention, the developers behind the original Groq published a blog post calling Musk out over his choice of name.

"We can understand why you want to take our name. You love fast things (rockets, hyperloops, one-letter company names) and our product, Groq LPU Inference Engine, is the fastest way to run big language models (LLMs) and other generative artificial intelligence applications. However, we have to ask you to choose another name, and fast," said executive Groq.

Since Groq went viral on social media, neither Musk nor the Grok page on X has commented on the similarity between the two models' names.

However, many users on the platform have started comparing the LPU-based model with other GPU-based models.

A user working in artificial intelligence development called Groq a "big change" for products requiring low latency, which refers to the time it takes to process a request and receive a response.
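As a rough illustration of what "latency" measures here, the Python sketch below times a single request end to end; `send_request` is a hypothetical placeholder, not a real Groq or ChatGPT API call:

```python
import time

def measure_latency(send_request):
    """Measure end-to-end latency of a single request, in seconds.

    `send_request` stands in for whatever call sends a prompt to a model
    endpoint and blocks until the full response arrives; it is a
    hypothetical placeholder, not an actual vendor API.
    """
    start = time.perf_counter()
    send_request()
    return time.perf_counter() - start

# Example with a dummy request that simply sleeps for 0.2 seconds:
latency = measure_latency(lambda: time.sleep(0.2))
print(f"end-to-end latency: {latency * 1000:.0f} ms")
```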

Another user wrote that Groq's LPU could offer a "big increase" over GPUs in meeting the needs of future artificial intelligence applications, and said it might also prove a good alternative to the "high-performance hardware" of Nvidia's sought-after A100 and H100 chips.

This comes at a time when major artificial intelligence developers are seeking to develop chips in-house so that they do not rely solely on Nvidia's chips.

OpenAI is reportedly seeking trillions of dollars in funding from governments and investors around the world to develop its own chips to address problems in developing its products.

