NotebookCHECK - Notebook Forum

English => News => Topic started by: Redaktion on February 28, 2024, 17:38:46

Title: Groq presents specialized language processing unit significantly faster than Nvidia's AI accelerators
Post by: Redaktion on February 28, 2024, 17:38:46
The LPU Inference Engine from Groq is designed to be considerably faster than GPGPUs when processing LLM data. To achieve this, the LPU makes better use of sequential processing and is paired with SRAM instead of DRAM or HBM.

https://www.notebookcheck.net/Groq-presents-specialized-language-processing-unit-significantly-faster-than-Nvidia-s-AI-accelerators.808177.0.html
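The SRAM-versus-HBM point can be made concrete with some back-of-envelope arithmetic: LLM token generation is usually memory-bandwidth-bound, so the throughput ceiling scales with how fast weights can be streamed. A minimal sketch, with illustrative bandwidth and model-size assumptions (not benchmark results from the article):

```python
# Why SRAM vs DRAM/HBM matters for LLM inference: decoding each token
# requires streaming (roughly) every active weight once, so tokens/s is
# capped by memory bandwidth. All figures below are assumptions.

def max_tokens_per_sec(bandwidth_bytes_s, model_bytes):
    """Memory-bound upper limit on decode throughput."""
    return bandwidth_bytes_s / model_bytes

model_bytes = 7e9 * 2  # hypothetical 7B-parameter model in fp16

hbm  = max_tokens_per_sec(3.35e12, model_bytes)  # HBM3-class GPU, ~3.35 TB/s
sram = max_tokens_per_sec(80e12, model_bytes)    # on-chip SRAM class, ~80 TB/s

print(f"HBM ceiling:  {hbm:.0f} tok/s")
print(f"SRAM ceiling: {sram:.0f} tok/s ({sram / hbm:.0f}x)")
```

This is only a ceiling, not a benchmark: in practice each LPU chip reportedly holds only a couple hundred megabytes of SRAM, so a full model has to be sharded across many chips, and real-world throughput also depends on compute, interconnect, and batching.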
Title: Re: Groq presents specialized language processing unit significantly faster than Nvidia's AI accelerators
Post by: lmao on February 28, 2024, 18:29:27
They claim running something like Mixtral 8x7B on their ASIC is about 5-10 times faster than on an H100. Not sure if it's quantized in their tests or not; they don't give many details really.

The card price is $20K.
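The quantization question matters for the 5-10x claim because, in the memory-bound regime, cutting bytes per weight raises the throughput ceiling by the same factor. A rough sketch using Mixtral 8x7B's roughly 13B active parameters per token (two of eight experts) and an H100-class bandwidth figure; all numbers are illustrative assumptions, not measured results:

```python
# How much of a speedup quantization alone could explain, assuming
# decode throughput is memory-bandwidth-bound. Illustrative figures only.

ACTIVE_PARAMS = 13e9   # Mixtral 8x7B activates ~13B params per token
BANDWIDTH = 3.35e12    # H100-class HBM3 bandwidth, bytes/s (assumption)

def ceiling_tokens_per_sec(bytes_per_param):
    """Upper bound when every active weight is read once per token."""
    return BANDWIDTH / (ACTIVE_PARAMS * bytes_per_param)

fp16 = ceiling_tokens_per_sec(2.0)  # unquantized half precision
int4 = ceiling_tokens_per_sec(0.5)  # 4-bit quantized: 4x the ceiling

print(f"fp16 ceiling: {fp16:.0f} tok/s, int4 ceiling: {int4:.0f} tok/s")
```

So comparing a quantized run on one chip against an unquantized run on another could account for a large share of a 5-10x gap on its own, which is why the lack of test details is a fair complaint.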