
Groq presents specialized language processing unit significantly faster than Nvidia's AI accelerators

Started by Redaktion, February 28, 2024, 17:38:46

Redaktion

The LPU Inference Engine from Groq is designed to be considerably faster than GPGPUs when processing LLM data. To achieve this, the LPU makes better use of sequential processing and is paired with SRAM instead of DRAM or HBM.

https://www.notebookcheck.net/Groq-presents-specialized-language-processing-unit-significantly-faster-than-Nvidia-s-AI-accelerators.808177.0.html
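The SRAM-vs-HBM point can be sanity-checked with a back-of-envelope estimate: autoregressive decoding is typically memory-bandwidth bound, so single-stream tokens/sec is roughly bandwidth divided by the bytes of weights read per token. A minimal sketch, using rough public figures (Groq's stated ~80 TB/s on-die SRAM bandwidth, H100 SXM's ~3.35 TB/s HBM3, and Mixtral 8x7B's ~13B active parameters per token) that are assumptions here, not numbers from the article:

```python
# Back-of-envelope decode-speed estimate for a memory-bandwidth-bound LLM.
# All hardware/model figures are rough public numbers, not from the article.

def tokens_per_sec(bandwidth_gb_s: float, active_params_b: float,
                   bytes_per_param: float) -> float:
    """Upper-bound single-stream decode speed: bandwidth / weight bytes per token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Mixtral 8x7B activates ~13B parameters per token (2 of 8 experts).
ACTIVE_PARAMS_B = 13.0

h100_hbm3 = tokens_per_sec(3350, ACTIVE_PARAMS_B, 2)    # H100 SXM, fp16 weights
groq_sram = tokens_per_sec(80000, ACTIVE_PARAMS_B, 2)   # GroqChip on-die SRAM

print(f"H100 (HBM3): ~{h100_hbm3:.0f} tok/s upper bound")
print(f"Groq (SRAM): ~{groq_sram:.0f} tok/s upper bound")
```

This ignores that a single GroqChip has only ~230 MB of SRAM, so in practice the model is sharded across many chips in a pipeline; the sketch only illustrates why on-chip SRAM bandwidth can translate into the claimed speedup for inference.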

lmao

they claim running something like Mixtral 8x7B on their ASIC is about 5-10 times faster than on an H100, not sure if the model is quantized in their tests or not, they don't give many details really

card price is $20K
