Tech startup proposes a novel way to tackle massive LLMs using the fastest memory available to mankind

Posted by James Thompson · Tue, 28 Jan

Silicon Valley startup d-Matrix, backed by Microsoft, has developed an AI compute platform named Corsair. The PCIe card delivers 9.6 PFLOPS of FP4 compute and carries 2 GB of on-chip SRAM for its in-memory compute engine, paired with LPDDR5 capacity memory instead of costly HBM. The company claims Corsair offers 10x better performance, 3x better energy efficiency, and 3x better cost-performance than traditional GPU options such as Nvidia's H100.

d-Matrix's approach tackles the memory wall by integrating computation directly within memory, reaching on-chip memory bandwidth of up to 150 terabytes per second. Its next-generation product, Raptor, will go a step further by incorporating 3D-stacked DRAM, targeting reasoning workloads and larger memory capacities. Corsair itself is slated to enter mass production in Q2 2025, marking a significant step for AI processors.
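To see why memory bandwidth, rather than raw FLOPS, tends to cap token generation (the "memory wall" the article refers to), the rough estimate below is a minimal sketch. It assumes a hypothetical 70B-parameter model with 4-bit weights, and compares the token-rate ceiling implied by a typical HBM-class bandwidth figure against the 150 TB/s claimed for Corsair's SRAM; the specific numbers are illustrative assumptions, not d-Matrix data. It also deliberately ignores capacity (2 GB of SRAM cannot hold such a model on its own; that is what the LPDDR5 tier is for), so treat it purely as a bandwidth illustration.

```python
# Back-of-the-envelope: single-batch LLM decode is memory-bound, so the
# token rate is roughly bounded by (memory bandwidth) / (bytes read per token).
# All figures are illustrative assumptions, not vendor-verified numbers.

def decode_tokens_per_second(params_billion: float,
                             bytes_per_param: float,
                             bandwidth_tb_s: float) -> float:
    """Upper bound on tokens/s when each generated token must stream
    all model weights from memory once (memory-bound decode)."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes = bandwidth_tb_s * 1e12
    return bandwidth_bytes / bytes_per_token

# Hypothetical 70B-parameter model quantized to 4-bit weights (0.5 bytes/param).
model_b, bpp = 70, 0.5

for label, bw in [("HBM-class GPU (~3.35 TB/s, assumed)", 3.35),
                  ("Corsair on-chip SRAM (claimed 150 TB/s)", 150.0)]:
    rate = decode_tokens_per_second(model_b, bpp, bw)
    print(f"{label}: ~{rate:,.0f} tokens/s ceiling")
```

Under these assumptions the HBM-class figure tops out around a hundred tokens per second per stream, while a 150 TB/s path raises that ceiling by roughly 45x, which is the intuition behind putting compute next to SRAM.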
