#ai-inference-acceleration

Tech industry
from The Register
2 days ago

A closer look at Nvidia's Groq-powered LPX rack systems

Nvidia acquired Groq for $20 billion primarily to accelerate time-to-market for SRAM-heavy inference chips rather than developing the technology independently, enabling faster token generation for AI reasoning workloads.
Artificial intelligence
from Techzine Global
4 days ago

Nvidia's Groq 3 LPU targets agentic AI inference at GTC 2026

Nvidia's acquisition of Groq technology has produced the Groq 3 LPU, a specialized inference chip delivering 40 petabytes per second of bandwidth and significantly outpacing GPU inference speeds.
Silicon Valley
from TechCrunch
1 week ago

How to watch Jensen Huang's Nvidia GTC 2026 keynote

Nvidia's GTC conference features CEO Jensen Huang's keynote on the future of AI computing, with expected announcements including the NemoClaw enterprise AI platform and a new inference acceleration chip.
Artificial intelligence
from Nature
6 months ago

Analog optical computer for AI inference and combinatorial optimization - Nature

An analog optical computer (AOC) integrates optics with analog electronics to accelerate both AI inference and combinatorial optimization, offering high efficiency and low latency.