#ai-inference

Artificial intelligence
from Telecompetitor
1 day ago

123NET Expands Southfield Data Center for AI and High-Density Deployments

123NET expanded its Southfield DC1 facility with 4 MW of high-density GPU colocation capacity, liquid and air cooling, and free on-site DET-iX peering for low-latency AI workloads.
Artificial intelligence
from Fortune
3 days ago

Jensen Huang doesn't care about Sam Altman's AI hype fears: he thinks OpenAI will be the first "multi-trillion dollar hyperscale company"

Relentless inference demand from accelerated AI computing will drive a generational shift away from general-purpose computing, positioning OpenAI to become a multitrillion-dollar hyperscale company.
from Silicon Valley Journals
3 weeks ago

Baseten raises $150 million to power the future of AI inference

Baseten just pulled in a $150 million Series D, vaulting the AI infrastructure startup to a $2.15 billion valuation and cementing its place as one of the most important players in the race to scale inference - the behind-the-scenes compute that makes AI apps actually run. If the last generation of great tech companies was built on the cloud, the next wave is being built on inference. Every time you ask a chatbot a question, generate an image, or tap into an AI-powered workflow, inference is happening under the hood.
Venture
Artificial intelligence
from Fortune
3 weeks ago

Exclusive: Baseten, AI inference unicorn, raises $150 million at $2.15 billion valuation

Baseten provides inference infrastructure that enables companies to deploy, manage, and scale AI models while rapidly increasing revenue and valuation.
Artificial intelligence
from InfoWorld
1 month ago

Evolving Kubernetes for generative AI inference

Kubernetes now ships native AI inference capabilities: vLLM support, inference benchmarking, LLM-aware routing, inference gateway extensions, and accelerator scheduling.
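As a sketch of what "LLM-aware routing" looks like in practice: the Gateway API Inference Extension project adds InferencePool and InferenceModel resources so a gateway can route requests by model name and criticality rather than plain round-robin. The resource names, fields, and values below follow that project's alpha API and are illustrative assumptions, not details from the article:

```yaml
# Illustrative sketch (assumption): Gateway API Inference Extension
# resources for LLM-aware routing. Field names follow the project's
# alpha API and may differ in current releases.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool
spec:
  targetPortNumber: 8000          # port the vLLM serving pods listen on
  selector:
    app: vllm-llama               # pods backing this pool
  extensionRef:
    name: llama-endpoint-picker   # endpoint picker that makes routing decisions
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: llama-chat
spec:
  modelName: llama-3-8b-chat      # model name clients request
  criticality: Critical           # prioritized over lower-criticality traffic
  poolRef:
    name: llama-pool
```

The design point is that the gateway, not the client, decides which replica serves a request, using serving-side signals (queue depth, loaded adapters) that ordinary L7 load balancers cannot see.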
Artificial intelligence
from InfoQ
4 months ago

Google Enhances LiteRT for Faster On-Device Inference

LiteRT simplifies on-device ML inference with enhanced GPU and NPU support for faster performance and lower power consumption.
from Techzine Global
4 months ago

Red Hat lays foundation for AI inferencing: Server and llm-d project

AI inferencing is crucial for unlocking the full potential of artificial intelligence, as it enables models to apply learned knowledge to real-world situations.
Artificial intelligence
from IT Pro
5 months ago

'TPUs just work': Why Google Cloud is betting big on its custom chips

Google's seventh-generation TPU, 'Ironwood', aims to lead in AI workload efficiency and cost-effectiveness.
TPUs were co-designed across hardware and software, which enhances their utility for AI applications.