Nvidia, Rubin and AI chips
Interesting Engineering on MSN
NVIDIA debuts Rubin platform at CES 2026, delivering 50 petaflops, faster AI
NVIDIA used the CES 2026 stage today to formally launch its new Rubin computing architecture, positioning it as the company’s most advanced AI hardware platform to date. CEO Jensen Huang said Rubin has already entered full production and will scale further in the second half of the year, signaling NVIDIA’s confidence in demand.
The Register on MSN
Every conference is an AI conference as Nvidia unpacks its Vera Rubin CPUs and GPUs at CES
Teasing the next generation earlier than usual: CES used to be all about consumer electronics, TVs, smartphones, tablets, PCs, and, over the last few years, automobiles. Now, it's just another opportunity for Nvidia to peddle its AI hardware and software, in particular its next-gen Vera Rubin architecture.
The GPU made its debut at CES alongside five other data center chips. Customers can deploy them together in a rack called the Vera Rubin NVL72 that Nvidia says ships with 220 trillion transistors, more bandwidth than the entire internet and real-time component health checks.
The companion chips include the ConnectX-9 NIC (1.6 Tb/s of bandwidth), the BlueField-4 DPU (which offloads storage and security work), the NVLink 6 switch (which lets 72 GPUs scale as one), and the Spectrum-X Ethernet Photonics switch (512 lanes of 200 Gbit optics for AI factories). Nvidia says the platform represents 15,000 engineer-years of work.
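For a rough sense of the scale those per-device figures imply, the sketch below simply multiplies them out. The one-NIC-per-GPU mapping is an assumption made for back-of-envelope purposes, not something stated in the coverage above.

```python
# Back-of-envelope aggregate bandwidth for a 72-GPU Vera Rubin NVL72 rack,
# using only the per-device figures quoted above.
# Assumption (not from the source): one ConnectX-9 NIC per GPU.

GPUS_PER_RACK = 72
CONNECTX9_TBPS = 1.6           # per-NIC bandwidth, terabits per second
PHOTONICS_LANES = 512          # Spectrum-X Ethernet Photonics lanes
PHOTONICS_LANE_GBPS = 200      # per-lane optical rate, gigabits per second

nic_aggregate_tbps = GPUS_PER_RACK * CONNECTX9_TBPS
photonics_switch_tbps = PHOTONICS_LANES * PHOTONICS_LANE_GBPS / 1000

print(f"Aggregate NIC bandwidth per rack:   {nic_aggregate_tbps:.1f} Tb/s")
print(f"Spectrum-X Photonics switch optics: {photonics_switch_tbps:.1f} Tb/s")
```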
At CES, Nvidia CEO Jensen Huang previewed the upcoming DGX Vera Rubin AI server, which combines in-house Arm processor cores with the new GPU architecture.
IBM Corp. subsidiary Red Hat is moving more aggressively than usual to ensure its software stacks are ready the moment new generations of Nvidia Corp.’s artificial intelligence hardware reach the market, a strategy executives said is driven by surging demand for larger and more capable AI model architectures.
Those who anticipated that NVIDIA CEO Jensen Huang would delay an update on the company's next big AI chip, the Vera Rubin processor first discussed last ... Read more from Inside HPC & AI News.
TL;DR: NVIDIA's next-generation Rubin AI GPUs, featuring a chiplet design and advanced CoWoS-L packaging on TSMC's N3P node, will enter trial production in September and mass production in early 2026. These GPUs will use cutting-edge 12-layer HBM4 memory ...
Nvidia’s $20 billion strategic licensing deal with Groq represents one of the first clear moves in a four-front fight over the future AI stack. 2026 is when that fight becomes obvious to enterprise builders.