☕ Good morning! I was at a little party following a wedding yesterday. A wedding on a Tuesday. What will they think of next?
Nvidia has CPUs now too
Tristan Rayner / Android Authority
It’s not just Apple that can build a better CPU on the Armv9 architecture: Nvidia announced a massive Arm-based SoC as it works to take on Intel and AMD in data centers and AI research.
- It’s huge, both physically (like the Apple M1 Ultra) and in terms of performance and what it means.
- It was announced at Nvidia’s annual GTC conference for AI developers, so the target applications are data centers, not consumer-level stuff.
- Nvidia also announced a dedicated chip for training AI models for developers called the H100 GPU. TechCrunch wrote: “The H100 GPU will feature 80 billion transistors and will be built using TSMC’s 4nm process. It promises speed-ups between 1.5 and 6 times compared to the Ampere A100 data center GPU that launched in 2020 and used TSMC’s 7nm process.”
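The quoted H100 figures imply a sizable transistor jump over the A100 as well. A quick back-of-envelope check (note: the A100’s 54.2 billion transistor count is Nvidia’s published spec from its 2020 launch, not a figure from this article):

```python
# Rough transistor comparison between the H100 and the A100.
# The A100's 54.2 billion figure is Nvidia's published 2020 spec,
# not something quoted in the article above.
h100_transistors = 80e9    # quoted by TechCrunch
a100_transistors = 54.2e9  # Ampere A100, TSMC 7nm

increase = h100_transistors / a100_transistors
print(f"~{increase:.2f}x more transistors")  # prints ~1.48x
```

So the 1.5x-to-6x speed-up claim covers a lot more than the raw transistor increase, which is where the process shrink and architectural changes come in.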
But the focus is on the Grace CPU Superchip (named after Grace Hopper), with Nvidia joining the CPU game as part of the bigger story of how computing is evolving, at least in the eyes of Nvidia.
- The Grace CPU is interesting in many ways: 144 cores! 1 terabyte per second of memory bandwidth! A 5nm TSMC design!
- The CPU itself will take on high-end CPUs from AMD and Intel, again chiefly in data centers, not home computing (yet?). Nvidia claimed a “1.5x faster” benchmark than the latest 64-core AMD EPYC processors, which would be something, but offered no comparison to Intel.
- The SoC aspect is that the CPU Superchip is two Grace CPUs connected over Nvidia’s NVLink chip-to-chip interconnect, which is all about fast data transfer. It will also support the new UCIe specification for linking chiplets, and Nvidia is now allowing other vendors to use the design for their own chiplets.
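For a sense of scale on those headline numbers, here’s a naive split of the claimed bandwidth across the cores (illustration only: real memory traffic is never divided evenly, and the 72-cores-per-chip breakdown is Nvidia’s published figure rather than something stated in this article):

```python
# Naive per-core share of the Grace CPU Superchip's claimed
# 1 TB/s aggregate memory bandwidth. Back-of-envelope only:
# real workloads never split bandwidth evenly across cores.
total_cores = 144            # two 72-core Grace CPUs
bandwidth_gb_per_s = 1000.0  # 1 terabyte per second, as claimed

per_core = bandwidth_gb_per_s / total_cores
print(f"~{per_core:.1f} GB/s per core")  # prints ~6.9 GB/s
```

Even split that naively, every core gets more memory bandwidth than many whole laptops, which is the point of the design.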
What it means:
- At least one take on the whole thing is that Nvidia is basically saying “we can do CPUs too, and in part, the CPU is our tool to move data towards the GPUs as fast as possible for the actual number crunching.” And, it’s not based on x86 anymore, but Arm.
- Another perspective: for the longest time, the CPU has been the focus of performance and the central part of any computer. Nvidia is sort of saying “let us build the fastest possible way to pipe data to the GPU” with its CPU+NVLink+CPU approach in the CPU Superchip.
- By the way, there’s also a separate Grace Hopper Superchip, which is CPU+NVLink+GPU in a single SoC, launching next year.
- (Also, if you’re not really familiar with Nvidia’s chips and naming strategies, good luck. Nvidia did announce the Grace Hopper Superchip and Grace CPU last year, but at the time called the Superchip simply the Nvidia Grace. So if you thought you knew what everything was, it’s changed, and if someone says “Superchip” alone, it could mean any combination of CPU and GPU.)
- Regardless, the take-home point: it’s fast, much like the Apple M1 Ultra, and built to Nvidia’s view of the world. And it’s possibly good news for making data centers more efficient.
📺 YouTube is taking on over-the-air TV with nearly 100 TV shows, ad-supported and US-only for now (Android Authority).
Yay physics toys! Like the age-old classic Newton’s cradle, these are the sort of trick toys you can’t quite believe, and when they’re good, they’re amazing, and get into the realm of magic, illusion, and trickery — and science!
- This video does a neat job of running through a bunch of clever little physics toys.
- I love the marble ramp in the gif above that just keeps throwing the marble back up to the collector, and the very first illusion toy is a weird mind-bender as well, oh, and the Moon Ramp thing? Wow.
- And I haven’t seen an oil drop timer in years!
Tristan Rayner, Senior Editor