AMD has confirmed its next-generation RDNA 4 ‘Radeon RX 8000’ Gaming GPUs & CDNA 3 ‘Instinct MI300’ Data Center APUs in its latest roadmap.
AMD 2022-2024 GPU Roadmap Confirms RDNA 4 For Gaming ‘Radeon RX 8000’ and CDNA 3 For ‘Instinct MI300’ Chips
While AMD has revealed the first details of its next-generation RDNA 3 GPUs, it also revealed the generation of gaming GPUs that comes after that and, surprise surprise, it’s called RDNA 4.
AMD RDNA 4 Gaming GPUs Arrive By 2024
The new Navi 4X lineup is expected to launch in 2024 and will be based on an advanced process node. For its CPU lineup, AMD has announced both 4nm and 3nm nodes, so RDNA 4 could land on either one, but I would place my bets on 4nm due to its maturity and the fact that it matches the naming scheme, which would make for some great marketing. AMD didn’t share any figures, but we at least know that RDNA 4 is a real thing and comes after RDNA 3.
According to the rumored information, the RDNA 4 MCM GPUs will, like RDNA 3, comprise two different process nodes. The RDNA 4 architecture will power the Navi 4X GPU line, and its flagship chip, the Navi 41, leaked out a year ago. The reports state that the RDNA 4 lineup powering the Radeon RX 8000 series graphics cards will share a common architecture rather than being split across two different architectures, as is expected with RDNA 3.
There are reports that AMD’s RDNA 3 graphics architecture will only be featured on the Navi 31, Navi 32 & possibly the Navi 33 GPUs, while the rest of the lineup will be based on an RDNA 2 refresh. The top Navi 31 and Navi 32 RDNA 3 GPUs will feature an MCM architecture, utilizing a 5nm node for the GCD (Graphics Compute Die) and a 6nm node for the MCD (Memory Cache Die). The RDNA 2 refresh GPUs will be based on a 6nm process node and will be featured as a refresh within the new graphics family.
For RDNA 4, it looks like AMD is planning to use the same graphics architecture across its entire Navi 4X lineup, with no refreshes of older GPU architectures such as RDNA 3 or RDNA 2. AMD will utilize two different nodes for its MCM chips such as the Navi 41, namely 3nm and 5nm: the GCD will be based on the 3nm process node while the MCD will be based on the 5nm node. Since RDNA 3 is planned for a late-2022 launch, the RDNA 4 lineup will launch sometime in 2024 and will compete against the successor to NVIDIA’s Ada Lovelace and Intel’s Arc Battlemage or Celestial.
AMD RDNA Generational GPU Lineup
|Radeon Lineup|Radeon RX 5000|Radeon RX 6000|Radeon RX 7000|Radeon RX 8000|
|---|---|---|---|---|
|GPU Architecture|RDNA 1|RDNA 2|RDNA 3 / RDNA 2|RDNA 4|
|GPU Family|Navi 1X|Navi 2X|Navi 3X|Navi 4X|
|Flagship GPU|N/A|Navi 21 (5120 SPs)|Navi 31 (15360 SPs)|Navi 41|
|High-end GPU|Navi 10 (2560 SPs)|Navi 22 (2560 SPs)|Navi 32 (10240 SPs)|Navi 42|
|Mid-Tier GPU|Navi 12 (2560 SPs)|Navi 23 (2048 SPs)|Navi 33 (5120 SPs)|Navi 43|
|Entry-Tier GPU|Navi 14 (1536 SPs)|Navi 24 (1024 SPs)|Navi 34 (2560 SPs)|Navi 44|
AMD’s Next-Gen CDNA 3 ‘Instinct MI300’ APU By 2023
AMD’s David Wang also announced the compute GPU roadmap, which includes Instinct-class chips for the AI and data center segment. It can now be confirmed that AMD is indeed working on a multi-chip, multi-IP Instinct accelerator which not only features next-generation CDNA 3 cores but is also equipped with next-generation Zen 4 CPU cores. The Instinct MI300 GPU (technically an APU) is scheduled to launch by 2023.
Coming to the details, AMD will be utilizing the 5nm process node for its Instinct MI300 ‘CDNA 3’ GPUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture which enables CXL 3.0 ecosystem support.
The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a 5x performance-per-watt uplift over CDNA 2, which is massive. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerators.
The CDNA 3 GPU’s unified memory APU architecture (UMAA) will connect the CPU and GPU to a unified HBM memory package, eliminating redundant memory copies while delivering a lower TCO.