What’s the Difference Between GDDR and DDR Memory?

Terminology

The full name GDDR6 SDRAM stands for graphics double data rate type six synchronous dynamic random access memory. Synchronous dynamic random access memory (SDRAM) is the foundation of the memory used in desktops (DDR4), laptops (LPDDR4), and graphics cards (GDDR). Where GDDR mainly differs from DDR is in memory bus width. Because GPUs are designed to process large volumes of repetitive, “simple” workloads, a wider bus is required to keep the processor fed and prevent a bottleneck. GDDR also typically has lower heat and power requirements than DDR, allowing for higher performance with simpler cooling systems. This is not to say that GDDR is better than DDR; it is simply optimized for the high-bandwidth workloads GPUs encounter, as the sketch below illustrates.
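To make the bus-width point concrete, here is a minimal sketch of the peak-bandwidth arithmetic. The specific widths and rates used (a 128-bit dual-channel DDR4-3200 system versus a 256-bit GDDR5 card at 8 GT/s) are illustrative assumptions, not figures for any particular product.

```python
# Peak theoretical bandwidth: transfers per second times bytes per transfer.
# The widths and rates below are assumed, typical-looking values.

def peak_bandwidth_gb_s(rate_gt_s: float, bus_width_bits: int) -> float:
    """Return peak bandwidth in GB/s for a given transfer rate and bus width."""
    return rate_gt_s * bus_width_bits / 8  # 8 bits per byte

# Dual-channel DDR4-3200 system memory: 128-bit bus at 3.2 GT/s.
print(peak_bandwidth_gb_s(3.2, 128))   # 51.2 GB/s

# A mid-range GDDR5 graphics card: 256-bit bus at 8 GT/s.
print(peak_bandwidth_gb_s(8.0, 256))   # 256.0 GB/s
```

Even at broadly comparable per-pin rates, the much wider bus is what gives the graphics card several times the bandwidth of system memory.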

Speed

GDDR has been around for quite some time now, and the current mainstream version of the technology used by modern graphics cards is GDDR5. An iteration named GDDR5X is currently available to high-end consumers, but GDDR5 remains the more affordable and widely available option for end users. GDDR5 transfers between 5 and 8 gigatransfers per second (GT/s), which works out to approximately 64 gigabytes per second (GB/s). The upcoming GDDR6 standard promises to double the transfer rate to 16 GT/s, which equates to 128 GB/s. What this ultimately means for consumers is lower latency and an increase in overall GPU performance, coupled with an increase in power efficiency, as seen in the chart below.
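The GT/s-to-GB/s conversion behind those figures is easy to check. The 64-bit width in this sketch is an assumption chosen to match the numbers above; real cards gang many 32-bit GDDR chips together into 256-bit or wider buses.

```python
# Converting transfer rate (GT/s) to bandwidth (GB/s) for a 64-bit slice
# of the memory bus. The 64-bit width is assumed to match the figures above.

def bandwidth_gb_s(rate_gt_s: float, width_bits: int = 64) -> float:
    return rate_gt_s * width_bits / 8  # bits per transfer -> bytes

print(bandwidth_gb_s(8.0))    # GDDR5 at 8 GT/s  -> 64.0 GB/s
print(bandwidth_gb_s(16.0))   # GDDR6 at 16 GT/s -> 128.0 GB/s
```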

Release Time-frame

Some speculation places finalization of the GDDR6 specification in Q4 of 2017, with graphics card adoption following in Q1/Q2 2018. More conservative estimates place both the specification and the first graphics card releases in Q2/Q3 of 2018. In either case, GDDR6 is arriving shortly, and it will bring with it substantial increases in performance worth upgrading for.