
AMD unveils MI300X AI chip as 'generative AI accelerator'

AMD's Instinct MI300X GPU features multiple GPU "chiplets" plus 192 gigabytes of HBM3 DRAM memory and 5.2 terabytes per second of memory bandwidth. The company said it is the only chip that can handle large language models of up to 80 billion parameters in memory. (Image: AMD)

Advanced Micro Devices CEO Lisa Su on Tuesday in San Francisco unveiled a chip that is a centerpiece of the company's strategy for artificial intelligence computing, touting its massive memory and data throughput for so-called generative AI tasks such as large language models.

The Instinct MI300X, as the part is known, is a follow-on to the previously announced MI300A. The chip is a combination of multiple "chiplets," individual chips that are joined together in a single package by shared memory and networking links.

Su, onstage before an invite-only audience at the Fairmont Hotel in downtown San Francisco, referred to the part as a "generative AI accelerator," and said the GPU chiplets it contains, a family known as CDNA 3, are "designed specifically for AI and HPC [high-performance computing] workloads."

The MI300X is a "GPU-only" version of the part. The MI300A is a combination of three Zen 4 CPU chiplets and multiple GPU chiplets. In the MI300X, the CPUs are swapped out for two additional CDNA 3 chiplets.

Also: Nvidia unveils new kind of Ethernet for AI, Grace Hopper 'Superchip' in full production

The MI300X increases the transistor count from 146 billion to 153 billion, and the shared DRAM memory is boosted from 128 gigabytes in the MI300A to 192 gigabytes.

The memory bandwidth is boosted from 800 gigabytes per second to 5.2 terabytes per second.

"Our use of chiplets in this product is very, very strategic," said Su, because of the ability to mix and match different kinds of compute, swapping out CPU or GPU.

Su said the MI300X will offer 2.4 times the memory density of Nvidia's H100 "Hopper" GPU, and 1.6 times the memory bandwidth.
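Those multipliers are easy to sanity-check against Nvidia's published specs for the H100 SXM part (80 GB of HBM3, roughly 3.35 TB/s of bandwidth); note those H100 figures come from Nvidia's public data sheet, not from AMD's presentation:

```python
# Back-of-the-envelope check of AMD's comparison claims.
# The H100 figures are assumptions taken from Nvidia's published SXM
# specs (80 GB HBM3, ~3.35 TB/s); they were not stated in AMD's keynote.
MI300X_MEM_GB = 192
MI300X_BW_TBS = 5.2
H100_MEM_GB = 80
H100_BW_TBS = 3.35

mem_ratio = MI300X_MEM_GB / H100_MEM_GB   # 192 / 80 = 2.4
bw_ratio = MI300X_BW_TBS / H100_BW_TBS    # 5.2 / 3.35 ≈ 1.55, rounds to 1.6

print(f"memory: {mem_ratio:.1f}x, bandwidth: {bw_ratio:.1f}x")
```

The arithmetic lands exactly on the 2.4x and (rounded) 1.6x figures Su quoted.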


"Generative AI, large language models have changed the landscape," said Su. "The need for more compute is growing exponentially, whether you're talking about training or about inference."

To demonstrate the need for powerful computing, Su showed the part working on what she said is the most popular large language model at the moment, the open-source Falcon-40B. Language models require more compute as they are built with larger and larger numbers of neural network "parameters." Falcon-40B comprises 40 billion parameters.

The MI300X, she said, is the first chip powerful enough to run a neural network of that size entirely in memory, rather than having to move data back and forth to and from external memory.

Su demonstrated the MI300X composing a poem about San Francisco using Falcon-40B.

"A single MI300X can run models up to approximately 80 billion parameters" in memory, she said.

"When you compare MI300X to the competition, MI300X offers 2.4 times more memory, and 1.6 times more memory bandwidth, and with all of that additional memory capacity, we actually have an advantage for large language models because we can run larger models directly in memory."

Being able to run the entire model in memory, said Su, means that, "for the largest models, that actually reduces the number of GPUs you need, significantly speeding up the performance, especially for inference, as well as reducing the total cost of ownership."

"I love this chip, by the way," enthused Su. "We love this chip."


"With MI300X, you can reduce the number of GPUs, and as model sizes keep growing, this will become even more important."

"With more memory, more memory bandwidth, and fewer GPUs needed, we can run more inference jobs per GPU than you could before," said Su. That can reduce the total cost of ownership for large language models, she said, making the technology more accessible.

Also: For AI's 'iPhone moment', Nvidia unveils a large language model chip

To compete with Nvidia's DGX systems, Su unveiled a family of AI computers, the "AMD Instinct Platform." The first instance of it combines eight MI300X chips with 1.5 terabytes of HBM3 memory. The server conforms to the industry-standard Open Compute Platform spec.

"For customers, they can use all of this AI compute capability in memory in an industry-standard platform that drops right into their existing infrastructure," said Su.


Unlike the MI300X, which is a GPU only, the existing MI300A goes up against Nvidia's Grace Hopper combo chip, which uses Nvidia's Grace CPU and its Hopper GPU, and which the company announced last month is in full production.

The MI300A is being built into the El Capitan supercomputer under construction at the Department of Energy's Lawrence Livermore National Laboratory, noted Su.

The MI300A is currently sampling to AMD customers, and the MI300X will begin sampling to customers in the third quarter of this year, said Su. Both will be in volume production in the fourth quarter, she said.

You can watch a replay of the presentation on the website AMD has set up for the news.


Malik Tanveer

