Is The New AMD MI300X Better Than The NVIDIA H100?


AMD disclosed a few more details on the MI300 GPU, due later this year, with support for 192GB of memory on the MI300X. Here's what we know.

In today's world of ChatGPT, everyone keeps asking whether the NVIDIA A100 and H100 GPUs are the only platforms that can deliver the computational and massive memory requirements of Large Language Models (LLMs). The answer is yes, at least for now. But AMD intends to change that later this year with a new GPU, the MI300X. CEO Lisa Su was visibly excited to announce a few more details on her company's upcoming data center GPU at a data center event today.

What Is The AMD MI300X?

Dr. Su announced a version of the CPU/GPU APU MI300, teased at CES earlier this year, that replaces the three EPYC compute dies with two more GPU dies, adding more compute and HBM memory capacity. The MI300X will be the flagship AMD offering for large-scale AI, so this is a very big deal for the company and its investors.

It will be available as a single accelerator as well as on an 8-GPU OCP-compliant board, called the Instinct Platform, similar to the NVIDIA HGX. However, it will use Infinity Fabric to connect the GPUs, and it will run the ROCm AI software stack.
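For developers, the practical question around ROCm is how much code has to change to target an AMD GPU instead of CUDA. Below is a minimal sketch, assuming a ROCm-enabled PyTorch build, of how an AMD accelerator shows up through the familiar `torch.cuda` interface; the tensor sizes are illustrative only.

```python
import torch

def describe_accelerator() -> str:
    """Report which GPU backend PyTorch sees, if any."""
    if not torch.cuda.is_available():
        return "No GPU visible to PyTorch"
    # On ROCm builds, torch.version.hip is set instead of torch.version.cuda.
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    return f"{backend} device: {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(describe_accelerator())
    # CUDA-style PyTorch code typically runs unchanged on ROCm, e.g.:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # matrix multiply on whichever accelerator was found
    print(y.shape)
```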

Compared To The NVIDIA H100, The MI300X Has Some Issues.

Let's do a sanity check on AMD's ambitions. First, I have to say that the MI300 is an amazing chip, a tour de force of chiplet technology. I see why Dr. Su says she loves it! But it will face a few challenges when compared to the NVIDIA H100.

First and foremost, the NVIDIA H100 is shipping in full volume today. In fact, NVIDIA can sell everything TSMC can make and then some. And NVIDIA has, by far, the largest ecosystem of software and researchers in the AI industry.

Secondly, while the new high-density HBM chiplets offer 192GB of much-needed memory, I suspect that NVIDIA will offer the same memory capacity, probably in the same timeframe or perhaps even earlier, so that will not be an advantage. We would also point out that this new higher-density version of HBM3 will be expensive; an analysis by Dylan Patel of SemiAnalysis indicates AMD will not have a significant cost advantage versus an NVIDIA H100.

Third, and this is the real kicker, the MI300 does not have a Transformer Engine like the H100, which can triple performance for the popular LLM AI models. If it takes thousands of GPUs a year to train a new model, I doubt anyone will say it's okay to wait two to three more years to get their model to market, or to throw three times as many GPUs at the problem.
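For context on what that feature buys, the H100's Transformer Engine lets eligible layers run their matrix math in FP8 with dynamic scaling. A minimal sketch of how that is typically exercised from PyTorch, assuming NVIDIA's open-source `transformer_engine` package and an H100-class GPU, looks like this (layer sizes are illustrative only):

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# FP8 scaling recipe: E4M3 for forward tensors, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# Drop-in replacement for torch.nn.Linear whose GEMMs can run in FP8.
layer = te.Linear(4096, 4096, bias=True)
x = torch.randn(16, 4096, device="cuda")

# Inside this context, supported ops use the H100's FP8 tensor cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)

y.sum().backward()  # gradients flow through the FP8 path as well
print(y.shape)
```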

Finally, AMD has yet to disclose any benchmarks. And that's okay; the part has not yet launched! But performance when training and running LLMs depends as much on the system design as on the GPU, so we look forward to seeing some real apples-to-apples comparisons later this year.

Is the MI300X Dead-on-Arrival?

Certainly not! It will probably become the preferred second choice to NVIDIA Hopper, at least until Hopper's replacement (the 3nm Blackwell) is available. NVIDIA has implied Blackwell will launch next spring at GTC and will probably ship in late 2024 or early 2025.

But companies like OpenAI and Microsoft need an alternative to NVIDIA, and we suspect that AMD will give them an offer they can't refuse. Still, don't expect it to take a lot of share from NVIDIA.

Conclusions

AMD is justified in its excitement about having a competitive GPU for the skyrocketing AI market, and the MI300X looks to be a solid contender. But let's not get carried away!

Disclosures: This article expresses the opinions of the author(s) and is not to be taken as advice to purchase from or invest in the companies mentioned. Cambrian AI Research is fortunate to have many, if not most, semiconductor firms as our clients, including Blaize, Cadence Design, Cerebras, D-Matrix, Eliyan, Esperanto, FuriosaAI, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, Si-5, SiMa.ai, Synopsys, and Tenstorrent. We have no investment positions in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, please visit our website at https://cambrian-AI.com.

