AI Benchmarking: Nvidia and Intel Battle for Top Spot
MLCommons, a leading artificial intelligence benchmarking group, recently released results of tests assessing the speed of premier hardware in executing AI models.
Nvidia Corp's chip emerged as the top performer when tested on a large language model, with Intel Corp's semiconductor a close second. This MLPerf benchmark measured how quickly systems could run a 6-billion-parameter language model that summarizes CNN news articles, focusing on AI "inference", the phase that powers generative AI applications.
Nvidia's submission, built around eight of its flagship H100 chips, reflects the company's dominance in the market for training AI models, though it has yet to capture the inference segment to the same degree. Nvidia's Dave Salvator highlighted the company's consistent top-tier performance across all workloads.
Intel's submission was powered by its Gaudi2 chips, developed by the Habana unit it acquired in 2019. Its system was roughly 10% slower than Nvidia's. Eitan Medina, Habana's COO, expressed pride in Gaudi2's price-performance ratio, and Intel suggested its system is more cost-effective than Nvidia's latest, though it declined to disclose specific prices.
Nvidia also chose not to reveal the price of its chip. However, it announced a forthcoming software update expected to double the performance shown in the MLPerf test.
Additionally, Google provided a sneak peek into the potential of its newest custom chip, introduced during their August cloud computing summit.