In what could be a major step in the development of AI, tech hardware company AMD and AI development firm Hugging Face have entered a partnership to speed up the creation of new AI models.
Announced on June 13, the collaboration was outlined by both companies as an effort to accelerate the development of modern AI systems. Through targeted development of CPUs and GPUs that optimize performance for Hugging Face's library, end users will be able to build and fine-tune their AI models faster than before.
The AMD Hugging Face Partnership Opens Doors to Faster AI Development
With AI development gaining significant momentum as of late, it has become important for developers to fast-track their results. The AMD Hugging Face partnership aims to help with this goal through targeted hardware development that supports faster AI model training.
The partnership begins with AMD joining Hugging Face's Hardware Partner Program. Under the program, AMD will work with Hugging Face to deliver enhanced CPU and GPU performance for developing and training AI models.
Since demand for AI is surging while few CPUs and GPUs are dedicated to boosting transformer development and training performance, AMD's deal with Hugging Face aims to fill this gap and bring accessible solutions to enterprises and individual developers alike.
What Type of Hardware Will Benefit From This Partnership?
During the initial phase of the AMD Hugging Face partnership, the companies will focus on the Instinct MI2xx and MI3xx accelerators. These components already deliver strong performance for AI model development, and the collaboration aims to take their results one step further.
Afterwards, the partners will move on to the Radeon Navi3x. Unlike the MI2xx and MI3xx accelerators, which are meant for enterprise use, the Radeon Navi3x is aimed at individual users, making faster AI development accessible to a larger group of people.
How Does AMD Hardware Differ From Its Competitors?
AMD’s CPUs and GPUs already provide a significant performance boost for transformer models. For instance, the MI250 has recorded 1.2x faster training for BERT-Large and 1.4x faster training for GPT2-Large compared with competing hardware.
With the AMD Hugging Face partnership, the hardware manufacturer and the AI development firm both aim to take these benefits to the next level.
For CPUs, the companies will initially focus on the consumer Ryzen and server-grade EPYC lines, which can further enhance those machines’ overall performance for AI development. For AI accelerators, the firms will also look into enhancing the Alveo V70, with the benefit of improved results at lower power consumption.
Which Transformers Will Be Supported?
Hugging Face’s library supports a range of model architectures for technologies such as natural language processing (NLP). Combined with AMD hardware, it can provide support for transformers such as BERT and CLIP, while also extending its benefits to generative AI models including, but not limited to, GPT-2, LLaMA, and OPT.
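As an illustration (not part of the announcement), here is a minimal sketch of how a developer might run a model from Hugging Face's transformers library on AMD hardware. PyTorch's ROCm build exposes AMD GPUs under the same "cuda" device name used on NVIDIA cards, so existing code carries over unchanged. The deliberately tiny, randomly initialized GPT-2 configuration below is an assumption chosen so the sketch runs without downloading pretrained weights.

```python
# Illustrative sketch, not from the article: a small GPT-2-style model
# built with Hugging Face's transformers library, run on whatever
# accelerator is available.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# On an AMD Instinct or Radeon card with ROCm, torch.cuda.is_available()
# returns True just as it does with CUDA on NVIDIA hardware.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny configuration (assumption for the sketch) so no pretrained
# weights need to be fetched from the Hub.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64)
model = GPT2LMHeadModel(config).to(device)

# One forward pass over a dummy batch of token IDs.
input_ids = torch.randint(0, config.vocab_size, (1, 8)).to(device)
logits = model(input_ids).logits
print(logits.shape)  # (batch, sequence length, vocabulary size)
```

In practice a developer would load pretrained weights with `GPT2LMHeadModel.from_pretrained(...)` instead of a random configuration; the point of the sketch is that the device-selection line is the only hardware-aware code needed.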
The results of the collaboration will be announced by both AMD and Hugging Face in the near future.