New NVIDIA GPUs coming to Azure accelerate HPC and AI workloads
Machine Learning, AI, and HPC are changing the way every industry thinks, including retail, manufacturing, healthcare, oil and gas, and financial services. These large-scale computations are transforming product design and end-customer experiences, enabling predictive support, and leading to discoveries and innovations that were not previously possible. It really is an exciting time to be working in the cloud!
In the Azure compute team, we strive to make sure you have the best, the latest, and the most cost-effective infrastructure for every compute job, no matter how different those jobs may be. To this end, we offer the most comprehensive set of GPUs in the public cloud, with VM sizes already available featuring NVIDIA's K80, M60, P40, and P100 GPUs. Today, I'm happy to share two exciting new announcements to further support your GPU workloads:
- We are launching a new VM size on Azure, the NCv3, featuring the new NVIDIA Tesla V100 GPU. You can sign up for the preview today.
- Our NCv2-series, offering NVIDIA Tesla P100s, and our ND-series, offering NVIDIA Tesla P40s, are exiting preview and will be GA for your production workloads starting on December 1st.
The NCv3-series virtual machines will use NVIDIA Tesla V100 GPUs, NVIDIA's latest GPUs. As with our previous GPU sizes, Azure is the only cloud to offer dedicated InfiniBand interconnects, enabling incredibly fast multi-VM computations. Our GPU sizes also offer a PCIe configuration with direct support for Azure Premium Storage. We will open preview access to the NCv3-series in the East US region in the coming weeks.
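To give a concrete sense of the multi-VM scale-out that the InfiniBand interconnect is built for, here is a minimal mpi4py sketch of a cross-VM reduction. It assumes an MPI stack configured for the RDMA network and the mpi4py package are installed on every node, which this post does not cover, and the script and hostfile names are hypothetical.

```python
# all_reduce_check.py -- minimal cross-node MPI reduction (sketch).
# Assumes an MPI implementation configured for the InfiniBand/RDMA network
# and the mpi4py package on every node; neither ships with the VM by default.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index across all VMs
size = comm.Get_size()   # total number of MPI processes in the job

# Each rank contributes its own value; allreduce sums the values across VMs
# over the low-latency interconnect and returns the result to every rank.
total = comm.allreduce(rank, op=MPI.SUM)

if rank == 0:
    print("ranks:", size, "sum of ranks:", total)
```

Launched with something like `mpirun -np 8 -hostfile hosts python all_reduce_check.py` (the exact launcher and flags depend on your MPI distribution), each rank runs on one of the VMs and the reduction traffic travels over the InfiniBand fabric.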
Just a couple of months ago, we released the preview of the ND-series, focusing on deep learning, AI training, and inference. The ND-series is powered by up to four NVIDIA Tesla P40 GPUs and provides a large GPU memory size (24 GB per GPU), enabling customers to deploy much larger neural network models. We also recently released the NCv2-series, targeting traditional HPC workloads. With up to four NVIDIA Tesla P100 GPUs and our unique InfiniBand networking for low-latency interconnect, the NCv2 offers great HPC performance at a great price for scale-out workloads. Both of these SKUs will be GA on December 1st.
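As a quick way to see that GPU memory from inside a VM, here is a minimal sketch that reports each visible device and its framebuffer size. It assumes a CUDA-enabled PyTorch build is installed on the VM, which the post does not specify; the script name is hypothetical.

```python
# gpu_memory_check.py -- report each visible GPU and its memory (sketch).
# Assumes a CUDA-enabled PyTorch build is installed on the VM.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible to PyTorch")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # total_memory is reported in bytes; a Tesla P40 should show roughly
    # 24 GB, a Tesla P100 roughly 16 GB.
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")
```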
Of course, the hardware is only part of the story. With Azure Batch AI, you can quickly and easily run AI workloads, focusing on your jobs while letting Azure Batch handle provisioning and management. Batch AI can use every flavor of our new GPU VMs, giving you ready access to great AI hardware coupled with simple AI job execution.
Additionally, our Data Science Virtual Machine images are being updated to take advantage of the new GPUs. DSVMs are Azure Virtual Machine images, pre-configured and tested with several popular tools commonly used for data analytics, machine learning, and AI training. The Data Science Virtual Machine images are great for training and education, for short-term experimentation, or simply as a cloud desktop with the latest versions of popular data science applications pre-installed.
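For example, on a GPU-enabled DSVM you can confirm that the pre-installed frameworks see the new GPUs with a couple of lines. This sketch assumes the image includes a GPU build of TensorFlow, as the GPU DSVM images typically do; the script name is hypothetical.

```python
# dsvm_gpu_check.py -- confirm a pre-installed framework sees the GPU (sketch).
# Assumes a GPU-enabled TensorFlow build, as shipped on the GPU DSVM images.
import tensorflow as tf

# Returns a device string such as "/device:GPU:0" when a CUDA device is
# visible to TensorFlow, or an empty string otherwise.
print("GPU device:", tf.test.gpu_device_name() or "none found")
```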
I can hardly wait to see what amazing discoveries and innovative insights you uncover with these new GPU VMs. The NCv2 and ND-series will be GA and ready for production workloads on December 1st in multiple regions across the US, Europe, and Asia.
Finally, if you’re attending the Supercomputing conference in Denver this week, stop by booth 1501 and find out more from the team.
See ya around,
Corey