Microsoft and NVIDIA accelerate AI development and performance | Microsoft Azure Blog

Together, Microsoft and NVIDIA are accelerating some of the most groundbreaking innovations in AI. We are excited to share several new announcements from Microsoft and NVIDIA that further deepen our full-stack collaboration.

Together, Microsoft and NVIDIA are accelerating some of the most groundbreaking innovations in AI. Over the past few years, this long-standing collaboration has been at the core of the AI revolution, from bringing industry-leading supercomputing performance to the cloud to supporting breakthrough frontier models such as ChatGPT in Azure OpenAI Service and Microsoft Copilot.

Today, Microsoft and NVIDIA are making several new announcements that further deepen this full-stack collaboration to help shape the future of AI. These include integrating the latest NVIDIA Blackwell platform with Azure AI services infrastructure, bringing NVIDIA NIM microservices to Azure AI Foundry, and empowering developers, startups, and organizations such as the NBA, BMW, Dentsu, Harvey, and Origin to solve demanding problems across domains.

Empowering all developers and innovators with AI agents

Microsoft and NVIDIA collaborate deeply across the technology stack, and with the rise of agentic AI, several new offerings are now available in Azure AI Foundry. First, Azure AI Foundry now offers NVIDIA NIM microservices. These provide optimized containers for more than two dozen popular foundation models, allowing developers to deploy generative AI applications and agents quickly. The new integrations can accelerate inferencing workloads for models available on Azure and deliver significant performance improvements, supporting the growing use of AI agents. Key features include optimized inference on NVIDIA accelerated computing platforms, prebuilt microservices that can be deployed anywhere, and enhanced accuracy for specific use cases. In addition, we will soon integrate the NVIDIA Llama Nemotron Reason open reasoning model, a powerful AI model family designed for advanced reasoning.
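NIM containers expose an OpenAI-compatible chat-completions API, so calling a deployed microservice amounts to an HTTP POST. As a minimal sketch, and with the caveat that the model identifier and endpoint route below are illustrative assumptions rather than values from this announcement, this builds the JSON body you would send to a NIM endpoint's `/v1/chat/completions` route:

```python
import json

def chat_request_body(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload, the wire format
    NIM microservices conventionally accept at /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model name; a deployed NIM endpoint would receive this body via
# POST https://<your-endpoint>/v1/chat/completions with your API key attached.
body = chat_request_body("meta/llama-3.1-8b-instruct",
                         "Summarize NVLink in one sentence.")
print(json.dumps(body, indent=2))
```

Because the request shape follows the widely used OpenAI chat format, existing client libraries can typically point at a NIM endpoint by swapping the base URL.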

Epic, a leading electronic health record company, plans to use the latest NVIDIA integrations in Azure AI Foundry to enhance AI applications that help deliver better healthcare and patient outcomes.

The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation. In collaboration with UW Health and UC San Diego Health, we are also researching methods to evaluate clinical summaries with these advanced models. Together, we are using the latest AI technology in ways that truly improve the lives of clinicians and patients.

Drew McCombs, VP of Cloud and Analytics, Epic

Microsoft also continues to work closely with NVIDIA to optimize performance for popular open-source language models and to ensure they are available in Azure AI Foundry, so customers can get the best performance and efficiency from foundation models. The latest addition to this collaboration is performance optimization for Meta Llama models using TensorRT-LLM. Developers can now use the optimized Llama models from the Azure AI Foundry model catalog and experience throughput improvements without any additional steps.

“At Synopsys, we rely on cutting-edge AI models to drive innovation, and the optimized Meta Llama models on Azure AI Foundry have delivered exceptional performance. We have seen substantial improvements in both throughput and latency, allowing us to accelerate our workloads while optimizing costs.”

Arun Venkatachar, VP of Engineering, Synopsys Central Engineering

Microsoft is also excited to expand the model catalog in Azure AI Foundry even further with the upcoming addition of Mistral Small 3.1, an enhanced version of Mistral Small 3 that introduces multimodal capabilities and an expanded context length of up to 128K tokens.

Microsoft is also announcing the general availability of serverless graphics processing units (GPUs) in Azure Container Apps with support for NVIDIA NIM. Serverless GPUs let enterprises, startups, and software development companies run AI workloads on demand with automatic scaling, optimized cold starts, and per-second billing that scales down to zero when idle to reduce operational overhead. With NVIDIA NIM support, development teams can easily build and deploy generative AI applications alongside their existing applications within the same networking, security, and isolation boundary.
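To make the billing model concrete, here is an illustrative sketch of why scale-to-zero, per-second billing matters for bursty inference traffic. The per-second rate below is invented purely for the example; actual Azure pricing varies by SKU and region:

```python
# Hypothetical GPU rate for illustration only; real Azure Container Apps
# pricing depends on the GPU workload profile and region.
RATE_PER_SECOND = 0.002  # USD per GPU-second (assumed)

def serverless_cost(active_seconds: list) -> float:
    """Per-second billing with scale to zero: pay only while replicas run."""
    return sum(active_seconds) * RATE_PER_SECOND

def always_on_cost(window_seconds: float) -> float:
    """A dedicated GPU bills for the whole window, idle or not."""
    return window_seconds * RATE_PER_SECOND

# Ten 90-second inference bursts spread over a 24-hour window:
bursts = [90.0] * 10
print(serverless_cost(bursts))    # ~$1.80 for the active seconds
print(always_on_cost(24 * 3600))  # ~$172.80 for an always-on GPU
```

The two orders of magnitude between the figures come entirely from idle time, which is why scale to zero is the headline feature for intermittent workloads.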

Expanding Azure AI infrastructure with NVIDIA

Advancements in reasoning models and agentic AI systems are transforming the artificial intelligence landscape, and robust, purpose-built infrastructure is key to their success. Today, Microsoft is announcing the general availability of Azure ND GB200 V6 virtual machines (VMs) accelerated by NVIDIA GB200 NVL72 and NVIDIA Quantum InfiniBand networking. This addition to the Azure AI infrastructure portfolio, alongside existing VMs powered by NVIDIA H200 and NVIDIA H100 GPUs, underscores Microsoft’s commitment to optimizing infrastructure for the next wave of complex AI tasks such as planning, reasoning, and adapting in real time.

As we push the boundaries of AI, our partnership with Azure and the introduction of the NVIDIA Blackwell platform represent a significant leap forward. The NVIDIA GB200 NVL72, with unparalleled performance and connectivity, tackles the most complex AI workloads, enabling businesses to innovate faster and more securely. By integrating this technology with Azure’s secure infrastructure, we are unlocking the potential of AI.

Ian Buck, Vice President of Hyperscale and HPC, NVIDIA

Combining high-performance NVIDIA GPUs with low-latency NVIDIA InfiniBand networking and Azure’s scalable architecture is essential for handling the massive data throughput and compute-intensive processing that new AI workloads demand. In addition, seamless integration with Azure’s security, governance, and monitoring tools supports trustworthy AI applications that comply with regulatory standards.

Built with Microsoft’s own infrastructure systems and the NVIDIA Blackwell platform in Azure datacenters, each server blade contains two NVIDIA GB200 Grace™ Blackwell Superchips, linked by NVIDIA NVLink™ scale-up networking into domains of up to 72 NVIDIA Blackwell GPUs. The platform also includes the latest NVIDIA Quantum InfiniBand, enabling scale-out to tens of thousands of Blackwell GPUs and delivering twice the AI performance of the previous generation, per GEMM benchmark analysis.
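The topology described above lends itself to quick capacity arithmetic. This sketch assumes the standard GB200 configuration of two Blackwell GPUs per Grace Blackwell Superchip (a published NVIDIA spec, not a figure from this post); the cluster sizes are illustrative:

```python
import math

# Topology from the announcement: two GB200 Superchips per blade and NVLink
# scale-up domains of up to 72 GPUs; two GPUs per Superchip is the standard
# GB200 configuration (assumed here).
GPUS_PER_SUPERCHIP = 2
SUPERCHIPS_PER_BLADE = 2
GPUS_PER_NVLINK_DOMAIN = 72

def blades_per_domain() -> int:
    """Blades needed to populate one full 72-GPU NVLink domain."""
    return GPUS_PER_NVLINK_DOMAIN // (GPUS_PER_SUPERCHIP * SUPERCHIPS_PER_BLADE)

def nvlink_domains_needed(total_gpus: int) -> int:
    """NVLink domains a cluster of this size spans; Quantum InfiniBand
    provides the scale-out fabric between domains."""
    return math.ceil(total_gpus / GPUS_PER_NVLINK_DOMAIN)

print(blades_per_domain())            # 18 blades per 72-GPU domain
print(nvlink_domains_needed(10_000))  # 139 domains for a 10,000-GPU cluster
```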

As the work of Microsoft and NVIDIA continues to grow and shape the future of AI, the companies also look forward to bringing the performance of the NVIDIA Blackwell Ultra GPU and the NVIDIA RTX PRO 6000 Blackwell Server Edition to Azure. Microsoft is set to launch NVIDIA Blackwell Ultra GPU-based VMs later in 2025. These VMs promise exceptional performance and efficiency for the next wave of agentic and generative AI workloads.

Azure AI infrastructure, enhanced by NVIDIA accelerated computing, delivers high performance at AI scale, as evidenced by industry-leading benchmarks such as the TOP500 supercomputer list and MLPerf results.1,2 Recently, Azure virtual machines using NVIDIA H200 GPUs achieved exceptional performance across a variety of AI tasks in the MLPerf Training v4.1 benchmarks. Azure demonstrated leading cloud performance by scaling a cluster of 512 H200 GPUs, achieving a 28% speedup over H100 GPUs in the latest MLCommons results.3 This underscores Azure’s ability to scale large GPU clusters efficiently. Microsoft is excited for customers to put this performance to work on Azure, training advanced models and gaining efficiencies for generative inferencing.
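For intuition, a quick sketch of what the cited 28% speedup means in practice: relative throughput of 1.28×, which translates into roughly 22% less wall-clock time per run. The conversion below is simple arithmetic on the quoted figure, not additional benchmark data:

```python
def time_reduction(speedup: float) -> float:
    """Convert a throughput speedup s into the fractional reduction in
    wall-clock time: new_time = old_time / (1 + s)."""
    return speedup / (1.0 + speedup)

# The cited MLCommons result: H200 cluster 28% faster than comparable H100s.
print(time_reduction(0.28))  # ~0.219, i.e. roughly 22% less time per run
```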

Empowering businesses with Azure AI infrastructure

Meter is training a large foundation model on Azure AI infrastructure to automate networking end to end. Azure’s performance and scale will be key as Meter trains AI models with billions of parameters spanning text configurations, time-series telemetry, and structured network data. With Microsoft’s support, Meter’s models aim to improve how networks are designed, configured, and managed, addressing a significant challenge to the industry’s progress.

Black Forest Labs, a generative AI startup with a mission to develop and advance state-of-the-art deep learning models for media, has expanded its partnership with Azure. It already uses Azure AI infrastructure to deploy its flagship text-to-image models, among the most popular in the world, serving millions of high-quality images daily with unprecedented speed and creative control. Black Forest Labs is building on this foundation, adopting the new ND GB200 V6 VMs to accelerate the development and deployment of its next-generation models and push the boundaries of innovation in generative AI for media. Black Forest Labs has partnered with Microsoft since its inception, collaborating to secure the most advanced, efficient, and scalable infrastructure for training and serving its frontier models.

We are expanding our partnership with Microsoft Azure to combine BFL’s unique expertise in generative AI with Azure’s powerful infrastructure. This collaboration enables us to build and deliver the best possible image and video models faster and at greater scale, providing our customers with the latest visual AI capabilities for media production, advertising, product design, content creation, and beyond.

Robin Rombach, CEO, Black Forest Labs

Creating new opportunities for innovators across industries

Microsoft and NVIDIA have launched preconfigured NVIDIA Omniverse and NVIDIA Isaac Sim virtual desktop workstations, along with Omniverse Kit App Streaming, on Azure Marketplace. These offerings, powered by Azure virtual machines with NVIDIA GPUs, give developers what they need to begin developing and self-deploying digital twin and robotics simulation applications and services for the era of physical AI. Several Microsoft and NVIDIA ecosystem partners, including Bright Machines, Kinetic Vision, Sight Machine, and SoftServe, are adopting these capabilities to build solutions that enable the next wave of digitalization for the world’s manufacturers.

Many innovative companies are building solutions on Azure. Opaque helps customers protect their data using confidential computing; Faros AI delivers software engineering insights that enable customers to optimize resources and strengthen decision-making, including measuring the ROI of their coding assistants; Bria AI provides a visual generative AI platform that lets developers use AI image generation responsibly, with models trained on fully licensed datasets; Pangaea Data helps deliver better patient outcomes by improving screening and treatment at the point of care; and Basecamp Research is advancing biodiversity discovery with AI and massive genomic datasets.

Experience the latest innovations from Azure and NVIDIA

Today’s announcements at the NVIDIA GTC AI conference underscore Azure’s commitment to pushing the boundaries of AI innovation. Through cutting-edge products, deep collaboration, and seamless integration, we continue to deliver technology that empowers developers and customers to design, customize, and deploy their AI solutions efficiently. For more information, see this year’s event and explore the possibilities that NVIDIA and Azure hold for the future.

  • Visit us at booth 514 at NVIDIA GTC.

Resources:

1. November 2024 | TOP500

2. MLCommons Benchmarks

3. Benchmarking AI Scalability with Microsoft Azure – Signal65
