
What Runs ChatGPT? Inside Microsoft’s AI Supercomputer | Featuring Mark Russinovich

Get an inside look at the AI supercomputer infrastructure built to run ChatGPT and other large language models, and see how to leverage it for your workloads in Azure, at any scale.

Azure offers a suite of AI capabilities.

Take advantage of the decades of breakthrough research, responsible AI practices, and flexibility that Azure AI offers to build and deploy your own AI solutions.

Access high-quality vision, speech, language, and decision-making AI models through simple API calls, and create your own machine learning models using an AI supercomputing infrastructure, familiar tools like Jupyter Notebooks and Visual Studio Code, and open-source frameworks like TensorFlow and PyTorch.

Mark Russinovich, Azure CTO, joins Jeremy Chapman to break it down.

  • How Microsoft collaborated with NVIDIA to deliver purpose-built AI infrastructure with NVIDIA GPUs.
  • How Project Forge checkpointing works to restore job states if a long training job fails or needs to be migrated (a generic checkpoint/restore sketch follows this list).
  • How they used LoRA fine-tuning to update only a fraction of the base model for higher training throughput and smaller checkpoints (see the LoRA sketch after this list).
  • How UK-based company Wayve is using Azure’s AI supercomputer infrastructure for self-driving cars.
  • And how Confidential Computing works with Azure AI to combine datasets without sharing personally identifiable information for secure multiparty collaborations.
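To make the checkpointing bullet concrete, here is a minimal sketch in plain PyTorch of periodically saving and restoring training state so a long job can resume after a failure or migration. This is only an illustration of the general idea; Project Forge checkpoints jobs transparently at the platform level, and the path, model, and loop below are hypothetical.

```python
# Illustrative checkpoint/restore loop (assumes PyTorch); not Project Forge's actual API.
import os
import torch
import torch.nn as nn

CKPT_PATH = "job_state.pt"  # hypothetical checkpoint location

model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
start_step = 0

# Resume from the latest checkpoint if one exists
if os.path.exists(CKPT_PATH):
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1

for step in range(start_step, 1000):
    loss = model(torch.randn(32, 16)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        # Persist model, optimizer, and progress so the job can restart or migrate
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step}, CKPT_PATH)
```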
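For the LoRA fine-tuning bullet, the sketch below shows why only a fraction of the model is updated: the base weights are frozen and a small low-rank pair of matrices is trained instead, so the trainable parameter count and the resulting checkpoints are much smaller. The `LoRALinear` class, rank, and scaling here are illustrative assumptions, not an Azure or Project Forge API.

```python
# Minimal LoRA sketch (assumes PyTorch); LoRALinear is an illustrative name.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W·x + scale·B·A·x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus the small trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the A and B matrices train
```

Because only `lora_A` and `lora_B` are optimized, a checkpoint of the fine-tuned deltas is a tiny fraction of the full model's size.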

► AGENDA

00:00 – Introduction
01:15 – AI innovation building specialized hardware and software
04:22 – Optimizing hardware
05:40 – Improved throughput
06:17 – Project Forge
08:01 – Project Forge checkpointing demo
10:02 – LoRA fine-tuning
11:29 – Use AI supercomputer infrastructure for your workloads
12:34 – How Wayve is leveraging AI supercomputer infrastructure
13:47 – How Confidential Computing works with Azure AI
15:21 – Wrap up
