Confidential AI with Ubuntu on Azure: a deep dive | Data & AI Masters | Canonical and Microsoft


When running machine learning workloads in the cloud, enterprises are understandably concerned that the privacy of their sensitive data, or the intellectual property in their models, could be compromised. Stringent industry regulations also often prohibit sharing such data. This makes it difficult, or outright impossible, to utilise large amounts of valuable private data, limiting the true potential of AI in crucial domains.

Confidential AI tackles this problem head-on, providing a hardware-rooted execution environment that spans both the CPU and GPU. This environment enhances the protection of AI data and code at runtime by helping to safeguard them against privileged system software (such as the hypervisor or host OS) and privileged operators in the cloud.

On Microsoft Azure, confidential AI is made possible thanks to a solution built on Ubuntu 22.04 confidential VMs (CVMs), using AMD 4th Gen EPYC processors with SEV-SNP, alongside NVIDIA H100 GPUs.
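To give a feel for what this stack looks like from inside the guest, here is a minimal sketch, assuming an Ubuntu 22.04 CVM with the sev-guest kernel driver loaded and an NVIDIA driver recent enough to expose the `nvidia-smi conf-compute` subcommand. It checks for the /dev/sev-guest device, looks for SEV-SNP in the kernel log, and queries the GPU's confidential-compute status; the exact nvidia-smi flag spelling varies by driver release, so treat the invocation as illustrative rather than definitive.

```python
#!/usr/bin/env python3
"""Guest-side sanity checks for a confidential GPU VM (sketch)."""
import os
import subprocess


def sev_snp_device_present() -> bool:
    # The sev-guest driver exposes /dev/sev-guest inside SEV-SNP guests;
    # it is the interface used to request hardware attestation reports.
    return os.path.exists("/dev/sev-guest")


def kernel_reports_snp() -> bool:
    # Recent kernels log the active memory-encryption features at boot,
    # e.g. "Memory Encryption Features active: ... SEV-SNP".
    # Reading the kernel log may require root on Ubuntu.
    out = subprocess.run(["dmesg"], capture_output=True, text=True)
    return "SEV-SNP" in out.stdout


def gpu_cc_status() -> str:
    # Ask the NVIDIA driver for the GPU's confidential-compute state.
    # The exact flag is an assumption; check `nvidia-smi conf-compute -h`.
    try:
        out = subprocess.run(["nvidia-smi", "conf-compute", "-f"],
                             capture_output=True, text=True)
    except FileNotFoundError:
        return "nvidia-smi not found"
    return (out.stdout or out.stderr).strip()


if __name__ == "__main__":
    print("SEV-SNP guest device :", sev_snp_device_present())
    print("Kernel SEV-SNP flag  :", kernel_reports_snp())
    print("GPU CC status        :", gpu_cc_status())
```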

Join Canonical’s Ijlal Loutfi and Antoine Delignat-Lavaud (Microsoft Research Cambridge) for a technical deep dive into the core components that make confidential AI possible on Azure. We will explore memory confidentiality and integrity across both the CPU and GPU, as well as the crucial role of remote attestation in securing your AI workloads. Learn more at https://ubuntu.com/confidential-computing
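To make the attestation step concrete, the sketch below shows the relying-party side: it decodes a signed attestation token (for example one issued by the Microsoft Azure Attestation service) and checks that it claims an SEV-SNP confidential VM before any secrets are released. The claim names and the ATTESTATION_TOKEN variable are assumptions used for illustration, and signature and freshness verification are deliberately left out; verify the real claim set against your attestation provider's documentation.

```python
#!/usr/bin/env python3
"""Relying-party side of remote attestation: a minimal sketch."""
import base64
import json
import os
import sys


def decode_claims(jwt_token: str) -> dict:
    # A JWT is header.payload.signature; the payload is base64url JSON.
    payload = jwt_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))


def runs_in_sev_snp_cvm(claims: dict) -> bool:
    # Illustrative policy: only release secrets (keys, training data)
    # if the token attests to an SEV-SNP confidential VM. The claim
    # names here are assumptions based on published examples.
    tee = claims.get("x-ms-isolation-tee", {})
    return tee.get("x-ms-attestation-type") == "sevsnpvm"


if __name__ == "__main__":
    # ATTESTATION_TOKEN is a hypothetical environment variable used
    # only for this example.
    token = os.environ.get("ATTESTATION_TOKEN", "")
    if not token:
        sys.exit("Set ATTESTATION_TOKEN to a token obtained inside the CVM")
    print("SEV-SNP CVM attested:", runs_in_sev_snp_cvm(decode_claims(token)))
```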

