Accelerate PyTorch workloads with Cloud TPUs and OpenXLA

Google Cloud AI accelerators enable high-performance, cost-effective training and inference for leading AI/ML frameworks. In this session, get the latest news on PyTorch/XLA, developed collaboratively by Google, Meta, and partners in the AI ecosystem, which uses OpenXLA to accelerate PyTorch workloads.

GitHub → https://goo.gle/3xQeFaC
Cloud Tensor Processing Units (TPUs) → https://goo.gle/tpu

Speaker: Shauheen Zahirazami

Watch more:
Check out all the AI videos at Google I/O 2024 → https://goo.gle/io24-ai-yt
Check out all the Cloud videos at Google I/O 2024 → https://goo.gle/io24-cloud-yt
Check out all the Mobile videos at Google I/O 2024 → https://goo.gle/io24-mobile-yt
Check out all the Web videos at Google I/O 2024 → https://goo.gle/io24-web-yt

Subscribe to Google Developers → https://goo.gle/developers

#GoogleIO

Products Mentioned: Cloud – AI and Machine Learning – Cloud TPU
Event: Google I/O 2024
