Profiling PyTorch/XLA on TPUs with XProf


Unlock the full potential of your PyTorch models running on Google TPUs. In this video, we look at how to profile PyTorch/XLA workloads on TPUs using XProf, and how to identify and eliminate bottlenecks in your training pipeline so you get maximum performance from the TPU hardware.
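The video is the full walkthrough; as a rough companion, here is a minimal sketch, assuming the `torch_xla.debug.profiler` API, of how a training script can expose a profiler server and annotate training steps so they appear as named regions in XProf's trace viewer. The port, model, and step count below are illustrative choices, not details from the video.

```python
# Minimal sketch: start a profiler server and annotate steps for XProf.
# Port 9012, the toy model, and the step count are illustrative assumptions.
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.profiler as xp

def train():
    device = xm.xla_device()
    model = torch.nn.Linear(1024, 1024).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Expose a profiling server so a trace can be captured from this
    # process while it trains (e.g. via TensorBoard's Profile tab).
    server = xp.start_server(9012)

    for step in range(100):
        # Named traces show up as annotated regions in the XProf trace viewer.
        with xp.Trace('train_step'):
            data = torch.randn(128, 1024, device=device)
            optimizer.zero_grad()
            loss = model(data).sum()
            loss.backward()
            xm.optimizer_step(optimizer)
            xm.mark_step()  # materialize the step's XLA graph

if __name__ == '__main__':
    train()
```

With the script running, a trace can then be captured from a separate process with something like xp.trace('localhost:9012', '/tmp/xla_profile', duration_ms=10000), or from the Capture Profile button in TensorBoard's Profile tab, and inspected in XProf.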

Resources:
PyTorch/XLA GitHub → https://goo.gle/4pFTA8p
XProf Documentation → https://goo.gle/3Y5v7gU

Subscribe to Google for Developers → https://goo.gle/developers

Speaker: Chris Achard
Products Mentioned: Google AI
