Chris Gandolfo, EVP of OCI and AI Sales at Oracle, and AMD CTO Mark Papermaster explore what it really takes to train and deploy large language models at scale. From evolving compute needs and energy efficiency to the rise of inference, this episode dives deep into the future of enterprise AI infrastructure.
00:00 Series Intro & Host Introduction
00:12 Meet Chris Gandolfo – EVP at Oracle
01:19 AI at an Inflection Point – Oracle’s Perspective
04:48 Does Training Ever End?
05:27 OCI’s Late Entry Strategy – Learning from Rivals
07:24 Making it Easy for Enterprise
09:17 Operating in a Scarce Environment
11:40 How Far Along is Enterprise AI Adoption?
14:37 It’s a Really Great Time to be a Customer
15:03 AMD + Oracle: Performance-Driven Partnership
17:53 Cross-Collaboration Across the Ecosystem is King
20:27 Enabling Edge Inference with Fewer GPUs
21:59 Co-Innovation on MI355 and Future Roadmaps
24:08 Openness: Freedom from Lock-In
25:07 The Future of AI Training and Inference
26:37 Societal Impact: Guardrails & Responsibility
28:15 Final Reflections and Episode Close
***
Subscribe: https://bit.ly/Subscribe_to_AMD
Join the AMD Red Team Discord Server: https://discord.gg/amd-red-team
Like us on Facebook: https://bit.ly/AMD_on_Facebook
Follow us on Twitter: https://bit.ly/AMD_On_Twitter
Follow us on Twitch: https://Twitch.tv/AMD
Follow us on LinkedIn: https://bit.ly/AMD_on_Linkedin
Follow us on Instagram: https://bit.ly/AMD_on_Instagram
©2025 Advanced Micro Devices, Inc. AMD, the AMD Arrow Logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. in the United States and other jurisdictions. Other names are for informational purposes only and may be trademarks of their respective owners.