EPISODE · May 10, 2026 · 33 MIN
AI, Data Centers, and the Power Crunch
from The Reasoning Show · host Massive Studios
SUMMARY: We explore one of the most overlooked bottlenecks in the AI boom, energy and infrastructure, and why power availability is becoming the limiting factor.

GUEST: Wannie Park, Founder/CEO of PADO AI
SHOW: 1026
SHOW TRANSCRIPT: The Reasoning Show #1026 Transcript
SHOW VIDEO: https://youtu.be/satMQRxKQC8

SHOW SPONSORS:
ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
Nasuni - Activate your data for AI and request a demo

SHOW NOTES:

1. AI's Hidden Constraint: Power
- AI growth is no longer limited only by GPUs and compute
- Power generation, cooling, and grid interconnects are emerging as major bottlenecks
- Data centers could account for 10–12% of North American power demand in coming years

2. Why Data Centers Are Being Reimagined
- Traditional data centers were built for enterprise IT, not AI-scale workloads
- AI infrastructure introduces massive power-density needs and advanced cooling challenges

3. The Grid Wasn't Built for AI
- Utilities are designed around peak-demand scenarios, yet most grids run well below peak capacity most of the time
- AI workloads create volatile and unpredictable consumption patterns
- Long interconnection timelines are pushing companies toward alternative infrastructure models

4. GPU Utilization Is Surprisingly Low
- GPU clusters are often underutilized because of scheduling inefficiencies, cooling limitations, and SLA constraints
- Effective GPU utilization may be as low as 12–13% in some environments

5. Cooling as a Major Optimization Layer
- Legacy data centers often cool entire zones inefficiently
- PADO AI aligns AI workloads, cooling systems, and power allocation
- Workload-aware orchestration helps optimize cooling and compute efficiency

6. The Rise of "Compute Forecasting"
- PADO forecasts compute demand instead of energy demand
- The platform models GPU workloads, power consumption, cooling requirements, and SLA priorities
- Goal: maximize "compute per megawatt"
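The "compute per megawatt" goal and the low-utilization figures above can be connected with a small back-of-the-envelope sketch. Everything here is a hypothetical illustration (function name, cluster size, and power draw are invented for the example), not PADO AI's actual model:

```python
# Illustrative "compute per megawatt" calculation.
# All figures are hypothetical examples, not PADO AI's actual numbers.

def compute_per_megawatt(gpu_count: int, utilization: float,
                         facility_power_mw: float) -> float:
    """Effective GPU-hours delivered per megawatt-hour of facility power."""
    effective_gpu_hours = gpu_count * utilization  # per wall-clock hour
    return effective_gpu_hours / facility_power_mw

# A hypothetical 1,000-GPU cluster drawing 2 MW, at the ~12% effective
# utilization mentioned in the episode versus a better-orchestrated 60%:
low = compute_per_megawatt(1000, 0.12, 2.0)   # 60.0 GPU-hours per MWh
high = compute_per_megawatt(1000, 0.60, 2.0)  # 300.0 GPU-hours per MWh
```

The point of the sketch: the facility burns the same power either way, so raising effective utilization through workload-aware orchestration multiplies useful compute per megawatt without adding generation capacity.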
7. AI Workloads Become Time-Aware
- AI providers may increasingly shift workloads to off-peak periods, incentivize delayed non-urgent jobs, and dynamically balance compute demand
- Users are already seeing variable inference latency in real-world AI systems

8. Sustainability vs. Reliability vs. Profitability
- Operators must balance uptime expectations, infrastructure costs, and sustainability goals
- Renewable adoption is growing, but reliability still drives investment in natural gas and battery-backed systems

9. Brownfield vs. Greenfield Opportunities
- PADO AI is focused primarily on existing ("brownfield") data centers
- Existing enterprise infrastructure can often be extended and optimized instead of rebuilt
- Enterprises may gain significant AI capability without hyperscale GPU deployments

FEEDBACK?
Email: show @ reasoning dot show
Bluesky: @reasoningshow.bsky.social
Twitter/X: @ReasoningShow
Instagram: @reasoningshow
TikTok: @reasoningshow