
For many enterprises, AI spending has exceeded initial projections. Nearly half cited cloud-specific cost unpredictability, including difficulty forecasting total cost of ownership and fluctuations in consumption-based pricing, as a barrier to expanding AI adoption. For many organizations, there is now a mismatch between workloads and pricing models. "The hyperscaler model works brilliantly for certain workloads – the dynamic, unpredictable ones that need to scale instantly," explained Telarus vice president of cloud, Chad Muckenfuss, in a recent post. "But the truth is, 80 percent of most companies' workloads are stable and predictable. Running those on consumption-based pricing is like paying for a hotel room every night instead of signing a lease."

Performance demands: Enterprises are also gravitating toward on-premises infrastructure to support AI use cases that involve real-time processing, such as manufacturing quality control and video surveillance. These workloads often have latency requirements that can be difficult to meet in cloud environments. In Cloudian's study, 75 percent of respondents identified AI workloads that would benefit from on-premises deployments, and only 4 percent said their AI use cases have no performance requirements that would call for on-premises infrastructure.

Repatriation Realities: Challenges and Tradeoffs

Cloudian predicts that AI repatriation will continue in the near term. More than seven in 10 respondents (73 percent) said they intend to move AI workloads to on-premises or private infrastructure, or to adopt or expand a hybrid approach, within the next 24 months. But technology advisors and IT leaders should approach this trend with caution. Repatriation is more complex than it appears, and organizations that move too quickly risk repeating the mistakes made during the initial rush to the cloud.

Rising infrastructure costs: Hardware costs are rising in 2026, driven by memory shortages and surging AI demand.
As a result, enterprise buyers should be prepared for longer lead times and higher prices for critical components. AI workloads also place greater demands on power and cooling infrastructure, so moving them on premises may require facility upgrades that drive costs up even further. IT leaders must therefore be realistic about repatriation costs: organizations may need to absorb higher upfront spending, with the understanding that returns are more likely to materialize over the long term.

Operational complexity: Repatriating AI workloads also means taking on additional infrastructure responsibilities, including maintaining servers, storage and networking. Making matters more complex, organizations still face a growing talent gap, with AI-related infrastructure roles becoming more specialized and harder to fill.

Security and compliance: While running sensitive AI workloads on premises can offer greater control and protection, it also introduces new challenges. Organizations must take on greater responsibility for protecting infrastructure, data and workloads from evolving threats: securing on-premises environments, managing access controls and regularly auditing infrastructure.

Vendor selection: Organizations that previously relied on VMware infrastructure now face a very different reality following recent changes to Broadcom's licensing and pricing models. Broadcom's shift toward bundled, subscription-based offerings has introduced new cost and operational considerations, causing many customers to reevaluate their on-premises virtualization strategies. For technology advisors and customers, this means reduced flexibility in how VMware products are purchased, along with higher costs in many cases. Organizations exploring alternatives may consider Nutanix, Microsoft Hyper-V, Proxmox or Red Hat.
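The "hotel room versus lease" tradeoff for stable workloads comes down to simple break-even arithmetic: higher upfront cost, lower running cost, and a payback period. The sketch below illustrates that calculation; all dollar figures are hypothetical assumptions for illustration, not figures from the survey.

```python
import math

def months_to_break_even(cloud_monthly, onprem_upfront, onprem_monthly):
    """Months until cumulative on-prem cost falls below cumulative cloud cost.

    Returns None if on-prem never breaks even (its running cost is not
    lower than the cloud bill it replaces).
    """
    if onprem_monthly >= cloud_monthly:
        return None
    monthly_savings = cloud_monthly - onprem_monthly
    return math.ceil(onprem_upfront / monthly_savings)

# Hypothetical stable AI inference workload: $40k/month consumption
# billing vs. $500k in hardware plus $15k/month to power and operate.
print(months_to_break_even(40_000, 500_000, 15_000))  # 20 months
```

The same function also shows when repatriation never pays off: if on-prem running costs match or exceed the cloud bill, there is no break-even point, which is why realistic power, cooling and staffing estimates matter before committing to hardware.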
Turning Challenges into MRR

Technology advisors have a growing opportunity to turn repatriation challenges into long-term recurring revenue through partnerships and managed services. Asking probing questions around AI workload cost and performance can lead to broader infrastructure discussions and open the door to substantial MRR. It's important to remember that success in this market doesn't require deep or specialized infrastructure expertise. It starts with engaging customers in quality conversations: asking the right questions and getting them to share insights around resource constraints, pain points and goals.

Expected AI Infrastructure Strategy Evolution (Next 24 Months)
- Shift AI workloads to on-prem or private infrastructure: 22%
- Adopt/expand hybrid with increased on-prem capacity: 51%
- Maintain current infrastructure balance: 12%
- Increase public cloud usage: 10%
- Uncertain, strategy still being determined: 5%
Source: Cloudian's 2026 Enterprise AI Infrastructure Survey

Top Factors That Would Increase Likelihood of On-Premises AI Deployment
- Stronger data privacy and sovereignty guarantees: 53%
- Better performance for latency-sensitive apps: 43%
- Improved total cost of ownership vs. cloud alternatives: 40%
- Vendor-managed support reducing internal staffing requirements: 22%
- Turnkey solutions reducing implementation complexity: 17%
- No interest in on-premises AI infrastructure: 4%
Source: Cloudian's 2026 Enterprise AI Infrastructure Survey

CHANNELVISION | SPRING 2026
