Global Study Examines the State of AI Infrastructure at Scale

ClearML today announced new research findings from a global AI survey conducted alongside FuriosaAI and the AI Infrastructure Alliance (AIIA). The report is entitled “The State of AI Infrastructure at Scale 2024: Unveiling Future Landscapes, Key Insights, and Business Benchmarks.”

The survey includes responses from AI/ML and technology leaders at 1,000 companies of various sizes across North America, Europe and Asia Pacific. In particular, it focuses on:

  • How executives build AI infrastructure.
  • Critical benchmarks and key challenges they face.
  • How they rank priorities when evaluating AI infrastructure solutions against business use cases.
  • Scheduling, compute, and AI/ML needs for training and deploying models, as well as AI framework plans for 2024-2025.

The study found that a primary driver propelling hypergrowth in the AI infrastructure market is organizations’ realization that AI can drive operational efficiency and workforce productivity, which respondents cited as the leading business use case. Companies recognize the need for GenAI solutions that extract actionable insights from their internal knowledge bases, and they plan to deploy GenAI to sharpen their competitive edge, enhance knowledge-worker productivity, and improve their bottom line.

As companies navigate the AI infrastructure market, they are seeking clarity, peer insights and reviews, as well as industry benchmarks on AI/ML platforms and compute. To understand executives’ biggest pain points in moving AI/ML to production, this survey examined not only model training, but also model serving and inference.

“Our research shows that while most organizations are planning to expand their AI infrastructure, they can’t afford to move too fast in deploying Generative AI at scale at the cost of not prioritizing the right use cases,” said Noam Harel, ClearML CMO and GM, North America. “We also explore the myriad challenges organizations face in their current AI workloads and how their ambitious plans for the future signal a need for highly performant, cost-effective ways to optimize GPU utilization (or find alternatives to GPUs) and harness seamless, end-to-end AI/ML platforms to drive effective, self-serve compute orchestration and scheduling with maximum utilization.”

Key insights included:

  • 96 percent of respondents plan to expand their AI compute infrastructure, with availability, cost, and infrastructure challenges weighing on their minds: 40 percent are considering more on-premises capacity and 60 percent more cloud, and they are looking for flexibility and speed. The top concern for cloud compute is wastage/idle cost.
  • 95 percent of executives reported that having and using open source technology is important for their organization, while 96 percent are focused on customizing open source models. PyTorch is by far the framework of choice.
  • 74 percent of companies are dissatisfied with their current job scheduling and orchestration tools and face constraints on on-demand compute allocation and team productivity. 74 percent of respondents see value in having compute and scheduling functionality as part of a single, unified AI/ML platform (rather than cobbling together a tech stack of stand-alone point solutions), but only 19 percent have a scheduling tool that lets them view and manage jobs within queues and effectively optimize GPU utilization. 93 percent of surveyed executives believe that AI team productivity would increase substantially if compute resources could be self-served.
  • Optimizing GPU utilization and GPU partitioning are concerns, with the majority of GPUs underutilized during peak times. 40 percent of respondents, regardless of company size, plan to use orchestration and scheduling technology to maximize their existing compute investments. Only 42 percent of companies can manage dynamic MIG/GPU partitioning to optimize GPU utilization.
  • To address GPU scarcity, 52 percent of respondents reported actively looking for cost-effective alternatives to GPUs for inference in 2024, compared with 27 percent for training. 20 percent were interested in cost-effective alternatives to GPUs but were not aware of any. This indicates that cost is a key buying factor for inference solutions. While the industry is still in the early days of inference, we expect demand for cost-efficient inference compute to grow.
  • The largest compute challenges are latency, followed by access to compute and power consumption. Over half of respondents plan to use language models (such as Llama) in their commercial deployments, followed by embedding models (BERT and its family) at 26 percent. Mitigating compute challenges will be essential to those plans.

To download a free copy of the global survey report, click here. For more on ClearML’s partner program, visit here.