Vultr and AMD Announce Partnership

Vultr, a privately held cloud computing platform, announced that AMD’s new Instinct MI300X accelerator and ROCm open software will be made available for its composable cloud infrastructure. The collaboration is expected to unlock new capabilities for GPU-accelerated workloads, from the data center to the edge.

“Innovation thrives in an open ecosystem,” said J.J. Kardwell, Vultr’s CEO. “The future of enterprise AI workloads is in open environments that allow for flexibility, scalability and security. AMD accelerators give our customers unparalleled cost-to-performance. The balance of high memory with low power requirements furthers sustainability efforts and gives our customers the capabilities to efficiently drive innovation and growth through AI.”

With AMD ROCm open software and Vultr’s cloud platform, enterprises can access an industry-leading environment for AI development and deployment. The open nature of AMD architecture and Vultr infrastructure provides access to thousands of open-source, pre-trained models and frameworks with a drop-in code experience, creating an optimized environment for advancing AI projects quickly.

“We are proud of our close collaboration with Vultr, as its cloud platform is designed to manage high-performance AI training and inferencing tasks and provide improved overall efficiency,” said Negin Oliver, AMD corporate VP of business development, data center GPU business unit. “With the adoption of AMD Instinct MI300X accelerators and ROCm open software for these latest deployments, Vultr’s customers will benefit from having a truly optimized system tasked to manage a wide range of AI-intensive workloads.”

Designed for next-gen workloads, AMD architecture on Vultr infrastructure allows for true cloud-native orchestration of all AI resources. AMD Instinct accelerators and ROCm software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power resource-intensive workloads worldwide. These platform capabilities give developers and innovators the resources to build sophisticated AI and ML solutions.
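For teams evaluating such clusters, scheduling a workload onto an AMD GPU node typically follows the standard Kubernetes device-plugin pattern. The sketch below is a hypothetical pod manifest, assuming the AMD device plugin is installed and exposes the `amd.com/gpu` resource name; the pod name and container image are illustrative, not Vultr-specific.

```yaml
# Hypothetical pod spec: requests one AMD Instinct accelerator via the
# amd.com/gpu resource exposed by the AMD Kubernetes device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: rocm-inference            # illustrative name
spec:
  containers:
    - name: worker
      image: rocm/pytorch:latest  # public ROCm PyTorch image (assumed tag)
      resources:
        limits:
          amd.com/gpu: 1          # one GPU, e.g. an MI300X-class device
  restartPolicy: Never
```

Because ROCm plugs into the same device-plugin interface as other accelerators, existing Kubernetes scheduling, quotas, and autoscaling tooling apply without modification.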

Further benefits include:

  • improved price-to-performance
  • scalable compute and optimized workload management
  • accelerated discovery and innovation in R&D
  • optimization for AI inference
  • sustainable computing