Domino Data Lab, a provider of an Enterprise MLOps platform, announced new integrations with NVIDIA that extend fast, flexible deployment of GPU-accelerated machine learning models across modern tech stacks – from data centers to dash cams.
Domino says it is the first MLOps platform integrated with NVIDIA Fleet Command, enabling seamless deployment of models across edge devices. A new curated MLOps trial available through NVIDIA LaunchPad fast-tracks AI projects from prototype to production, while support for on-demand Message Passing Interface (MPI) clusters and NVIDIA NGC streamlines access to GPU-accelerated tooling and infrastructure, furthering Domino's market-leading openness.
Edge Device Support Streamlines Model Deployment through MLOps
Domino’s support for the Fleet Command managed edge AI services platform reduces infrastructure friction and extends key enterprise MLOps benefits — collaboration, reproducibility, and model lifecycle management — to NVIDIA-Certified Systems in retail stores, warehouses, hospitals, and city street intersections. The integration relieves data scientists of IT and DevOps burdens as they build, deploy, manage, and monitor GPU-accelerated models at the edge.
Accelerated Proof-of-Concepts with MLOps Platform on NVIDIA LaunchPad
Further deepening Domino and NVIDIA’s collaboration to accelerate model-driven business, Domino is the first Enterprise MLOps platform available through the NVIDIA LaunchPad program. LaunchPad provides a curated, pre-configured environment in which enterprises can prototype AI initiatives without DevOps distractions.
Teams can use LaunchPad to test AI initiatives on the complete stack underpinning joint Domino and NVIDIA AI solutions. This experience delivers MLOps benefits – collaboration and reproducibility – optimized and pre-configured for purpose-built AI infrastructure.
Because the stack is validated and supported by Domino and NVIDIA, teams gain confidence that proofs of concept built in LaunchPad can be deployed at production scale on the same complete stack they can purchase and deploy.
Support for On-Demand MPI Clusters & NVIDIA NGC Streamlines MLOps
Further integrations bring the added Enterprise MLOps benefits of interactive workspaces, collaboration, reproducibility and democratized GPU access to NVIDIA’s expanding portfolio of GPU-optimized solutions.
Support for on-demand MPI clusters allows data scientists to use NVIDIA DGX nodes in the same Kubernetes cluster as Domino. Available for Domino environments and NGC images, this integration eliminates the time data scientists lose to administrative DevOps tasks so they can start innovating on deep learning models sooner.
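To make the DGX-in-Kubernetes idea concrete, the following is a minimal, hypothetical sketch of what a GPU-backed MPI worker pod in that shared cluster might look like. The pod name, image tag, and GPU count are illustrative assumptions, not Domino's actual generated specification; the `nvidia.com/gpu` resource name is the standard one exposed by the NVIDIA Kubernetes device plugin.

```yaml
# Hypothetical sketch: an MPI worker pod scheduled onto a DGX node in the
# same Kubernetes cluster that runs Domino. Names and counts are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: mpi-worker-0            # assumed name for one worker in an on-demand cluster
spec:
  containers:
  - name: worker
    image: nvcr.io/nvidia/pytorch:22.02-py3   # an NGC image used as the environment
    command: ["sleep", "infinity"]            # placeholder; a real workload runs mpirun here
    resources:
      limits:
        nvidia.com/gpu: 8       # request all eight GPUs on a DGX node
```

In practice the orchestration layer generates specs like this on demand, so the data scientist only picks an environment and a GPU count rather than writing Kubernetes manifests by hand.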
Domino also supports NVIDIA’s NGC catalog, a hub of GPU-optimized AI frameworks (such as PyTorch and TensorFlow), industry-specific SDKs, and pre-trained models that simplifies and accelerates end-to-end workflows. Data science teams can run NGC containers in Domino while maintaining two-way code interoperability with raw NGC containers.
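As a rough illustration of that interoperability, a team might extend an NGC base image into a reusable compute environment. This is a hedged sketch, not Domino's documented build process; the image tag and pip packages are assumptions chosen for the example, and a current tag should be taken from the NGC catalog.

```dockerfile
# Hypothetical sketch: layering team dependencies on an NGC PyTorch container
# so the same image runs inside Domino or as a raw NGC container elsewhere.
FROM nvcr.io/nvidia/pytorch:22.02-py3

# Add project-specific Python dependencies on top of the GPU-optimized base.
RUN pip install --no-cache-dir mlflow pandas
```

Because the GPU-optimized stack lives in the base image, code developed against this environment behaves the same whether it is launched from the MLOps platform or directly with the container runtime.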
Domino at GTC
Domino is a diamond sponsor of NVIDIA GTC, which runs through March 24. Learn more about scaling MLOps in the enterprise by attending Domino Data Lab’s sessions at GTC, where data science innovators will discuss topics ranging from building analytics centers of excellence in insurance to attracting and retaining data science talent in healthcare.