Dataiku Unveils LLM Mesh with Launch Partners

Dataiku today unveils its LLM Mesh, which addresses the need for an effective, scalable and secure platform for integrating large language models (LLMs) in the enterprise. It is joined by its LLM Mesh launch partners Snowflake, Pinecone and AI21 Labs.

While Generative AI presents myriad opportunities and benefits for the enterprise, organizations face real challenges in adopting it at scale. These include an absence of centralized administration, inadequate permission controls for data and models, minimal safeguards against toxic content, potential exposure of personally identifiable information and a lack of cost-monitoring mechanisms.

Additionally, many organizations have yet to establish best practices for harnessing the potential of this emerging technology ecosystem.

Building on Dataiku’s Generative AI capabilities introduced in June, the LLM Mesh is designed to overcome these roadblocks to enterprise value.

The LLM Mesh provides the components companies need to build safe LLM-powered applications efficiently and at scale. Sitting between LLM service providers and end-user applications, the LLM Mesh lets companies choose the most cost-effective models for their needs, ensure the safety of their data and responses, and create reusable components for scalable application development.

Components of the LLM Mesh include universal AI service routing, secure access and auditing for AI services, safety provisions for private data screening and response moderation, and performance and cost tracking.
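
To make the idea concrete, the sketch below shows what a gateway of this kind might look like. It is not Dataiku's implementation or API; the MeshGateway class, its providers mapping and the naive PII check are all hypothetical, illustrating how routing, private-data screening and audit/cost logging can sit in one layer between applications and LLM services.

```python
# Illustrative sketch only, not Dataiku's API: a minimal gateway that routes
# requests to a chosen provider, screens prompts and records an audit trail.
import re
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Naive email detector standing in for a real private-data screening step.
PII_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

@dataclass
class MeshGateway:
    # Providers are plain callables (prompt -> response) here; a real mesh
    # would wrap hosted LLM services and pick among them by cost or policy.
    providers: Dict[str, Callable[[str], str]]
    audit_log: List[dict] = field(default_factory=list)

    def query(self, provider: str, prompt: str, user: str) -> str:
        if PII_PATTERN.search(prompt):            # private-data screening
            raise ValueError("Prompt appears to contain PII; request blocked.")
        response = self.providers[provider](prompt)  # service routing
        self.audit_log.append({                      # access auditing + cost tracking
            "user": user,
            "provider": provider,
            "prompt_chars": len(prompt),
            "response_chars": len(response),
        })
        return response

# Usage with a stubbed provider:
gateway = MeshGateway(providers={"stub-llm": lambda p: f"echo: {p}"})
print(gateway.query("stub-llm", "Summarize our Q3 roadmap.", user="analyst-1"))
```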

The LLM Mesh also provides standard components for application development to ensure quality and consistency while delivering the control and performance the business expects.

Dataiku’s new features powering the LLM Mesh will be released in preview starting in October.

Dataiku’s LLM Mesh launch partners, Snowflake, Pinecone and AI21 Labs, represent several of the key components of the LLM Mesh: containerized data and compute capabilities, vector databases and LLM builders.

Learn more about the LLM Mesh on the blog: Delivering Enterprise-Grade Generative AI Applications With the LLM Mesh.