into gen AI so far, only 5 percent of initiatives delivered measurable business returns six months post-pilot, exposing what MIT researchers dubbed a "widening Gen AI Divide." McKinsey & Co. researchers, for their part, point to a similar "gen AI paradox": nearly eight in 10 companies have deployed gen AI in some form, yet roughly the same share report no material impact on earnings. What's more, only 1 percent of enterprises recently surveyed by McKinsey view their gen AI strategies as mature. "For all the energy, investment and potential surrounding the technology, at-scale impact has yet to materialize for most organizations," said McKinsey.

This divide, noted MIT researchers, does not seem to be driven by business model, regulatory implications, industry vertical or company size. Rather, "the divide stems from implementation approach," they continued. In other words, partners can use the lessons learned from the 300 public implementations studied by MIT to guide customers across or out of the divide as they experiment with gen AI or transition into the next wave of AI technologies.

A Wide Divide

Certainly, gen AI tools such as ChatGPT and Copilot have been widely adopted, with more than 80 percent of organizations surveyed having explored or piloted them and nearly 40 percent reporting deployment. But these tools primarily enhance individual productivity, the MIT study showed, not P&L performance.

"Despite high-profile investment, industry-level transformation remains limited," said MIT researchers. "GenAI has been embedded in support, content creation and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies, such as new market leaders, disrupted business models or measurable changes in customer behavior."

All the while, the more customized, enterprise-grade systems "are being quietly rejected," said MIT researchers. While 60 percent of organizations evaluated such tools, only 20 percent reached pilot stage, and just 5 percent reached production. Most fail due to "brittle workflows, lack of contextual learning and misalignment with day-to-day operations."

McKinsey shared similar findings. It cited an imbalance at the heart of its paradox between widely deployed "horizontal" copilots and chatbots, which have scaled quickly but deliver diffuse, hard-to-measure gains, and more transformative "vertical," or function-specific, use cases, of which about 90 percent remain stuck in pilot mode.

According to the MIT data, resource intensity had little bearing on success. Large enterprises, defined as firms with more than $100 million in annual revenue, led in pilot count and allocated more staff to AI-related initiatives. Yet these organizations reported the lowest rates of pilot-to-scale conversion. Interestingly enough, smaller and mid-market firms, which move faster and more decisively, tended to have more success with gen AI. These top performers reported average timelines of 90 days from pilot to full implementation. Large enterprises, by comparison, took nine months or longer, MIT's findings showed.

Similar findings emerged across most verticals studied, as well. Of the nine industry segments studied, only tech and media (which often prioritize marketing, content and developer productivity) showed clear signs of structural change.
[Figure: The steep drop from pilots to production for task-specific gen AI tools reveals the gen AI divide. General-purpose LLMs: 80 percent investigated, 50 percent piloted, 40 percent successfully implemented. Embedded or task-specific gen AI: 60 percent investigated, 20 percent piloted, 5 percent successfully implemented. Source: MIT Project NANDA]

[Figure: Why gen AI pilots fail: top barriers to scaling AI in the enterprise. Users rated each issue on a scale of 1-10: challenging change management, lack of executive sponsorship, poor user experience, model output quality concerns and unwillingness to adopt new tools. Source: MIT Project NANDA]

[Figure: How executives select gen AI vendors ("Would you assign this task to AI or a junior colleague?"), derived from interviews and coded by category: flexibility when things change, the ability to improve over time, clear data boundaries, minimal disruption to current tools, deep understanding of our workflow and a vendor we trust. Source: MIT Project NANDA]

[Figure: Perceived fitness for high-stakes work. Source: B2U]

[Figure: Technology adoption curve: innovators, 2.5 percent; early adopters, 13.5 percent; early majority, 34 percent; late majority, 34 percent; laggards, 16 percent]

[Figure: Agents enabled by generative AI could function as hyperefficient virtual coworkers. Illustration of how an agent system might execute a workflow, from prompt to output: (1) Using natural language, the user prompts the generative AI agent system to complete a task. (2) The agent system interprets the prompt and builds a work plan. A manager agent subdivides the project into tasks assigned to specialist agents (planner, analyst and checker agents); they gather and analyze data from multiple sources and collaborate with one another to execute their individual missions. Agents also interact with external systems and databases, drawing on both organizational and external data, to complete the task. (3) The agent team shares the draft output with the user. (4) The agent team receives user feedback, then iterates and refines output accordingly. Source: McKinsey & Co.]
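To make the manager/specialist pattern in McKinsey's illustration concrete, here is a minimal Python sketch of that four-step loop. It is an illustration only, not McKinsey's or any vendor's implementation: the class and function names (ManagerAgent, SpecialistAgent, fake_llm) are hypothetical placeholders, and a real system would replace fake_llm with an actual model call and give specialists genuine tool and database access.

# Minimal sketch of a manager/specialist agent workflow (hypothetical names).
from dataclasses import dataclass, field

def fake_llm(role: str, instruction: str) -> str:
    """Stand-in for a model call; returns a canned response for the demo."""
    return f"[{role}] result for: {instruction}"

@dataclass
class SpecialistAgent:
    role: str  # e.g., "planner", "analyst", "checker"

    def run(self, task: str) -> str:
        # A real specialist would query databases or call tools here
        # before producing its piece of the work.
        return fake_llm(self.role, task)

@dataclass
class ManagerAgent:
    specialists: list[SpecialistAgent] = field(default_factory=lambda: [
        SpecialistAgent("planner"),
        SpecialistAgent("analyst"),
        SpecialistAgent("checker"),
    ])

    def execute(self, user_prompt: str, feedback: str | None = None) -> str:
        # Steps 1-2: interpret the prompt and subdivide it into specialist tasks.
        tasks = [f"{s.role} step for: {user_prompt}" for s in self.specialists]
        results = [s.run(t) for s, t in zip(self.specialists, tasks)]
        draft = "\n".join(results)  # Step 3: assemble the draft for the user.
        # Step 4: if the user has given feedback, fold it into a refinement pass.
        if feedback:
            draft += "\n" + fake_llm("manager", f"revise draft per feedback: {feedback}")
        return draft

if __name__ == "__main__":
    manager = ManagerAgent()
    print(manager.execute("Summarize Q3 churn drivers"))  # first draft (steps 1-3)
    print(manager.execute("Summarize Q3 churn drivers",
                          feedback="Add a regional breakdown"))  # refined output (step 4)

The point of the sketch is the division of labor: the manager owns task decomposition and the user feedback loop, while each specialist owns one narrow step, which is what distinguishes these "vertical" agent systems from the general-purpose copilots discussed above.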