ChannelVision Sept-Oct 2017

switch or function needs to be included in voice-first products, so that users may get the benefits without risking the downsides of constant monitoring. Reliably secure software access would also need to be in place in the products to prevent and detect hacking efforts.

Even more effective

The first use cases are primarily around voice response systems, whether from a call center perspective or those implemented in our cars and smartphones. But as many of us know from firsthand experience, these work marginally at best. Recognition and contextualization need to be refined through technological development before we can realistically think about enterprise-wide adoption.

Research programs such as Carnegie Mellon University's Sphinx project continue to enhance language recognition capabilities. An Internet Trends report by Mary Meeker indicated that in 2016, Google's voice recognition system could recognize more than five million words with around 90 percent accuracy, but that's still not extensive or accurate enough. Is 90 percent accuracy good enough to interact with a life support system in a hospital or a utility provider's network?

It's not just about recognizing the words, either; it is about what to do with the words. Here is where cognitive engines and AI come into play. Offerings from some of the biggest players in the industry (for example, Microsoft's open source cognitive recognition engine) can be leveraged to understand the context of the words. "How do I get to Penn Station?" may sound simple enough, but it needs to be put into context. Location awareness could indicate that you likely mean Penn Station in New York City, and could support assumptions about your mode of transportation. If you were sitting at Columbus Circle in New York City, the answer could be, "Take the A or C subway line to the 34th St.-Penn Station stop." But here we assumed it was Penn Station New York, and not Penn Station Newark or Philadelphia.
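The location-awareness idea above can be sketched in a few lines: given several candidate stations, pick the one closest to the speaker. This is a hypothetical illustration, not any vendor's API; the station list and coordinates are approximate values chosen for the example.

```python
import math

# Hypothetical candidate stations for the ambiguous phrase "Penn Station".
# Coordinates are approximate (latitude, longitude).
STATIONS = {
    "Penn Station, New York": (40.7506, -73.9935),
    "Penn Station, Newark": (40.7344, -74.1640),
    "30th Street Station, Philadelphia": (39.9566, -75.1819),
}

def _distance(a, b):
    # Rough planar distance; adequate for ranking nearby candidates.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def resolve_station(user_location):
    """Return the candidate station closest to the speaker's location."""
    return min(STATIONS, key=lambda name: _distance(user_location, STATIONS[name]))

# A speaker at Columbus Circle (40.7680, -73.9819) most plausibly
# means the New York Penn Station.
print(resolve_station((40.7680, -73.9819)))  # Penn Station, New York
```

A production system would of course combine location with many other signals (travel history, the rest of the conversation, transportation mode), but the principle is the same: context turns an ambiguous phrase into a specific intent.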
The real challenge comes in what is behind the voice recognition systems, from integrating the IoT devices with the system itself to ensuring the commands requested make sense. Here, we need to further leverage those cognitive engines as a check-and-validation system. Think of someone accidentally giving a command to "turn off cooling system to reactor 4" instead of reactor 3, which has already been shut down, or of a doctor using the system to prescribe a harmful dose of medication because he accidentally said 400 grams instead of 400 milligrams. These might be extreme examples, but there will need to be a holistic view of the actions being automated to prevent human error, and broader intelligence to understand the actions related to voice-controlled requests. For example, maybe "turn off cooling system to reactor 4" was correct, but the system would then need to understand the set of operational procedures required to implement that action.

Creating an API

An interesting element that could tie in strategically with the development of true voice-controlled enterprise environments comes from the innovations occurring in the traditional voice communication world. We are seeing the explosion of CPaaS (communications platform as a service) in the enterprise, leveraging APIs to transform today's applications into voice-integrated solutions. Some of the major voice communication vendors are now entering this market, providing CPaaS infrastructures with a standardized set of APIs that enable companies to integrate communications into their business processes.
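The check-and-validation layer described above, using the dosage example, might look something like this in practice. This is a minimal sketch with assumed values: the drug name, the safe-range figures and the unit table are purely illustrative, not clinical guidance.

```python
# Hypothetical sanity-check layer between voice recognition and execution.
# All drug names, dose ranges and units below are illustrative assumptions.
SAFE_DOSE_MG = {
    "acetaminophen": (325, 1000),  # assumed per-dose range in milligrams
}

UNIT_TO_MG = {"mg": 1, "g": 1000}

def validate_dose(drug, amount, unit):
    """Return (ok, message); flag doses outside the expected range."""
    if drug not in SAFE_DOSE_MG or unit not in UNIT_TO_MG:
        return False, "Unknown drug or unit: {} {}".format(drug, unit)
    dose_mg = amount * UNIT_TO_MG[unit]
    lo, hi = SAFE_DOSE_MG[drug]
    if not lo <= dose_mg <= hi:
        return False, "{} mg is outside the expected {}-{} mg range; please confirm".format(dose_mg, lo, hi)
    return True, "ok"

# "400 grams", a likely misrecognition of "400 milligrams", is rejected
# and sent back to the speaker for confirmation:
print(validate_dose("acetaminophen", 400, "g"))
print(validate_dose("acetaminophen", 400, "mg"))
```

The same pattern generalizes to the reactor example: before executing, the system checks the requested action against known state (reactor 3 is already shut down) and asks for confirmation when the command looks anomalous.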
While we traditionally look at integration as things like incorporating voice and video services into existing applications (think of a banking application that lets you move from an online session to a voice call with your banking advisor), I believe these will play a big part in that "voice-first" environment by leveraging the rich API infrastructure of CPaaS to communicate with applications and things.

Beyond the communications infrastructure requirements, just how CPaaS or other platforms communicate with devices really needs to be standardized before we will see rapid development of voice technology. Each of today's consumer voice-controlled systems has its own interface and its own API integration and, as with the historic "Beta vs. VHS" battle of decades ago, this may lead to product obsolescence. Just as a consumer doesn't want to invest in the latest "smart coffee maker" only to find that the platform that controls it was just discontinued, an enterprise wants to ensure that the investments it makes in new technologies won't be obsolete before it can realize a return.

The good news is that a set of technologies is in the works to help minimize potential obsolescence. Frameworks such as IoTivity are being developed to build a standardized platform. We are already seeing the value, benefits and rapid expansion of new voice applications for consumers. In the near term, we will see some of the basic use cases move into the enterprise. Longer term, as advances continue in voice recognition, voice security and the simplification and standardization of device connectivity, we will see more and more voice-first activities in both the consumer and enterprise worlds, helping to reduce complexity and improve our productivity.

Jack Jachner is vice president Cloud, North America at ALE.
