While voice-based digital assistants such as Amazon Alexa, Apple Siri and Google Assistant are becoming increasingly common at home – and smartphones and wearables can be used hands-free via speech – the use of voice in the workplace is just getting started.
That’s likely to change in 2020 and beyond.
The promise of voice in the workplace? More efficient employees, “smarter” voice-based assistants, easier ways of completing routine tasks and a digital experience in the office that matches what’s used at home.
A 2019 survey by 451 Research found that voice UIs and digital assistants rank among the most disruptive technologies for enterprises, behind only IoT and AI, with four in 10 respondents planning to adopt voice technology within 24 months.
“I expect 2020 will be the year when voice user interfaces will become prevalent in the workplace,” said Raúl Castañón-Martínez, a senior analyst at 451 Research. “They will initially address simple tasks, but this will lay the groundwork for increasingly complex workflows.”
At first, voice assistants are likely to be used at work much as they are at home, such as starting phone calls, setting reminders and scheduling calendar events. But more workplace-specific uses are arriving quickly as software vendors integrate voice capabilities into their products.
Microsoft recently announced that Cortana – now firmly positioned as a workplace rather than consumer AI assistant – will integrate with its Outlook mobile app, enabling users to dictate messages and have emails read aloud. Google, meanwhile, has begun to integrate its Assistant with G Suite calendars, allowing users to check schedules via voice commands, schedule events, send emails to certain contacts and dial in to meetings.
Although these are relatively straightforward tasks, they will get more workers interacting via voice, given the reach of Office 365 and G Suite in the corporate world.
In addition, voice assistants are being embedded in hardware designed explicitly for the office, making the technology easier for businesses to deploy. Amazon’s Alexa for Business is now embedded in Poly’s conference phones and headsets, while Microsoft’s Cortana is integrated into its Surface earbuds.
Wider availability of voice technology on productivity applications and devices will influence adoption, said Castañón-Martínez. “This will reinforce the familiarity of voice user interfaces in the workplace, in a similar way as consumers have become familiar with Alexa and Siri, and with smart speakers like Amazon Echo and Google Home/Nest.”
While conversational AI tools such as chatbots are now common, voice interfaces have been slower to arrive, according to Hayley Sutherland, a senior research analyst at IDC. But advances in the underlying natural language processing technology have made voice-based assistants accurate enough to support regular interactions.
“We’ve seen huge leaps in natural language processing, even in the last year,” she said.
That’s important because it means the assistants are less likely to misunderstand commands, which can quickly annoy users. “If I’m working with a voice assistant and it works 80% of the time, that remaining 20% is a lot in my day-to-day job; that can add up to a lot,” she said.
Although advances in natural language processing (NLP) usually come from big tech companies like Microsoft, Amazon and Google with deep pockets for research and development, the availability of voice APIs gives more companies access to the technology. And those firms can create AI assistants better tailored to specific workplace scenarios.
One example is commercial real estate firm JLL, which unveiled its own voice and text assistant, dubbed JiLL, last summer. The smart office assistant was built on Google Cloud, using the Dialogflow conversational AI platform. It helps employees locate and book spare desks, set up meetings with colleagues and more.
“We wanted to bring the consumer experience to work,” JLL’s chief digital product officer, Vinay Goel, said in an earlier interview. “We think of JiLL as being the assistant that you have in your consumer life, whether through Alexa or Google Assistant, and we want to essentially recreate that experience with JiLL in the workplace.”
“Companies that previously didn’t have the resources to build this kind of capability themselves, or that said it is not their focus area, can now use third-party APIs to get their voice capabilities 75% of the way and customize from there,” said Sutherland. “So that could be another driver [of the technology].”
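The Dialogflow approach JLL took gives a sense of how little custom code such an assistant can require: the platform handles speech and intent matching, and the company supplies a webhook that fulfills matched intents. As a rough illustration, here is a minimal webhook handler sketch in Python. The intent name (`book.desk`), the desk data and the booking logic are invented for this example; only the general request/response JSON shape follows Dialogflow’s webhook format.

```python
import json

# Stand-in data source; a real assistant would query a booking system.
AVAILABLE_DESKS = {"floor 3": ["3.14", "3.15"], "floor 4": ["4.02"]}

def handle_webhook(request_body: str) -> str:
    """Parse a Dialogflow-style webhook request and return a fulfillment response."""
    req = json.loads(request_body)
    query = req.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})

    if intent == "book.desk":  # hypothetical intent name
        floor = params.get("floor", "")
        desks = AVAILABLE_DESKS.get(floor, [])
        if desks:
            text = f"Desk {desks[0]} on {floor} is free. Want me to book it?"
        else:
            text = f"Sorry, no free desks on {floor}."
    else:
        text = "Sorry, I didn't catch that."

    # Dialogflow reads 'fulfillmentText' and speaks it back to the user.
    return json.dumps({"fulfillmentText": text})

# Example request, shaped like what the platform sends after matching an intent:
sample = json.dumps({
    "queryResult": {
        "intent": {"displayName": "book.desk"},
        "parameters": {"floor": "floor 3"},
    }
})
print(handle_webhook(sample))
```

The point is less the code than the division of labor: the hard parts – speech recognition and intent classification – stay with the platform vendor, leaving the business to write only the domain logic.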
One potential barrier to adoption involves privacy and security fears.
In the past year, Apple, Amazon, Google and Microsoft have each come under fire after reports that staff and contract workers were given access to small numbers of customer voice recordings for quality review.
According to a recent IDC survey, 44% of consumers have privacy and security concerns about the devices; those worries are likely to be higher when sensitive enterprise data is at risk. That’s especially pertinent as voice interfaces are added to business applications, such as software from the likes of Salesforce and Oracle.
Oracle was keen to highlight its stringent management of customer voice data when it announced speech inputs for its Digital Assistant in September.
“Customers don’t want their data going to public cloud vendors, or, more specifically, being accessed or listened to by third-party vendors,” Suhas Uliyar, Oracle’s vice president of AI and Oracle Digital Assistant, said in an interview at the time. “In our instance, the privacy and security is maintained. Only our customers have access to their data.
“We don’t use it to retrain our models: that’s very important for GDPR,” Uliyar said. “And while we store it in the Oracle second-generation cloud infrastructure, we at Oracle don’t touch it.”
Allaying customer concerns will be key to growth and an ongoing priority for companies offering voice interfaces to business applications, said Castañón-Martínez.
“The single key barrier that has limited widespread adoption is security,” he said. “To enable complex workflows using voice commands will require some form of device and user authentication. This remains a hurdle, as this will be required to enable interactions with sensitive company resources such as data and business applications.”
While there may be resistance to using voice AI assistants in busy offices (for practical reasons), office workers on the go or staffers not bound to a desk could find them especially useful.
“Those industries where people need to use their hands a lot is where we’ll see it first and where it will have a more natural kind of adoption,” said Sutherland. “It could be a lot more natural for field workers, and the efficiencies that they gain could mean it is an easier kind of adoption.”
Healthcare is one area where voice-based assistants show particular promise. A variety of startups in the industry have attracted venture capital investment, including Seattle-based Saykara, which uses speech recognition to input information into electronic health record systems. This frees doctors from burdensome data-entry requirements.
“Physicians spend on average about two hours on screen time for every hour that they are seeing patients,” Harjinder Sandhu, CEO of Saykara, said in an earlier interview. “They are either doing this while they are seeing patients — typing away at their computer — or else they are spending hours in the evenings trying to document that care.”
Also developing services in this area: Google, which recently partnered with healthcare digital assistant startup Suki, and Microsoft, which announced it will work with a longstanding player in the market, Nuance.
Meetings are another area where voice assistants have proven popular, with Cisco’s Webex Assistant and Alexa for Business letting users kick off conference calls with voice commands, for instance.
Meeting room assistants could get more advanced, with the ability to search for information likely to be a key use during a meeting.
“If you are a CEO or a CFO making a presentation, you could just query Alexa or Cortana ‘What were the sales for the third quarter for the east region,’ something like that,” said Castañón-Martínez. “That could mean a very sophisticated workflow that would know who is asking the question, the information that they are asking for, the different sources where that information can be obtained, and then retrieving it and presenting it in a coherent way using natural language.
“That, in my mind, will be the next step – and will pave the way for more complex workflows,” he said.
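The workflow Castañón-Martínez describes decomposes into a few concrete steps: identify who is asking, check what they are allowed to see, work out what they want, fetch it from the right source, and phrase the answer in natural language. The sketch below illustrates that pipeline in miniature. All names, figures, permissions and the crude keyword matching are invented for illustration; a production assistant would use an NLU model and real authentication rather than string matching and a dictionary.

```python
# Stand-in data source and access-control table (illustrative only).
SALES_DB = {("Q3", "east"): 1_240_000}
PERMISSIONS = {"cfo@example.com": {"sales"}}

def answer_query(user: str, utterance: str) -> str:
    """Answer a spoken-style sales query: authenticate, extract slots, fetch, phrase."""
    # Step 1: know who is asking and what they may access.
    if "sales" not in PERMISSIONS.get(user, set()):
        return "Sorry, you don't have access to sales figures."
    # Step 2: crude slot extraction; a real assistant would use an NLU model.
    text = utterance.lower()
    quarter = next((q for q in ("Q1", "Q2", "Q3", "Q4") if q.lower() in text), None)
    region = next((r for r in ("east", "west") if r in text), None)
    # Step 3: retrieve from the relevant source.
    figure = SALES_DB.get((quarter, region))
    if figure is None:
        return "I couldn't find those figures."
    # Step 4: present the result in natural language.
    return f"{quarter} sales for the {region} region were ${figure:,}."

print(answer_query("cfo@example.com", "What were Q3 sales for the east region"))
```

Even in this toy form, the authentication step comes first – which is exactly the hurdle Castañón-Martínez identifies before such workflows can touch sensitive company data.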
This story, “2020: The year the office finds its voice?” was originally published by Computerworld.
Matthew Finnegan covers collaboration and other enterprise IT topics for Computerworld and is based in Sweden.
Copyright © 2020 IDG Communications, Inc.