When you’re looking at the biggest and best technologies coming down the pike, voice-activated interactions are huge.
Comscore projects that 50% of all searches will be voice searches by 2020 – meaning a full half of our hunting for products, services and information online will happen through hands-free verbal communication.
What does this mean for businesses? It means it’s time to get serious about integrating voice activation into the business processes that deliver products and services to customers.
A More Sophisticated Future
One of the biggest barriers to robust voice activation interactions is a lack of good natural language processing technology.
To get an idea of why voice activation isn’t the dominant model yet, try interacting with Siri or Cortana or Amazon Alexa or one of the many virtual assistants that greet you when you call businesses such as utility companies.
What you’ll find is that although these technologies can recognize simple phrases, there’s not much conversational intelligence behind them. They’re not very good at responding to verbal input in a human or relatable way. It doesn’t take much to confound these voice activation technologies – leading to enormous frustration when Siri or some other digital entity fails to understand, say, a name, a place or a command.
To really be popular, voice activation has to be accurate. But there are good indicators that this is going to happen sooner rather than later.
Get What You Want!
As soon as voice-activated technologies become good at getting us what we want, they’re going to become the gold standard for user experience.
However, in order to do that, they’re going to need to parse speech in new and different ways.
Think about the autocorrect technologies built into our smartphones. We use them to streamline our text communications – but they often get it wrong, peppering our messages with words we never meant to say.
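To see why this happens, here’s a minimal sketch of a naive autocorrector: it snaps any unrecognized word to the closest dictionary entry, with no sense of what the writer actually meant. The tiny dictionary and word choices are purely illustrative.

```python
import difflib

# Illustrative dictionary -- a real phone ships with tens of thousands of words.
DICTIONARY = ["ducking", "running", "banking", "booking"]

def autocorrect(word: str) -> str:
    """Replace an unknown word with its closest dictionary match, if any."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(autocorrect("duckinf"))  # -> "ducking"
print(autocorrect("xyz"))      # no close match, so the word passes through
```

The corrector has no notion of context or intent – it only measures string similarity – which is exactly why real autocorrect confidently substitutes the wrong word.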
The same is true of the language processing behind voice-activated commands. Computers can transcribe the words, but they’re slow to grasp what we actually mean.
To really give us what we want, speech processing software will have to understand the differences between phrases that sound identical – “recognize speech” versus “wreck a nice beach,” to borrow the classic example.
That’s where machine learning and artificial intelligence come in. By combining that natural language processing technology with neural networks and machine learning programs, the computers can use context to make sure that they understand what we’re asking – whether it’s directions to a local pizza shop, how much we want to spend on paper towels or a question about where to go to graduate school.
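As a rough sketch of the idea, here’s a toy intent classifier that uses the words around a request as context to decide what the speaker is asking for. The intent names and keyword lists are invented for illustration – production systems use trained neural models, not hand-written keyword sets.

```python
# Toy context-based intent classification. Intent names and keywords
# are illustrative assumptions, not a real NLP library.
INTENTS = {
    "directions": {"directions", "route", "nearest", "pizza", "shop"},
    "shopping":   {"buy", "spend", "order", "paper", "towels"},
    "research":   {"graduate", "school", "apply", "program"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keyword set overlaps the utterance the most."""
    words = set(utterance.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    return max(scores, key=scores.get)

print(classify("directions to the nearest pizza shop"))        # -> "directions"
print(classify("how much should i spend on paper towels"))     # -> "shopping"
print(classify("where should i go to graduate school"))        # -> "research"
```

A real assistant replaces the keyword overlap with a neural network trained on millions of utterances, but the principle is the same: surrounding context, not the individual words alone, determines the meaning.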
The Ubiquity of Voice Activation with the Internet of Things
As we start modeling voice activation with machine learning algorithms, we need a method of delivery.
The Internet of Things has emerged at the exact perfect time. Gartner predicts billions of Internet-connected devices will go online within the next few years, and that means there will be a lot of opportunities to put voice activation where it can be most useful. You won’t have to sit in front of the computer to have a conversation with technology – you’ll be able to do it wherever you are, in front of your refrigerator, or in the car, or even walking down the street.
New microprocessors purpose-built for neural networks make it easier to run machine learning systems on smartphones and other small devices. That’s another major part of delivering voice activation to customers.
Unraveling the IVR Puzzle
Here’s another major way that voice activation is going to revolutionize our world.
When you call a business, you’re likely to get a robotic voice asking you to choose menu options. Callers are forced to listen as these voices drone on about menu options and micro-commands. But what if, instead of walking through a maze of menu choices, you just told the IVR what you want and it deciphered the rest by itself?
Again, that’s going to take context. That’s going to require systems where the computer understands your speech, rather than asking you to press a button corresponding to a phrase. If the computer can understand from your speech that you want to speak particularly about a broken device, or you want to schedule an appointment with a physician or get a referral, it eliminates the need to pester you with those requests for menu-driven commands. That virtual assistant will instead know what you want and say “sure, we’ll do that for you!”
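As a hedged sketch, here’s what that routing step might look like: free-form speech is matched against intents and dispatched directly to an action, skipping the numbered menu entirely. The patterns and action names are hypothetical, and a real IVR would sit behind a full speech recognizer.

```python
import re

# Hypothetical intent routes: pattern -> action name.
ROUTES = [
    (re.compile(r"\b(broken|not working|repair)\b"), "open_repair_ticket"),
    (re.compile(r"\b(appointment|schedule|book)\b"), "schedule_appointment"),
    (re.compile(r"\b(referral|refer)\b"), "request_referral"),
]

def route(utterance: str) -> str:
    """Map a caller's free-form request to an IVR action."""
    text = utterance.lower()
    for pattern, action in ROUTES:
        if pattern.search(text):
            return action
    return "transfer_to_agent"  # fall back to a human when unsure

print(route("my thermostat is broken"))              # -> "open_repair_ticket"
print(route("i'd like to schedule an appointment"))  # -> "schedule_appointment"
```

Note the fallback: when the system can’t confidently infer an intent, it hands the caller to a person instead of forcing them back into a menu – arguably the most important design choice in any voice-driven IVR.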
In fact, given the technology that already exists, it’s frankly surprising that companies aren’t working faster to improve customer experience by making IVR systems more reactive and responsive. Either way, that change is coming soon as voice-activated technologies become smarter and more capable.
Are you interested in learning more?