For the past couple of years, C-suite executives and industry insiders alike have predicted that chatbots are set to become the next big disruption in business technology. As major Silicon Valley players like Facebook, Google, Amazon, and Apple have moved to integrate automated conversational platforms into their devices and social media messengers, it has become increasingly difficult to deny that assertion.
Indeed, chatbots have already become a ubiquitous part of our daily lives. From online retailers to banks, airlines, and even government agencies, organizations in every industry are deploying chatbots at an unprecedented rate across a variety of customer touchpoints. In a world where messaging apps are comfortably outstripping traditional social networks in popularity and usage, the importance of maintaining an engaging conversational presence on websites and other interactive portals cannot be overstated, especially in today’s highly competitive digital environment. A recent Oracle survey of international decision-makers showed that at least 80% of executives plan to utilize chatbots for sales, marketing, and customer service by 2020, and the global market for these tools is expected to reach $1.25 billion by 2025.
Is The Reality Falling Short of the Hype?
However, skeptics argue that chatbots are still in their infancy and that the technology must evolve before consumers will accept it as the foundation for their customer service interactions. Although chatbots have generally proven effective at creating engagement, this increased attention rarely translates into conversions. In fact, fewer than 40% of users go past the first line of conversation; in many cases, this is due to a poor initial experience with the technology. While chatbots can answer targeted queries quickly and accurately at a fraction of the cost of their human counterparts, in their current iteration they have critical limitations that can prevent them from truly winning over online users.
When users perform one-off actions that require guided assistance (booking an airline ticket, managing a bank account), chatbots offer great functionality. But as conversations become longer and more complex, chatbots often fail to understand the user’s intent or the context in which the conversation is taking place. Inevitably, this leads to incorrect answers or the all-too-familiar “I did not understand your request” refrain. According to Facebook, more than 70% of its chatbots currently fail to accurately address user queries.
Of course, the reason for this is clear. While chatbots may create an illusion of intelligence, most are still built on a decision-tree model that navigates between different logical frameworks based on a series of manually programmed flowcharts. The chatbot references a user input against one of these pre-programmed responses, and if a match is found, the corresponding action is initiated. Often the chatbot becomes confused as various rulesets across the framework contradict each other, causing the entire conversational thread to collapse. This call-and-response system trades conversational depth and flexibility for efficiency and responsiveness.
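The flowchart-style matching described above can be sketched in a few lines. This is a minimal, hypothetical example: the patterns and replies in `RULES` are invented for illustration, not drawn from any real product.

```python
# A toy decision-tree-style chatbot: match input against pre-programmed
# patterns, trigger the canned response, or fall back with the familiar refrain.

RULES = {
    "track order": "Your order is on its way. Can I help with anything else?",
    "reset password": "I can help with that. Check your email for a reset link.",
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
}

FALLBACK = "I did not understand your request."

def reply(user_input: str) -> str:
    """Return the first matching canned response, or the fallback."""
    text = user_input.lower()
    for pattern, response in RULES.items():
        if pattern in text:
            return response
    return FALLBACK
```

Anything outside the pre-programmed patterns collapses straight to the fallback, which is exactly the rigidity the article describes.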
If chatbots are to move past these basic capabilities, they will need to be taught a few key skills:
- They must be able to keep track of the flow of the conversation and use it to inform their understanding of the current query.
- They must be able to understand the user’s intent.
- They must be able to generate a grammatically correct reply that provides the correct information.
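Taken together, the skills above amount to maintaining a dialogue state across turns. A minimal sketch, with an invented `DialogueState` class and toy slot extraction (the city list and the weather intent are illustrative assumptions):

```python
class DialogueState:
    """Keeps track of the conversation so later turns can use earlier context."""

    def __init__(self):
        self.history = []   # every user utterance so far
        self.slots = {}     # facts extracted from the conversation

    def update(self, utterance: str) -> None:
        self.history.append(utterance)
        # Toy slot extraction: remember a destination if a known city is mentioned.
        for city in ("london", "paris", "tokyo"):
            if city in utterance.lower():
                self.slots["destination"] = city

    def answer(self, utterance: str) -> str:
        self.update(utterance)
        if "weather" in utterance.lower() and "destination" in self.slots:
            # The current query is resolved using context from an earlier turn.
            return f"Checking the weather in {self.slots['destination'].title()}."
        return "Could you tell me more?"
```

Asking “What is the weather like there?” only makes sense because the state remembers the destination from a previous turn — the kind of context tracking rule-based bots lack.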
How Deep Learning Can Help
Even before chatbots came around, larger organizations faced significant issues communicating with their customers in a natural, relatable manner. The proliferation of KPIs in customer service departments across the world has led to robotic customer interactions where the needs of individuals often play second fiddle to metrics involving call handling time and number of calls answered. Amidst these concerns, artificially intelligent chatbots can take the burden off of customer service representatives’ shoulders, and offer customers the sort of meaningful, personalized interactions they seek. Deep learning may be essential to achieving this objective.
What Is Deep Learning?
Deep learning involves feeding large quantities of data through neural networks (computer systems that are modeled on the cognitive processes of the human brain). Each neuron of the neural network analyzes the data for a specific quality and assigns a true/false value to it on the basis of this analysis. Once the data has been processed through the entire chain, it is classified on the basis of these assigned attributes. The deep learning system can then use all of these insights to inform real-time actions.
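To make the neuron-by-neuron picture concrete, here is a toy artificial neuron in plain Python. The weights, and the reading of a sigmoid output near 1 as “true” and near 0 as “false”, are illustrative assumptions, not a production network:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed to (0, 1).

    An output near 1 can be read as the neuron judging its feature "true",
    near 0 as "false", matching the framing above.
    """
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid squashing

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons examining the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

Stacking such layers, and adjusting the weights from data rather than setting them by hand, is what turns this sketch into deep learning.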
Through this process, machines can be taught to automatically recognize and respond to a range of inputs including pictures, videos, sounds, or, in the case of chatbots, written language. The great thing about using deep learning for a chatbot is that the system becomes quicker and more accurate as it processes more data in real time. To unlock the potential of deep learning for chatbots, however, you must combine it with another area of AI: natural language processing (NLP).
Deep Learning and NLP for Chatbots
NLP gives chatbots the ability to interpret human language as it is naturally spoken, draw key insights from it, and translate those insights into a functional form of this language. Essentially, NLP allows chatbots to talk like we do. With NLP and deep learning, chatbots can analyze a stream of human inputs for their intent and related actions, and use them to generate more human responses. Today, companies like Google, Amazon, and Netflix use similar techniques to recommend purchases and predict search terms.
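One simple way to ground the idea of extracting intent from language is a bag-of-words intent classifier. The training utterances and intent labels below are invented examples; real systems learn far richer representations, but the principle is the same:

```python
from collections import Counter

# Hypothetical training examples: utterances labeled with intents.
TRAINING = [
    ("i want to book a flight", "book_flight"),
    ("reserve a plane ticket for me", "book_flight"),
    ("what is my account balance", "check_balance"),
    ("how much money do i have", "check_balance"),
]

def train(examples):
    """Count which words appear under each intent (a bag-of-words model)."""
    counts = {}
    for text, intent in examples:
        counts.setdefault(intent, Counter()).update(text.split())
    return counts

def predict(counts, utterance):
    """Score each intent by how often its training words appear in the utterance."""
    words = utterance.lower().split()
    scores = {intent: sum(c[w] for w in words) for intent, c in counts.items()}
    return max(scores, key=scores.get)
```

Even this crude word-overlap scoring can route “Book a flight to Rome” to the right intent, which is the first step toward choosing a relevant action.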
It Starts With Analysis
To model a more human chatbot system, you have to start by referencing human conversations. Extract information from your CRM systems regarding the length of the average inquiry, the complexity of the conversation, and the most common responses used by call center agents. If you receive a large volume of short, single-issue calls, then automating the conversational flow will be a lot easier. Longer conversations will require your chatbot to keep track of previous issues discussed over the course of the conversation.
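The kind of transcript analysis described above might look like this. The `transcripts` data is hypothetical, and the three metrics chosen (average length, longest conversation, most common agent replies) are just the ones the paragraph mentions:

```python
from collections import Counter
from statistics import mean

# Hypothetical call-center transcripts: each is a list of (speaker, line) turns.
transcripts = [
    [("customer", "I forgot my password"), ("agent", "I can reset that for you")],
    [("customer", "Where is my order?"), ("agent", "Let me check the status"),
     ("customer", "It was due yesterday"), ("agent", "I can reset that for you")],
]

def summarize(logs):
    """Surface the metrics above: inquiry length, complexity, common replies."""
    lengths = [len(t) for t in logs]
    agent_lines = Counter(line for t in logs for who, line in t if who == "agent")
    return {
        "avg_turns": mean(lengths),                 # how long a typical inquiry runs
        "longest": max(lengths),                    # a rough proxy for complexity
        "top_replies": agent_lines.most_common(3),  # candidate canned responses
    }
```

Frequent agent replies are natural candidates for a retrieval-based response bank, while a high average turn count signals that context tracking will matter.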
Retrieval-Based Model vs. Generative Model
In order to train your neural network, it is important to collect as much conversational data as possible and ensure that it is relevant to the type of queries the chatbot will be handling. Traditionally, this approach has been used in conjunction with a retrieval-based model, which applies deep learning insights to define and contextualize chatbot responses with reference to a selection of pre-defined responses. While these systems can be powerful and accurate, they are also rigid and less likely to appear human.
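A retrieval-based model can be sketched as scoring a query against a bank of pre-defined responses and returning the best match. The candidate questions and answers here are invented, and a real system would score with learned embeddings rather than the simple word overlap used below:

```python
# Hypothetical pre-defined response bank for a retrieval-based chatbot.
CANDIDATES = {
    "how do i reset my password": "You can reset your password from the login page.",
    "what are your opening hours": "We are open 9am to 5pm on weekdays.",
    "how do i cancel my subscription": "You can cancel any time from account settings.",
}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two utterances."""
    wa = {w.strip("?.,!") for w in a.lower().split()}
    wb = {w.strip("?.,!") for w in b.lower().split()}
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(query: str) -> str:
    """Return the canned answer whose key question best matches the query."""
    best = max(CANDIDATES, key=lambda q: similarity(q, query))
    return CANDIDATES[best]
```

The rigidity is visible in the structure itself: the bot can only ever say one of the strings in `CANDIDATES`, however well it picks among them.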
Today, AI research is moving towards more generative models that can create wholly unique responses based on the context provided by a given input. While these chatbots may not always provide grammatically correct or even accurate responses, they are much better equipped to deal with the improvisational flow of a longer conversation. Depending on the type of customer interactions you most often receive, a retrieval-based system might be more appropriate for your needs. However, if you are looking to build a smarter chatbot that can truly engage customers, then a generative model will be required.
The RNN Model
RNN stands for recurrent neural network. This model is based on a sequence-to-sequence system: input text runs through an encoder that transforms the inputs into numerical vectors, which then form the basis for the chatbot’s vocabulary. A separate decoder takes this fixed representation and uses it to generate a corresponding variable-length output; the same process is undertaken for each word in a sentence. Once all vectors have been generated, each input is multiplied by a fixed weight to come up with a probable output based on the context of the conversation. This output is then compared against the actual response that was generated in the conversation. If the response matches closely, the chatbot is rewarded and moves on to the next input.
Crucially, each successive output generated through the RNN is influenced by all past inputs that have occurred up to that point. Thus the generative model can provide the chatbot with vital context, producing a conversational flow that closely mirrors natural conversation.
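That context-carrying behavior comes from the recurrence itself: each hidden state folds in the previous one, so every earlier input leaves a trace in every later state. A one-dimensional toy version, with weights chosen arbitrarily for illustration:

```python
import math

def rnn_states(inputs, w_in=0.5, w_rec=0.9):
    """One-dimensional recurrent step: h_t = tanh(w_in * x_t + w_rec * h_{t-1}).

    Because h_{t-1} feeds into h_t, past inputs persist in later states --
    the "memory" that gives the chatbot its conversational context.
    """
    h = 0.0
    states = []
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states
```

Running `rnn_states([1.0, 0.0, 0.0])` shows the first input still echoing in the later states even though the later inputs are zero, gradually fading rather than vanishing at once.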
Chatbots Have a Long Way to Go But Research Is Looking Promising
That said, we are still a long way from completely human conversations with chatbots. Part of the issue is no doubt down to the inherent limitations of generative models. Because these models find probable answers based on an analysis of probable outputs, generic yes/no answers are likely to be far more common. Similarly, the piecemeal, word-by-word method of generating outputs may result in grammatically incorrect sentences.
However, as systems like Siri and Alexa process ever-greater volumes of human inputs in real time, it is only a matter of time before they are integrated with deep learning systems that can help them understand user intent, recognize query patterns, and predict the flow of the conversation to come.