Chatbots are becoming ubiquitous.
The advancements in artificial intelligence (AI) and natural language processing (NLP) have given chatbots the capability to hold human-like conversations with users. In the banking, financial services, and insurance (BFSI) industry, organizations are deploying chatbots as a comprehensive customer service channel. Consequently, chatbots are becoming the representatives of BFSI firms for customer interactions in the digital self-service environment. The convenience of a chatbot answering inquiries or processing service requests in natural conversational language has the potential to outperform all other self-service options. We believe this automated interaction tool can be leveraged to do more. Chatbots can be designed to engage with customers during the decision-making phase, persuading them toward beneficial actions or dissuading them from detrimental ones. This white paper examines how persuasive chatbots can be designed and deployed by insurers and retirement plan providers (RPPs).
In an increasingly digitalized landscape, chatbots have become an indispensable component for customer engagement and self-service in the insurance industry.
However, insurers and RPPs deploy chatbots to perform only a limited set of functions. Existing chatbots can answer inquiries and handle service requests (related to policy or scheme status, issue of statements, processing reinstatement quotes, or even submission of low-value claims). Of these, inquiries or requests for a quote for a new policy, reinstatement of a lapsed policy, enrollment in connected insurance schemes, or an increase in contribution levels are seen as positive business events. Inquiries about free-look cancellation, surrender of a policy, or withdrawal of money that could result in leakage are considered negative business events. In a physical environment, the insurer's customer service executives would interact with customers to understand their situation and either persuade or dissuade them.
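To make this classification concrete, the positive/negative distinction could be captured in a simple lookup, as in the illustrative sketch below; the event labels and function name are hypothetical, not drawn from any particular product.

```python
# Illustrative mapping of inquiry/service-request events to the positive or
# negative business-event classification described above. Labels are examples.
BUSINESS_EVENTS = {
    "new_policy_quote": "positive",
    "lapsed_policy_reinstatement": "positive",
    "connected_scheme_enrollment": "positive",
    "contribution_increase": "positive",
    "free_look_cancellation": "negative",
    "policy_surrender": "negative",
    "money_withdrawal": "negative",
}

def persuasion_goal(event: str) -> str:
    """Encourage positive events; dissuade negative ones."""
    return "encourage" if BUSINESS_EVENTS[event] == "positive" else "dissuade"
```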
The digital landscape does not always permit in-the-moment intervention. Hence, a chatbot can be designed to act as a personal advisor that engages with the user and provides additional insights rather than merely responding to queries. However, chatbots today have limited scope: while users can inquire about both positive and negative business events through a chatbot, they are usually not allowed to process a negative transaction through it. Nevertheless, a user who inquires about a positive or negative business function could be at any stage of the decision-making process. Chatbots designed using persuasive techniques can meaningfully engage with users during this process by offering relevant information, providing alternatives, weighing the evidence, and persuading them to choose the best option. Such chatbots can be designed to understand the context, hold purpose-driven conversations, and nudge the user toward optimal financial behavior.
While the proliferation of mature generative AI-based bots will open up more sophisticated and powerful avenues for persuasion, their unavailability today need not be a limiting factor. Chatbot-driven persuasion can still be explored by leveraging hybrid chatbots that blend rule-based and retrieval-based designs.
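To make the hybrid approach concrete, the minimal sketch below routes a message first through a deterministic rule layer and then falls back to TF-IDF retrieval over an FAQ corpus. All intents, responses, and the similarity threshold are illustrative assumptions, not a reference implementation.

```python
# Hypothetical hybrid chatbot: scripted rules handle known intents;
# a retrieval layer answers everything else from an FAQ corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

RULES = {  # intent keyword -> scripted (rule-based) response
    "withdraw": "Before we proceed, may I share how a withdrawal affects your balance?",
    "reinstate": "I can prepare a reinstatement quote. What is your policy number?",
}

FAQ = [  # retrieval corpus: (question, answer) pairs
    ("What is a hardship withdrawal?",
     "A withdrawal permitted for an immediate and heavy financial need."),
    ("How do I increase my contribution?",
     "You can raise your contribution rate under Account > Contributions."),
]

_vectorizer = TfidfVectorizer().fit([q for q, _ in FAQ])
_faq_matrix = _vectorizer.transform([q for q, _ in FAQ])

def respond(user_message: str) -> str:
    text = user_message.lower()
    # 1. Rule layer: known intents get deterministic, auditable responses.
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # 2. Retrieval layer: fall back to the closest FAQ answer.
    scores = cosine_similarity(_vectorizer.transform([user_message]), _faq_matrix)[0]
    best = scores.argmax()
    if scores[best] > 0.2:  # similarity threshold chosen arbitrarily here
        return FAQ[best][1]
    return "Sorry, I didn't understand. Could you rephrase that?"
```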
The first point to consider when designing a chatbot is to ensure that it can handle the tasks performed by the average industry bot. Second, it should perform those tasks reasonably well. These hygiene factors establish the authority and credibility the chatbot needs to be persuasive.
The basic structure of all customer conversations with insurers regarding a business process comprises three parts—action trigger, response, and result.
An action trigger could be an inquiry or a service request. The response involves a counter query, appeal, seeking additional information, or processing a transaction. The result is a positive or negative closure of the action trigger.
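As an illustration, this trigger-response-result structure could be encoded as a small data model; the enum values and field names below are assumptions made for the sketch rather than an established schema.

```python
# Illustrative encoding of the three-part conversation structure:
# action trigger, response, and result.
from dataclasses import dataclass
from enum import Enum, auto

class TriggerType(Enum):
    INQUIRY = auto()
    SERVICE_REQUEST = auto()

class ResponseType(Enum):
    COUNTER_QUERY = auto()        # ask a clarifying or persuasive question
    APPEAL = auto()               # present a persuasive argument
    REQUEST_INFO = auto()         # seek additional information
    PROCESS_TRANSACTION = auto()  # execute the requested transaction

class Result(Enum):
    POSITIVE_CLOSURE = auto()     # e.g., withdrawal averted, policy reinstated
    NEGATIVE_CLOSURE = auto()     # e.g., withdrawal processed as requested

@dataclass
class ConversationTurn:
    trigger: TriggerType
    responses: list[ResponseType]   # one trigger may draw several responses
    result: Result | None = None    # unresolved until the flow closes
```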
The conversation can vary based on the business event and the customer’s situation. To persuade a customer for or against an action, the chatbot should incorporate appropriate persuasion strategies that will form a part of the response. Depending on the business event and chat instance, persuasion could be either proactive or reactive.
Proactive persuasion is chatbot-driven: the chatbot initiates a persuasive conversation with the user as an extension of another conversation flow. Reactive persuasion, on the other hand, is user-driven and offered in response to a user inquiry. Given the relative complexity of proactive persuasion, let us begin by examining how chatbots can be designed for reactive persuasion.
Designing a persuasive chatbot requires a step-by-step approach that considers several key aspects (a sketch of how these elements might be encoded follows the list):
Identify persuadable business events, evaluate the need and business justification for persuasion, and collate all the business rules and alternatives related to them.
Define the persuasive strategy and conversation flow by enumerating the arguments relevant to a specific event, probable counterarguments from the user, and the pertinent persuasive appeals.
Ensure the conversation flow comprises strategies and components to counter probable user intentions and logical or emotional arguments.
Design the default conversation flow keeping in mind that there could be some degree of resistance from the user.
Ensure the persuasive design is sensitive to the user context, as choosing an inappropriate occasion or overdoing it could irritate the user and become counterproductive.
Ensure the right mix of components in the persuasive strategy and render it unobtrusively.
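Under the assumption that each persuadable business event gets its own playbook of arguments, rebuttals, and alternatives, one hypothetical encoding of these design elements might look as follows. Every field name and rule here is illustrative; only the 10% early-withdrawal penalty before age 59½ reflects actual US 401(k) rules.

```python
# Hypothetical "playbook" capturing the design elements listed above for one
# persuadable business event. Fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class PersuasionPlaybook:
    event: str                    # persuadable business event
    justification: str            # business justification for persuading
    arguments: list[str]          # ordered persuasive appeals
    rebuttals: dict[str, str]     # user counterargument -> chatbot response
    alternatives: list[str]       # options to offer instead
    skip_conditions: list[str] = field(default_factory=list)  # contexts where persuasion is inappropriate
    max_attempts: int = 2         # cap appeals to avoid irritating the user

WITHDRAWAL_PLAYBOOK = PersuasionPlaybook(
    event="401k_withdrawal",
    justification="Withdrawals reduce long-term asset accumulation.",
    arguments=[
        "A withdrawal before age 59½ may incur a 10% penalty plus income tax.",
        "Staying invested preserves compounding toward your retirement goal.",
    ],
    rebuttals={
        "need_cash_now": "A plan loan may meet the need without the tax hit.",
        "market_fear": "Moving to a conservative fund avoids locking in losses.",
    },
    alternatives=["plan loan", "partial withdrawal", "hardship withdrawal"],
    skip_conditions=["recent bereavement on file", "declared financial hardship"],
)
```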
Figure 1 shows a simple framework that insurers and RPPs can consider for designing a reactive persuasive chatbot.
Let’s contextualize this further with the example of a retirement plan participant. Jane, aged 53, plans to withdraw from her 401(k) retirement account. She initiates a conversation with a hybrid chatbot named Aida to place a service request. A withdrawal is a negative business event that will reduce the retirement fund and impair the participant's long-term asset accumulation. Consequently, the chatbot must try to persuade Jane against the withdrawal. To do so, it must be built with knowledge of the various types of withdrawals allowed, the customer situations in which persuasion should be avoided, the rules governing withdrawals, penalties, taxes, and the alternatives available.
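A conversation such as Jane's could be driven by a simple loop over the playbook sketched earlier; the stub classifier and the flow below are illustrative stand-ins for a production NLP pipeline, not how Aida is actually built.

```python
# Hypothetical reactive-persuasion loop driving the PersuasionPlaybook above.
def classify(reply: str) -> str:
    """Stub intent classifier; a production bot would use an NLP model."""
    text = reply.lower()
    if "yes" in text or "okay" in text:
        return "accept"
    if "cash" in text or "money" in text:
        return "need_cash_now"
    return "resist"

def persuade(playbook: PersuasionPlaybook, ask) -> bool:
    """Run the persuasion loop; True means the user was dissuaded.

    `ask` sends a bot message and returns the user's reply, e.g. the
    `input` builtin in a console demo.
    """
    for attempt, argument in enumerate(playbook.arguments, start=1):
        if attempt > playbook.max_attempts:
            break  # stop before persuasion becomes counterproductive
        intent = classify(ask(argument))
        if intent == "accept":
            return True
        if intent in playbook.rebuttals:
            if classify(ask(playbook.rebuttals[intent])) == "accept":
                return True
    # All appeals resisted: surface the alternatives once, then honor
    # the original request rather than obstruct the user.
    ask("Before I proceed, alternatives worth considering: "
        + ", ".join(playbook.alternatives))
    return False

# Console demo:
# persuade(WITHDRAWAL_PLAYBOOK, lambda msg: input(msg + "\n> "))
```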
Figure 2 shows the possible components of persuasive strategy that Aida could adopt during the conversation with Jane.
Figure 3 shows the likely conversation flow between Jane and Aida along with the related annotations.
A simple rule-based chatbot could have completed the above transaction sooner, with fewer questions and answers. The interaction would have been considered successful if the session concluded without a single ‘Sorry, I didn’t understand’ message. But a golden opportunity to engage with Jane and steer her toward a positive action would have been lost. The core objective of a persuasive chatbot is to sow a seed at the right moment and influence the customer's decision. However, it is unrealistic to expect a persuasive chatbot to successfully nudge every customer toward a different action through soft skills alone; this is a challenge even for human agents.
In our example, Jane had decided early on to withdraw because of other compulsions, and she successfully resists all the persuasive arguments and alternatives suggested by the chatbot. Still, the RPP leveraged an opportunity to engage with Jane, while she benefited from timely access to relevant information on the negative consequences of withdrawal and the alternatives available, enabling an informed and rational decision. Despite such failures, we believe a persuasive conversation initiated by a chatbot has the potential to prompt customers to re-evaluate their decisions and accept the persuasive suggestion.
As chatbots mature from being a nice-to-have discrete technology experiment to a must-have conversational channel, insurers will have to revisit their chatbot strategies.
They must iteratively improve and enhance the capabilities of their chatbots to stay in sync with the progress in conversational technologies. Failing to do so could drive feature-restricted, older-generation bots toward customer disuse. While revitalizing their chatbots, in addition to focusing on newer technological capabilities and business functions, insurers need to think seriously about other conversational nuances, such as adding persona characteristics to the bot and leveraging it for finer functions like persuasion and customer behavior-based conversation.
It can be argued that a chatbot's power to influence decisions is likely to be minimal, or that the causal association between a chatbot conversation and a successful persuasion is difficult to prove. However, the success of persuasion can be measured by running analytics on chat histories and subsequent real-world transactions. The conversational capability of chatbots can effectively make customers rethink and reassess their priorities, resulting in customers either opting for the path suggested by the chatbot or choosing an alternative. In our view, insurers and RPPs must implement persuasive chatbots to strengthen their digital core, improve customer engagement, and gain a competitive edge over their peers.
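As a final illustration, a first-cut measurement of persuasion outcomes could join chat logs with subsequent transactions. The data, column names, and 30-day window below are all assumptions made for the sketch; non-occurrence of the dissuaded transaction within the window is a simple proxy, not proof of causality.

```python
# Illustrative analytics: estimate how often persuasive chats are followed by
# the dissuaded transaction. All data and column names are made up.
import pandas as pd

chats = pd.DataFrame({  # one row per persuasive chat session
    "customer_id": [1, 2, 3],
    "chat_date": pd.to_datetime(["2024-01-05", "2024-01-07", "2024-01-09"]),
    "persuaded_against": ["withdrawal", "withdrawal", "surrender"],
})
transactions = pd.DataFrame({  # real-world transactions after the chats
    "customer_id": [1, 3],
    "txn_date": pd.to_datetime(["2024-01-20", "2024-02-01"]),
    "txn_type": ["withdrawal", "surrender"],
})

merged = chats.merge(transactions, on="customer_id", how="left")
# Persuasion "succeeds" if the dissuaded transaction does not occur
# within 30 days of the chat (a deliberately simple proxy metric).
followed_through = (
    (merged["txn_type"] == merged["persuaded_against"])
    & (merged["txn_date"] - merged["chat_date"] <= pd.Timedelta(days=30))
)
success_rate = 1 - followed_through.groupby(merged["customer_id"]).any().mean()
print(f"Estimated persuasion success rate: {success_rate:.0%}")
```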