One of the best (and, in my opinion, least exploited) opportunities for moments of true customer delight is customer experience, sometimes also called customer success and known in a bygone era as customer service/support. The reverse of this is also true: One of the biggest risks for lasting and significant brand damage lies in frontline interactions with customers, particularly during times of anxiety and distress.
This is primarily anecdotal, but it will probably resonate with many: I’ve had far more disappointing, frustrating or dissatisfying customer service experiences than I have had notably positive ones. Recently, though, I’ve noticed that balance shift significantly for the better.
The common thread among most of these recent CX interactions is that they’re overtly powered in part by AI chatbots – typically acting as a layer of first response or triage. In each case, the AI ‘agents’ declared themselves as such, sometimes asked follow-up or clarifying questions, gave me a typical or estimated time for a response, and directed my request to the relevant person or department.
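To make that pattern concrete, here’s a minimal sketch of what such a triage layer might look like. The department names, keyword heuristics and response-time estimates are hypothetical stand-ins, and in practice the classification and clarifying-question steps would be handed to an LLM rather than the toy keyword matcher used here.

```python
from dataclasses import dataclass

# Hypothetical routing table: department -> typical response-time estimate.
ROUTES = {
    "billing": "1 business day",
    "shipping": "2 business days",
    "technical": "4 hours",
}

@dataclass
class TriageResult:
    disclosure: str                   # the agent identifies itself as AI up front
    department: str | None            # where the request gets routed
    eta: str | None                   # estimated time for a human response
    clarifying_question: str | None   # asked when the request is ambiguous

def triage(message: str) -> TriageResult:
    """First-response triage: classify the request, set expectations, route it."""
    disclosure = "You're chatting with an automated assistant."
    text = message.lower()
    # Toy keyword classifier; a real system would call an LLM here to
    # classify intent and draft any clarifying question.
    if "refund" in text or "charge" in text:
        dept = "billing"
    elif "delivery" in text or "package" in text:
        dept = "shipping"
    elif "error" in text or "login" in text:
        dept = "technical"
    else:
        return TriageResult(disclosure, None, None,
                            "Could you tell me a bit more about what went wrong?")
    return TriageResult(disclosure, dept, ROUTES[dept], None)

print(triage("I was charged twice for my last order"))
```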
In flow charts and process maps, CX is a highly structured, highly predictable function with deterministic outcomes. In reality, it’s an extremely unpredictable and inconsistent part of any business’s interaction with its customers. LLM-based AI’s non-deterministic nature is actually an asset here, because in the existing setup both ends of the equation (customer and service agent) are already highly non-deterministic and, even when given scripts, diverge wildly from their templated roles and actions.
One thing AI is terrific at is softening its responses and defusing whatever extreme emotion is sent its way. AI does not generally rise to antagonism (unless perhaps trained to do so). It’s also extremely good at responding logically to the specific words a customer uses – something human CX agents can struggle with, particularly when they’re trying to balance responding to a customer’s specific complaints with adhering to a rigid script or escalation flow.
Addressing a customer’s pain points while avoiding overpromising, and setting grounded, realistic expectations about the time frame and terms of a resolution, is a delicate dance that even seasoned experts have trouble with.
It’s unclear why a self-described AI CX agent of this ilk lands differently than the bot responders of the earlier, more traditional ML-based era – maybe they’re more convincingly human, or maybe the broad success and mass-market appeal of products like OpenAI’s ChatGPT have reset people’s expectations around how chatbots behave and respond.
The fact is, LLM-based AI is ideally suited to condensing a large corpus of data into discrete, specific answers to questions, and to communicating with people in natural language that’s optimized for broad appeal and relative inoffensiveness.
When you look at the direction of advances in general-purpose consumer chatbots, like the various takes on ‘deep research’ tools, the benefits extend well beyond acting as a relatively capable, kind, informed and empathetic front door. In roles that are often overworked and under-resourced, a highly responsive, fully informed AI copilot that can navigate complex, interwoven networks of support docs, guides and scripts to provide complete and correct answers to customers is a massive and easily undervalued opportunity.
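A rough sketch of what that copilot layer might look like under the hood, under the simplest possible assumptions: support docs are searched for the passages most relevant to the customer’s question, and those passages are handed to an LLM as grounding for its answer. The documents, the scoring heuristic and the prompt format are all illustrative stand-ins; a production system would use proper embeddings and retrieval rather than word overlap.

```python
# Hypothetical mini corpus of support-doc snippets the copilot can draw on.
SUPPORT_DOCS = [
    "Refunds are issued to the original payment method within 5-7 business days.",
    "To reset your password, use the 'Forgot password' link on the login page.",
    "Orders can be cancelled free of charge within 24 hours of purchase.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the question (stand-in for embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble the grounding context an LLM would answer from."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer the customer's question using only the support notes below.\n"
        f"Support notes:\n{context}\n"
        f"Customer question: {question}\n"
    )

# The assembled prompt is what would be sent to an LLM to draft the agent's reply.
print(build_prompt("How long does a refund take?", SUPPORT_DOCS))
```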