What Happens When Your Bot Can’t Solve the Problem
I typed “talk to a human.” The chatbot said: “I don’t understand your question.” I tried “representative.” The chatbot said: “I don’t understand.” I tried “none of the above.” The chatbot said: “Please select from the following list.”
Ten minutes later, I finally reached a human agent.
They asked me the same questions the bot already asked. They had no idea what I’d just been through. No idea I’d been stuck for ten minutes. No idea I was already frustrated.
That’s the handoff problem.
Failure to Design Handoffs = Customer Pain Points
Healthcare organizations are implementing AI at every touchpoint. Chatbots. Automated scheduling. Kiosks. Patient portals.
But here’s what most haven’t designed: The handoff between AI and humans.
Think about what happens when a patient gets stuck in a chatbot loop. They type their question. The bot asks them to select from a list that doesn’t match their issue. They type “talk to a human.” The bot says “I don’t understand your question.”
By the time they finally reach a human agent, they’ve been trying to get help for ten minutes. They’re already frustrated.
That agent needs to see what’s happened so far. They need immediate triggers that tell them: This person has been stuck. This person asked for a human three times. This person is asking for help with X. This requires service recovery.
But most organizations don’t have that designed into the handoff. So the agent starts fresh. Asks the same questions the bot already asked. And the patient gets even more frustrated.
Planning for Escalation
Here’s what I tell the healthcare organizations we work with: You need to program the system so that there are immediate triggers for what gets pushed to a human.
When someone types “talk to a human” or “representative” or “none of the above,” that should trigger an immediate escalation. And when the chat does get handed off, the agent needs to see a quick summary of what has happened so far, or at the very least an alert that the patient is coming from a chat.
If the agent can see that this person has been stuck in the chatbot for ten minutes, typed “talk to a human” three times, and tried multiple pathways with no success, that’s a cue that you’re going to have to kick your service recovery into high gear.
But if the handoff doesn’t pass that information along, the agent has no idea. They start from scratch. And the patient has to repeat everything. Worse yet, they are so angry that they are immediately rude and impatient with an unsuspecting call center agent.
Monitoring for Keyword Triggers
When someone types certain things, it’s a direct signal that automation has failed. Things like:
- “Talk to a human”
- “Representative”
- “Agent”
- “None of the above”
- “This isn’t working”
- “I need help”
These should be automatic escalation triggers. Not loops designed to maintain the bot interaction. And you should also be tracking how often these keywords appear. Because if 30% of your chatbot interactions include “talk to a human,” your chatbot isn’t working as intended.
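For teams building this, the trigger logic itself is simple. Here is a minimal sketch in Python, assuming a plain keyword match; the keyword list and function names are illustrative, not a specific chatbot platform’s API:

```python
# Illustrative escalation-trigger sketch. In production you would likely
# normalize text more carefully and tune the keyword list from real transcripts.

ESCALATION_KEYWORDS = [
    "talk to a human",
    "representative",
    "agent",
    "none of the above",
    "this isn't working",
    "i need help",
]

def should_escalate(message: str) -> bool:
    """Return True if the message contains any escalation keyword."""
    text = message.lower()
    return any(keyword in text for keyword in ESCALATION_KEYWORDS)

def escalation_rate(sessions: list[list[str]]) -> float:
    """Fraction of chat sessions containing at least one escalation keyword.

    This is the number to watch: if it creeps toward 30%, the bot
    isn't working as intended.
    """
    if not sessions:
        return 0.0
    flagged = sum(
        1 for messages in sessions
        if any(should_escalate(m) for m in messages)
    )
    return flagged / len(sessions)
```

The point of `escalation_rate` is that the same keyword list does double duty: it routes individual patients to a human in the moment, and in aggregate it tells you how often your automation is failing.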
Designing Clean Handoffs
Designing clean handoffs avoids these problems and maintains a positive patient experience that balances efficiency and satisfaction. You can do this by:
- Identifying escalation triggers. What keywords, behaviors, or patterns should immediately send someone to a human?
- Passing context to the agent. When someone escalates, the human needs to see what’s already happened. How long have they been stuck? What did they try? What information did they already provide?
- Training for service recovery. Agents receiving escalations need to understand: These people are already frustrated. They need a different approach than someone who called you directly.
- Monitoring handoff volume. If 30% of your automated interactions require escalation, your automation isn’t working. That’s data you need to see.
- Making escalation easy. Don’t make people type “talk to a human” three times before you let them through. Design for graceful failure.
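The context-passing step can be as lightweight as a small structured payload attached to the transfer. A hedged sketch, assuming hypothetical field names and thresholds (ten minutes stuck, three escalation attempts) rather than any vendor’s schema:

```python
# Hypothetical handoff payload. Field names and thresholds are assumptions
# for illustration, not a specific contact-center product's format.
from dataclasses import dataclass, field

@dataclass
class HandoffContext:
    session_minutes: float                                # time spent in the bot
    escalation_attempts: int                              # "talk to a human" count
    paths_tried: list[str] = field(default_factory=list)  # menu branches attempted
    info_collected: dict = field(default_factory=dict)    # answers already given

    def needs_service_recovery(self) -> bool:
        """Flag sessions where the agent should lead with service recovery."""
        return self.session_minutes >= 10 or self.escalation_attempts >= 3

def summarize(ctx: HandoffContext) -> str:
    """One-line summary shown to the agent before they pick up the chat."""
    flag = "SERVICE RECOVERY" if ctx.needs_service_recovery() else "standard"
    return (f"[{flag}] {ctx.session_minutes:.0f} min in bot, "
            f"{ctx.escalation_attempts} escalation attempt(s), "
            f"tried: {', '.join(ctx.paths_tried) or 'none'}")
```

Whatever the format, the design goal is the same: the agent should never have to re-ask anything in `info_collected`, and the service-recovery flag should be visible before they say hello.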
What metrics are you using to determine the success of chatbot interactions? If your sole source of information is a single question at the tail end of the chat, you are getting a grade but not true, meaningful information. Asking, “How well did this chat meet your needs?” or “How satisfied are you with the help you received?” gives you some data about the satisfaction level but not what was missing or what needs to be improved.
Are you planning for what happens when the bot can’t solve the problem? Are you monitoring closely? Are you looking for keywords? Do you have a clear handoff designed?
Or are you just assuming that because you automated something, it’s working?
If you’re not planning for failed chats, you’re setting patients up for frustration. And you’re setting your staff up to handle already-upset people without the tools or information they need.
As we lean more and more on AI, we’ve got to make sure we’re designing the handoff. We’ve got to make sure we’re elevating the human experience, not just automating processes. Because you can design perfect automation. But if you don’t design the handoff, you’re still creating bad experiences.
Want to learn how to design effective handoffs between AI and humans?
Join me for a webinar tomorrow: Don’t Automate a Bad Experience: Why Human Skills Are Your Highest ROI Investment in an AI World
Tags: CustomerExperience, DigitalHealth, HealthcareAI, HealthcareLeadership, PatientExperience
