human centric conversational intelligence

human centric communication

humans reason as someone. every input gets filtered through who you are, what you believe, who you're talking to. over time both sides of a conversation build a model of the other, trust forms, the interactions compound. this is exactly what human centric reasoning is.

for this to work you need a point of view (identity, worldview, opinions at the start of the interaction; the frame through which everything gets filtered), the ability to adjust that POV through interaction (as you learn who you're talking to, you emphasise certain facets of your POV, de-emphasise others, and let your tone of voice communicate the adjusted POV within the current context), and the ability to reason through it (interpret objective information through your own filters before reacting, then translate your reaction into something appropriate for the setting: the intent, the social rules, the relationship).

AI has none of this. it can reason and detect emotions, but without the frame to use them in a layered way, the result is persona erosion: system prompt adherence degrades 30%+ over extended conversations, personas switch mid-conversation, over-index on short-term cues, and hyper-adjust in ways that read as confusing at best and bad-intentioned at worst. users feel something is off because they project reasoning depth onto what is a flat response surface.

conversational intelligence

conversational intelligence is "the ability to get what you want out of a conversation" (adapting Naval Ravikant's definition of intelligence, "the ability to get what you want out of life", to the situation of a conversation between an agent and a human). for a representative working for a company, what you want is a win for the company. in practice these are often win-win situations: a customer who feels understood, who gets what they actually need, comes back, pays more, and forgives errors along the way (fully connected customers are 52% more valuable per HBR, across 400+ brands; emotionally connected customers have 306% higher lifetime value per Motista, 2017). the short-term win for the customer is the long-term win for the company. this is why luxury commands extortionate margins, and why we return to a worse barista who makes us feel better.

unlike human centric thinking, conversational intelligence is not innate, and current metrics don't capture it: CSAT, NPS, resolution rate, containment, and AHT are all trailing indicators that tell you what happened after the conversation, not during it. nobody tracks trust forming over time or whether the relationship is building across interactions. you can't improve NPS by measuring NPS harder. to push NPS up you need to measure what drives it.

four NPS drivers:

  1. understanding: get what the user wants, thinks, and needs without interrogating them.
  2. connection: create a consistent, positive, and improving interaction for long-term trust and connection (think corner coffee shop, or extreme luxury hotels, for best practices).
  3. action quality: did the interaction lead to a concrete solution that provides the value the user needs?
  4. pleasantness: was the interaction nice, regardless of its usefulness?
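the four drivers above could be tracked per conversation instead of waiting for trailing NPS. a minimal sketch, assuming hypothetical 0-1 scores from per-turn classifiers; the field names and weights are illustrative, not from the text:

```python
from dataclasses import dataclass

@dataclass
class DriverScores:
    """per-conversation scores in [0, 1] for the four NPS drivers."""
    understanding: float   # got wants/thinks/needs without interrogating
    connection: float      # consistent, positive, improving interaction
    action_quality: float  # concrete solution delivering the needed value
    pleasantness: float    # nice regardless of usefulness

def conversational_intelligence(s: DriverScores,
                                weights=(0.3, 0.3, 0.25, 0.15)) -> float:
    """weighted blend of driver scores; the weights are illustrative."""
    parts = (s.understanding, s.connection, s.action_quality, s.pleasantness)
    return sum(w * p for w, p in zip(weights, parts))

score = conversational_intelligence(
    DriverScores(understanding=0.9, connection=0.8,
                 action_quality=0.7, pleasantness=0.95))
```

measuring the drivers during the conversation is what makes them leading indicators; the blend itself is a design choice each deployment would tune.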

this leads to high NPS, strong brand value, and deeply connected customers (1.5-3x the LTV of other customers), explaining the high value of top-tier support people.

the difference between an upsell and churn is conversational intelligence. and right now, it's barely being measured. with this lens we can sketch an initial scale for the conversational intelligence of different support providers.

[figure: a scale of conversational intelligence across support providers: FAQ, if/else-like AI, guardrailed AI, offshored support, human benchmark, trained staff, luxury hospitality staff; current AI sits below the human benchmark, cultured computers above luxury hospitality staff]

AI currently underperforms the human benchmark: not because it can't reason or detect feelings, but because it lacks the framework to do so human-centrically. for the same price, people choose a human. but there is no first-principles reason for this underperformance, and there is margin to be gained by providing luxury-level service at inference cost.

the solution: how to make AI leap the human benchmark

we put a POV layer between the reasoning engine (where the logic, analysis, and problem solving happen; this is separate from conversational intelligence, which is why we sit on top of it rather than inside it, and can run on any reasoning engine, including non-AI) and the customer. the engine reasons. we handle everything else.

[diagram: the persona sits between the reasoning engine and the customer; pre-reasoning, post-reasoning, TOV, and listen surround the engine, with a reflect + empathise + adjust self-loop]

pre-reasoning preprocesses the user's message through the persona's filters, its identity and the relationship history, before the reasoning engine ever sees it. post-reasoning validates coherence after: it checks the engine's output for tone drift, persona coherence, and whether it actually fits the conversation. TOV (tone of voice) speaks in the persona's own voice, adapted to this customer based on the relationship so far; not a style preset. listen reads between the lines of what the user says and means: what they actually mean, what they're not saying, what they need but haven't asked for. context over clarifying questions. reflect + empathise + adjust is the persona's internal self-loop: after each interaction it updates its model of the user, what worked, what didn't, what to emphasise next time. EQ in action.

compounding improvement

the persona self-adjusts over time. the seed persona is created from the company's best practices, brand voice, and domain knowledge, so it doesn't start from scratch; as it interacts with users it makes assumptions about the company's user base and vertical, getting better at each specific user and learning what users in that deployment have in common. the delta between each adjustment is a learned preference, accumulating for each user and across all users for a client (across all personas for one client, we understand their ICP through natural trial and error, improving the whole deployment). this gives us a dataset that enriches as other AI systems degrade, mimicking a senior employee. the system compounds, separately from and agnostic to the reasoning engine.
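the compounding loop could be modelled as preference deltas accumulated per user and aggregated across a client's users. a minimal sketch with hypothetical names; how deltas are detected is out of scope here:

```python
from collections import Counter, defaultdict

class PreferenceStore:
    """accumulates learned preference deltas per user and across a client."""
    def __init__(self):
        self.per_user = defaultdict(Counter)  # user_id -> preference counts
        self.client = Counter()               # aggregate across all users

    def record_delta(self, user_id: str, preference: str) -> None:
        # each persona adjustment yields a learned preference
        self.per_user[user_id][preference] += 1
        self.client[preference] += 1

    def icp_traits(self, min_users: int = 2) -> set:
        """preferences shared by at least min_users distinct users:
        a crude proxy for what the client's ICP has in common."""
        support = Counter()
        for prefs in self.per_user.values():
            for p in prefs:
                support[p] += 1
        return {p for p, n in support.items() if n >= min_users}

store = PreferenceStore()
store.record_delta("u1", "short answers")
store.record_delta("u2", "short answers")
store.record_delta("u2", "formal tone")
shared = store.icp_traits()  # traits common across the deployment
```

per-user counts personalise each relationship; the client-wide aggregate is what lets a new persona in the same deployment start ahead of scratch.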