
Conversational AI Assistant for Customer Support

This project delivers a conversational AI assistant as an enterprise SaaS product that helps the customer service team respond to user inquiries, specifically requests for course recommendations.

At Nielsen Norman Group, I led this human-centered research from 0 to 1 and independently designed and developed an agentic RAG-based chatbot. To ensure the system met real stakeholder needs, I evaluated the AI-generated responses with a combination of expert-based and model-based metrics, aligning the assistant's outputs with both user expectations and organizational goals.


Background and Goals

The following background and goals guided our user-centered research.

Background
  1. High volume of repetitive user questions

  2. Slow response times due to manual effort and the high cost of human resources

  3. Limited insight into the variety of courses taught at NN/g

  4. Fragmented workflows across the customer service support team

Generative Research Goals:

  1. What challenges do customer service teams face when responding to course recommendation inquiries?

  2. How should AI responses be structured to be clear, helpful, and actionable for both users and customer service support staff?

  3. How well do AI-generated responses align with the organization's course catalog, policies, and user goals?

Research Methods and Process


Stakeholder Alignment

  • Collaborated with Product, Data & Strategy, and Customer Service teams to identify business goals and impacts, stakeholder requirements, and opportunities to support customer inquiries about courses taught at NN/g.


Service Blueprinting

  • Mapped the customer service workflow through cross-team collaboration and observations to identify repetitive inquiries and opportunities for AI-supported workflow automation.

       Tools: Miro, team workshops.

 

Data Collection

  • Aggregated large-scale datasets from Missive (Inbox collaboration platform), CRM events, the NN/g website, and course materials.

      Methods: Web scraping, API integration, and AI-assisted data processing.
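
The aggregation step can be sketched in a few lines. Everything below is illustrative: the record fields, sample subjects, and dedup key are assumptions, not the production schema. The real pipeline pulled from Missive, CRM events, the NN/g website, and course materials; here two toy sources stand in.

```python
import hashlib

# Hypothetical record shape; real sources were Missive threads, CRM events,
# web-scraped pages, and course materials (fields here are illustrative).
missive_threads = [
    {"source": "missive", "subject": "Course recommendation for UX beginner", "body": "Hi, which course fits me?"},
    {"source": "missive", "subject": "Course Recommendation for UX Beginner", "body": "Following up on my question."},
]
crm_events = [
    {"source": "crm", "subject": "Refund question", "body": "Can I get a refund?"},
]

def fingerprint(record):
    """Stable hash of the normalized subject, used to spot near-duplicate inquiries."""
    key = " ".join(record["subject"].lower().split())
    return hashlib.sha256(key.encode()).hexdigest()

def aggregate(*sources):
    """Merge records from all sources, keeping the first copy of each duplicate."""
    seen, merged = set(), []
    for source in sources:
        for record in source:
            fp = fingerprint(record)
            if fp not in seen:
                seen.add(fp)
                merged.append(record)
    return merged

dataset = aggregate(missive_threads, crm_events)
```

Normalizing on the subject line is a deliberately crude dedup key; the real pipeline used AI-assisted processing for messier near-duplicates.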

 

Exploratory Analysis

  • Conducted a mixed-methods analysis on 10K+ customer records to identify common inquiry patterns and user pain points.

       Methods: Thematic analysis, clustering (supervised & unsupervised), BERTopic, AI-based topic modeling.
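
The clustering idea can be sketched without the real tooling. The production analysis used BERTopic over transformer embeddings; this sketch substitutes bag-of-words vectors and a greedy single-pass grouping, with toy inquiries and an illustrative threshold standing in for the 10K+ records.

```python
import math
import re
from collections import Counter

# Toy inquiries standing in for the real Missive/CRM records.
inquiries = [
    "Which UX certification course should I take first?",
    "What course do you recommend for a beginner in UX?",
    "When is the next session of the Interaction Design course scheduled?",
    "What is the schedule for upcoming virtual courses?",
]

def vectorize(text):
    """Term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Greedy single-pass clustering: join the first cluster whose seed inquiry
# is similar enough, otherwise start a new topic (threshold is illustrative).
THRESHOLD = 0.2
clusters = []  # list of (seed_vector, member_inquiries)
for q in inquiries:
    v = vectorize(q)
    for seed, members in clusters:
        if cosine(seed, v) >= THRESHOLD:
            members.append(q)
            break
    else:
        clusters.append((v, [q]))
```

On these four inquiries the sketch separates "which course should I take" questions from scheduling questions, mirroring the kind of inquiry patterns the real topic models surfaced.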

 

AI System Design and Development

  • Designed and developed an agentic RAG-based chatbot to address recurring inquiries about course outcomes, recommendations, and schedules. The conversational assistant has short-term memory and delivers personalized responses.

       Tech: Python, Streamlit, LangChain/LangGraph, AI agents, vector database, document indexing.
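
The retrieve-then-respond loop with short-term memory can be sketched library-free. This is a minimal stand-in, not the production system: the real assistant used LangChain/LangGraph agents over a vector database, whereas here a bag-of-words index, two invented document IDs, and a response template substitute for embeddings and LLM generation.

```python
import math
import re
from collections import Counter, deque

# Tiny stand-in for the indexed NN/g course material (IDs and text are illustrative).
COURSE_DOCS = {
    "ux-basics": "Introductory course covering UX fundamentals and usability testing.",
    "interaction-design": "Course on interaction design patterns and prototyping.",
}

def embed(text):
    """Bag-of-words vector standing in for a real embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

INDEX = {doc_id: embed(text) for doc_id, text in COURSE_DOCS.items()}

class SupportAssistant:
    """Retrieve-then-respond loop with a short rolling memory window."""

    def __init__(self, memory_turns=3):
        self.memory = deque(maxlen=memory_turns)  # short-term conversational memory

    def retrieve(self, question, k=1):
        q = embed(question)
        ranked = sorted(INDEX, key=lambda d: cosine(INDEX[d], q), reverse=True)
        return ranked[:k]

    def answer(self, question):
        doc_id = self.retrieve(question)[0]
        # In the real system an LLM agent composes the reply from retrieved
        # context; here a template stands in for generation.
        reply = f"Based on {doc_id}: {COURSE_DOCS[doc_id]}"
        self.memory.append((question, reply))
        return reply
```

The bounded `deque` is the simplest realization of "short-term memory": older turns fall out of the window, which keeps personalization grounded in the recent conversation only.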

 

Evaluation

  • Evaluated the chatbot using both expert-based and model-based metrics, including correctness, relevance, similarity to human responses, task completion, latency, and hallucination.

       Methods: RLHF alignment pipelines, RAGAS evaluation framework, LLM-as-a-judge, Likert-scale expert ratings.
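
Two of the listed metrics can be sketched as crude lexical proxies. These are stand-ins, not the RAGAS or LLM-as-a-judge implementations: token-overlap F1 approximates similarity to human responses, and a set difference against the retrieved context approximates a hallucination flag.

```python
import re

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def token_f1(response, reference):
    """Token-overlap F1 between an AI response and a human reference answer:
    a crude lexical proxy for the similarity metric in the evaluation suite."""
    r, g = set(tokens(response)), set(tokens(reference))
    if not r or not g:
        return 0.0
    overlap = len(r & g)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(r), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def unsupported_terms(response, context):
    """Words in the response that never appear in the retrieved context:
    a rough lexical stand-in for hallucination detection."""
    return sorted(set(tokens(response)) - set(tokens(context)))
```

Model-based judges catch paraphrase and factual drift that these lexical checks miss, which is why the real evaluation layered RAGAS and LLM-as-a-judge on top of expert Likert ratings.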


Findings and Crucial Insights

  • Identified repetitive support requests and workflow inefficiencies
    Analyzed customer inquiries and internal workflows, uncovering recurring questions and operational bottlenecks. Proposed 10+ automation opportunities across software development, data and strategy, design processes, and content strategies.

  • Discovered the need for intent-aware recommendations
    Evaluated AI-generated responses and found that retrieving course descriptions alone was insufficient. Customer service assistants needed support in understanding user goals and providing contextual, personalized course recommendations.

  • Designed for trust and responsible AI use
    Implemented AI safety guardrails, uncertainty communication, and a feedback mechanism to support transparent and responsible interactions.

  • Prioritized accuracy over response speed
    Determined through evaluation that stakeholders valued relevance, correctness, and minimizing hallucinations more than faster response times.

  • Reduced hallucination risks with safety guardrails and fallbacks
    Implemented course verification using multiple AI agents and fallback strategies, directing users to official NN/g course pages when information confidence was low.

  • Improved reliability through continuous evaluation
    Conducted expert reviews, bias-aware prompt testing, and user feedback analysis to iteratively improve response quality and offer neutral and inclusive language in AI responses.

  • Identified broader AI opportunities across the organization
    Highlighted areas where AI tools and data products could improve users’ course discovery experience while reducing the support team workload.
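
The confidence-based fallback described in the guardrails finding can be sketched as follows; the threshold value and response wording are illustrative, not the production values.

```python
FALLBACK_URL = "https://www.nngroup.com/courses/"  # official NN/g course pages

def respond_with_guardrail(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Serve the generated answer only when course-verification confidence is
    high; otherwise direct the user to the official course pages."""
    if confidence < threshold:
        return (
            "I'm not fully certain about this. Please check the official "
            f"course pages for verified details: {FALLBACK_URL}"
        )
    return answer
```

Routing low-confidence cases to an authoritative source trades coverage for trust, consistent with the finding that stakeholders prioritized correctness over speed.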


Business Impacts

  • Reduced manual customer support effort required to respond to course-related inquiries by 28%.

  • Established an AI evaluation framework to ensure response quality and reliability.

  • Enabled faster response times through an AI-powered support assistant.

  • Improved course discovery by providing contextual, personalized recommendations aligned with user goals.

Team

Morva Saaty, Raluca Budiu, Luice Hwang

Timeline

May 2025 - Ongoing
