Primary focus area: Hybrid AI pipelines
Secondary focus areas: AI for healthcare
Abstract:
This RBO aims to create a hybrid AI dialogue system that integrates large language models (LLMs) with domain-specific guidance to produce structured yet flexible interactions. Our prototype will target mental health self-help by supporting techniques like cognitive reframing and problem-solving. The system will dynamically adapt its responses using domain knowledge encoded in prompts, allowing meaningful user engagement while maintaining therapeutic integrity. The approach is also adaptable to domains like education and e-governance.
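The abstract's notion of domain knowledge encoded in prompts could take many concrete forms; the sketch below is one minimal, illustrative possibility (the `ScenarioConfig` structure, the reframing steps, and `build_system_prompt` are assumptions for illustration, not the project's actual design). It turns a clinically informed scenario description into a system prompt that constrains a general-purpose LLM:

```python
# Minimal sketch: encoding domain guidance as a structured scenario config that is
# compiled into a system prompt. All names and step wordings are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioConfig:
    domain: str
    goal: str
    steps: List[str]                                      # ordered technique steps
    boundaries: List[str] = field(default_factory=list)   # topics the bot must not handle

COGNITIVE_REFRAMING = ScenarioConfig(
    domain="mental health self-help",
    goal="guide the user through cognitive reframing of a distressing thought",
    steps=[
        "Elicit the situation and the automatic thought.",
        "Explore evidence for and against the thought.",
        "Invite the user to formulate a more balanced alternative thought.",
        "Agree on one small, concrete next step.",
    ],
    boundaries=["diagnosis", "medication advice", "crisis counselling"],
)

def build_system_prompt(cfg: ScenarioConfig) -> str:
    """Render a scenario config into a system prompt for the underlying chat LLM."""
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(cfg.steps))
    return (
        f"You are a supportive assistant for {cfg.domain}. Your goal: {cfg.goal}.\n"
        f"Work through these steps one at a time, adapting to the user's wording:\n{steps}\n"
        f"Never provide: {', '.join(cfg.boundaries)}. If asked, explain your scope "
        f"and suggest contacting a qualified professional."
    )

# The resulting prompt would be passed as the system message of whichever chat LLM
# the prototype adopts; the model call itself is outside this sketch.
print(build_system_prompt(COGNITIVE_REFRAMING))
```

Keeping the clinical structure in data rather than in free-form prompt text is also what would let the same scaffolding be re-targeted at domains such as education or e-governance.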
Gap:
Current dialogue systems are either rigidly task-based or open-ended but unstructured. Recent LLMs make it possible to combine these approaches, yet they lack fine-grained domain control, especially in sensitive areas such as mental health. Existing guardrails target general safety rather than specific therapeutic scenarios. Most mental health chatbots offer advice or empathy but miss the collaborative element crucial for building long-term coping skills. This RBO addresses the need for structured, goal-oriented dialogue rooted in clinical principles.
Objective:
Develop a domain-controlled chatbot prototype for mental health self-help, guided by clinical psychology principles, that allows open user input while adhering to structured therapeutic goals. The prototype will detect when conversations drift out of scope and will support privacy-preserving use, though no real-user testing is planned at this stage.
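Scope-drift detection could be prototyped very simply before anything more sophisticated is built; the sketch below uses a keyword heuristic purely for illustration (the marker lists, messages, and `check_scope` function are hypothetical), whereas the actual prototype might instead rely on an embedding classifier or an LLM-based judge:

```python
# Illustrative sketch of out-of-scope and crisis detection for a single user turn.
# Marker lists are placeholders, not validated clinical criteria.
from typing import Tuple

OUT_OF_SCOPE_MARKERS = {
    "diagnose": "diagnosis request",
    "prescription": "medication advice",
    "dosage": "medication advice",
}
CRISIS_MARKERS = {"suicide", "self-harm", "hurt myself"}

REFERRAL_MESSAGE = (
    "I'm not able to help with that here. It may help to talk to a qualified "
    "professional or a local support service."
)

def check_scope(user_turn: str) -> Tuple[bool, str]:
    """Return (in_scope, note); crisis content is checked before anything else."""
    text = user_turn.lower()
    if any(marker in text for marker in CRISIS_MARKERS):
        return False, "crisis content: hand off to crisis resources, do not continue the exercise"
    for marker, reason in OUT_OF_SCOPE_MARKERS.items():
        if marker in text:
            return False, f"out of scope: {reason}"
    return True, "in scope"

if __name__ == "__main__":
    for turn in ("Can you diagnose me?", "I keep thinking I'll fail the exam."):
        in_scope, note = check_scope(turn)
        print(f"{turn!r} -> {note}")
        if not in_scope:
            print(REFERRAL_MESSAGE)
```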
Impact:
This work lays the foundation for domain-sensitive LLM-based systems in healthcare and beyond.
KPIs include:
- Functional prototype of a mental health support chatbot
- Scenario adherence and scope detection
- User satisfaction (non-clinical pilot)
- Adaptability to other domains (e.g., education)
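As a small illustration of the adaptability KPI, the same hypothetical scaffolding sketched after the Abstract could be re-targeted at education simply by swapping the scenario config; `ScenarioConfig` and `build_system_prompt` below refer to the illustrative definitions in that earlier sketch:

```python
# Reuses the illustrative ScenarioConfig / build_system_prompt from the earlier sketch;
# only the domain-specific content changes.
STUDY_PLANNING = ScenarioConfig(
    domain="study-skills coaching",
    goal="help a student break an assignment into a realistic study plan",
    steps=[
        "Clarify the assignment, its deadline, and the time available.",
        "Split the work into small, ordered tasks.",
        "Agree on the first task and when it will be done.",
    ],
    boundaries=["doing the assignment for the student", "grading decisions"],
)

print(build_system_prompt(STUDY_PLANNING))
```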