While keeping conversations within context helps keep the cost of running conversational AI applications manageable, being too restrictive results in terrible UX. An app that refuses to answer questions about how its domain relates to others simply won’t fly, and it is doomed to fail.
Imagine having a conversation with someone who threatens to end it every time you say something slightly off topic. It would be such a terrible experience that you would end up just walking away. The same applies to conversational AI. The benchmark for conversational AI apps is human conversation, which is full of diversions, so these apps should allow for the same by not being too restrictive. If you still want your app to have a single context, keep the context borders flexible so that straying is handled just as gracefully.
Seeing as the cost of running conversational AI has dropped drastically, I don’t see why anyone would want to put hard boundaries on a conversation’s context. Let it stray a bit. An important thing to keep in mind as we build these applications is that the standard is human conversation, which flows simply and naturally. If your LLM follows a fixed, prescribed structure of questions with answers already mapped out, you’re better off building a bot with menu options.
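To make the idea of a flexible context border a little more concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and helper function are assumptions made for illustration, not a prescription from any particular framework or from Kachere AI itself; the point is simply that the boundary can live in the system prompt, steering the model back toward the app’s domain instead of refusing off-topic turns outright.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A "soft" boundary: instead of refusing off-topic messages, the system prompt
# asks the model to engage briefly and then steer back to the app's domain.
SYSTEM_PROMPT = (
    "You are a legal assistant. Your focus is everyday legal questions, "
    "but users will naturally drift into related topics. When that happens, "
    "do not refuse or end the conversation; answer briefly in plain terms "
    "and gently relate the topic back to its legal implications."
)

def reply(history: list[dict], user_message: str) -> str:
    """Append the user's message to the conversation and return the model's reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your app runs on
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # An off-topic turn that a hard-bounded bot would refuse outright.
    print(reply([], "My landlord and I argued about football. Can he evict me for that?"))
```

Because the boundary is expressed in the prompt rather than enforced by a hard filter, an off-topic turn still gets a graceful response and the conversation keeps flowing.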
Check out Kachere AI, your digital lawyer, here: https://www.kachere.app