In this article, I will take a closer look at preventing hallucinations when using an AI with Smitty.
Smitty offers a number of settings that control how the chatbot responds to a question, including the chatbot's personality and the creativity setting.
With the personality, you specify what the chatbot is used for and what the desired response should look like. You can, for example, set it up to respond like a teacher or a support employee. In practice, you describe this as a short instruction, such as:
“You are a support employee for company X. You receive context in the form of snippets from a document. If you don’t know the answer, say so.”
The personality also specifies whether the chatbot should stick to the facts and how it receives its context. This tells the AI how it is being used, which helps it form an appropriate response.
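To make this concrete, here is a minimal sketch of what such a personality amounts to under the hood: a system message that combines the persona with the retrieved document snippets and the user's question. The function name, persona text, and snippet format are my own illustration under that assumption, not Smitty's actual internals.

```python
def build_messages(question: str, snippets: list[str]) -> list[dict]:
    """Combine the persona, the retrieved document snippets, and the
    user's question into a chat-style message list."""
    # The persona: what the chatbot is used for, how it gets context,
    # and what to do when the answer is not in that context.
    persona = (
        "You are a support employee for company X. "
        "You receive context in the form of snippets from a document. "
        "Answer factually, based only on the snippets below. "
        "If the answer is not in the snippets, say that you don't know."
    )
    # Number the snippets so the model can tell them apart.
    context = "\n\n".join(
        f"Snippet {i + 1}:\n{s}" for i, s in enumerate(snippets)
    )
    return [
        {"role": "system", "content": f"{persona}\n\n{context}"},
        {"role": "user", "content": question},
    ]
```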
Beyond the personality, we use the creativity setting in the communication with the AI. This is a parameter in the API call that controls how creative the model may be in its answer. It can be set per chatbot, and in our experience a low creativity setting produces factual answers, with the chatbot saying so when it doesn't know something instead of inventing an answer.
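The sketch below shows where such a creativity setting lives, assuming an OpenAI-style chat API, where it corresponds to the temperature parameter. The model name, example question, and temperature value are illustrative assumptions (Smitty manages this call for you); it reuses the build_messages helper from the sketch above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=build_messages(
        "What is your return policy?",  # example question
        ["Returns are accepted within 30 days of purchase."],
    ),
    # Low "creativity": answers stay close to the provided snippets
    # rather than drifting into plausible-sounding invention.
    temperature=0.1,
)
print(response.choices[0].message.content)
```

A low temperature makes the model pick the most likely next words, which is exactly what you want for factual support answers; higher values are better suited to brainstorming or creative writing.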
This combination of settings prevents most hallucinations and strange answers, but that's not all. Despite these settings, the AI may still produce an answer based on the generic training data it was built on. That can be a useful addition, but the answer may also be incorrect. You can discover this by reviewing the chat history and through testing. It happens especially when the training layer within Smitty.ai is incomplete and the AI cannot retrieve the answer from it.
What we see in practice is that it is important to keep the data you upload into Smitty.ai well structured: if the data isn't good, the chatbot won't be either. We are happy to help you with this, and you can also read this article, which covers it in detail: https://smitty.ai/training-the-ai-with-smitty/