Botsonic is still "extrapolating" with RESPONSE CONFIDENCE set to MORE ACCURACY

Hi guys!

We had hoped that moving the slider all the way to "MORE ACCURACY" would make the chatbot stick 100% to the fine-tuning dataset we uploaded and trained on. However, in user testing we got the chatbot to say things that were not in the dataset. The responses were quite logical, so I wouldn't call them hallucinations, but the knowledge could not have come from our dataset and must therefore be coming from the pre-trained model. Unfortunately, this severely limits the tool's usefulness in customer-facing situations where accuracy is at a premium, such as customer service.

Is it possible you are still tuning the confidence settings in your OpenAI API calls? Could the far-left end of the RESPONSE CONFIDENCE slider be made even more conservative?
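For what it's worth, here is a minimal sketch of the behavior we expected from the far-left setting, assuming the slider maps to something like a low temperature plus a restrictive system prompt. We obviously don't know your actual implementation; the model name, prompt wording, and variable names below are just illustrative guesses:

```python
# Sketch of a maximally conservative chat call (assumptions, not Botsonic's
# real internals): temperature 0 plus a system prompt that forbids answers
# from outside the retrieved context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for chunks retrieved from the uploaded dataset.
retrieved_context = "...chunks retrieved from the uploaded dataset..."
user_question = "What is your refund policy?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model, for illustration only
    temperature=0,  # removes sampling randomness, but does not by itself
                    # stop the model from using pre-trained knowledge
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY using the context below. If the answer is not "
                "in the context, reply exactly: 'I don't have that "
                "information.'\n\nContext:\n" + retrieved_context
            ),
        },
        {"role": "user", "content": user_question},
    ],
)
print(response.choices[0].message.content)
```

Our understanding is that temperature alone only reduces randomness, so without an explicit "answer only from the context" instruction (or a post-hoc grounding check), the model can still fall back on its pre-trained knowledge, which might be exactly what we're seeing.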

Thank you