![method prevents an ai model from being overconfident about wrong answers](https://utterbuzz.com/wp-content/uploads/2024/08/method-prevents-an-ai-model-from-being-overconfident-about-wrong-answers-640x360.jpg)
Thermometer, a new calibration technique tailored for large language models, can prevent LLMs from being overconfident or underconfident about their predictions. The technique aims to help users know when a model should be trusted.
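To make "overconfident" concrete: calibration methods of this kind typically adjust a model's output probabilities so that stated confidence matches actual accuracy. The sketch below illustrates the general idea behind temperature scaling, a common calibration approach (the specific function names and numbers are illustrative, not from the Thermometer paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by a temperature before normalizing.
    # T > 1 softens (lowers) the model's confidence;
    # T < 1 sharpens it. T = 1 leaves probabilities unchanged.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three answer candidates.
logits = [3.0, 1.0, 0.5]

uncalibrated = softmax(logits)                  # sharp, overconfident distribution
calibrated = softmax(logits, temperature=2.0)   # softened, better-calibrated distribution

print(max(uncalibrated), max(calibrated))
```

A well-chosen temperature leaves the ranking of answers unchanged while bringing the top probability closer to the model's true hit rate, which is what lets a user judge when the model should be trusted.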