# 06-technical-discussion
I haven't found good resources around reducing hallucinations. Anyone found anything that's helped them?
You can't really "reduce" hallucinations directly, you can only work around the problem! In one experiment, folks gave an LLM a bunch of prime numbers and asked it to determine whether each one was prime. Even with CoT prompting and a bunch of other techniques, it kept producing wrong output. What you can do is adjust the temperature of the model, and add some post-processing to check whether the output is relevant. If you want to go further, use Guardrails or NeMo Guardrails to keep the LLM's output within an acceptable range of outputs.
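To make the post-processing idea concrete, here's a minimal sketch for the primality example: instead of trusting the model's answer, verify it with a deterministic check after the fact (the `llm_answer` string is a hypothetical model response, not from any real API):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division -- ground truth, can't hallucinate."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def validate_llm_answer(n: int, llm_answer: str) -> bool:
    """Return True only if the model's claimed answer matches reality."""
    text = llm_answer.lower()
    claimed_prime = "prime" in text and "not prime" not in text
    return claimed_prime == is_prime(n)

# e.g. reject a hallucinated answer before it reaches the user:
validate_llm_answer(13, "13 is not prime")  # mismatch -> False, discard/retry
```

Same pattern generalizes: whenever the task has a cheap deterministic checker, run it on the model's output and retry or fall back on a mismatch.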
When a genAI states a true fact, it doesn't know that it's a fact. To the model there is no difference between a fact and a hallucination; both are just generated statements. See "What Is ChatGPT Doing and Why Does It Work?", a short book by Stephen Wolfram, where he discusses this problem. It's free on Amazon and also free on his site: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/