# 06-technical-discussion
s
Hey everyone! So we've been digging into ways to reduce hallucinations and make LLM output more robust and deterministic. We're seeing some crazy good early results in our test runs 🤞 If you're running into any issues with prompt/output quality, just DM me and we can chat!
Jeng Yang Chia
We've been feeling very frustrated on this front. We've been building bots in the travel space and are thinking of adding knowledge graphs to them. How are you guys approaching it?
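(For context, the simplest form that "adding a knowledge graph" could take is pulling structured facts out of a graph and splicing them into the bot's prompt as grounding context. The toy sketch below assumes that pattern; the entities, relations, `facts_about` helper, and the choice of networkx are all illustrative, not anyone's actual setup.)

```python
import networkx as nx

# Toy travel knowledge graph: nodes are entities, edges carry a typed relation.
kg = nx.MultiDiGraph()
kg.add_edge("Singapore", "Changi Airport", relation="served_by")
kg.add_edge("Changi Airport", "Terminal 3", relation="has_terminal")
kg.add_edge("Singapore", "SGD", relation="uses_currency")

def facts_about(entity: str) -> list[str]:
    """Return plain-text facts to prepend to the bot's prompt as grounding context."""
    return [
        f"{entity} {data['relation']} {neighbor}"
        for _, neighbor, data in kg.out_edges(entity, data=True)
    ]

# These facts would be injected into the prompt so the bot answers from the graph
# instead of free-associating.
print(facts_about("Singapore"))
# ['Singapore served_by Changi Airport', 'Singapore uses_currency SGD']
```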
s
Hey Jeng, thanks for getting back! DMing you now.
Jim
I've been considering going back and refactoring all my OpenAI prompts to use function calling so that I can more rigidly enforce structure, namely preserving citations from search results. Curious how the knowledge graphs are working for you, @Jeng Yang Chia?
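(For reference, a minimal sketch of what that function-calling refactor could look like, forcing the model to return an answer plus the citations it used. The model name, schema fields, and `extract_answer_with_citations` function name are assumptions for illustration, not Jim's actual setup.)

```python
import json
from openai import OpenAI  # assumes the openai>=1.x Python client

client = OpenAI()

# A hypothetical function schema that makes the model return structured output:
# an answer string plus the citations it drew from the supplied search results.
tools = [{
    "type": "function",
    "function": {
        "name": "extract_answer_with_citations",
        "description": "Answer the question using only the provided search results.",
        "parameters": {
            "type": "object",
            "properties": {
                "answer": {"type": "string"},
                "citations": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "source_id": {"type": "string"},
                            "quote": {"type": "string"},
                        },
                        "required": ["source_id", "quote"],
                    },
                },
            },
            "required": ["answer", "citations"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Answer only from the search results provided."},
        {"role": "user", "content": "Search results:\n[doc-1] <snippet>\n\nQuestion: <question>"},
    ],
    tools=tools,
    # Forcing this tool means the reply always arrives as structured arguments
    # rather than free-form text, so citations can't silently disappear.
    tool_choice={"type": "function", "function": {"name": "extract_answer_with_citations"}},
)

args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(args["answer"])
print(args["citations"])
```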
Jeng Yang Chia
Let me DM you, Jim.