Hey everyone!
We’re all familiar with how slow and expensive LLMs can be. The usual remedy is caching, but we took a different route: our team built a classifier that only lets valid prompts through to our LLM, and we’re quite pleased with the results.
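To make the gating idea concrete, here is a minimal sketch of the pattern, not our actual classifier. The `is_valid_prompt` function below is a keyword stub standing in for a trained model, and `call_llm` is a placeholder for the real (expensive) LLM call; both names are made up for illustration.

```python
# Sketch of the "classifier in front of the LLM" pattern.
# is_valid_prompt is a crude keyword stub; in practice this would be
# a trained classifier, not a word list.

WINE_KEYWORDS = {"wine", "red", "white", "rose", "pairing", "grape", "vintage"}

def is_valid_prompt(prompt: str) -> bool:
    """Stand-in classifier: accept only prompts that look wine-related."""
    words = set(prompt.lower().split())
    return bool(words & WINE_KEYWORDS)

def call_llm(prompt: str) -> str:
    """Placeholder for the expensive LLM call."""
    return f"(LLM answer for: {prompt})"

def answer(prompt: str) -> str:
    """Invalid prompts are rejected cheaply; only valid ones reach the LLM."""
    if not is_valid_prompt(prompt):
        return "Sorry, I can only help you choose a wine."
    return call_llm(prompt)
```

The point of the design is that rejected prompts never incur LLM latency or cost, so the classifier only has to be cheap and reasonably accurate, not perfect.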
If anyone here is intrigued by prompt injection, or curious enough to put our classifier to the test by trying to bypass it with malicious or invalid prompts, we encourage you to give it a shot. Its goal is to help you choose the right wine, so it should only let through questions where the answer is a specific wine or wines.
https://playground.sommify.ai/chat