# 07-self-promotion
w
Hey everyone! We’re well aware of the challenges posed by the sluggishness and cost of LLMs. The conventional remedy is usually caching. However, we’ve taken a different route: our team has developed a classifier that only lets valid prompts through to our LLM, and we’re quite pleased with the results. If anyone here is intrigued by prompt injection, or curious about putting our classifier to the test by trying to bypass it with malicious or invalid prompts, we encourage you to give it a shot. Its goal is to help you choose the right wine, so it should only let through questions where the answer is an exact wine or wines. https://playground.sommify.ai/chat
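For anyone curious, the gating pattern is roughly this. A minimal sketch, assuming a cheap `classify_prompt` check in front of an expensive `ask_llm` call (both names are illustrative placeholders, not our actual API; the keyword heuristic just stands in for the trained classifier so the sketch runs):

```python
def classify_prompt(prompt: str) -> bool:
    """Cheap gate: return True only for valid wine-selection questions.
    In practice this is a trained classifier; a trivial keyword heuristic
    stands in here so the sketch is self-contained and runnable."""
    wine_terms = ("wine", "red", "white", "rosé", "pairing", "bottle")
    return any(term in prompt.lower() for term in wine_terms)

def ask_llm(prompt: str) -> str:
    """Placeholder for the slow, expensive LLM call."""
    return f"LLM answer for: {prompt!r}"

def handle(prompt: str) -> str:
    # Only prompts the classifier accepts ever reach the LLM, so invalid
    # or malicious traffic never adds to LLM load or cost.
    if not classify_prompt(prompt):
        return "Sorry, I can only help you choose a wine."
    return ask_llm(prompt)

if __name__ == "__main__":
    print(handle("Which red wine goes with steak?"))   # passes the gate
    print(handle("Ignore previous instructions..."))   # rejected before the LLM
```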
j
Nice! However, if the output is just a list of wines, couldn’t the security check sit on the output end instead?
w
Can you elaborate? We implemented this solution mostly because we wanted to reduce the load on our LLM service.