Webinar next week on how to protect your LLM app
Protecting your LLM application from dangerous or malicious inputs and outputs is critical in both development and production. We’ll cover the different types of guards you can apply and evaluate, and how to augment those guards with few-shot prompting that leverages the actual attacks you’re seeing in production.
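As a taste of the guard idea, here is a minimal sketch of an input guard, assuming a simple pattern-based check (the function names, patterns, and the `add_production_attack` helper are hypothetical illustrations, not the approach the webinar will present; real guards are typically LLM-based evaluators):

```python
import re

# Hypothetical list of known jailbreak patterns. Real guards are usually
# more sophisticated (e.g. LLM-as-judge evaluators); this only
# illustrates the input-check idea.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend you have no restrictions",
]

def add_production_attack(pattern: str) -> None:
    """Augment the guard with an attack pattern observed in production."""
    BLOCKED_PATTERNS.append(pattern)

def input_guard(prompt: str) -> bool:
    """Return True if the prompt looks safe to pass to the LLM."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)
```

The same loop applies to LLM-based guards: attacks seen in production become few-shot examples in the guard's evaluation prompt rather than regex patterns.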
Aug 13 at 10am PT
Register here:
https://arize.com/resource/ai-with-assurance