# 03-ai-events
We're hosting a webinar this Thursday (2024-01-25 @ 18:00 UTC) with other teams that have taken a gamified approach to gathering LLM threat intelligence. We'll dig into what we've learned from these initiatives over the last year and how we can apply it to LLM security this year.

https://www.lakera.ai/event/crowdsourcing-llm-threat-intelligence

We'll have panelists from:
• Gandalf: Our own prompt injection CTF focused on getting the LLM to expose a secret word
• LVE Project: Like CVEs for language models, but it also hosts some interesting LLM alignment challenges
• HackAPrompt: A massive effort from LearnPrompting to understand how language models are affected by prompt injection
• Tensor Trust: Defend your imaginary bank account with an LLM in this multiplayer challenge where you can also attack other players' defenses