👋
Meetup tomorrow (Tuesday 4/16) in San Francisco on open source LLM evaluation.
Collaborate with other researchers, engineers, and open source enthusiasts working toward more reliable, unbiased, and trustworthy language models!
Here are some things we’ll do:
• Discuss the latest open source tools and frameworks for assessing LLM performance, safety, and robustness
• Share case studies and insights from researchers and practitioners working on LLM evaluation and debugging
• Collaborate on developing new open source resources, such as datasets, benchmarks, and tools to support the LLM community
• Establish best practices and guidelines for rigorous, transparent, and reproducible LLM evaluation and debugging
Starts at 5:30pm
Register here:
https://lu.ma/llm-evals