Rahul Parundekar
07/24/2023, 7:25 PM
• Using assert statements: Evaluate the output deterministically in code. This keeps your testing repeatable.
• Using ChatGPT: Ask an LLM to evaluate the output qualitatively (e.g. checking for language, consistency, etc.). NOTE: this makes your testing non-deterministic.
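A minimal sketch of the assert-style approach above (the sample output string is a stand-in for a real model response, not something the SDK produces):

```python
# Deterministic checks on an LLM output using plain assert statements.
# Because these checks don't call a model, they can run repeatably in CI.
output = "Bonjour! Here is your summary: the meeting is at 3 PM."

assert len(output) > 0, "output should not be empty"
assert "3 PM" in output, "expected the meeting time to be preserved"
assert not output.lower().startswith("i'm sorry"), "model refused the task"
```

The ChatGPT-based approach would replace these checks with a call to a judge model, trading determinism for more nuanced, qualitative evaluation.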
With Traces, you can track the output of every step of your LLM app (especially if it's a distributed app) and tie different steps (e.g. feedback) to your model prediction.
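To illustrate the idea in plain Python (this is a conceptual sketch, not the SDK's tracing API — the `Trace` class here is hypothetical):

```python
import time
import uuid

class Trace:
    """Hypothetical sketch: collect the steps of one LLM request under one trace id."""
    def __init__(self):
        self.trace_id = str(uuid.uuid4())
        self.steps = []

    def record(self, name, payload):
        # Each step (prompt, completion, feedback, ...) is timestamped, so steps
        # emitted by different services can be stitched together by trace_id.
        self.steps.append({"name": name, "payload": payload, "ts": time.time()})

trace = Trace()
trace.record("prompt", {"text": "Summarize the meeting notes."})
trace.record("completion", {"text": "The meeting is at 3 PM."})
trace.record("feedback", {"thumbs_up": True})  # tied to the same prediction

assert [s["name"] for s in trace.steps] == ["prompt", "completion", "feedback"]
```

The key design point is the shared `trace_id`: feedback arriving later (or from another service) can still be joined back to the prediction that produced it.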
A minor update was also made to the real-time view to help you visualize your recent prompt inputs and outputs using t-SNE!
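For context, t-SNE projects high-dimensional embeddings of your prompts and outputs down to 2-D points you can scatter-plot. A generic sketch with scikit-learn (the random embeddings here are placeholders for real prompt/output embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder: 20 fake 384-dimensional embeddings standing in for
# real embeddings of recent prompt inputs and outputs.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 384))

# Note: perplexity must be smaller than the number of samples.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (20, 2) -> one 2-D point per prompt/output, ready to plot
```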
You can find details and examples here:
Python SDK: https://github.com/ai-hero/python-client-sdk
Version: 0.2.7
With this, we now support the entire PromptOps chain from research to deployment! Would love to hear your thoughts!