# 07-self-promotion
Hey all 👋 I’m curious how you’re versioning your prompt templates and tying them to their inputs/outputs, evaluations, and your PromptOps flow. I just added this to AI Hero - https://github.com/ai-hero/python-client-sdk.

Asks:

- **Versioning prompt templates:**
  - Would love to talk and understand what you’ve tried (even if it’s maintaining all your versions in a Google Doc 😄).
  - If you have some time to spare, please try the library. I’d love some feedback.
- **Logging and visualizing every input and output:**
  - How do you track individual LLM inputs and outputs for a given prompt version? How do you trace across multiple models?
  - How do you assess the performance of different prompt templates on the same input, and vice versa?
- **Eval and test cases for prompt templates:**
  - I’m planning to integrate evaluation and test cases next. Hit me up if that’s of interest.
- **A/B testing different prompt templates in production:**
  - This might be a long way off, but wouldn’t it be awesome to try your UI with different prompt templates?

Would love some feedback, or even just to chat and understand your PromptOps workflow. This is a super early release, so pardon the minimalistic UI. 🙂
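For anyone currently versioning by hand (the Google Doc approach 😄), here’s a minimal sketch of one common alternative: deriving a version id from the template’s content, so every logged input/output can be tied back to the exact prompt that produced it. This is a generic illustration, not the AI Hero SDK’s API; `version_id` is a hypothetical helper.

```python
import hashlib
import json

def version_id(template: str, params: dict) -> str:
    """Derive a stable version id from a prompt template's content.

    Hypothetical helper (not the AI Hero SDK API): hashing the template
    text plus its default parameters yields an id that changes whenever
    the prompt changes, and stays identical when it doesn't.
    """
    payload = json.dumps({"template": template, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

# Editing the template text produces a new version id...
v1 = version_id("Summarize: {text}", {"temperature": 0.2})
v2 = version_id("Summarize briefly: {text}", {"temperature": 0.2})
assert v1 != v2
# ...while the same content always maps to the same id.
assert v1 == version_id("Summarize: {text}", {"temperature": 0.2})
```

The nice property is that the id is content-addressed: you never have to remember to bump a version number, and logs from different services agree on which prompt was used.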