# 02-general-community-chat
j
👋 Prompt Engineering help sought

Context: I'm building a tool that semi-automates my workflow:
1. It pieces together several pieces of information (documents and texts)
2. I use OpenAI GPT-4 with Chain of Thought to come up with a draft version of a text
3. I then do some final edits myself to the text

However, the current output of GPT-4 is rather generic and I still need to do quite a bit of editing. I'm considering fine-tuning, but first want to see how much I can gain with prompt engineering. Since I have the current input/output and the desired output (the text I edited), I was wondering how I can optimize my prompt to get the output closer to the desired one.
• Are there any tools that can help with optimizing prompts? E.g. GPTs from the ChatGPT store
• Tools / tips for reverse-engineering prompts
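Since the poster already has pairs of (current output, desired edited output), one low-tech way to compare candidate prompts is to score each prompt's draft against the hand-edited target. A minimal sketch: the `generate` callable stands in for a real GPT-4 API call (hypothetical here), and text similarity is approximated with the standard library's `difflib`.

```python
import difflib


def similarity(model_output: str, desired_output: str) -> float:
    """Rough 0-1 similarity between a model draft and the hand-edited target."""
    return difflib.SequenceMatcher(None, model_output, desired_output).ratio()


def pick_best_prompt(candidates, generate, desired_output):
    """Score each candidate prompt by how close its draft lands to the target.

    `generate(prompt) -> str` is a placeholder for the actual model call.
    Returns the (score, prompt) pair with the highest score.
    """
    scored = [(similarity(generate(p), desired_output), p) for p in candidates]
    return max(scored)
```

This is only a crude proxy (character-level similarity misses meaning-preserving edits), but it turns "closer to the desired output" into something measurable while iterating on prompt variants.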
i
Hi Jens W, my tech team would be happy to help with this. You can ask them questions on our Discord server: https://discord.gg/3DMCh9Ar @Jens W
s
My experience is that if you rely on prompt engineering, you need to upgrade and update prompts continuously. Fine-tuning, and chaining multiple models for different parts of the drafting task, can reduce uncertainty and the time and effort you spend updating prompts. So can finding ways to solve as much as possible mechanically, with if-else logic and plain code. From what you describe, parts of the problem might be better solved by something other than LLMs, especially if you need to piece certain parts together and be sure they are not altered along the way.
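The "must not be altered along the way" part of that advice can be handled entirely outside the model: assemble the verbatim source pieces deterministically in code, and only route the sections that actually need rewriting through the LLM. A minimal sketch (function and section names are illustrative, not from the original tool):

```python
def assemble_draft(sections: dict[str, str], order: list[str]) -> str:
    """Concatenate verbatim source sections in a fixed order.

    Anything assembled here is guaranteed to pass through unchanged;
    only sections deliberately sent to the LLM elsewhere get rewritten.
    """
    missing = [name for name in order if name not in sections]
    if missing:
        raise KeyError(f"missing sections: {missing}")
    return "\n\n".join(sections[name] for name in order)
```

With this split, the prompt only has to cover the genuinely generative step, which keeps it shorter and easier to maintain.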