# 06-technical-discussion
g
Hi everyone, I hope everyone is doing well at the hackathon! Unfortunately, I had to leave yesterday to focus on my graduate school studies this weekend as I approach the end of the semester. Out of curiosity, I was thinking of creating a ChatGPT plugin that would pull relevant help from Stack Overflow and GitHub discussions when a programmer hits compilation errors. Do you think this idea has any merit? I know GitHub Copilot X exists, but I am currently on the waitlist. Thank you!
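To make the idea a bit more concrete, here is a rough sketch of what the plugin's lookup step could look like, assuming the public StackExchange search API and the `requests` package (the parameters and field names below are illustrative, not a finished design):

```python
# Rough sketch: look up Stack Overflow questions that match a compiler error message.
# Assumes the public StackExchange REST API (api.stackexchange.com) and `requests`.
import requests

def search_stackoverflow(error_message: str, max_results: int = 5) -> list[dict]:
    """Return the top Stack Overflow questions matching an error message."""
    resp = requests.get(
        "https://api.stackexchange.com/2.3/search/advanced",
        params={
            "order": "desc",
            "sort": "relevance",
            "q": error_message,
            "site": "stackoverflow",
            "pagesize": max_results,
        },
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return [
        {"title": it["title"], "link": it["link"], "answered": it["is_answered"]}
        for it in items
    ]

if __name__ == "__main__":
    # Hypothetical error string just for demonstration.
    for hit in search_stackoverflow("error TS2345: Argument of type 'string'"):
        print(hit["title"], "->", hit["link"])
```

The plugin would then hand these titles and links (or the accepted answers behind them) back to ChatGPT as extra context for its explanation.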
Sean Durkin
It's definitely a good and validated idea. Check out phind.com as well. Personally, I think it would be cool if we had one that could update ChatGPT's knowledge with the documentation of a user-specified repo. ChatGPT is great at writing code involving libraries from before its knowledge cutoff, but is limited on more recent ones, which is a particular pain if everything you're building is in ML. This could solve that pain point.
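A rough sketch of that retrieval idea, assuming the pre-1.0 `openai` Python package, an `OPENAI_API_KEY` in the environment, and naive paragraph chunking (all of which are simplifying assumptions):

```python
# Minimal sketch of "teach ChatGPT a newer repo": embed the repo's docs once,
# then prepend the most relevant chunks to the user's question.
import numpy as np
import openai  # pre-1.0 interface assumed

EMBED_MODEL = "text-embedding-ada-002"

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(input=texts, model=EMBED_MODEL)
    return np.array([d["embedding"] for d in resp["data"]])

def build_index(doc_text: str) -> tuple[list[str], np.ndarray]:
    # Chunk by blank line; a real version would chunk by headings or tokens.
    chunks = [c.strip() for c in doc_text.split("\n\n") if c.strip()]
    return chunks, embed(chunks)

def retrieve(question: str, chunks: list[str], vectors: np.ndarray, k: int = 3) -> list[str]:
    q = embed([question])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def answer_with_docs(question: str, chunks: list[str], vectors: np.ndarray) -> str:
    context = "\n---\n".join(retrieve(question, chunks, vectors))
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using the documentation excerpts provided."},
            {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```

The same retrieval step could be exposed as a plugin endpoint so ChatGPT fetches the doc excerpts itself rather than the user pasting them in.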
Daniel Hsu
I've actually found that ChatGPT and GPT-4 are already quite good at answering questions about errors that I have. That said, for more obscure errors it's sometimes challenging to phrase the question with the right semantics. The biggest holes I've noticed in the current experience are:
• The quality of the returned answer really depends on the details you provide. In other words, if you're an experienced programmer working in an environment you're familiar with, you're more likely to supply the relevant context that leads to better results. Something like a more guided experience that leads the user to provide the right information could be helpful here (rough sketch below). Problems that pop up tend to take the form of "this error is really general and GPT can't give you a helpful answer without more info".
• There isn't a good way of verifying whether something is a "best practice" or not. This has led to some issues in my experience but is probably pretty tricky to solve.
• A lot of answers that don't work are posted on Stack Overflow, and I suspect GPT sometimes copies those and ends up providing misleading info, e.g. I've had it directly contradict itself in a subsequent message without me asking it to do so.
Happy to share more thoughts if it's helpful. I think there's a lot of potential value, but the key, IMO, is understanding the workflow really well, so you provide something that's noticeably more helpful than vanilla GPT-4.
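A rough sketch of what that guided flow might look like; the field names and prompt wording are placeholder assumptions, not a worked-out design:

```python
# Sketch of the "guided experience" from the first bullet: collect the context
# that usually makes error questions answerable before calling the model.
from dataclasses import dataclass, fields

@dataclass
class ErrorReport:
    language: str = ""
    framework_and_versions: str = ""
    full_error_output: str = ""
    failing_snippet: str = ""
    what_was_attempted: str = ""

def missing_fields(report: ErrorReport) -> list[str]:
    """Fields the user still needs to fill in before we query the model."""
    return [f.name for f in fields(report) if not getattr(report, f.name).strip()]

def build_prompt(report: ErrorReport) -> str:
    return (
        f"I'm using {report.language} with {report.framework_and_versions}.\n"
        f"Full error output:\n{report.full_error_output}\n"
        f"Relevant code:\n{report.failing_snippet}\n"
        f"What I was trying to do: {report.what_was_attempted}\n"
        "Explain the likely cause and a fix."
    )

report = ErrorReport(language="Python", full_error_output="ImportError: cannot import name 'X'")
print("Still need:", missing_fields(report))  # ask the user for these before calling GPT-4
```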
g
@Sean Durkin @Daniel Hsu Sean and Daniel, thank you for your input. Playing with ChatGPT for hours has made me realize it is pretty good at code debugging, but newer code confuses it and it gives the user wrong advice about why an error occurs. I noticed this yesterday when I asked it a question about the Notion API. This problem limits how much ChatGPT can help programmers. I will look into phind.com to see how it works, since I typically Google my way out of a programming problem.
@Daniel Hsu I agree with your three points and with the idea that understanding the workflow is necessary to make this worthwhile and genuinely valuable. It would have to be better than vanilla GPT-4 and create more impact. I am happy to take this conversation offline if you are free to discuss it further; please DM me if you are interested.
@Sean Durkin Same goes for you; if you want to discuss this further, please feel free to DM me.