# 06-technical-discussion
b
Per user monthly cost estimates for OpenAI model usage. Ouch! GPT-4 is a no-go.
👀 2
🙌 1
b
What's the source of this @Bart Trzynadlowski? Would be great to read
b
It’s a simple calculation I made. The price per token is known so you can compute these numbers directly.
I racked up $32 in GPT-4 charges in a single day of testing my app, so the numbers are definitely accurate.
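For anyone wanting to reproduce the estimate, here's a minimal sketch of the kind of back-of-the-envelope calculation being described. The usage figures (requests/day, tokens/request) are hypothetical placeholders, and the per-1k-token prices are the publicly listed rates at the time (GPT-3.5-turbo $0.002, GPT-4 8K output $0.06) -- not necessarily the exact assumptions behind the original chart.

```python
# Back-of-the-envelope monthly cost per user.
# Prices are USD per 1,000 tokens (rates at the time of this discussion).
PRICE_PER_1K_TOKENS = {"gpt-3.5-turbo": 0.002, "gpt-4": 0.06}

def monthly_cost(model, requests_per_day, tokens_per_request, days=30):
    """Estimate a single user's monthly spend for the given usage pattern."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Hypothetical "active user": 50 requests/day at ~2,000 tokens each.
for model in PRICE_PER_1K_TOKENS:
    print(model, round(monthly_cost(model, 50, 2000), 2))
# gpt-3.5-turbo comes out around $6/month, gpt-4 around $180/month --
# a 30x gap, which is why GPT-4 looks like a no-go at these rates.
```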
b
Nice one — helpful reference!
s
So out of the gate, GPT-4 will always cost more than the "best" GPT-3.5.
b
There's a small error in the calculation. These numbers use the output token cost, which is 2x the input token cost; for example, I priced GPT-4 at $0.06 per 1,000 tokens. The details depend on your implementation -- sometimes a prompt generates output 2-3x longer than the input (think writing code). But if you assume a 50/50 split between input and output, the effective GPT-4 price per 1,000 tokens is $0.045 (i.e., multiply these values by 0.75). So the numbers are still very much in the right ballpark, but 25% lower might be a fairer assessment.
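The 0.75 correction factor falls out directly from the two posted rates. A quick check, using the GPT-4 8K-context prices at the time ($0.03/1k input, $0.06/1k output) and the 50/50 split assumed above:

```python
# Blended per-1k-token GPT-4 price under a 50/50 input/output split.
input_rate, output_rate = 0.03, 0.06  # USD per 1k tokens (8K-context rates at the time)

blended = 0.5 * input_rate + 0.5 * output_rate
print(round(blended, 3))                # 0.045 -> effective $/1k tokens
print(round(blended / output_rate, 2))  # 0.75  -> factor to apply to output-only estimates
```

If your workload skews toward long outputs (code generation), the factor drifts back toward 1.0, so the output-only numbers act as a worst-case bound.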
p
My 2 cents:
- Does your use case really need the entire 4K context window? Most other implementations I've seen first run a vector-database search and insert the right segment/chunk into the prompt, resulting in much smaller token usage with the same or even better results.
- Are you able to drive enough value, and charge enough, to offset the costs? With that much generation you're probably creating enormous value for a business, assuming it's a big pain point you're addressing.
- For now, perhaps use free Azure credits for GPT-4 (pretty easy to get $25K free, then up to $150K free). I'm sure you'll figure out creative avenues to save cost or increase value with that runway. I wouldn't let costs stop you from proving product/market fit at such an early stage.
b
Yes, there are a lot of use cases, even with vector databases, that need to operate on large chunks of information. Code-writing is one of those. In my case, I use the LLM to generate program actions and to maintain state. State itself is small but the prompt necessary to instruct GPT how to manipulate the state is relatively large.
I do think this pricing analysis shows why there are so few deployed apps at scale. It's extremely cost-prohibitive to build a natural-language interface that a user can use freely throughout the day with GPT-4-like capabilities. A code-writing extension for VS Code currently in beta charges $20/mo. but openly uses GPT-4 to write code. The developer is swallowing the substantial cost of his closed pool of early users and hoping investors will fund it.
p
I can definitely see why the context has to be as complete as possible for coding. Thanks for the context. Good to know what to watch out for!
Here’s the free $150K of OpenAI credits I’m talking about: https://www.microsoft.com/en-us/startups. Maybe the bet here is that GPT-4 pricing will go down in the future, and when that happens, those who “ate” the costs and captured the revenue will be in the best position to scale further.
❤️ 1
b
Oh nice! That's a great resource. Thanks for sharing!
👍 1