
We dive into GPT-3.5 and GPT-4

We offer access to GPT-3.5, GPT-4, and GPT-4 Turbo in Smitty. In this post, we dive deeper into the context limits and costs, and into how Smitty handles these things. We start by explaining how the system is set up by OpenAI; at the end of the post, we dive deeper into the Smitty part.

GPT-3.5

Release: March 15, 2022

Knowledge of events: Up to September 2021

Parameters: 175 billion

Input: Text only

Context limit: 16,000 tokens

Factual responses: Occasional errors



GPT-4

Release: March 14, 2023

Knowledge of events: Up to April 2023 (GPT-4 Turbo)

Parameters: Not officially disclosed

Input: Text and images

Context limit: 128,000 tokens (GPT-4 Turbo)

Factual responses: About 40% more accurate than GPT-3.5, according to OpenAI


Tokens and words broken down

The AI models we currently support operate with tokens. For regular English text, one token corresponds to roughly 4 characters, or about 3/4 of a word. So 100 tokens is approximately 75 words.

We did some research into the maximum context limit each model can handle and roughly how many words that corresponds to.

Number of words and tokens:

GPT-3.5 Turbo 4k: 4,096 tokens, approximately 3,000 words

GPT-3.5 Turbo 16k: 16,000 tokens, approximately 12,000 words

GPT-4 32k: 32,000 tokens, approximately 24,000 words

GPT-4 Turbo: 128,000 tokens, approximately 96,000 words
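The token-to-word rule above is easy to apply yourself. A minimal sketch (the function name is ours, and the 3/4 ratio is the rough approximation from this post, not an exact conversion):

```python
# Rough approximation from this post: 1 token ~ 3/4 of an English word.
def tokens_to_words(tokens):
    return int(tokens * 0.75)

for context in (4_096, 16_000, 32_000, 128_000):
    print(f"{context:>7} tokens ~ {tokens_to_words(context):>6} words")
```

Real token counts depend on the tokenizer and the text itself, so treat these numbers as ballpark figures only.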


Additionally, the training data for GPT-4 Turbo, unlike the original GPT-4, runs up to April 2023, while the free version of GPT-3.5 relies only on data up to September 2021. Essentially, the larger context window allows users to input longer texts when composing prompts or questions. Consequently, ChatGPT can take more intricate details and data into account, resulting in more accurate and comprehensive responses.



Source information as of February 2023:

GPT-3.5 Turbo 4k: $0.0015 per 1,000 input tokens, $0.0020 per 1,000 output tokens

GPT-3.5 Turbo 16k: $0.0005 per 1,000 input tokens, $0.0015 per 1,000 output tokens

GPT-4 32k: $0.06 per 1,000 input tokens, $0.12 per 1,000 output tokens

GPT-4 Turbo 128k: $0.01 per 1,000 input tokens, $0.03 per 1,000 output tokens


Costs GPT-4 Turbo

So, a quick calculation shows that processing a full 128,000-token input with GPT-4 Turbo costs 128 × $0.01 = $1.28. The output is the AI's response, which can also be lengthy; depending on its length, it can add another $1 to $2. A single question can therefore quickly amount to around $3, let alone a conversation where previous answers and context are sent along as well. That adds up quickly.
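The calculation above can be sketched as a small helper. This is our own illustrative function, using the per-1,000-token prices quoted in this post, not an official OpenAI utility:

```python
# Estimate API cost from token counts, given per-1,000-token prices.
def estimate_cost(input_tokens, output_tokens, input_price_per_1k, output_price_per_1k):
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# A full 128k-token prompt to GPT-4 Turbo with a 50k-token answer:
total = estimate_cost(128_000, 50_000, 0.01, 0.03)
print(f"${total:.2f}")  # input alone is $1.28; this answer adds $1.50
```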

Costs GPT-3.5 Turbo 16k

The most cost-effective model is GPT-3.5 Turbo with a 16k context. A quick calculation: 16,000 tokens / 1,000 × $0.0005 = $0.008, so a full-context input costs less than a cent. Chatting with this bot is very cheap.
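To see how the models compare, the same calculation can be run across the whole pricing table above. The model names and the dictionary layout here are our own; the context sizes and input prices come from this post:

```python
# Context sizes and per-1,000-token input prices quoted in this post.
PRICING = {
    "gpt-3.5-turbo-4k":  {"context": 4_096,   "input_per_1k": 0.0015},
    "gpt-3.5-turbo-16k": {"context": 16_000,  "input_per_1k": 0.0005},
    "gpt-4-32k":         {"context": 32_000,  "input_per_1k": 0.06},
    "gpt-4-turbo-128k":  {"context": 128_000, "input_per_1k": 0.01},
}

def full_context_input_cost(model):
    """Cost of filling a model's entire context window with input."""
    spec = PRICING[model]
    return spec["context"] / 1000 * spec["input_per_1k"]

for model in PRICING:
    print(f"{model}: ${full_context_input_cost(model):.4f}")
```

This makes the gap visible at a glance: a maxed-out GPT-4 32k input costs $1.92, while the same exercise on GPT-3.5 Turbo 16k costs $0.008.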

For the latest prices, check the OpenAI website.


Word count limits of AI and how Smitty handles them

As you can see, there is a limit on the number of words an AI can process. Within Smitty, we have a solution for this: documents are split into parts. Smitty can split a document into chunks of up to 10,000 characters. Because of how Smitty is set up, entire documents are not sent to the AI; instead, the relevant part is retrieved when a question is asked.

Because we work with a smart search system for relevant information, this saves costs: it reduces credit usage for the OpenAI integration, and it also saves the business the effort of programming a Q&A.

You just upload your document or data, and Smitty is trained on this information.
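The general idea of chunking and retrieval can be sketched as follows. This is purely illustrative and not Smitty's actual implementation; the function names are ours, and the word-overlap scoring stands in for whatever smart search system Smitty uses internally:

```python
# Illustrative sketch only -- not Smitty's actual implementation.
def split_into_chunks(text, max_chars=10_000):
    """Split a document into chunks of up to 10,000 characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def most_relevant_chunk(chunks, question):
    """Naive retrieval: pick the chunk sharing the most words with the question."""
    question_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(question_words & set(c.lower().split())))

# Only the best-matching chunk is sent to the AI, not the whole document,
# which keeps each request well under the model's context limit.
chunks = split_into_chunks("..." )  # your uploaded document goes here
context = most_relevant_chunk(chunks, "What is the refund policy?")
```

Because only one chunk plus the question is sent per request, the token count per API call stays small regardless of how large the uploaded document is.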