GPT-4 Turbo is more powerful and considerably cheaper than GPT-4. Input tokens cost a third of the previous price and output tokens half. In practice, that works out to roughly 2.75 times cheaper for most customers.
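As a back-of-the-envelope check, the blended saving can be computed from the list prices ($0.03/$0.06 per 1K input/output tokens for GPT-4 versus $0.01/$0.03 for GPT-4 Turbo). The workload mix below, nine input tokens per output token, is a hypothetical one chosen for illustration; the exact ratio varies per customer:

```python
# Back-of-the-envelope cost comparison (USD per 1K tokens).
GPT4 = {"input": 0.03, "output": 0.06}
GPT4_TURBO = {"input": 0.01, "output": 0.03}

def cost(prices, input_tokens, output_tokens):
    """Total cost in USD for a given token usage."""
    return (input_tokens / 1000) * prices["input"] \
         + (output_tokens / 1000) * prices["output"]

# Hypothetical workload: 9,000 input tokens, 1,000 output tokens.
old = cost(GPT4, 9_000, 1_000)        # $0.33
new = cost(GPT4_TURBO, 9_000, 1_000)  # $0.12
print(f"GPT-4 Turbo is {old / new:.2f}x cheaper")  # → 2.75x
```

A heavier share of output tokens pulls the saving toward 2x, a prompt-heavy workload pushes it toward 3x, which is why "about 2.75 times cheaper" is a blended figure rather than a hard guarantee.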
The context window of a large language model (LLM) determines how much of the conversation the model can "remember". If a model has a context window of 4,000 tokens (about 3,000 words), everything earlier than the most recent 4,000 tokens is ignored, and answers may become less accurate or even contradict earlier ones. GPT-4 Turbo raises the context length to 128k tokens, which widens the model's "awareness" considerably and lets conversations stay coherent far longer.
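To illustrate what happens at the boundary, here is a minimal sketch of what a client does when a conversation outgrows the window: drop the oldest messages until the remainder fits. The token count is approximated as one token per four characters, a rough rule of thumb, not the model's real tokenizer:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[str], context_window: int) -> list[str]:
    """Drop the oldest messages until the total fits in the window."""
    kept = list(messages)
    while kept and sum(approx_tokens(m) for m in kept) > context_window:
        kept.pop(0)  # everything before this point is "forgotten"
    return kept

# A made-up conversation of 100 messages of ~100 tokens each.
history = [f"msg {i}: " + "x" * 400 for i in range(100)]

# With a 4,000-token window only the most recent messages survive;
# with a 128k window the entire conversation fits.
print(len(trim_history(history, 4_000)))    # far fewer than 100
print(len(trim_history(history, 128_000)))  # 100
```

This is why a larger window matters in practice: the trimming step simply stops happening for most real conversations.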
Moving with the times
One of the most frequently heard complaints about ChatGPT and its GPT models is that the data the model was trained on only ran up to 2021. That made it outdated in many cases, rendering many use cases unhelpful.
GPT-4 Turbo is based on data up to April 2023, a serious leap forward in time.
Images and voice
GPT-4 Turbo now accepts images as input. It can analyze images, classify them, and generate descriptions.
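Concretely, a vision request to the Chat Completions API passes the image as part of the message content. The sketch below only builds the request body, no network call is made; the image URL is a placeholder, and an actual call requires the openai client and an API key:

```python
import json

# Hypothetical image URL; a real call could also use a base64 data URL.
IMAGE_URL = "https://example.com/photo.jpg"

request_body = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                # Text and image travel together in one user message.
                {"type": "text", "text": "Describe and classify this image."},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }
    ],
    "max_tokens": 300,
}

print(json.dumps(request_body, indent=2))
```

The interesting design choice is that `content` becomes a list of typed parts, so text and images can be freely interleaved in a single prompt.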
The text-to-speech model can read text aloud in voices that sound (eerily) real. If you don't believe us, feel free to listen for yourself. We'll wait. There are six different preset voices to choose from.
At first glance, this doesn't seem like such a groundbreaking feature, but it makes interacting with your AI assistant far more approachable and natural. Combined with Whisper V3, OpenAI's speech recognition model, you can build powerful voice interactions with it.
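The shape of such an interaction is a simple pipeline: speech in, text through the model, speech out. The sketch below stubs all three steps with placeholder functions so it runs offline; in a real application each stub would be an API call (Whisper V3 for transcription, a GPT model for the reply, the TTS endpoint for audio):

```python
# Minimal sketch of a voice-assistant loop. All three steps are stubs;
# the point is the data flow, not the actual API calls.

def transcribe(audio: bytes) -> str:
    # Stub standing in for a Whisper V3 transcription call.
    return "What is the capital of France?"

def chat(prompt: str) -> str:
    # Stub standing in for a chat-completion call.
    return f"You asked: {prompt!r}. (A real model would answer here.)"

def speak(text: str) -> bytes:
    # Stub standing in for a text-to-speech call; returns "audio" bytes.
    return text.encode("utf-8")

def voice_turn(audio_in: bytes) -> bytes:
    """One full interaction: audio in -> text -> reply -> audio out."""
    return speak(chat(transcribe(audio_in)))

reply_audio = voice_turn(b"<microphone capture>")
print(reply_audio[:40])
```

Because each stage only exchanges plain text or bytes, the three models can be swapped independently, which is what makes the combination so composable.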
Protection against copyright claims
By introducing Copyright Shield, OpenAI guarantees that users of ChatGPT Enterprise and its API will be protected against copyright claims. Sam Altman's exact words were "defend our customers" and "pay the costs incurred." We wonder whether that will actually play out as promised...