Today at its DevDay event, OpenAI announced a number of updates across its platform.
One of the announcements is that the company now allows users to create their own custom versions of ChatGPT, called GPTs, for specific purposes.
Some examples of GPTs that have already been created include Game Time, which explains rules to board games; Tech Advisor, which provides step-by-step guides on troubleshooting technology; and The Negotiator, which helps people improve their negotiation skills.
Anyone can create a GPT, even if they don’t have any coding experience. Once created, GPTs can also be shared with others.
“We believe the most incredible GPTs will come from builders in the community. Whether you’re an educator, coach, or just someone who loves to build helpful tools, you don’t need to know coding to make one and share your expertise,” OpenAI wrote in an announcement.
Later this month, OpenAI will launch the GPT Store so that you can find and use GPTs created by others. It will also begin spotlighting the most useful GPTs across different categories.
Companies can also create and deploy GPTs internally without sharing them outside the organization.
OpenAI also revealed the first preview of the next iteration of GPT-4, called GPT-4 Turbo. The new model is more capable than the current version and has knowledge of world events up to April 2023. ChatGPT Plus has also been updated to include information up to that date.
Another major change is the addition of a 128k context window, which fits the equivalent of 300 pages of text into a single prompt, OpenAI explained.
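The 300-page figure is roughly consistent with common rules of thumb for English text. A quick back-of-envelope check (both rates below are assumptions, not OpenAI's numbers):

```python
# Back-of-envelope check of the "300 pages per 128k tokens" claim.
# Assumed rates: ~0.75 words per token, ~320 words per printed page.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 320

words = CONTEXT_TOKENS * WORDS_PER_TOKEN  # 96,000 words
pages = words / WORDS_PER_PAGE            # 300 pages
print(f"{words:.0f} words is roughly {pages:.0f} pages")
```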
The underlying technology was also significantly improved, allowing OpenAI to offer GPT-4 Turbo input tokens at one-third the price of GPT-4 and output tokens at half the price.
According to OpenAI, other features in GPT-4 Turbo include improvements to the accuracy of function calling, improved instruction following, support for JSON mode, reproducible outputs, and log probabilities for the most likely output tokens.
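As a rough illustration, several of these features map to fields in a Chat Completions request body. The sketch below only builds the JSON payload locally; the model name and field names follow OpenAI's API reference for the preview release, so treat them as assumptions rather than a definitive request:

```python
import json

# Sketch of a Chat Completions request body exercising the newly
# announced GPT-4 Turbo features. Nothing is sent over the network;
# this just shows the shape of the payload.
request = {
    "model": "gpt-4-1106-preview",  # assumed preview model name
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three board games."},
    ],
    "response_format": {"type": "json_object"},  # JSON mode
    "seed": 42,  # reproducible outputs (best effort, per the announcement)
}

body = json.dumps(request)
```

JSON mode constrains the model to emit valid JSON, and a fixed `seed` asks the API to make repeated calls return the same output where possible.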
The company is also releasing an updated version of GPT-3.5 Turbo, which will support a 16K context window and enable parallel function calling.
In addition to GPT-4 Turbo, OpenAI also announced the Assistants API, which can be used to create AI assistants. Currently the API supports three types of tools: Code Interpreter, Retrieval, and Function calling, the company explained.
Code Interpreter is used to allow assistants to run code iteratively and solve coding and math problems. Retrieval allows the API to use knowledge outside of OpenAI models, like proprietary domain data, product information, or user-provided documents. Function calling allows assistants to invoke functions and incorporate the response into their messages.
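The function-calling round trip described above can be sketched offline. The tool-schema shape follows OpenAI's documentation, but the model's tool call is hard-coded here (and `get_weather` is a hypothetical stand-in), so this is an illustration of the flow rather than a live example:

```python
import json

def get_weather(city: str) -> dict:
    """Hypothetical stand-in for a real lookup the assistant can invoke."""
    return {"city": city, "forecast": "sunny", "high_c": 21}

# Tool schema advertised to the model (shape per OpenAI's docs).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get today's forecast for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# What a model-issued tool call looks like on the wire (hard-coded
# here to keep the example offline).
tool_call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}

# Dispatch: run the named function, then feed the result back to the
# model as a tool message so it can incorporate it into its reply.
dispatch = {"get_weather": get_weather}
result = dispatch[tool_call["name"]](**json.loads(tool_call["arguments"]))
tool_message = {"role": "tool", "content": json.dumps(result)}
```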
Another announcement made during OpenAI DevDay was new multimodal capabilities across its APIs. These include accepting images as prompt inputs, image creation with DALL-E 3, and text to speech.
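For a sense of what these capabilities look like at the request level, the sketch below builds minimal payloads locally. The field names follow OpenAI's API reference, the image URL is a placeholder, and no request is sent:

```python
# Image input: a user message whose content mixes text and an image URL.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.png"}},  # placeholder
    ],
}

# DALL-E 3 generation and text-to-speech use separate endpoints;
# their minimal request bodies look roughly like:
image_request = {"model": "dall-e-3", "prompt": "a watercolor of a lighthouse"}
speech_request = {"model": "tts-1", "voice": "alloy", "input": "Hello from DevDay."}
```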
The company also announced Copyright Shield, a new offering under which it will defend customers facing copyright-related legal claims and pay the costs incurred.
The final announcements were the release of Whisper v3 and the open sourcing of Consistency Decoder. Whisper is a speech recognition model, and this release offers improved performance across several languages. Consistency Decoder is an alternative to the Stable Diffusion VAE decoder that produces higher-quality images.