GPT models

The GPT models by OpenAI have been trained for the comprehension of both natural language and code. They produce textual outputs based on given inputs, which are often called “prompts”. Crafting a prompt essentially means programming a GPT model, typically by providing instructions or examples of task completion.

With the use of GPTs, applications can be developed to:

  • Compose documents
  • Generate computer code
  • Respond to queries about a database
  • Conduct text analysis
  • Develop conversational agents
  • Implement a natural language interface in software
  • Provide tutoring in various subjects
  • Translate between different languages
  • Create game characters
  • And much more!

To interact with a GPT model via the OpenAI API, you send a request containing your inputs and your API key, and receive the model's output in the response. The latest models as of June 2023, gpt-4 and gpt-3.5-turbo, are accessed through the chat completions API endpoint; only the older legacy models remain available through the completions API endpoint.
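As a minimal sketch, a chat completions request can be assembled with the standard library alone. The endpoint URL and request shape below follow the chat completions API as of June 2023; reading the key from the OPENAI_API_KEY environment variable is a common convention, not a requirement.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Build (but do not send) a chat completions request.

    The API key is read from the OPENAI_API_KEY environment variable.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )

req = build_chat_request("Say hello in French.")
# Sending the request with urllib.request.urlopen(req) returns JSON whose
# reply text sits at response["choices"][0]["message"]["content"].
```

In practice most applications use the official `openai` client library instead of raw HTTP, but the request body is the same either way.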

Newer models (2023–): gpt-4, gpt-3.5-turbo
Older models (2020–2022): text-davinci-003, text-davinci-002, davinci, curie, babbage, ada

Generally, you can experiment with GPTs in the playground. If you're not sure which model to use, start with gpt-3.5-turbo or gpt-4.

GPT best practices

Understanding the optimal strategies for working with gpt-4 can greatly enhance the performance of your application. The ways in which GPTs fail, and the strategies to mitigate or rectify those failures, are not always self-evident. There is a particular skill set, known as “prompt engineering”, that is crucial for working effectively with GPTs; recommendations on writing prompts are collected in the GPT best practices guide.

However, as the field has advanced, its ambit has expanded from merely crafting the prompt to designing systems that utilize model queries as elements. For more insights, you can refer to our guide on GPT best practices. This guide discusses techniques to amplify model reasoning, minimize the chances of model hallucinations, and much more. Additional helpful resources, including code examples, can be found in the OpenAI Cookbook.

Which GPT model should I use?

OpenAI primarily suggests employing either gpt-4 or gpt-3.5-turbo, depending on the complexity of the tasks you intend the models to perform. Generally, gpt-4 outperforms in an extensive range of evaluations and is adept at diligently adhering to intricate instructions.

On the other hand, gpt-3.5-turbo tends to follow only a segment of a complex multi-part instruction. gpt-4 is less prone to “hallucination”, a term referring to the generation of fabricated information, compared to gpt-3.5-turbo.

Moreover, gpt-4 boasts a larger context window with a maximum capacity of 8,192 tokens, as opposed to 4,096 tokens for gpt-3.5-turbo. However, gpt-3.5-turbo delivers outputs with decreased latency and is significantly more cost-effective per token.
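The context-window figures above can be used to check whether a prompt will fit before sending it. A minimal sketch follows; the ~4 characters per token figure is only a rough heuristic for English text (real counts come from a tokenizer such as tiktoken), and the 500-token reply reserve is an illustrative choice, not an API requirement.

```python
# Context limits for the two models as of June 2023.
CONTEXT_LIMITS = {"gpt-4": 8192, "gpt-3.5-turbo": 4096}

def estimate_tokens(text: str) -> int:
    # Crude approximation: English text averages roughly 4 characters per token.
    return max(1, len(text) // 4)

def fits_context(text: str, model: str, reserved_for_reply: int = 500) -> bool:
    """Check whether a prompt, plus room for the reply, fits the model's window."""
    return estimate_tokens(text) + reserved_for_reply <= CONTEXT_LIMITS[model]
```

A long document that overflows gpt-3.5-turbo's 4,096-token window may still fit gpt-4's 8,192-token window, which is one reason to keep both models in play.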

It’s advisable to experiment in the playground to identify which models offer the best cost-performance balance for your specific use case. A prevalent design pattern involves utilizing multiple distinct query types, each dispatched to the most suitable model for handling them.
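The routing pattern just described could be sketched as a simple lookup from query type to model. The query-type labels and the mapping below are illustrative choices for this example, not an official recommendation.

```python
# Route each query type to the model best suited for it.
MODEL_FOR_QUERY_TYPE = {
    "classification": "gpt-3.5-turbo",   # simple task; latency/cost sensitive
    "summarization": "gpt-3.5-turbo",
    "multi_step_reasoning": "gpt-4",     # complex, multi-part instructions
}

def pick_model(query_type: str) -> str:
    # Fall back to the cheaper model for unknown query types.
    return MODEL_FOR_QUERY_TYPE.get(query_type, "gpt-3.5-turbo")
```

The right split between query types is exactly what playground experimentation should establish for your use case.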


Why do model outputs vary?

The API is non-deterministic by default, implying that even with the same prompt, the completion might vary with each call. Although setting the temperature to 0 will make the outputs primarily deterministic, a small level of variability will persist.

How should the temperature parameter be adjusted?

The temperature parameter impacts the consistency and creativity of the outputs. Lower values will yield more consistent outputs, while higher values will produce more diverse and imaginative results. The temperature value should be chosen based on the balance between coherence and creativity required for your specific application.
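Temperature is passed as a field in the request body. The sketch below serializes two request bodies, one deterministic-leaning and one creative; the specific prompts and the value 1.2 are illustrative. The 0–2 range check matches the chat completions API's documented bounds as of mid-2023.

```python
import json

def chat_payload(prompt: str, temperature: float, model: str = "gpt-3.5-turbo") -> str:
    """Serialize a chat completions request body with an explicit temperature."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")
    return json.dumps({
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })

# Low temperature for extraction-style tasks, higher for open-ended writing.
deterministic = chat_payload("Extract the date from: 'Meeting on 2023-06-01'", 0.0)
creative = chat_payload("Write a short poem about the sea.", 1.2)
```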

Are the latest models available for fine-tuning?

Currently, fine-tuning is only possible for the base GPT-3 models (davinci, curie, babbage, and ada). The fine-tuning guide offers more details on how to use fine-tuned models.

Is the data input into the API stored?

As of March 1st, 2023, API data is retained for 30 days but is no longer used to improve OpenAI’s models. More information is available in OpenAI’s data usage policy; some endpoints offer zero retention.

How can I enhance the safety of my application?

To add a moderation layer to the outputs of the Chat API, you can follow the moderation guide. This will prevent the display of content that breaches OpenAI’s usage policies.
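A moderation layer boils down to gating display on the moderation endpoint's verdict. The helper below operates on an already-parsed /v1/moderations response; the `results[0]["flagged"]` shape follows that endpoint as of mid-2023, and the fail-closed behavior on a missing or empty response is a design choice of this sketch.

```python
def is_safe_to_display(moderation_response: dict) -> bool:
    """Gate model output on a parsed /v1/moderations response.

    Each entry in 'results' carries a boolean 'flagged' field; the text is
    shown only if nothing was flagged. Empty or malformed responses are
    treated as unsafe (fail closed).
    """
    results = moderation_response.get("results")
    if not results:
        return False
    return not any(r.get("flagged", True) for r in results)
```

In an application you would call the moderation endpoint on the Chat API's output, parse the JSON, and only render the text when this check passes.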

Should I opt for ChatGPT or the API?

While ChatGPT offers a chat interface to the models in the OpenAI API and several built-in features such as integrated browsing, code execution, ChatGPT plugins, etc., the OpenAI API offers more flexibility.
