Chat Models

OpenAI’s chat models are designed to accept a list of messages as input and return a model-generated message as output. The chat format is tailored to multi-turn dialogue, but it works just as well for single-turn tasks that don’t involve any conversation.

Here’s an example of what an API call might look like:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
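The call returns a response object containing the model’s reply along with metadata. Assuming the legacy openai Python library (versions before 1.0), as used above, the assistant’s message can be extracted like this:

# The assistant's reply lives in the first choice of the response
reply = response["choices"][0]["message"]["content"]
print(reply)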

Full details are available in OpenAI’s API reference documentation.

Inputs

The primary input for any chat model is the "messages" parameter. This must be an array of message objects, where each object carries a role (either "system", "user", or "assistant") and the message content. Conversations can range from a single message to many back-and-forth exchanges between user and assistant.

Conversations are usually structured with a system message leading the way, followed by alternating messages between the user and the assistant.

The system message plays a crucial role in defining the assistant’s behavior. You can tweak the assistant’s persona or give particular instructions about its conduct throughout the conversation using this message. However, it’s worth noting that the system message is not compulsory; the model’s behavior without a system message would likely mirror that of using a generic instruction like “You are a helpful assistant.”
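For example, a system message like the following steers the tone of every subsequent reply (the persona here is illustrative, not from the original example):

# A system message that shapes the assistant's behavior for the whole conversation
messages = [
    {"role": "system", "content": "You are a terse assistant who answers in one sentence."},
    {"role": "user", "content": "What does the messages parameter contain?"}
]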

The user’s messages are the commands or remarks that the assistant responds to. Assistant messages primarily store prior responses from the assistant, but you can also craft these to provide examples of the desired behavior.
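One way to use assistant messages as behavioral examples is to seed the history with a worked exchange before the real question. Here is a minimal sketch of that few-shot pattern (the example content is illustrative, not from the source):

# Seed the history with one example exchange, then ask the real question
messages = [
    {"role": "system", "content": "Translate corporate jargon into plain English."},
    {"role": "user", "content": "Let's circle back on this offline."},
    {"role": "assistant", "content": "Let's discuss this privately later."},
    {"role": "user", "content": "We need to leverage our synergies."}
]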

Keeping a record of the conversation history is crucial when user instructions refer to previous messages. For instance, the user’s final question,

Where was it played?

in the given example, only makes sense in light of the previous messages about the 2020 World Series. Because chat models have no memory of past requests, all relevant information must be supplied in the conversation history with each request. If a conversation exceeds the model’s token limit, it will need to be shortened in some way, for example by dropping the oldest messages, as in the sketch below.
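One simple truncation strategy is to keep the system message and drop the oldest user/assistant turns until the history fits the budget. Here is a minimal sketch assuming the tiktoken library for token counting; it ignores the small per-message formatting overhead the API adds, and truncate_history is a hypothetical helper, not part of the OpenAI API:

import tiktoken

def truncate_history(messages, model="gpt-3.5-turbo", max_tokens=4000):
    # Count tokens in each message's content (per-message overhead ignored)
    enc = tiktoken.encoding_for_model(model)
    count = lambda m: len(enc.encode(m["content"]))

    # Always keep the system message; only older turns are candidates to drop
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    total = sum(count(m) for m in system + rest)
    # Drop the oldest non-system messages until the history fits the budget
    while rest and total > max_tokens:
        total -= count(rest.pop(0))
    return system + rest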
