ChatGPT API Functions

OpenAI announced updates including more steerable API models, ChatGPT API function calling capabilities, longer context, and lower prices. The team had unveiled gpt-3.5-turbo and gpt-4 earlier in the year, and within a span of a few months, developers had built impressive applications using these models.

They were now introducing a series of exciting updates:

  • A new function calling capability in the Chat Completions API.
  • Updated, more steerable versions of gpt-4 and gpt-3.5-turbo.
  • A new 16k context version of gpt-3.5-turbo, an improvement over the standard 4k version.
  • A 75% price reduction on their state-of-the-art embeddings model.
  • A 25% price reduction on input token costs for gpt-3.5-turbo.
  • A deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models.
  • All of these models retained the same data privacy and security assurances that were first announced on March 1.

ChatGPT API function calling

Developers can now describe functions to gpt-4-0613 and gpt-3.5-turbo-0613, and the models will intelligently choose to output a JSON object containing arguments to call those functions. This provides a reliable way to connect GPT’s capabilities with external tools and APIs.

Both models have undergone fine-tuning to identify when a function call is required based on user input, and to respond with JSON that aligns with the function signature. Function calling enables developers to get structured data from the model in a more consistent manner. For instance, developers can:

  • Build chatbots that respond to queries by leveraging external tools, such as ChatGPT Plugins.
  • Transform inquiries like “Email Anya to see if she wants to get coffee next Friday” into a function call like send_email(to: string, body: string), or “What’s the weather like in Boston?” into get_current_weather(location: string, unit: 'celsius' | 'fahrenheit').
  • Change natural language into API calls or database queries.
  • Convert a question like “Who are my top ten customers this month?” into an internal API call like get_customers_by_revenue(start_date: string, end_date: string, limit: int), or “How many orders did Acme, Inc. place last month?” into a SQL query using sql_query(query: string).
  • Extract structured data from text, for example by defining a function named extract_people_data(people: [{name: string, birthday: string, location: string}]) to capture all people mentioned in a Wikipedia article.
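As an illustrative sketch, a function like extract_people_data above would be described to the API as a JSON Schema object; the field descriptions below are hypothetical, not taken from the announcement:

```python
# Hypothetical JSON Schema description of the extract_people_data function
# from the example above, in the shape the Chat Completions API expects
# for a function definition.
extract_people_data = {
    "name": "extract_people_data",
    "description": "Extract all people mentioned in a Wikipedia article",
    "parameters": {
        "type": "object",
        "properties": {
            "people": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "birthday": {"type": "string"},
                        "location": {"type": "string"},
                    },
                    "required": ["name"],
                },
            }
        },
        "required": ["people"],
    },
}
```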

These use cases are enabled by new API parameters in the /v1/chat/completions endpoint: functions and function_call. Developers can get started with the documentation and are encouraged to add evals if they find cases where function calling could be improved.
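A minimal sketch of the round trip for the get_current_weather example above. The assistant message is simulated rather than fetched from the API, and the weather implementation returns made-up data; the key point is that the model only returns arguments as a JSON string, and the application runs the function itself:

```python
import json

# Function definition in the JSON Schema shape the Chat Completions API
# expects in its `functions` parameter.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City, e.g. Boston, MA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

# Hypothetical local implementation; the model never executes code,
# it only proposes a call with arguments.
def get_current_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit}  # made-up data

# Simulated assistant response: when the model decides a call is needed,
# it returns a message whose function_call carries JSON-encoded arguments.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston, MA", "unit": "fahrenheit"}',
    },
}

# Dispatch: parse the arguments and invoke the matching local function.
call = assistant_message["function_call"]
args = json.loads(call["arguments"])
result = {"get_current_weather": get_current_weather}[call["name"]](**args)
```

In a real application, the function result would be appended to the conversation as a new message so the model can compose a natural-language answer from it.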

New models


GPT-4

The gpt-4-0613 model includes the newly added function calling feature. The gpt-4-32k-0613 model carries the same improvements as gpt-4-0613, along with an extended context length for better comprehension of larger texts.

With these updates, OpenAI plans to invite many more people from the waitlist to try GPT-4 over the coming weeks, with the goal of removing the waitlist entirely. They thank everyone who has been waiting patiently and are eager to see what people build with GPT-4.


GPT-3.5 Turbo

The gpt-3.5-turbo-0613 model offers the same function calling capability as GPT-4, as well as more reliable steerability via the system message. Together, these two features let developers guide the model’s responses more effectively.
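A purely illustrative sketch of how the two features combine in a single request body; the system message text and the function shown are hypothetical:

```python
# Illustrative Chat Completions request body combining the two features:
# a system message to steer tone, and the `functions` parameter for
# function calling. The function definition is hypothetical.
request_body = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [
        # The 0613 models follow system-message instructions more reliably.
        {"role": "system", "content": "You are a concise assistant. Reply in one sentence."},
        {"role": "user", "content": "What's the weather like in Boston?"},
    ],
    "functions": [
        {
            "name": "get_current_weather",  # hypothetical function
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
    "function_call": "auto",  # let the model decide whether to call the function
}
```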

The gpt-3.5-turbo-16k model offers four times the context length of gpt-3.5-turbo at twice the price: $0.003 per 1K input tokens and $0.004 per 1K output tokens. The 16k context means the model can now support roughly 20 pages of text in a single request.
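As a rough sanity check on the 20-page figure, assuming the common rules of thumb of about 0.75 words per token and about 600 words per page of prose (both approximations, not from the announcement):

```python
# Back-of-the-envelope estimate of how much text fits in a 16k context.
# Assumptions (rules of thumb, not exact): ~0.75 words per token,
# ~600 words per page of ordinary prose.
context_tokens = 16_384
words = context_tokens * 0.75   # ~12,288 words
pages = words / 600             # ~20 pages
```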