Beginner’s Guide to OpenAI’s GPT-3.5-Turbo Model

From GPT-3 to GPT-3.5-Turbo: Understanding the Latest Upgrades in OpenAI’s Language Model API.

Olivia Brown
9 min read · Mar 8, 2023

This tutorial explores the advantages of using the GPT-3.5-Turbo Model over other models, including GPT-4. We’ll examine the changes that have been made, discuss new use cases, and provide code snippets to demonstrate how to implement it.

If coding isn’t your thing, feel free to skip the ‘How to Use’ section.

What is the GPT-3.5-Turbo Model?

If you’ve landed on this tutorial, chances are that you’re already familiar with ChatGPT or OpenAI APIs, so I won’t go into the details of their history.

Despite the availability of the GPT-4 model, the GPT-3.5-Turbo Model remains a powerful and cost-effective option. It powers the widely popular ChatGPT and offers users the potential to create their own chatbot with similar capabilities.

One of the key advantages of the GPT-3.5-Turbo model is its multi-turn capability, allowing it to accept a series of messages as input. This feature is an improvement over the GPT-3 model, which only supported single-turn text prompts. With this feature, users can utilize preset scenarios and prior responses as context to enhance the quality of the generated response. We’ll delve into these features in more detail in a later section.

GPT-3 vs. GPT-3.5-Turbo vs. GPT-4

While the GPT-4 model delivers superior results, the GPT-3.5-Turbo model is a significantly more cost-effective option. It offers quality comparable to ChatGPT, along with faster API responses and the same multi-turn chat completion API.

If you wish to explore model pricing and how the token is calculated in more detail, I recommend checking out the relevant section of the new GPT-4 tutorial. In that tutorial, I demonstrate how to build a chatbot similar to ChatGPT that can be used in the terminal.

In comparison, the GPT-3 model, unless fine-tuned, is less attractive due to its higher cost and lower quality results compared to the GPT-3.5-Turbo. For most use cases, I recommend the GPT-3.5-Turbo model. It uses the same API call methods as GPT-4, and you can upgrade to the latter at any time.
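To make the cost gap concrete, here is a rough back-of-the-envelope comparison. The per-1K-token prices below reflect OpenAI's published rates as of March 2023 and may well have changed since, so treat them as illustrative and check the current pricing page:

```javascript
// Approximate prices per 1,000 tokens as of March 2023 -- these are
// assumptions for illustration; check OpenAI's pricing page for current rates.
const PRICE_PER_1K = {
  "text-davinci-003": 0.02,
  "gpt-3.5-turbo": 0.002,
};

// Estimate the dollar cost of a request for a given model and token count
const estimateCost = (model, tokens) => (tokens / 1000) * PRICE_PER_1K[model];

console.log(estimateCost("text-davinci-003", 5000)); // 0.1  ($0.10)
console.log(estimateCost("gpt-3.5-turbo", 5000));    // 0.01 ($0.01)
```

At these rates, the same 5,000-token workload is roughly ten times cheaper on GPT-3.5-Turbo than on GPT-3's text-davinci-003.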

In the following example, we’ll see the difference between the message-style query used by the new GPT-3.5-Turbo Model and the old prompt-style query. The new model allows us to incorporate more context and even prior responses into the conversation, making it a more powerful tool.

// GPT-3 model prompt example
const GPT3Prompt = `Give an example of How to send an openai api request in JavaScript`;

// GPT-3.5-Turbo model message example
const GPT35TurboMessage = [
  { role: "system", content: `You are a JavaScript developer.` },
  {
    role: "user",
    content: "Which npm package is best for openai api development?",
  },
  {
    role: "assistant",
    content: "The 'openai' Node.js library.",
  },
  { role: "user", content: "How to send an openai api request" },
];

How to Use the GPT-3.5-Turbo Model

Upgrading to the new GPT-3.5-Turbo Model API is a straightforward process, and this tutorial will demonstrate how to do so using Node.js. However, this concept is applicable to other programming languages as well.

The tutorial includes a code snippet with examples for both the previous GPT-3 method and the new GPT-3.5-Turbo method.

Before proceeding, make sure you have acquired your OpenAI API key and set up your project accordingly. For more information, you can refer to my tutorial “Getting Started with OpenAI API”.
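If you haven't set up a project yet, a minimal setup might look like the following. The `OPENAI_KEY` variable name matches the code used later in this tutorial; the `npm pkg set` command requires npm 7.24 or newer:

```shell
# Initialize a Node.js project and install the libraries used in this tutorial
npm init -y
npm install openai dotenv

# Enable ES module syntax (import/export, top-level await)
npm pkg set type=module

# Store your API key in a .env file (and keep it out of version control)
echo "OPENAI_KEY=your-api-key-here" >> .env
```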

At the time of writing this tutorial, the createChatCompletion method is not yet documented in the openai-node library. However, digging through the GitHub source shows that the method is available in the latest published version.

import dotenv from "dotenv";
import { Configuration, OpenAIApi } from "openai";

dotenv.config();

// Create an OpenAIApi instance with the API key from the environment variables
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_KEY })
);

const topic = "JavaScript";
const question = "How to send an openai api request";

// Set the prompt and messages to be used with GPT-3 and GPT-3.5-Turbo
const GPT3Prompt = `Give an example of ${question} in ${topic}`;
const GPT35TurboMessage = [
  { role: "system", content: `You are a ${topic} developer.` },
  {
    role: "user",
    content: "Which npm package is best for openai api development?",
  },
  {
    role: "assistant",
    content: "The 'openai' Node.js library.",
  },
  { role: "user", content: question },
];

// Function to generate text using the GPT-3 model
let GPT3 = async (prompt) => {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt,
    max_tokens: 500,
  });
  return response.data.choices[0].text;
};

// Function to generate text using the GPT-3.5-Turbo model
let GPT35Turbo = async (message) => {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: message,
  });
  return response.data.choices[0].message.content;
};

// Log the generated text from the GPT-3 and GPT-3.5-Turbo models to the console
console.log("### I'm GPT-3. ####", await GPT3(GPT3Prompt));
console.log("### I'm GPT-3.5-TURBO. ####", await GPT35Turbo(GPT35TurboMessage));




/* ****** Response Section ******

### I'm GPT-3. ####

//Include the OpenAI JavaScript Client Library
const openAiClient = require('openai-client');

//Provide your OpenAI API key
let apiKey = 'YOUR_API_KEY';

//Set up the OpenAI client with your API key
const client = openAiClient(apiKey);

//Create the request object with your desired parameters
let requestOptions = {
  engine: 'davinci',
  prompt: 'Are you feeling okay?',
  max_tokens: 25
};

//Send the request to OpenAI
client.request(requestOptions, function(err, result){
  if (err) {
    console.log(err)
  } else {
    //The response from OpenAI will be in the 'result' object
    console.log(result);
  }
});



### I'm GPT-3.5-TURBO. ####

To send an OpenAI API request using the 'openai' Node.js library, you would follow these general steps:

1. First, you need to install the 'openai' package from npm using the command: `npm install openai`
2. Next, you would need to import the library into your file using `const openai = require('openai');`
3. Authenticate your API key by setting it as an environment variable, or passing it as a parameter to the `openai.api_key` property.
4. Use one of the various API methods provided by the `openai` library to send a request to OpenAI. For example, to use the 'Completion' endpoint, you could use the following code:

```javascript
// Set the API key
openai.api_key = 'YOUR_API_KEY';

// Send a 'Completion' API request
const prompt = 'Hello, my name is';
const requestOptions = {
  prompt,
  temperature: 0.5,
  max_tokens: 5,
  n: 1,
  stream: false,
  stop: '\n'
};

openai.completions.create(requestOptions)
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.error(error);
  });
```

In this example, we are using the `openai.completions.create()` method to send a request to the 'Completion' endpoint, which generates text completion based on the provided prompt. We are then logging the response from the API to the console. Note that the `requestOptions` object contains various parameters that can be used to customize the request.
*/

  • The GPT3Prompt variable is set to a string that includes the question and topic variables and is used as input to the GPT-3 model.
  • The GPT35TurboMessage variable is set to an array of objects that simulate a conversation between a system, a user, and an assistant, and is used as input to the GPT-3.5-Turbo model.
  • Two functions are defined to generate text: GPT3 and GPT35Turbo.
  • The GPT3 function uses the openai.createCompletion() method with the GPT3Prompt variable and the text-davinci-003 model.
  • The GPT35Turbo function uses the openai.createChatCompletion() method with the GPT35TurboMessage variable and the gpt-3.5-turbo model.
  • Finally, the generated text from both models is logged to the console using console.log().

The output produced by the GPT-3.5-Turbo model is significantly better than that of the GPT-3 model because I provided more context to the request. In contrast, the GPT-3 model went off on its own path and suggested a non-existent library when generating the response.

As you can see from the code, upgrading to the new GPT-3.5-Turbo model doesn’t require many changes. Additionally, you have the option to include more context in the messages to better tailor your request to the task at hand.
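Because the API is stateless, carrying a conversation forward simply means appending each reply to the messages array before the next call. Here is a minimal sketch of that pattern (the message contents are illustrative):

```javascript
// Start with the same style of message history used above
const messages = [
  { role: "system", content: "You are a JavaScript developer." },
  { role: "user", content: "How to send an openai api request" },
];

// After each API call, push the assistant's reply back into the history,
// then append the user's next question before making the following call
const continueConversation = (history, assistantReply, userFollowUp) => {
  history.push({ role: "assistant", content: assistantReply });
  history.push({ role: "user", content: userFollowUp });
  return history;
};

continueConversation(messages, "Use the 'openai' library.", "Show a full example.");
console.log(messages.length); // 4
```

Each new request then sends the full `messages` array, so the model sees the entire conversation so far.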

Best Practices for Using the GPT-3.5-Turbo Model

It’s too early to provide best practices since the new model was released only a couple of days ago. However, I can offer a few suggestions to guide you in the right direction.

  1. Always use the latest model available.
  2. Multi-turn conversations usually produce better results.
  3. System messages can help establish desired behavior.
  4. Both assistant and user messages provide additional context.
  5. Specify the desired output format in the request.
  6. The models do not retain memory of past requests, so include all relevant context in message exchanges.
  7. Additional parameters, such as temperature and max_tokens, still work as before.
  8. Like ChatGPT, the model API can sometimes be temperamental, so I recommend implementing an automatic delay-and-retry wrapper on your API requests to handle errors such as server congestion and token-limit overruns.
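The retry suggestion in point 8 can be implemented as a small generic wrapper. This is a sketch of one possible approach, not an official pattern from the openai library; here it is demonstrated with a deliberately flaky function instead of a real API call:

```javascript
// Retry an async function up to `retries` times, waiting `delayMs` between attempts
const withRetry = async (fn, retries = 3, delayMs = 1000) => {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
};

// Example usage with a mock function that fails twice, then succeeds
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("server congestion");
  return "ok";
};

withRetry(flaky, 5, 10).then((result) => console.log(result)); // "ok"
```

In a real application, `fn` would be a closure around your `createChatCompletion` call; you might also inspect the error status and only retry transient failures such as 429 or 5xx responses.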

Some Thoughts on Potential Use Cases

Given the rapid evolution, decreasing costs, and expanding options available for various AI technologies, I strongly urge any mid to large-sized business to start considering their AI strategy before it becomes too late.

I think any business that tries to replace its entire workforce with AI doesn’t really understand the technology.

AI should be viewed as a tool that empowers human workers, rather than as a substitute for them.

While multi-turn conversations can be useful, it’s important to note that the amount of prior conversation you can include in a single request is limited by the model’s token limit.

Let’s explore a couple of potential use cases that can benefit your business.

Customer Support

I know that customer support is an area where many businesses face challenges.

  • Customers may not be happy with the quality of service they receive, or they may have to wait too long to speak with a representative.
  • Employees may feel overworked and not have the resources they need to do their jobs properly.
  • Management may be dissatisfied with the cost of providing customer support, which can negatively impact the company’s bottom line.

Incorporating AI into your customer support workflow can offer numerous benefits. For example,

  • Text-based queries commonly received through channels such as online chat, email, and help desk tickets can be efficiently managed by employing a customized chatbot with a comprehensive understanding of your products and services, as well as common troubleshooting steps. This can help you quickly filter and resolve basic customer inquiries, potentially saving valuable time and resources.
  • Voice-based queries can be addressed using AI tools such as OpenAI’s Whisper, which can transcribe incoming voice messages into text for processing by the same chatbot. You can also leverage text-to-speech services like Amazon Polly to enhance the overall customer experience.
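To make the voice pipeline concrete, here is a rough sketch. It assumes the openai Node.js library's createTranscription method for the Whisper endpoint (available in recent versions of the library); the client is passed in as a parameter, with a hypothetical system prompt, so the routing logic can be tested independently of the real API:

```javascript
// Transcribe an audio file with Whisper, then answer the transcribed
// question with the chat model. `client` is an OpenAIApi-style instance.
const answerVoiceQuery = async (client, audioStream) => {
  // 1. Speech-to-text via the Whisper endpoint
  const transcription = await client.createTranscription(audioStream, "whisper-1");
  const question = transcription.data.text;

  // 2. Feed the transcribed text into the chat completion endpoint
  const completion = await client.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful support agent." },
      { role: "user", content: question },
    ],
  });
  return completion.data.choices[0].message.content;
};
```

The returned text could then be fed to a text-to-speech service such as Amazon Polly to complete the round trip back to voice.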

Legal Service

AI can be a valuable tool for law firms and other professional service entities. For instance,

  • Initial consultations, similar to customer support cases, can be handled by AI-powered chatbots for basic inquiries that don’t require complex analysis, freeing up time for human staff to focus on more complex issues.
  • Using natural language processing AI such as GPT models can help streamline the process of reading through complex legal documentation such as contracts, legal briefs, court filings, laws and regulations, and legal opinions. This allows for more efficient and accurate analysis, as these models can parse through thousands of lines of text in just a fraction of a second.

Note that OpenAI no longer utilizes the data sent via the API for training models. This change was made to address privacy concerns raised by many companies.

Wrap Up

I believe that most businesses can benefit from implementing AI technology to some extent, depending on their budget and technical expertise. With the increasing availability and affordability of AI solutions, we can expect to see a growing number of companies offering tailored AI services to businesses of all sizes in the near future.
