Send a prompt to ChatGPT and assign the response to a variable. Prompts can be assigned a Conversation Id so that previous prompts and responses are included when they are part of a conversation.

Built-In Action

Send a prompt to OpenAI ChatGPT and assign the response to a variable. You can send single one-off prompts or prompts that are part of a conversation. The ThinkAutomation ChatGPT action enables you to automate requests to ChatGPT and then use the response further in your Automation.

Before you can use this action you must create an account with OpenAI. Go to OpenAI and click the Get Started link to create an account.

On your account page select API Keys and generate a new secret key. Make a note of this key as it is only displayed once. This is your OpenAI API Key.

Specify your OpenAI API Key. You can specify a different OpenAI API Key on each ChatGPT action. If you will only be using a single OpenAI Account then you can enter your API key in the ThinkAutomation Server Settings - Integrations - ChatGPT section. This key will be used by default if you do not specify one on the ChatGPT action itself.

Specify the Operation. This can be one of:

  • Ask ChatGPT To Respond To A Prompt
  • Add Context To A Conversation
  • Clear Conversation Context

Ask ChatGPT To Respond To A Prompt

The System Message is optional. This can help to set the behavior of the assistant. For example: 'You are a helpful assistant'.

The Prompt is the text you want a response to.


For example:

  What category is the email below? Is it sales, marketing, support or spam? Respond with just sales, marketing, support or spam.
  Subject: %Msg_Subject%
  Response: sales

  Extract the name and mailing address from this email:
  Dear Kelly,
  It was great to talk to you at the seminar. I thought Jane's talk was quite good.
  Thank you for the book. Here's my address 2111 Ash Lane, Crestview CA 92002
  Name: Maya
  Mailing Address: 2111 Ash Lane, Crestview CA 92002

  I am flying from Manchester (UK) to Orlando. What are the airport codes? Respond with just the codes separated by comma.
Prompts you send to ChatGPT have a limit of approximately 500 words.
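As a rough illustration, a classification prompt like the first example above could be sent to the OpenAI chat completions endpoint as a JSON payload. This is a hedged sketch: ThinkAutomation builds and sends the request for you, and the helper name here is an assumption; the field names follow the public OpenAI REST API.

```python
import json

# OpenAI chat completions endpoint (the request is sent with an
# Authorization: Bearer <API key> header).
API_URL = "https://api.openai.com/v1/chat/completions"

def classification_payload(subject: str) -> dict:
    """Build the JSON body for the email-classification example.
    Illustrative helper, not part of ThinkAutomation."""
    prompt = ("What category is the email below? Is it sales, marketing, "
              "support or spam? Respond with just sales, marketing, "
              "support or spam.\nSubject: " + subject)
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = classification_payload("50% off widgets this week only!")
body = json.dumps(payload)  # the serialized request body
```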

Tip: When using ChatGPT to analyze incoming emails, you can use the %Msg_Digest% built-in field instead of %Msg_Body%. The %Msg_Digest% field contains only the last reply text, with all blank lines and extra whitespace removed, and is trimmed to the first 750 characters. This is usually enough to categorize the text and reduces your token usage.
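The digest behaviour can be approximated in a few lines. This is an illustrative sketch, not ThinkAutomation's exact implementation; the reply-detection heuristic and the helper name are assumptions.

```python
import re

def make_digest(body: str, limit: int = 750) -> str:
    """Approximate the %Msg_Digest% transformation: keep the last reply
    only, collapse blank lines and extra whitespace, and trim to the
    first `limit` characters. The quoted-reply heuristic is a guess."""
    # Keep the text above the first quoted-reply marker.
    last_reply = re.split(r"\n>|\nOn .* wrote:", body)[0]
    # Drop blank lines and collapse runs of whitespace.
    compact = " ".join(last_reply.split())
    return compact[:limit]

email = ("Hi team,\n\nPlease cancel order 123.\n\n"
         "> On Mon, Bob wrote:\n> earlier text")
digest = make_digest(email)
```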

The Model entry allows you to select the OpenAI Model to use. You can select from:

  • gpt-3.5-turbo
  • text-davinci-003
  • text-davinci-002
  • code-davinci-002
  • Your own fine-tuned model name

See the OpenAI documentation for details about the different models. GPT-3.5-turbo is the default and works for most scenarios; it is also the least expensive.

Specify the variable to receive the response from the Assign Response To list.

You can also optionally assign the number of tokens used for the prompt/response. Select the variable to receive the tokens used from the Assign Used Token Count To list. OpenAI charges are based on tokens used. For example, the current pricing for gpt-3.5-turbo is $0.002 per 1000 tokens.
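As a worked example of the pricing quoted above (illustrative only; check current OpenAI pricing, which may change):

```python
def chat_cost_usd(tokens_used: int, price_per_1k_tokens: float = 0.002) -> float:
    """Estimated charge for one request at the gpt-3.5-turbo price
    quoted above ($0.002 per 1000 tokens)."""
    return tokens_used / 1000 * price_per_1k_tokens

# A prompt/response pair using 1500 tokens would cost about $0.003.
cost = chat_cost_usd(1500)
```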

You can optionally specify a Conversation Id. This is useful if multiple ChatGPT requests will be made within the same Solution and you want to include previous prompts/responses for context, or if you want to add your own context prior to asking ChatGPT for a response.

The Conversation Id can be any text. For example, setting it to %Msg_FromEmail% will link any requests for the same incoming email address.

The Max Conversation Lines entry controls the maximum number of previous prompts/response pairs that are included with each request. For example, if the Max Conversation Lines is set to 10 then the last (most recent) 10 prompt/response pairs will be sent prior to the current prompt. As the conversation grows, the oldest items will be removed to prevent the total prompt text going over the ChatGPT token limit.
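The windowing described above can be sketched as follows. This is illustrative, not ThinkAutomation internals; the function and parameter names are assumptions, and the message format follows the OpenAI chat API.

```python
from collections import deque

def build_messages(history, prompt, system=None, max_pairs=10):
    """Assemble a conversation window: keep only the most recent
    `max_pairs` prompt/response pairs (oldest dropped first), then
    append the current prompt."""
    window = deque(history, maxlen=max_pairs)  # each item: (prompt, response)
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    for user, assistant in window:
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": assistant})
    messages.append({"role": "user", "content": prompt})
    return messages

# 12 previous pairs, but only the last 10 are sent with the new prompt.
history = [(f"q{i}", f"a{i}") for i in range(12)]
msgs = build_messages(history, "What is the population?", max_pairs=10)
```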

Conversations are shared by all Automations within a Solution and conversation lines older than 48 hours are removed.

For example:

Suppose you send 'What is the capital city of France?' in one prompt and receive a response. If you then send another separate prompt of 'What is the population?' with the same conversation id then you will receive a correct response about the population of Paris because ChatGPT already knows the context. This would work across multiple Automation executions for up to 48 hours, as long as the conversation id is the same.

Add Context To A Conversation

You can add context to a conversation. Context helps ChatGPT give the correct answer to a question. The context can be Static Text, or you can search the Embedded Knowledge Store for articles based on the incoming message and send the most relevant articles to provide context.

You could also look up context any other way (via a database or web lookup). For example: if the customer provides an email address at the start of the chat, you could look up customer and accounting/order information and add this to the context in case the customer asks about outstanding orders.

The same context won't be added to a conversation if the conversation already contains it. So you can add standard context (for example, general information about your business) along with searched-for context within your Automation prior to asking ChatGPT for a response.

You can add multiple ChatGPT - Add Context To A Conversation actions in your Automation prior to the ChatGPT - Ask ChatGPT To Respond To A Prompt action.

For example: Suppose you have a company chat bot on your website using the Web Chat message source. A user asks 'what is the current price for widgets?'. You first add some general context about your business, you then do a knowledge base search with the Search Text set to the incoming question. You add the most relevant articles relating to widgets to the conversation as context.

The context itself does not appear in the chat or get saved anywhere - it simply gets added to the prompt sent to ChatGPT to help ChatGPT answer the user's question. The benefit of this is that you can use the standard ChatGPT models without training - and you can always provide up-to-date information by keeping your local knowledge base updated. This is a much faster way of creating a working bot, and is much more cost effective than training your own model or using 3rd-party hosted services.
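The flow above (store context once, then prepend it to the prompt) can be sketched as follows. This is a simplified illustration under assumed names, not ThinkAutomation's implementation.

```python
class Conversation:
    """Minimal sketch of conversation context handling: context lines
    are stored once (duplicates ignored) and prepended to the prompt
    when a response is requested."""
    def __init__(self):
        self.context = []

    def add_context(self, text: str):
        if text not in self.context:  # the same context is never added twice
            self.context.append(text)

    def build_prompt(self, question: str) -> str:
        return "\n".join(self.context + [question])

conv = Conversation()
conv.add_context("Acme Ltd sells widgets. Widgets cost $5 each.")
conv.add_context("Acme Ltd sells widgets. Widgets cost $5 each.")  # ignored
prompt = conv.build_prompt("What is the current price for widgets?")
```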

Clear Conversation Context

This operation will clear any Context added to a conversation. Specify the Conversation Id.

ChatGPT Rate Limits

Your OpenAI account sets a rate limit for the maximum requests per minute. The OpenAI API Key - Rate Limit Retries setting determines how many times ThinkAutomation will retry the request if a rate limit error is returned. The wait time is automatically increased for each retry. The default wait period is 30 seconds. If the request still fails after the retries, an error is raised.
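The retry behaviour described above can be sketched like this. It is illustrative only: the function names are assumptions, and a RuntimeError stands in for an HTTP 429 rate-limit error.

```python
import time

def call_with_retries(request_fn, retries=3, base_wait=30):
    """On a rate-limit error, wait (increasing the delay each retry)
    and try again, up to `retries` times, then re-raise."""
    for attempt in range(retries + 1):
        try:
            return request_fn()
        except RuntimeError:                       # stand-in for an HTTP 429 error
            if attempt == retries:
                raise                              # still failing after all retries
            time.sleep(base_wait * (attempt + 1))  # e.g. 30s, then 60s, then 90s

# Demo with a fake request that succeeds on the third attempt
# (base_wait=0 so the demo runs instantly).
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "ok"

result = call_with_retries(flaky, retries=3, base_wait=0)
```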

ChatGPT Notes

ChatGPT has many uses. Other than being a regular chat bot that has knowledge of many subjects, you can use it to:

  • Create a Chat Bot using the Web Chat Message Source type.
  • Parse unstructured text and extract key information.
  • Summarize text.
  • Classify emails.
  • Translate text.
  • Correct grammar/spelling.
  • Convert natural language into code (SQL, PowerShell etc).

and much more. See: Examples - OpenAI