
GPT-3 (ChatGPT) in Python
🔵 🟡 Python-GPT 🟡 🔵
This is the name I chose for the code of my virtual assistant, Rachel: a chatbot written in Python that uses OpenAI's GPT-3 (Generative Pre-trained Transformer 3) language model.
🧠 I became aware of the release of #OpenAI's public APIs from Davide Severini's post, in which he talked about how this type of chatbot (specifically trained to meet his needs) could be useful in his work at Thauma – Digital Performance Intelligence.
Personally, I’m not sure how it can be useful to me immediately, but having a virtual assistant at hand to ask anything doesn’t hurt.
❓ What are the next steps?
Definitely implementing GPT-4 once its public APIs are released. Meanwhile, I'm studying the feasibility of a Java version, and one of the next posts may be about that implementation!

Code Analysis
Phase 1
First of all, we create a new virtual environment via venv and install the openai package via pip:
pip install openai
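The environment setup itself can be sketched as follows — assuming a Unix-like shell, that `python3` is on the PATH, and `venv` as the environment's folder name (all of these are assumptions, not part of the original post):

```shell
# Create a virtual environment in the ./venv folder (assumed name)
python3 -m venv venv

# Activate it (on Windows use: venv\Scripts\activate)
source venv/bin/activate

# Install the OpenAI client inside the environment
pip install openai
```

On Windows the activation step differs as noted in the comment; everything else is the same.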
In our project folder, we create a .json file called secrets.json. We will use this file to store our API key:
{
    "api_key": "your-secret-key"
}
Phase 2
We create a Python file, import the json and openai modules, and read our secret key from the JSON file so we can authenticate ourselves:
import json
import openai

# Read the API key from secrets.json
with open("secrets.json") as f:
    secrets = json.load(f)
    api_key = secrets["api_key"]

openai.api_key = api_key
We now create the function that will generate a response to our messages. It takes a list called messages as an argument, which we pass on to the create() function.
To dialogue with the AI model we use openai.ChatCompletion.create(), and we store the response returned by the OpenAI backend in a response variable.
To run the model in personal-assistant mode, we simply create an infinite while loop that reads input from the user, sends a request to the backend for a response, and prints it.
We use if __name__ == "__main__" to make sure this code runs only when the file is executed as a script, and not when it is imported as a module.
We also define messages, seeding it with a first message that has role: system.
# Function to get the response
def get_response(messages: list):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=1.0
    )
    return response.choices[0].message

if __name__ == "__main__":
    messages = [
        {"role": "system", "content": "You are a virtual assistant called Rachel and you speak Italian and English."}
    ]
    try:
        while True:
            username = "Enry"
            user_input = input("\n" + username + ": ")
            messages.append({"role": "user", "content": user_input})
            new_message = get_response(messages=messages)
            print(f"Rachel: {new_message['content']}")
            messages.append(new_message)
    # Closing phrase: when the session is interrupted, the terminal shows this sentence
    except KeyboardInterrupt:
        print("\nBye " + username + ", see you next time!")
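Because both the user's messages and the assistant's replies are appended to messages, every call to the backend carries the full conversation, which is what gives the model its memory of the dialogue. A hypothetical snapshot of the list after one exchange (the message contents here are invented purely for illustration):

```python
# Illustrative snapshot of the conversation history after one exchange.
# The actual contents depend on what the user types and what the model answers.
messages = [
    {"role": "system", "content": "You are a virtual assistant called Rachel and you speak Italian and English."},
    # Appended before calling get_response():
    {"role": "user", "content": "Ciao Rachel, come stai?"},
    # Appended after the call, so the next request carries the full context:
    {"role": "assistant", "content": "Ciao! Sto bene, grazie. E tu?"},
]

# Each subsequent request passes the whole list again:
roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user', 'assistant']
```

This is also why long sessions eventually grow large: the list is never trimmed, so every turn adds to the payload sent to the API.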