How to Set Up ChatGPT on a Raspberry Pi Pico W

by mahmoodmustafashilleh in Circuits > Raspberry Pi


ChatGPT's ability to understand natural language inputs and its versatility make it a useful tool for developing various applications with the Raspberry Pi Pico W. One example is voice recognition: with the help of external microphones and speakers, ChatGPT can be used to build voice-controlled systems that perform various tasks, such as controlling home appliances or responding to user queries.

This tutorial demonstrates how to set up ChatGPT on your Raspberry Pi Pico W using the OpenAI API. By the end of it, you will be sending ChatGPT prompts and getting responses in return.

-----

Before we delve into the topic, we invite you to support our ongoing efforts and explore our various platforms dedicated to enhancing your IoT projects:

  • Subscribe to our YouTube Channel: Stay updated with our latest tutorials and project insights by subscribing to our channel at YouTube - Shilleh.
  • Support Us: Your support is invaluable. Consider buying me a coffee at Buy Me A Coffee to help us continue creating quality content.
  • Hire Expert IoT Services: For personalized assistance with your IoT projects, hire me on UpWork.

Explore our Stores for Premium Products:

  • ShillehTek Store: Access exclusive discounts on Arduino, Raspberry Pi sensors, and pre-soldered components at our ShillehTek Website.


Supplies

  • Raspberry Pi Pico W
  • Micro USB Cable

You only need the Raspberry Pi Pico W and a Micro USB cable to power and program the board.

Generate an API Key

API keys can be generated here: https://platform.openai.com/account/api-keys

The ChatGPT API key is a unique code that allows you to access the ChatGPT service through its API (Application Programming Interface).

When you make a request to the ChatGPT API, the API key is used to authenticate your request and identify your account. This ensures that only authorized users can access the service and that your usage is tracked for billing and security purposes. OpenAI will eventually ask for billing information, since the API is only free up to a certain amount of usage, so keep that in mind when testing. You will need the API key if you want to use ChatGPT in your application or website; without it, you won't be able to make requests to the API or receive responses from the language model.
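
The script in the next step reads your Wi-Fi credentials and API key from a small constants.py file saved on the Pico W (it does an import constants). Below is a minimal sketch of that file, using the variable names the script expects; the values shown are placeholders you must replace with your own.

# constants.py -- saved on the Pico W alongside the main script
INTERNET_NAME = "YOUR_WIFI_SSID"          # 2.4 GHz network name (the Pico W cannot join 5 GHz networks)
INTERNET_PASSWORD = "YOUR_WIFI_PASSWORD"  # Wi-Fi password
CHAT_GPT_API_KEY = "YOUR_OPENAI_API_KEY"  # key generated at https://platform.openai.com/account/api-keys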

Connect to the Internet and Use API

import json
import network
import time
import urequests

# Internal libs
import constants

def chat_gpt(ssid, password, endpoint, api_key, model, prompt, max_tokens):
    """
    Description: This is a function to hit the ChatGPT API and get
    a response.

    Parameters:

    ssid[str]: The name of your internet connection
    password[str]: Password for your internet connection
    endpoint[str]: API endpoint
    api_key[str]: API key for access
    model[str]: AI model (see OpenAI documentation)
    prompt[str]: Input to the model
    max_tokens[int]: The maximum number of tokens to
        generate in the completion.

    Returns: Simply prints the response
    """
    # Just making our internet connection
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(ssid, password)

    # Wait for connect or fail
    max_wait = 10
    while max_wait > 0:
        if wlan.status() < 0 or wlan.status() >= 3:
            break
        max_wait -= 1
        print('waiting for connection...')
        time.sleep(1)

    # Handle connection error
    if wlan.status() != 3:
        print(wlan.status())
        raise RuntimeError('network connection failed')
    else:
        print('connected')
        print(wlan.status())
        status = wlan.ifconfig()

    ## Begin formatting request
    headers = {'Content-Type': 'application/json',
               "Authorization": "Bearer " + api_key}
    data = {"model": model,
            "prompt": prompt,
            "max_tokens": max_tokens}

    print("Attempting to send Prompt")
    r = urequests.post("https://api.openai.com/v1/{}".format(endpoint),
                       json=data,
                       headers=headers)

    if r.status_code >= 300 or r.status_code < 200:
        print("There was an error with your request \n" +
              "Response Status: " + str(r.text))
    else:
        print("Success")
        response_data = json.loads(r.text)
        completion = response_data["choices"][0]["text"]
        print(completion)
    r.close()


chat_gpt(constants.INTERNET_NAME,
         constants.INTERNET_PASSWORD,
         "completions",
         constants.CHAT_GPT_API_KEY,
         "text-davinci-003",
         "Write a tagline for an ice cream shop. ",
         100)


In this code we do a series of actions:

  • We start by connecting to the internet, since you need an active internet connection to reach the API. Pass in your Wi-Fi network name (SSID) and password.
  • We pass in the remaining information needed for the ChatGPT model, most importantly the API key, but also the model and max_tokens variables. You can find full details in the OpenAI documentation.
  • We are using the "completions" endpoint, but there are several other API paths with different purposes that you can look into, such as edits or embeddings, which also have powerful use cases.
  • Note that there are many other configurable parameters; you would need to slightly modify the function to pass them in.
  • A very interesting parameter to experiment with is temperature. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. It is generally recommended to alter this or top_p, but not both. A sketch of this change follows the list below.
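
To illustrate that last point, here is one way you might include temperature in the request; this is only a sketch, and the 0.2 value is just an illustrative choice. The only difference from the code above is the extra key in the data dictionary.

    ## Same request formatting as above, with an optional temperature parameter added
    data = {"model": model,
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.2}  # lower = more focused/deterministic, higher = more random

    r = urequests.post("https://api.openai.com/v1/{}".format(endpoint),
                       json=data,
                       headers=headers)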


Conclusion

Do not forget to like, comment, and subscribe to the channel. If you have any questions, please let us know in the comment section, and thanks for reading. Stay tuned!