Call OpenAI models directly from Python, beyond the ChatGPT interface.
In this project, you will stop using ChatGPT through the browser and start calling OpenAI models directly from Python. The OpenAI API gives you control over things the chat interface does not expose: customizing the system message, adjusting temperature, setting token limits, and integrating LLM responses into your own code.
Create an OpenAI account and get an API key
Install openai and store the key as an environment variable
Set up the client using the current OpenAI() client syntax
Make your first API call using client.chat.completions.create()
Experiment with the following parameters:
temperature — controls randomness
max_tokens — limits response length
system message — sets the model's behavior
Build a small script that takes user input and returns a model response
Compare outputs at different temperature values
Python
openai >= 1.0.0
python-dotenv
Jupyter Notebook or any Python IDE
You will understand how to interact with LLMs programmatically, what tokens are, and how parameters like temperature change model behavior. This is the foundation for every LLM project that follows.
A full walkthrough of this project is available on Towards Data Science: Cracking Open the OpenAI Python API.
Note: The article was written in 2023 and uses the legacy OpenAI Python client syntax (openai.ChatCompletion.create()), which was deprecated in late 2023. The concepts are still valid, but make sure to use the updated client syntax shown above.