OpenAI API crash course

Learn how to build with "ChatGPT" for fun and profit
About me
Grew up in Rodos, Greece
Software Engineering at GU & Chalmers
Working with embedded systems
Teaching
DIT113, DAT265, Thesis supervision
C++ for professionals, Coursera
Open source projects
https://platis.solutions
https://github.com/platisd
Email: dimitris@platis.solutions
platisd/openai-pr-description
Automatically generate PR descriptions, focus on why not what
Chat completion API
phonix
Generate subtitles, more accurate than YouTube, LinkedIn etc
Speech to text API
sycophant
Write opinionated articles based on the latest news on a topic
Chat completion API, Image generation API
https://robots.army
About this workshop
Explore OpenAI APIs
Chat completion
Function calling
Creating the "perfect" prompt
Reducing costs
"ChatGPT API" or correctly: OpenAI API
OpenAI API is a collection of APIs
APIs offer access to various Large Language Models (LLMs)
LLM: Program trained to understand human language
ChatGPT is a web service using the Chat completion API
Uses gpt-3.5-turbo (free tier) or gpt-4 (paid tier)
OpenAI API endpoints
Chat completion
Given a series of messages, generate a response
Function calling: Choose which function to call
Image generation
Given a text description generate an image
Speech to text
Given an audio file and a prompt generate a transcript
Fine tuning
Train a model using input and output examples
ChatGPT aside, do you use any tools based on OpenAI's models?
Getting started with OpenAI API
Install Python library
pip install --user openai
Get your API key
Login at platform.openai.com
Go to API keys
Create new secret key
(Optional) Create environment variable OPENAI_API_KEY with key
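The optional environment variable can be set from the shell before running any script. A minimal bash/zsh sketch, where the key value is a placeholder for the secret key you created:

```shell
# Replace the placeholder with the secret key created at platform.openai.com
export OPENAI_API_KEY="sk-your-key-here"

# Verify the variable is set and visible to child processes
echo "$OPENAI_API_KEY"
```

Add the `export` line to your `~/.bashrc` or `~/.zshrc` to make it permanent.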
Ensure successful installation
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
models = openai.Model.list()["data"]
for model in models:
    print(model["id"])
gpt-4
gpt-4-0314
curie-search-query
babbage-search-document
text-search-babbage-doc-001
babbage
...
Chat completion API
Given a series of messages, create a response. Important parameters:
model : Which LLM to use, balance cost, speed and performance
gpt-3.5-turbo : Cheaper & faster
gpt-4 : Better performance, larger input, fewer hallucinations
temperature : "Creativity" of the model's response
Lower values result in more deterministic responses
Allowed value range: [0.0, 2.0]
messages : A list of messages that represent the "conversation"
Each message needs role and content properties
messages holds the conversation from which the model is to provide a response.
messages = [
    {"role": "system", "content": "You are an assistant that talks like a 15 yo"},
    {"role": "user", "content": "Should I use goto statements?"},
    {"role": "assistant", "content": "No bro, that's bad practice duh "},
    {"role": "user", "content": user_input},
]
The first 3 elements of the list are the "context"
The last is the user's input we want the model to respond to
system role: High level instructions for the conversation
assistant role: The model's "ideal" (or previous) response
user role: The user's input
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
user_input = input("Enter your programming question: ")
messages = [
    {"role": "system", "content": "You are an assistant that talks like a 15 yo"},
    {"role": "user", "content": "Should I use goto statements?"},
    {"role": "assistant", "content": "No bro, that's bad practice duh "},
    {"role": "user", "content": user_input},
]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=messages, temperature=0.6
)
print(response.choices[0].message.content)
Enter your programming question: Is OOP good?

"OMG, yes! Object-oriented programming is like, the bomb dot com! It helps you organize your code and makes it easier to understand and maintain. Plus, you can create cool objects and stuff. So yeah, OOP is pretty rad!"
system prompt
{"role": "system", "content": "You are an assistant that talks like a 15 yo"},
Changing the content to "You are a 15 year old programmer" wouldn't
necessarily work as intended.
System prompts that clearly illustrate the context work better:
"You are a programmer and talk like a 15 yo" rather than
"You are a programmer and 15 years old", which can produce a formal response such as:

"Yes, Object-Oriented Programming (OOP) is widely considered to be a good programming paradigm. It promotes code organization, reusability, and modularity."
temperature values
Higher values = More "creative" responses
Lower values = Less varied responses
Higher values = Lower probability to follow "instructions"
gpt-4 is nonetheless better at following instructions
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {
        "role": "user",
        "content": "Create a JSON object using months of the"
        + " year as keys and days of each month as values",
    },
]
temperature=0.6
Consistently:
{
"January": 31,
"February": 28,
"March": 31,
"April": 30,
"May": 31,
"June": 30,
"July": 31,
"August": 31,
"September": 30,
"October": 31,
"November": 30,
"December": 31
}
temperature=1.9
Consistently not what we intended. One possible response:
```json
{
"January": 31,
"February": 28,
"March": 31,
"April": 30,
"May": 31,
"June": 30,
"July": 31,
"August": 31,
"September": 30,
"October": 31,
"November": 30,
"December": 31
}
```
Note that February has 28 days by default, in line with the regular Gregorian calendar ordering.
Is there anything else I can help you with?
Function calling - Let the model choose
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temp,
        functions=functions,
    )
Pass functions to the ChatCompletion API
gpt-4 gives better results than gpt-3.5-turbo with "complex" prompts
Keep temperature rather low for higher focus, less hallucinations
functions parameter
[
    {
        "name": "call_staff",
        "description": "Call a member of staff to the table",
        "parameters": {"type": "object", "properties": {}},
    }
]
Provide a list of functions to the model, it will choose which one to
call based on the context and the function descriptions.
functions = [
    {
        "name": "place_order",
        "description": "Place an order for a pizza",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "The name of the pizza, e.g. Pepperoni",
                },
                "size": {
                    "type": "string",
                    "enum": ["small", "medium", "large"],
                    "description": "The size of the pizza. Always ask for clarification if not specified.",
                },
                "take_away": {
                    "type": "boolean",
                    "description": "Whether the pizza is taken away. Assume false if not specified.",
                },
            },
            "required": ["name", "size", "take_away"],
        },
    },
]
name : The name of the function to call
description : A description of the function, helps the model choose
parameters/type : Always "object" for now
parameters/properties : Empty if no function parameters
<param>/type : "string" , "boolean" , "integer" etc.
<param>/enum : Optional list of allowed values
<param>/description : Description of parameter, helps the model
parameters/required : List of required parameters
system role prompt
messages = [
    {
        "role": "system",
        "content": "Don't make assumptions about what values "
        + "to put into functions. Ask for clarification if you need to.",
    },
    {
        "role": "system",
        "content": "Only use the functions you have been provided with.",
    },
]
Avoid "hallucinations" and stick to the provided functions.
The model will happily make things up!
API response
{
    "role": "assistant",
    "content": null,
    "function_call": {
        "name": "place_order",
        "arguments": "{\n  \"name\": \"Margherita\",\n  \"size\": \"medium\",\n  \"take_away\": false\n}"
    }
}
If the model has decided a function call needs to be made, the response
will contain a function_call property and content will be null .
arguments is a string, not a JSON object
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temp,
        functions=functions,
    )
    message = response.choices[0].message
    if "function_call" in message:
        f = globals()[message["function_call"]["name"]]
        params = json.loads(message["function_call"]["arguments"])
        f(**params)
    else:
        print(message["content"])
    return "function_call" in message
Remember to append the user's input to the messages list
so that the model can use previous input as context.
function_called = False
print("How may I help you?")
while not function_called:
    user_input = input()
    messages.append({"role": "user", "content": user_input})
    function_called = get_chat(messages=messages, functions=functions)
Does not take much code to create a "chatbot" that calls functions
So it shouldn't take that long to develop it, right?
>> How may I help you?
<< I'd like to order some pizza please
>> Of course, I'd be happy to assist you with that. Could you
please specify the name of the pizza you'd like to order and the
size you prefer? Also, would this be for take away or are you
dining in?
<< I'd like a large one
>> Sure, could you please specify the type of pizza you would
like to order?
<< A Margherita please
>> Placing order for Margherita pizza, large to eat in
Conclusion
Chat completions API for summarizing, extending etc
More extensive than what we covered here
Can tightly integrate it with your system via function calling
gpt-3.5-turbo is often good enough and faster but can fall short
Use the Python library to get started quickly
Build your own wrapper around it if needed
Use OpenAI API Playground to experiment
Full "pizza bot" example: plat.is/openai-pizza
Prompting
The model is a black box, we cannot be certain how it will respond
In practice the previous, simplified, approach won't be enough
Prompting is evolving into its own "craft"
Hype around "prompt engineering" by "influencers"
Approach iteratively, follow best practices, it's not magic
Be specific - Outcome
Write a program that receives a video file and turns it into a GIF.
Write a program in Python that receives a video file as a
command line argument. Use the imageio library to turn it into a
GIF.
Be specific - Length, format and style
Summarize the following text: {text}
Summarize the following text in 3 sentences. Use your own words
and do not plagiarize: {text}
Be specific - Delimit the input
Summarize the following text: {text}
Summarize the following text: ```{text}```
Be specific - Output format
Summarize the following text: {text}
Summarize the following text as a JSON object where the key is
`summary` and the value is the summary: ```{text}```
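Asking for a fixed output format makes the response machine-parseable. A minimal sketch of parsing such a reply; `parse_summary` is a hypothetical helper and the response string is made up for illustration:

```python
import json


def parse_summary(content: str) -> str:
    """Extract the summary from the model's JSON reply.

    Strips the optional ```json markdown fences the model sometimes
    wraps its output in (as seen in the temperature=1.9 example).
    """
    content = content.strip()
    if content.startswith("```"):
        # Take the text between the opening and closing fences
        content = content.split("```")[1]
        if content.startswith("json"):
            content = content[len("json"):]
    return json.loads(content)["summary"]
```

In a real script `content` would come from `response.choices[0].message.content`.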
Nudge the model
Given the diff of the pull request, focus on why the change is
needed.
`git diff`: ```{diff}```
This PR is needed because it will
Give examples - Start simple
In the following police report, provide a list of the names of the
people arrested.
Names:
Give examples - Simple didn't work? Provide more!
Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were
arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding.
Witnesses including Robert Johnson and Mary Young saw the
incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:
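The few-shot prompt above can be assembled with a plain string template; `build_prompt` is a hypothetical helper, filled in with the report to analyze at runtime:

```python
# Template reproducing the few-shot prompt above; {report} is filled in at runtime
FEW_SHOT_TEMPLATE = """Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding. Witnesses including Robert Johnson and Mary Young saw the incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:"""


def build_prompt(report: str) -> str:
    """Insert the report to analyze into the few-shot template."""
    return FEW_SHOT_TEMPLATE.format(report=report)
```

The resulting string would then be sent as the content of a user message.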
Provide steps for complex tasks
Write a program that receives a video file and turns it into a GIF.
1. Use Python
2. Use the imageio library
3. Read the command line argument `--video`
4. Check if the file exists
5. Read the file
6. Feed the file to `imageio`
7. Parse the `--output` command line argument
8. Write the GIF to the provided output path
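The steps above could be sketched roughly as follows, assuming the imageio library is installed (`pip install imageio`; video formats may additionally need the imageio-ffmpeg plugin). Call `main()` from a script to run it:

```python
import argparse
import os


def parse_args(argv=None):
    # Steps 3 and 7: read the --video and --output command line arguments
    parser = argparse.ArgumentParser(description="Turn a video file into a GIF")
    parser.add_argument("--video", required=True, help="Path to the input video")
    parser.add_argument("--output", required=True, help="Path for the resulting GIF")
    return parser.parse_args(argv)


def main():
    args = parse_args()
    # Step 4: check if the file exists
    if not os.path.exists(args.video):
        raise SystemExit(f"No such file: {args.video}")
    # Steps 1-2 and 5-6: read the frames and feed them to imageio
    import imageio  # imported lazily; assumed installed

    frames = [frame for frame in imageio.get_reader(args.video)]
    # Step 8: write the GIF to the provided output path
    imageio.mimsave(args.output, frames)
```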
Break down complex prompts and chain them
Write a short poem inspired by the characters in the following text
and generate 3 keywords for the poem: {text}
Write a short poem inspired by the characters in the following
text: {text}
Generate three keywords for the following poem: {poem}
Break down large prompts and chain them
Summarize the following texts into a single concise text:
Text 1: {potentially_long_text1}
Text 2: {potentially_long_text2}
Text 3: {potentially_long_text3}
Summarize the following text: {potentially_long_text1}
Summarize the following text: {potentially_long_text2}
Summarize the following text: {potentially_long_text3}
Summarize the following texts into a single concise text:
{summary1} {summary2} {summary3}
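The chaining above can be sketched as a small helper. `summarize_all` and its `complete` parameter are hypothetical names; `complete` stands for any function that sends one prompt to the chat completion API and returns the model's reply:

```python
def summarize_all(texts, complete):
    """Chain prompts: summarize each text separately, then summarize the summaries.

    `complete` is a callable taking one prompt string and returning the
    model's reply as a string.
    """
    # First round: one short summary per (potentially long) text
    summaries = [complete(f"Summarize the following text: ```{t}```") for t in texts]
    # Second round: combine the short summaries into a single concise text
    combined = " ".join(summaries)
    return complete(f"Summarize the following texts into a single concise text: {combined}")
```

In a real script `complete` would wrap `openai.ChatCompletion.create` as in the earlier examples; this also keeps each request well under the model's input limit.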
Conclusion
Don't believe the hype, ignore the influencers
Prompting is an iterative process, sometimes slow
Be specific, provide examples, break down complex prompts etc
Your prompts may need to be modified once you switch models
gpt-3.5-turbo to gpt-4 is not always "backwards compatible"
ChatGPT Prompt Engineering for Developers by deeplearning.ai
Best practices for prompt engineering with OpenAI API
Reducing costs
Using the OpenAI API is not free
Costs are based on the number of tokens used
1 token = 4 chars in English (on average)
Using other languages is more expensive
Costs pile up during development but explode in production
Use the cheapest model that works for your use case
Start with gpt-3.5-turbo and see if you can go even cheaper
Speech-to-text API and Image generation API are expensive
Investigate whether you can run Whisper locally
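Since costs scale with tokens, a rough estimate is easy to sketch using the ~4 characters per token rule of thumb above. `estimate_cost_usd` is a hypothetical helper and the price is a placeholder, not an actual OpenAI price:

```python
def estimate_cost_usd(text: str, price_per_1k_tokens: float) -> float:
    """Very rough cost estimate, using ~4 characters per token for English.

    `price_per_1k_tokens` is a placeholder; look up the current price for
    your model on OpenAI's pricing page. Non-English text usually needs
    more tokens per character, so this underestimates for other languages.
    """
    approx_tokens = len(text) / 4
    return approx_tokens / 1000 * price_per_1k_tokens
```

For exact counts OpenAI's tiktoken library tokenizes text the same way the models do.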
Fine tuning
Fine tuning is a way to "teach" the model to react to specific input
If your prompts contain a lot of examples, it might make sense
Fine tuning is expensive but so are large prompts
Paying extra for both fine tuning and calling the fine tuned model
Don't fine tune until you get really good results with prompting
Takeaways
LLMs will change the world for the better (not too dramatically)
Learning how to use them, not only as a developer, will be a must
Incorporating them in systems can provide additional value
Sentiment analysis, summarizing, transforming etc
Many more (fun) challenges when integrating an LLM in a system
Moderation, evaluation, reliability, ethics etc
Prompt engineering is an iterative process, potentially slow
1 von 42

Recomendados

clicks2conversations.pdf von
clicks2conversations.pdfclicks2conversations.pdf
clicks2conversations.pdfMarie-Alice Blete
188 views95 Folien
ProgFund_Lecture_4_Functions_and_Modules-1.pdf von
ProgFund_Lecture_4_Functions_and_Modules-1.pdfProgFund_Lecture_4_Functions_and_Modules-1.pdf
ProgFund_Lecture_4_Functions_and_Modules-1.pdflailoesakhan
2 views43 Folien
Building Services With gRPC, Docker and Go von
Building Services With gRPC, Docker and GoBuilding Services With gRPC, Docker and Go
Building Services With gRPC, Docker and GoMartin Kess
1.4K views61 Folien
python-online&offline-training-in-kphb-hyderabad (1) (1).pdf von
python-online&offline-training-in-kphb-hyderabad (1) (1).pdfpython-online&offline-training-in-kphb-hyderabad (1) (1).pdf
python-online&offline-training-in-kphb-hyderabad (1) (1).pdfKosmikTech1
33 views208 Folien
Python fundamentals von
Python fundamentalsPython fundamentals
Python fundamentalsnatnaelmamuye
92 views69 Folien
Python component in mule von
Python component in mulePython component in mule
Python component in muleRamakrishna kapa
950 views13 Folien

Más contenido relacionado

Similar a OpenAI API crash course

Python for scientific computing von
Python for scientific computingPython for scientific computing
Python for scientific computingGo Asgard
1.3K views20 Folien
Be The API - VMware UserCon 2016 von
Be The API - VMware UserCon 2016Be The API - VMware UserCon 2016
Be The API - VMware UserCon 2016Matthew Broberg
321 views60 Folien
Introduction to Basics of Python von
Introduction to Basics of PythonIntroduction to Basics of Python
Introduction to Basics of PythonElewayte
131 views14 Folien
Pemrograman Python untuk Pemula von
Pemrograman Python untuk PemulaPemrograman Python untuk Pemula
Pemrograman Python untuk PemulaOon Arfiandwi
8.6K views57 Folien
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo... von
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB
267 views38 Folien
How to Build a Site Using Nick von
How to Build a Site Using NickHow to Build a Site Using Nick
How to Build a Site Using NickRob Gietema
12 views102 Folien

Similar a OpenAI API crash course(20)

Python for scientific computing von Go Asgard
Python for scientific computingPython for scientific computing
Python for scientific computing
Go Asgard1.3K views
Introduction to Basics of Python von Elewayte
Introduction to Basics of PythonIntroduction to Basics of Python
Introduction to Basics of Python
Elewayte131 views
Pemrograman Python untuk Pemula von Oon Arfiandwi
Pemrograman Python untuk PemulaPemrograman Python untuk Pemula
Pemrograman Python untuk Pemula
Oon Arfiandwi8.6K views
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo... von MongoDB
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB267 views
How to Build a Site Using Nick von Rob Gietema
How to Build a Site Using NickHow to Build a Site Using Nick
How to Build a Site Using Nick
Rob Gietema12 views
Mp24: The Bachelor, a facebook game von Montreal Python
Mp24: The Bachelor, a facebook gameMp24: The Bachelor, a facebook game
Mp24: The Bachelor, a facebook game
Montreal Python1.2K views
class 12th computer science project Employee Management System In Python von AbhishekKumarMorla
 class 12th computer science project Employee Management System In Python class 12th computer science project Employee Management System In Python
class 12th computer science project Employee Management System In Python
AbhishekKumarMorla26.6K views
web programming UNIT VIII python by Bhavsingh Maloth von Bhavsingh Maloth
web programming UNIT VIII python by Bhavsingh Malothweb programming UNIT VIII python by Bhavsingh Maloth
web programming UNIT VIII python by Bhavsingh Maloth
Bhavsingh Maloth863 views
Lecture 0 - CS50's Introduction to Programming with Python.pdf von SrinivasPonugupaty1
Lecture 0 - CS50's Introduction to Programming with Python.pdfLecture 0 - CS50's Introduction to Programming with Python.pdf
Lecture 0 - CS50's Introduction to Programming with Python.pdf
Controller Testing: You're Doing It Wrong von johnnygroundwork
Controller Testing: You're Doing It WrongController Testing: You're Doing It Wrong
Controller Testing: You're Doing It Wrong
johnnygroundwork1K views
Python programming msc(cs) von KALAISELVI P
Python programming msc(cs)Python programming msc(cs)
Python programming msc(cs)
KALAISELVI P440 views
Python: an introduction for PHP webdevelopers von Glenn De Backer
Python: an introduction for PHP webdevelopersPython: an introduction for PHP webdevelopers
Python: an introduction for PHP webdevelopers
Glenn De Backer1.5K views

Más de Dimitrios Platis

Builder pattern in C++.pdf von
Builder pattern in C++.pdfBuilder pattern in C++.pdf
Builder pattern in C++.pdfDimitrios Platis
99 views13 Folien
Interprocess communication with C++.pdf von
Interprocess communication with C++.pdfInterprocess communication with C++.pdf
Interprocess communication with C++.pdfDimitrios Platis
213 views19 Folien
Lambda expressions in C++ von
Lambda expressions in C++Lambda expressions in C++
Lambda expressions in C++Dimitrios Platis
42 views25 Folien
Writing SOLID C++ [gbgcpp meetup @ Zenseact] von
Writing SOLID C++ [gbgcpp meetup @ Zenseact]Writing SOLID C++ [gbgcpp meetup @ Zenseact]
Writing SOLID C++ [gbgcpp meetup @ Zenseact]Dimitrios Platis
143 views37 Folien
Introduction to CMake von
Introduction to CMakeIntroduction to CMake
Introduction to CMakeDimitrios Platis
144 views31 Folien
Pointer to implementation idiom von
Pointer to implementation idiomPointer to implementation idiom
Pointer to implementation idiomDimitrios Platis
127 views17 Folien

Más de Dimitrios Platis(10)

Último

AMAZON PRODUCT RESEARCH.pdf von
AMAZON PRODUCT RESEARCH.pdfAMAZON PRODUCT RESEARCH.pdf
AMAZON PRODUCT RESEARCH.pdfJerikkLaureta
15 views13 Folien
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N... von
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...James Anderson
33 views32 Folien
6g - REPORT.pdf von
6g - REPORT.pdf6g - REPORT.pdf
6g - REPORT.pdfLiveplex
9 views23 Folien
Kyo - Functional Scala 2023.pdf von
Kyo - Functional Scala 2023.pdfKyo - Functional Scala 2023.pdf
Kyo - Functional Scala 2023.pdfFlavio W. Brasil
165 views92 Folien
SAP Automation Using Bar Code and FIORI.pdf von
SAP Automation Using Bar Code and FIORI.pdfSAP Automation Using Bar Code and FIORI.pdf
SAP Automation Using Bar Code and FIORI.pdfVirendra Rai, PMP
19 views38 Folien
Lilypad @ Labweek, Istanbul, 2023.pdf von
Lilypad @ Labweek, Istanbul, 2023.pdfLilypad @ Labweek, Istanbul, 2023.pdf
Lilypad @ Labweek, Istanbul, 2023.pdfAlly339821
9 views45 Folien

Último(20)

GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N... von James Anderson
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...
GDG Cloud Southlake 28 Brad Taylor and Shawn Augenstein Old Problems in the N...
James Anderson33 views
6g - REPORT.pdf von Liveplex
6g - REPORT.pdf6g - REPORT.pdf
6g - REPORT.pdf
Liveplex9 views
Lilypad @ Labweek, Istanbul, 2023.pdf von Ally339821
Lilypad @ Labweek, Istanbul, 2023.pdfLilypad @ Labweek, Istanbul, 2023.pdf
Lilypad @ Labweek, Istanbul, 2023.pdf
Ally3398219 views
Empathic Computing: Delivering the Potential of the Metaverse von Mark Billinghurst
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
Mark Billinghurst470 views
Data-centric AI and the convergence of data and model engineering: opportunit... von Paolo Missier
Data-centric AI and the convergence of data and model engineering:opportunit...Data-centric AI and the convergence of data and model engineering:opportunit...
Data-centric AI and the convergence of data and model engineering: opportunit...
Paolo Missier34 views
HTTP headers that make your website go faster - devs.gent November 2023 von Thijs Feryn
HTTP headers that make your website go faster - devs.gent November 2023HTTP headers that make your website go faster - devs.gent November 2023
HTTP headers that make your website go faster - devs.gent November 2023
Thijs Feryn19 views
Business Analyst Series 2023 - Week 3 Session 5 von DianaGray10
Business Analyst Series 2023 -  Week 3 Session 5Business Analyst Series 2023 -  Week 3 Session 5
Business Analyst Series 2023 - Week 3 Session 5
DianaGray10209 views
iSAQB Software Architecture Gathering 2023: How Process Orchestration Increas... von Bernd Ruecker
iSAQB Software Architecture Gathering 2023: How Process Orchestration Increas...iSAQB Software Architecture Gathering 2023: How Process Orchestration Increas...
iSAQB Software Architecture Gathering 2023: How Process Orchestration Increas...
Bernd Ruecker26 views
handbook for web 3 adoption.pdf von Liveplex
handbook for web 3 adoption.pdfhandbook for web 3 adoption.pdf
handbook for web 3 adoption.pdf
Liveplex19 views
Web Dev - 1 PPT.pdf von gdsczhcet
Web Dev - 1 PPT.pdfWeb Dev - 1 PPT.pdf
Web Dev - 1 PPT.pdf
gdsczhcet55 views
DALI Basics Course 2023 von Ivory Egg
DALI Basics Course  2023DALI Basics Course  2023
DALI Basics Course 2023
Ivory Egg14 views

OpenAI API crash course

  • 1. OpenAI API crash course Learn how to build with "ChatGPT" for fun and profit
  • 2. About me Grew up in Rodos, Greece Software Engineering at GU & Chalmers Working with embedded systems Teaching DIT113, DAT265, Thesis supervision C++ for professionals, Coursera Open source projects https://platis.solutions https://github.com/platisd Email: dimitris@platis.solutions
  • 3. platisd/openai-pr-description Automatically generate PR descriptions, focus on why not what Chat completion API phonix Generate subtitles, more accurate than YouTube, LinkedIn etc Speech to text API sycophant Write opinionated articles based on the latest news on a topic Chat completion API, Image generation API https://robots.army
  • 4. About this workshop Explore OpenAI APIs Chat completion Function calling Creating the "perfect" prompt Reducing costs
  • 5. "ChatGPT API" or correctly: OpenAI API OpenAI API is a collection of APIs APIs offer access to various Large Language Models (LLMs) LLM: Program trained to understand human language ChatGPT is a web service using the Chat completion API Uses gpt-3.5-turbo (free tier) or gpt-4.0 (paid tier)
  • 6. OpenAI API endpoints Chat completion Given a series of messages, generate a response Function calling: Choose which function to call Image generation Given a text description generate an image Speech to text Given an audio file and a prompt generate a transcript Fine tuning Train a model using input and output examples
  • 7. ChatGPT aside, do you use any tools based on OpenAI's models?
  • 8. Getting started with OpenAI API Install Python library pip install --user openai Get your API key Login at platform.openai.com Go to API keys Create new secret key (Optional) Create environment variable OPENAI_API_KEY with key
  • 9. Ensure successful installation import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") models = openai.Model.list()["data"] for model in models: print(model["id"]) gpt-4 gpt-4-0314 curie-search-query babbage-search-document text-search-babbage-doc-001 babbage ...
  • 10. Chat completion API Given a series of messages, create a response. Important parameters: model : Which LLM to use, balance cost, speed and performance gpt-3.5-turbo : Cheaper & faster gpt-4.0 : Better performance, larger input, less hallucinations temperature : "Creativity" of the model's response Lower values result in more deterministic responses Allowed value range: [0.0, 2.0] messages : A list of messages that represent the "conversation" Each message needs a role and content properties
  • 11. messages has the conversation for the model the provide a response. messages = [ {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, {"role": "user", "content": "Should I use goto statements?"}, {"role": "assistant", "content": "No bro, that's bad practice duh "}, {"role": "user", "content": user_input}, ] The first 3 elements of the list are the "context" The last is the user's input we want the model to respond to system role: High level instructions for the conversation assistant role: The model's "ideal" (or previous) response user role: The user's input
  • 12. import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") user_input = input("Enter your programing question: ") messages = [ {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, {"role": "user", "content": "Should I use goto statements?"}, {"role": "assistant", "content": "No bro, that's bad practice duh "}, {"role": "user", "content": user_input}, ] response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=messages, temperature=0.6 ) print(response.choices[0].message.content)
  • 13. Enter your programing question: Is OOP good? “ “ OMG, yes! Object-oriented programming is like, the bomb dot com! It helps you organize your code and makes it easier to understand and maintain. Plus, you can create cool objects and stuff. So yeah, OOP is pretty rad! “ “
  • 14. system prompt {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, Changing content to You are a 15 year old programmer wouldn't necessarily work as intended. system prompts that clearly illustrate the context work better: You are programmer and talk like a 15 yo You are programmer and 15 years old Yes, Object-Oriented Programming (OOP) is widely considered to be a good programming paradigm. It promotes code organization, reusability, and modularity. “ “
  • 15. temperature values Higher values = More "creative" responses Lower values = Less varied responses Higher values = Lower probability to follow "instructions" gpt-4.0 nonetheless better at following instructions messages = [ {"role": "system", "content": "You are a helpful assistant"}, { "role": "user", "content": "Create a JSON object using months of the" + " year as keys and days of each month as values", }, ]
  • 16. temperature=0.6 Consistently: { "January": 31, "February": 28, "March": 31, "April": 30, "May": 31, "June": 30, "July": 31, "August": 31, "September": 30, "October": 31, "November": 30, "December": 31 }
  • 17. temperature=1.9 Consistently not what we intended. One possible response: ```json { "January": 31, "February": 28, "March": 31, "April": 30, "May": 31, "June": 30, "July": 31, "August": 31, "September": 30, "October": 31, "November": 30, "December": 31 } ``` Note that February has 28 days by default, in line with the regular Gregorian calendar ordering. Is there anything else I can help you with?
  • 18. Function calling - Let the model choose def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None): response = openai.ChatCompletion.create( model=model, messages=messages, temperature=temp, functions=functions, ) Pass functions to the ChatCompletion API gpt-4 better results than gpt-3.5-turbo with "complex" prompts Keep temperature rather low for higher focus, less hallucinations
  • 19. functions parameter [ { "name": "call_staff", "description": "Call a member of staff to the table", "parameters": {"type": "object", "properties": {}}, } ] Provide a list of functions to the model, it will choose which one to call based on the context and the function descriptions.
  • 20. functions = [ { "name": "place_order", "description": "Place an order for a pizza", "parameters": { "type": "object", "properties": { "name": { "type": "string", "description": "The name of the pizza, e.g. Pepperoni", }, "size": { "type": "string", "enum": ["small", "medium", "large"], "description": "The size of the pizza. Always ask for clarification if not specified.", }, "take_away": { "type": "boolean", "description": "Whether the pizza is taken away. Assume false if not specified.", }, }, "required": ["name", "size", "take_away"], }, }, ]
  • 21. name : The name of the function to call description : A description of the function, helps the model choose parameters/type : Always "object" for now parameters/properties : Empty if no function parameters <param>/type : "string" , "boolean" , "int" <param>/enum : Optional list of allowed values <param>/description : Description of parameter, helps the model parameters/required : List of required parameters
  • 22. system role prompt messages = [ { "role": "system", "content": "Don't make assumptions about what values " + "to put into functions. Ask for clarification if you need to.", }, { "role": "system", "content": "Only use the functions you have been provided with.", }, ] Avoid "hallucinations" and stick to the provided functions. The model will happily make things up!
  • 23. API response { "role": "assistant", "content": null, "function_call": { "name": "place_order", "arguments": "{n"name": "Margherita",n"size": "medium",n"take_away": falsen}" } } If model has decided a function call needs to be made, the response will contain a function_call property and content will be null . arguments is a string, not a JSON object
  • 24. def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None): response = openai.ChatCompletion.create( model=model, messages=messages, temperature=temp, functions=functions, ) message = response.choices[0].message if "function_call" in message: f = globals()[message["function_call"]["name"]] params = json.loads(message["function_call"]["arguments"]) f(**params) else: print(message["content"]) return "function_call" in message
• 25. Remember to append the user's input to the messages list so that the model can use previous input as context.
function_called = False
print("How may I help you?")
while not function_called:
    user_input = input()
    messages.append({"role": "user", "content": user_input})
    function_called = get_chat(messages=messages, functions=functions)
Does not take much code to create a "chatbot" that calls functions. So it shouldn't take that long to develop it, right?
• 26.
>> How may I help you?
<< I'd like to order some pizza please
>> Of course, I'd be happy to assist you with that. Could you please specify the name of the pizza you'd like to order and the size you prefer? Also, would this be for take away or are you dining in?
<< I'd like a large one
>> Sure, could you please specify the type of pizza you would like to order?
<< A Margherita please
>> Placing order for Margherita pizza, large to eat in
• 27. Conclusion
Chat completions API for summarizing, extending etc
More extensive than what we covered here
Can tightly integrate it with your system via function calling
gpt-3.5-turbo is often good enough and faster but can fall short
Use the Python library to get started quickly
Build your own wrapper around it if needed
Use OpenAI API Playground to experiment
Full "pizza bot" example: plat.is/openai-pizza
• 28. Prompting
The model is a black box, we cannot be certain how it will respond
In practice the previous, simplified, approach won't be enough
Prompting is evolving into its own "craft"
Hype around "prompt engineering" by "influencers"
Approach iteratively, follow best practices, it's not magic
• 29. Be specific - Outcome
Before: Write a program that receives a video file and turns it into a GIF.
After: Write a program in Python that receives a video file as a command line argument. Use the imageio library to turn it into a GIF.
• 30. Be specific - Length, format and style
Before: Summarize the following text: {text}
After: Summarize the following text in 3 sentences. Use your own words and do not plagiarize: {text}
• 31. Be specific - Delimit the input
Before: Summarize the following text: {text}
After: Summarize the following text: ```{text}```
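Delimiting can be done in code when building the prompt. A minimal sketch, with a hypothetical helper name (the delimiter string is assembled programmatically only to keep this snippet self-contained):

```python
DELIM = "`" * 3  # triple backticks as the input delimiter

def summarize_prompt(text):
    # Wrap untrusted input in delimiters so the model can tell the
    # instruction apart from the text it should operate on.
    return f"Summarize the following text:\n{DELIM}{text}{DELIM}"
```

Delimiters also reduce the risk of instructions hidden inside the input being treated as part of the prompt.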
• 32. Be specific - Output format
Before: Summarize the following text: {text}
After: Summarize the following text as a JSON object where the key is `summary` and the value is the summary: ```{text}```
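Asking for JSON makes the reply machine-readable, but the model does not guarantee valid JSON, so parse defensively. A sketch with an illustrative reply string:

```python
import json

# Example reply from a model that was asked for {"summary": "..."}
reply = '{"summary": "LLMs can condense long documents into a few sentences."}'

try:
    summary = json.loads(reply)["summary"]
except (json.JSONDecodeError, KeyError):
    summary = None  # e.g. re-prompt the model with stricter instructions
```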
• 33. Nudge the model
Given the diff of the pull request, focus on why the change is needed.
`git diff`: ```{diff}```
This PR is needed because it will
• 34. Give examples - Start simple
In the following police report, provide a list of the names of the people arrested.
Names:
• 35. Give examples - Simple didn't work? Provide more!
Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding. Witnesses including Robert Johnson and Mary Young saw the incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:
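A few-shot prompt like the one above can be assembled programmatically. A sketch; the helper name is made up for illustration:

```python
# (report, expected answer) pairs shown to the model as worked examples
EXAMPLES = [
    ("Yesterday John Doe along with his friend Jane Doe were arrested for "
     "trespassing. Officer Barbrady was the arresting officer.",
     "John Doe, Jane Doe"),
    ("This morning Bob Smith was arrested for speeding. Witnesses including "
     "Robert Johnson and Mary Young saw the incident.",
     "Bob Smith"),
]

def few_shot_prompt(report):
    parts = ["Provide a list of the names of the people arrested."]
    for i, (example, names) in enumerate(EXAMPLES, start=1):
        parts.append(f"Report {i}: {example}")
        parts.append(f"Names {i}: {names}")
    n = len(EXAMPLES) + 1
    # Leave the final "Names N:" empty so the model completes it
    parts.append(f"Report {n}: {report}")
    parts.append(f"Names {n}:")
    return "\n".join(parts)
```

Keeping the examples in a list makes it easy to add more when the simple prompt falls short.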
• 36. Provide steps for complex tasks
Write a program that receives a video file and turns it into a GIF.
1. Use Python
2. Use the imageio library
3. Read the command line argument `--video`
4. Check if the file exists
5. Read the file
6. Feed the file to `imageio`
7. Parse the `--output` command line argument
8. Write the GIF to the provided output path
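For reference, a sketch of the program those steps describe. The imageio calls are assumptions about its v2 API and should be checked against imageio's documentation:

```python
import argparse
from pathlib import Path

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Turn a video into a GIF")
    parser.add_argument("--video", required=True)   # step 3
    parser.add_argument("--output", required=True)  # step 7
    return parser.parse_args(argv)

def video_to_gif(video, output):
    # Step 2; imported lazily so the rest of this sketch has no dependencies
    import imageio.v2 as imageio

    if not Path(video).exists():  # step 4
        raise FileNotFoundError(video)
    frames = imageio.mimread(video)  # steps 5-6: read all frames
    imageio.mimsave(output, frames)  # step 8: write the GIF
```

Run it with e.g. `video_to_gif(*vars(parse_args()).values())` from a main entry point, per steps 1 and 3.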
• 37. Break down complex prompts and chain them
Before: Write a short poem inspired by the characters in the following text and generate 3 keywords for the poem: {text}
After (first prompt): Write a short poem inspired by the characters in the following text: {text}
After (second prompt): Generate three keywords for the following poem: {poem}
• 38. Break down large prompts and chain them
Before:
Summarize the following texts into a single concise text:
Text 1: {potentially_long_text1}
Text 2: {potentially_long_text2}
Text 3: {potentially_long_text3}
After (one prompt per text):
Summarize the following text: {potentially_long_text1}
Summarize the following text: {potentially_long_text2}
Summarize the following text: {potentially_long_text3}
After (final prompt):
Summarize the following texts into a single concise text: {summary1} {summary2} {summary3}
• 39. Conclusion
Don't believe the hype, ignore the influencers
Prompting is an iterative process, sometimes slow
Be specific, provide examples, break down complex prompts etc
Your prompts may need to be modified once you switch models
gpt-3.5-turbo to gpt-4 is not always "backwards compatible"
ChatGPT Prompt Engineering for Developers by deeplearning.ai
Best practices for prompt engineering with OpenAI API
• 40. Reducing costs
Using the OpenAI API is not free
Costs are based on the number of tokens used
1 token ≈ 4 characters in English (on average)
Using other languages is more expensive
Costs pile up during development but explode in production
Use the cheapest model that works for your use case
Start with gpt-3.5-turbo and see if you can go even cheaper
Speech-to-text API and Image generation API are expensive
Investigate whether you can run Whisper locally
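The ~4 characters per token rule of thumb gives a quick back-of-the-envelope cost estimate. A sketch; the per-1K-token prices below are assumed placeholders, not real rates, so check OpenAI's current pricing page:

```python
# Assumed USD rates per 1K tokens -- placeholders for illustration only
PRICE_PER_1K_TOKENS = {"gpt-3.5-turbo": 0.002, "gpt-4": 0.06}

def estimate_cost(prompt, model="gpt-3.5-turbo"):
    tokens = len(prompt) / 4  # crude average for English text
    return tokens / 1000 * PRICE_PER_1K_TOKENS[model]
```

For exact counts, tokenize with OpenAI's tiktoken library instead of dividing by four.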
• 41. Fine-tuning
Fine-tuning is a way to "teach" the model to react to specific input
If your prompts contain a lot of examples, it might make sense
Fine-tuning is expensive, but so are large prompts
You pay extra both for fine-tuning and for calling the fine-tuned model
Don't fine-tune until you get really good results with prompting
• 42. Takeaways
LLMs will change the world for the better (not too dramatically)
Learning how to use them, not only as a developer, will be a must
Incorporating them in systems can provide additional value
Sentiment analysis, summarizing, transforming etc
Many more (fun) challenges when integrating an LLM in a system
Moderation, evaluation, reliability, ethics etc
Prompt engineering is an iterative process, potentially slow