A brief intro highlighting what you'll learn.
The completions API allows users to type a prompt, then have OpenAI respond with generated text. You can think of this like the ChatGPT service you already know and love.
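As a rough sketch of what a completions call looks like with the official Python SDK (assuming the OPENAI_API_KEY environment variable is set; the model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a single prompt and print the generated text
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # example completions model
    prompt="Write a one-sentence welcome message for a cooking app.",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```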
This service is similar to the standard completions API, but with a minor twist. The chat completions API allows GPT to maintain a conversation and refer back to previous messages that have been sent (unlike the completions API, which handles each prompt in isolation).
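One way to see the difference, assuming the same SDK setup as above: the full message history is resent on every call, which is how the model can refer back to earlier turns.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is Priya. What's a good beginner houseplant?"},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

# Keep the assistant's answer in the history so the next turn has context
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Thanks! And what was my name again?"})

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)  # the model can recall the name from earlier in the history
```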
As the name suggests, a custom GPT allows you to create a bespoke version of ChatGPT that can be tailored to your specific app or business needs.
With custom GPTs, the responses your end-users receive will be tailored to a specific task or knowledge set.
Users can upload an image to OpenAI and have the model review the contents of the file. Once it understands the image, it can provide a response based on what it sees in the photo.
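A hedged sketch of an image-understanding request, assuming a vision-capable model and a placeholder image URL:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```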
Train a custom GPT model on your own unique dataset. Have it refer to the pre-determined knowledge you provide.
Real-world example: Upload a PDF that outlines best practices for trading stocks. Your assistant will take in this PDF, then use it as a reference point when responding to user queries.
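Custom GPTs and their knowledge files are configured in the ChatGPT builder rather than through code, but you can approximate the same pattern programmatically. A rough sketch, assuming a local file named trading_best_practices.pdf and the third-party pypdf library:

```python
from openai import OpenAI
from pypdf import PdfReader  # third-party library for extracting PDF text

client = OpenAI()

# Pull the text out of the PDF to use as the assistant's knowledge base
reader = PdfReader("trading_best_practices.pdf")
knowledge = "\n".join(page.extract_text() or "" for page in reader.pages)

# Instruct the model to answer only from the provided document
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the reference document below.\n\n" + knowledge},
        {"role": "user", "content": "What does the document say about managing risk?"},
    ],
)
print(response.choices[0].message.content)
```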
To put it simply, this API lets you send audio files to OpenAI and get back a text transcription.
Translate audio recorded in a foreign language into English text.
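Both endpoints live under client.audio in the Python SDK; the file names below are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Transcribe an audio file into text (in the same language it was recorded in)
with open("meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)

# Translate foreign-language audio directly into English text
with open("interview_french.mp3", "rb") as audio:
    translation = client.audio.translations.create(model="whisper-1", file=audio)
print(translation.text)
```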
This service allows you to transform written text into spoken audio.
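A small sketch of a text-to-speech call; the model, voice, and output file name are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Convert written text into spoken audio and save it as an MP3
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the built-in voices
    input="Welcome to the course! Let's turn text into audio.",
)
with open("welcome.mp3", "wb") as f:
    f.write(speech.content)
```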
Allows you to turn written user prompts into photo-realistic images.
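For example, a single image-generation request with DALL-E 3 (prompt and size are placeholders):

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic plate of spaghetti carbonara on a rustic wooden table",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # temporary URL pointing to the generated image
```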
Inpainting allows you to modify an existing image using the DALL-E 2 model. The image is altered based on a prompt you provide.
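A sketch of an inpainting (image edit) request, assuming you already have a source image and a mask PNG whose transparent pixels mark the area to repaint:

```python
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="dall-e-2",
    image=open("living_room.png", "rb"),
    mask=open("sofa_mask.png", "rb"),  # transparent areas are the region to be repainted
    prompt="A bright red sofa in a modern living room",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```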
The image variations API allows you to upload an image, then have OpenAI generate an alternative version. The model decides for itself how the new image differs from the original.
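A minimal variations request, assuming a local PNG to start from:

```python
from openai import OpenAI

client = OpenAI()

result = client.images.create_variation(
    model="dall-e-2",
    image=open("original_logo.png", "rb"),
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the newly generated variation
```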
In this video, you’ll learn how to build an end-to-end meeting summary tool. This will start by transcribing an audio recording of a meeting.
Once OpenAI generates the text transcription, you’ll pass it on to the completions API, which will summarise the text into actionable bullet points.
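The finished tool boils down to two calls chained together; a rough sketch (the file name, models, and prompt wording are placeholders, and the summary step is shown here with the chat completions endpoint):

```python
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the meeting recording
with open("team_meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Step 2: summarise the transcript into actionable bullet points
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarise meeting transcripts into concise, actionable bullet points."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)
```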
Using OpenAI’s speech-to-text model, you’ll transcribe a TED talk that was delivered in German.
Once this audio file is turned into text, you’ll then use the text-to-speech API to generate a new audio file that’s spoken in English.
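One way to wire this up is to use the translations endpoint, which returns English text directly, and then feed that text to the text-to-speech API; the file names and voice are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Step 1: translate the German talk straight into English text
with open("ted_talk_german.mp3", "rb") as audio:
    english = client.audio.translations.create(model="whisper-1", file=audio)

# Step 2: read the English text back out as speech
# (very long transcripts may need to be split into chunks before this step)
speech = client.audio.speech.create(model="tts-1", voice="nova", input=english.text)
with open("ted_talk_english.mp3", "wb") as f:
    f.write(speech.content)
```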
We’ll build an AI-powered recipe app that allows users to type in a specific food they’d like to make.
We’ll connect DALL-E 3 to generate a photorealistic image of that meal, then have the completions API generate a list of the ingredients, as well as the step-by-step instructions.
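At its core, the app makes two requests per dish; a sketch assuming the user typed "chicken tikka masala" (the dish, models, and prompt wording are placeholders, with the recipe step shown via the chat completions endpoint):

```python
from openai import OpenAI

client = OpenAI()

dish = "chicken tikka masala"  # whatever the user typed in

# Generate a photorealistic image of the dish with DALL-E 3
image = client.images.generate(
    model="dall-e-3",
    prompt=f"A photorealistic photo of {dish}, plated and ready to serve",
    size="1024x1024",
)

# Generate the ingredients list and step-by-step instructions
recipe = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write recipes as an ingredient list followed by numbered steps."},
        {"role": "user", "content": f"Give me a recipe for {dish}."},
    ],
)

print(image.data[0].url)
print(recipe.choices[0].message.content)
```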
Additional insights to conclude this course.