Prompt Engineering with OpenAI and ChatGPT: How to Give Clear and Effective Instructions


If you’re just getting started with the OpenAI API, we recommend reading the Introduction and Quickstart tutorials first to familiarize yourself with the basics. In this article, we will explore best practices for prompt engineering to unlock the full potential of GPT-3 and Codex models by providing clear and effective instructions.


How Does Prompt Engineering Work?

Prompt engineering involves structuring your instructions and providing context to guide the models’ output. Different tasks may require specific prompt formats to achieve optimal results. While the following are reliable prompt formats, feel free to experiment and adapt them to your specific needs.


Prompt Engineering: Rules of Thumb and Examples:

Use the Latest Model:

To ensure you have access to the most capable models, it’s recommended to use the latest versions available. As of November 2022, the recommended models were “text-davinci-003” for text generation and “code-davinci-002” for code generation. Since then, GPT-3.5 Turbo and GPT-4 have become available, and GPT-4 Turbo is on the way. For example, you can prompt the model to generate a short story using the latest text generation model:

Prompt: Generate a creative short story about a mysterious island where time flows backward.
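The prompt above can be sent through the Chat Completions endpoint. The following is a minimal sketch assuming the official `openai` Python SDK (v1.x); the model name is illustrative, so swap in whichever model is newest when you run it.

```python
# Sketch: assemble a Chat Completions request for the story prompt above.
# The model name and temperature are illustrative choices, not recommendations.

def build_story_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the keyword arguments for client.chat.completions.create()."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.9,  # a higher temperature suits creative writing
    }

request = build_story_request(
    "Generate a creative short story about a mysterious island "
    "where time flows backward."
)

# With an API key configured, the actual call would look like:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

Keeping the request as a plain dictionary makes it easy to swap models or tweak parameters without touching the rest of your code.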


Put Instructions at the Beginning and Use Separators:

To provide clear instructions, place them at the beginning of the prompt and use separators like ### or “”” to separate the instructions from the context. This helps the model understand the task better. For example, when instructing the model to summarize a given text, you can use the following prompt:

Prompt: Summarize the text below as a bullet point list of the most important points.

Text: “”” Once upon a time in a faraway land, there was a brave young knight named Sir Arthur. He embarked on a quest to rescue the captured princess from the clutches of an evil sorcerer. “””
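A small helper can enforce this structure for you, placing the instruction first and fencing the context with triple-quote separators. This is a minimal sketch; the function name is our own.

```python
# Sketch: build a prompt with the instruction at the top and the context
# text fenced off by triple-quote separators, as shown above.

def make_prompt(instruction: str, text: str) -> str:
    """Put the instruction first, then separate the context with triple quotes."""
    return f'{instruction}\n\nText: """\n{text}\n"""'

prompt = make_prompt(
    "Summarize the text below as a bullet point list of the most important points.",
    "Once upon a time in a faraway land, there was a brave young knight "
    "named Sir Arthur.",
)
print(prompt)
```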

Be Specific and Detailed about the Desired Outcome:

When providing instructions, be specific, descriptive, and detailed about the desired outcome, including context, length, format, style, and other relevant aspects. This clarity helps the model understand your requirements. For example, if you want the model to write a poem about OpenAI in the style of a famous poet, you can provide the following prompt:

Prompt: Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text-to-image ML model) in the style of {famous poet}.


Articulate Desired Output Format through Examples:

To guide the model, articulate the desired output format through examples. Showing specific format requirements helps the model generate output that aligns with your expectations. For instance, if you want the model to extract important entities from a given text, you can provide the following prompt:

Prompt: Extract the important entities mentioned in the text below. First, extract all company names, then extract all people names, and finally, extract specific topics that fit the content and extract general overarching themes.

Desired format:
Company names: Apple, Google, Microsoft
People names: John Smith, Jane Doe
Specific topics: artificial intelligence, machine learning
General themes: innovation, technology

Text: “In the tech industry, Apple, Google, and Microsoft are the leading companies. John Smith and Jane Doe are renowned experts in the field of artificial intelligence and machine learning. The key themes of this discussion are innovation and the future of technology.”
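A practical benefit of specifying the output format is that the reply becomes machine-parseable. The following sketch assumes the model echoes the labels exactly as specified in the desired format above.

```python
# Sketch: parse a reply that follows the "Label: value, value" format
# requested above into a dictionary of lists.

def parse_entities(reply: str) -> dict:
    """Split each 'Label: a, b, c' line into {label: [a, b, c]}."""
    entities = {}
    for line in reply.splitlines():
        if ":" not in line:
            continue
        label, values = line.split(":", 1)
        entities[label.strip()] = [v.strip() for v in values.split(",") if v.strip()]
    return entities

# Example reply in the desired format:
reply = (
    "Company names: Apple, Google, Microsoft\n"
    "People names: John Smith, Jane Doe\n"
    "Specific topics: artificial intelligence, machine learning\n"
    "General themes: innovation, technology"
)
print(parse_entities(reply))
```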


Start with Zero-Shot, Then Few-Shot, and Consider Fine-Tuning:

When approaching a task, it’s best to start with zero-shot learning, where you prompt the model without any specific training examples. If zero-shot learning doesn’t produce satisfactory results, you can switch to few-shot learning by providing a few examples relevant to the task. If neither approach yields the desired outcome, you can consider fine-tuning the model. For example:

Zero-shot:

  • Prompt: Extract keywords from the below text.
  • Text: “Artificial intelligence is revolutionizing various industries, including healthcare, finance, and transportation.”
  • Keywords: ……

Few-shot:

  • Prompt: Extract keywords from the corresponding texts below.
  • Text 1: “Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.”
    Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
  • Text 2: “OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.”
    Keywords 2: OpenAI, language models, text processing, API
  • Text 3: {text}
    Keywords 3: ….
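The few-shot pattern above can be assembled programmatically from (text, keywords) example pairs. This is a minimal sketch; the function name is our own, and the example pair is taken from this section.

```python
# Sketch: build a few-shot prompt from example pairs, leaving the final
# "Keywords N:" slot open for the model to complete.

def few_shot_prompt(examples: list, new_text: str) -> str:
    """Join numbered Text/Keywords examples, ending with an open slot."""
    parts = ["Extract keywords from the corresponding texts below."]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f'Text {i}: "{text}"\nKeywords {i}: {keywords}')
    n = len(examples) + 1
    parts.append(f'Text {n}: "{new_text}"\nKeywords {n}:')
    return "\n\n".join(parts)

examples = [
    ("Stripe provides APIs that web developers can use to integrate "
     "payment processing into their websites and mobile applications.",
     "Stripe, payment processing, APIs, web developers, websites, "
     "mobile applications"),
]
prompt = few_shot_prompt(
    examples,
    "Artificial intelligence is revolutionizing various industries.",
)
print(prompt)
```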

Fine-tuning:

For guidance on fine-tuning, refer to the fine-tune best practices provided by OpenAI.


Reduce Fluffy and Imprecise Descriptions:

When giving instructions, avoid using fluffy or imprecise descriptions. Instead, provide clear and concise details to guide the model’s output effectively. For example:

  • Less Effective: “The description for this product should be fairly short, a few sentences only, and not too much more.”
  • Better: “Use a 3 to 5 sentence paragraph to describe this product.”

Focus on Positive Instructions:

Rather than emphasizing what not to do, focus on providing clear guidance on what to do instead. This helps the model understand the desired approach. For example:

  • Less Effective: “The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.”
  • Better: “The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution while refraining from asking any questions related to personally identifiable information (PII). Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq.”


Code Generation Specific – Use “Leading Words”:

When instructing the model for code generation, use “leading words” to nudge the model toward a particular pattern or programming language. For example:

Less Effective:
Write a simple Python function that
1. Ask me for a number in miles
2. Convert miles to kilometers

Better:
Write a simple Python function that
1. Ask me for a number in miles
2. Convert miles to kilometers
import

In the better example, adding “import” as a leading word hints to the model that it should start writing in Python. Similarly, “SELECT” is a good hint for the start of a SQL statement.
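Appending a leading word is just string concatenation, so it is easy to apply consistently. A minimal sketch (the helper name is our own):

```python
# Sketch: append a "leading word" so the completion starts in the intended
# language. "import" nudges toward Python; "SELECT" toward SQL.

def with_leading_word(prompt: str, leading_word: str) -> str:
    """Append the leading word on its own line at the end of the prompt."""
    return f"{prompt}\n{leading_word}"

python_prompt = with_leading_word(
    "Write a simple Python function that\n"
    "1. Ask me for a number in miles\n"
    "2. Convert miles to kilometers",
    "import",
)
sql_prompt = with_leading_word(
    "Write a query that lists all customers by signup date",
    "SELECT",
)
print(python_prompt)
```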

Parameters: When working with the OpenAI API, you can adjust certain parameters to control the model’s output:

  • Model: Choose the appropriate model based on performance, cost, and latency considerations. Higher-performance models tend to be more expensive and may have higher latency.
  • Temperature: Adjust the temperature parameter to influence the randomness of the model’s output. Higher temperatures produce more diverse (yet potentially less focused) responses, while lower temperatures generate more deterministic and focused responses.
  • Max_tokens (maximum length): Set a limit on the maximum number of tokens the model can generate. This helps control the length of the output and prevent excessively long responses.
  • Stop (stop sequences): Define specific stop sequences that, when generated, will cause the text generation to stop. This allows you to control the completion of the generated text.
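The four parameters above map directly onto keyword arguments of a completion request. The following is a sketch assuming the `openai` Python SDK (v1.x); the values shown are illustrative, not recommendations.

```python
# Sketch: the model, temperature, max_tokens, and stop parameters described
# above, assembled as keyword arguments for a completion request.

def build_request(prompt: str) -> dict:
    """Collect request parameters; values here are illustrative."""
    return {
        "model": "gpt-3.5-turbo",  # chosen for performance/cost/latency
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,        # low = more deterministic, focused output
        "max_tokens": 256,         # cap the length of the completion
        "stop": ["\n\n"],          # halt generation at a blank line
    }

request = build_request("Summarize the key points of prompt engineering.")
# With the SDK configured: client.chat.completions.create(**request)
```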


Conclusion:

By following these best practices for prompt engineering, you can enhance your interactions with OpenAI’s GPT-3 and Codex models. Clear and effective instructions, along with appropriate formatting and guidance, empower the models to produce more accurate and desired outputs. Experiment with different prompts, formats, and parameters to optimize your results and unlock the full potential of the OpenAI API.
