
Big News! OpenAI Officially Releases a Best Practices Guide for GPT

WBOY
2023-06-08 12:03:18

OpenAI has officially released a best practices guide for GPT prompting. It covers six optimization strategies, and each strategy comes with concrete tactics and examples.

Strategy 1: Write Clear Instructions

The less GPT has to guess about what you want, the more likely you are to get it.

For example, if the output is too simplistic, ask for expert-level writing.

The guide also lists some specific tactics (a short sketch combining several of them follows the list):

  • Include details in your query to get more relevant answers
  • Ask the model to adopt a persona
  • Use delimiters to clearly indicate different parts of the input
  • Specify the steps required to complete the task
  • Provide examples
  • Specify the desired output length
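Several of these tactics can be combined in a single request. Below is a minimal sketch, assuming the pre-1.0 openai Python package (openai.ChatCompletion.create) and a hypothetical `article` variable; it sets a persona via the system message, uses triple-quote delimiters, and specifies the desired output length:

```python
import openai  # assumes the pre-1.0 openai package (openai.ChatCompletion.create)

article = "...text to be summarized..."  # hypothetical placeholder input

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Ask the model to adopt a persona.
        {"role": "system",
         "content": "You are a senior technical editor who writes concise, expert-level prose."},
        # Use delimiters (triple quotes) to separate instructions from data,
        # and specify the desired output length.
        {"role": "user",
         "content": f'Summarize the article delimited by triple quotes in exactly 3 bullet points.\n\n"""{article}"""'},
    ],
)
print(response["choices"][0]["message"]["content"])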

Strategy 2: Provide Reference Text

GPT can confidently fabricate answers, especially when asked about esoteric topics or for quotes and URLs. Just as a sheet of notes can help a student do better on an exam, providing GPT with reference text helps it answer with less fabrication.

  • Instruct the model to answer using the reference text
  • Instruct the model to use quotes from the reference text when answering (a combined sketch follows this list)
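A minimal sketch of both tactics, again assuming the pre-1.0 openai Python package; the `reference` passage and `question` are hypothetical placeholders, and the model is told to answer only from the reference and to quote it:

```python
import openai

reference = """...trusted source text..."""  # hypothetical reference passage
question = "What does the passage say about rate limits?"  # hypothetical question

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "Answer using only the provided reference text, and quote the "
            "relevant passage. If the answer is not in the text, reply "
            "'I could not find an answer.'"
        )},
        {"role": "user",
         "content": f'Reference text:\n"""{reference}"""\n\nQuestion: {question}'},
    ],
)
print(response["choices"][0]["message"]["content"])
```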

Strategy 3: Split complex tasks into simpler sub-tasks

Just as good software engineering practice splits a complex system into modular components, decomposing the tasks submitted to GPT into modular subtasks is also an effective approach.

Complex tasks tend to have higher error rates than simple tasks.

Additionally, complex tasks can often be redefined as workflows of simpler tasks, where the output of earlier tasks is used to build the input of subsequent tasks.

  • Use intent classification to identify the instructions most relevant to user queries
  • For conversational applications that require long conversations, summarize or filter the previous dialogue
  • Summarize long documents piecewise and recursively build a full summary (a sketch follows this list)
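The recursive-summarization idea can be sketched as follows; the `summarize` helper, the character-based `chunk_size`, and the model choice are assumptions for illustration (a production version would split on token counts and section boundaries):

```python
import openai

def summarize(text, instruction="Summarize the following text concisely."):
    """One summarization call; a helper introduced here for illustration."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f'{instruction}\n\n"""{text}"""'}],
    )
    return response["choices"][0]["message"]["content"]

def summarize_long_document(document, chunk_size=8000):
    # Split the document into sections small enough for the context window,
    # summarize each section, then summarize the concatenated section summaries.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    section_summaries = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(section_summaries),
                     "Combine these section summaries into one overall summary.")
```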

Strategy 4: Give GPT time to “think”

Just as you might not be able to multiply 17 by 28 in your head instantly but can work it out given a moment, GPT makes more reasoning errors when forced to answer immediately instead of taking time to work toward an answer. Asking for a chain of reasoning before the final answer helps GPT arrive at correct answers more reliably.

  • Instruct the model to work out its own solution before jumping to conclusions (see the sketch after this list)
  • Use internal monologue or a series of queries to hide the model’s reasoning process
  • Ask the model whether it missed anything on previous passes
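A sketch combining the first two tactics, assuming the pre-1.0 openai Python package: the model is asked to work out its own solution inside delimited tags (its inner monologue), and only the final verdict after the tags is shown to the user. The problem and student answer are hypothetical:

```python
import openai

problem = "A shirt costs $25 after a 20% discount. What was the original price?"  # hypothetical problem
student_answer = "The original price was $30."  # hypothetical student solution

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "First work out your own solution to the problem, step by step, "
            "inside <reasoning> tags. Then compare it to the student's answer and, "
            "after the tags, state only 'correct' or 'incorrect'."
        )},
        {"role": "user",
         "content": f"Problem: {problem}\nStudent's answer: {student_answer}"},
    ],
)
full_reply = response["choices"][0]["message"]["content"]
# Inner monologue: strip the delimited reasoning so the user only sees the verdict.
verdict = full_reply.split("</reasoning>")[-1].strip()
print(verdict)
```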

Strategy 5: Use external tools

Compensate for GPT's weaknesses by feeding it the output of other tools.

For example, a text retrieval system can tell GPT about relevant documents, and a code execution engine can help GPT do math and run code. If a task can be done more reliably or efficiently by a tool than by GPT, offload it to that tool to get the best of both worlds.

  • Use embedding-based search for efficient knowledge retrieval (a retrieval sketch follows below)
  • Use code execution to perform more accurate calculations or to call external APIs
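A sketch of embedding-based retrieval, assuming the pre-1.0 openai Python package and its text-embedding-ada-002 model; the two-document corpus is hypothetical, and the retrieved text would then be passed to GPT as reference text (Strategy 2):

```python
import numpy as np
import openai

documents = ["GPT-4 was released in March 2023.",
             "Embeddings map text to vectors for similarity search."]  # hypothetical corpus

def embed(text):
    # text-embedding-ada-002 was the recommended embedding model at the time of writing.
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

doc_vectors = [embed(d) for d in documents]

def retrieve(query, k=1):
    q = embed(query)
    # Cosine similarity between the query and every document.
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

# The top result can then be supplied to GPT as reference text.
print(retrieve("When did GPT-4 come out?"))
```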

Strategy 6: Systematically Test Changes

Improving performance is easier when it can be measured. In some cases a modification to a prompt improves performance on a few isolated examples but hurts overall performance on a more representative set. To ensure that a change is a net positive, it may be necessary to define a comprehensive test suite for evaluation.
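A minimal sketch of such a test suite: two prompt variants are scored against the same hypothetical test cases with a crude substring check, so a prompt change can be judged on more than a single isolated example:

```python
import openai

# Hypothetical test suite: (input question, expected substring in the answer).
test_cases = [
    ("What is 17 * 28?", "476"),
    ("What is the capital of France?", "Paris"),
]

def run_prompt(system_prompt, question):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": question}],
        temperature=0,  # more repeatable output makes evaluation easier
    )
    return response["choices"][0]["message"]["content"]

def evaluate(system_prompt):
    passed = sum(expected in run_prompt(system_prompt, q) for q, expected in test_cases)
    return passed / len(test_cases)

# Compare two prompt variants on the same test suite before adopting a change.
print(evaluate("Answer concisely."))
print(evaluate("Think step by step, then answer concisely."))
```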

