GPT Best Practices

This guide shares strategies and techniques for getting better results from GPT models. The methods described can be combined for greater effect, and we encourage you to experiment to find the approaches that work best for you.

Please note that some of the examples in this guide currently work only with our most capable model, GPT-4. If you do not yet have access to GPT-4, consider joining the waitlist. In general, if you find that a GPT model fails at a task, it is often worth retrying with a more capable model when one is available.

Strategies for getting better results

Write clear instructions

GPTs cannot read your mind. If responses are too long, ask for brief replies. If outputs are too simplistic, ask for expert-level writing.

If you have a preferred format, provide an example that demonstrates the desired style. The less GPTs have to guess at what you want, the more likely you are to get it.
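
Below is a minimal sketch of these ideas, assuming the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name, instructions, and the mutex/semaphore style example are illustrative rather than prescriptive.

```python
# Sketch: spell out length, audience, and style instead of leaving GPT to guess.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Explicit instructions about length and audience.
        {
            "role": "system",
            "content": (
                "You are an expert technical writer. Answer in at most three "
                "sentences, aimed at experienced developers."
            ),
        },
        # A worked example that demonstrates the desired style (few-shot).
        {"role": "user", "content": "Explain what a mutex is."},
        {
            "role": "assistant",
            "content": (
                "A mutex is a lock that lets only one thread enter a critical "
                "section at a time. Threads that try to acquire a held mutex "
                "block until it is released."
            ),
        },
        # The actual request, which should come back in the demonstrated style.
        {"role": "user", "content": "Explain what a semaphore is."},
    ],
)
print(response.choices[0].message.content)
```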

Provide reference text

GPTs can confidently invent fictitious answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes helps a student do better on a test, providing reference text to GPTs can help them answer with fewer fabrications.
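
A minimal sketch of prompting with reference text follows, assuming the openai Python package (v1+); the reference passage, question, and fallback instruction are placeholders for your own content.

```python
# Sketch: supply reference text and instruct the model to answer only from it.
from openai import OpenAI

client = OpenAI()

reference = (
    "Photosynthesis converts light energy into chemical energy. In plants it "
    "takes place mainly in the chloroplasts of leaf cells."
)
question = "Where does photosynthesis take place in plants?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the provided reference text. If the answer "
                "cannot be found there, reply 'I could not find an answer.'"
            ),
        },
        {
            "role": "user",
            "content": f"Reference text:\n{reference}\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```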

Split complex tasks into simpler subtasks

When dealing with complex tasks, it is advisable to break them down into simpler subtasks. Just as software engineering follows the practice of decomposing complex systems into modular components, GPTs perform better when presented with simpler tasks.

Complex tasks tend to have higher error rates than simpler ones. Recasting a complex task as a workflow of simpler tasks, in which the outputs of earlier tasks are used to construct the inputs to later ones, improves the overall accuracy of GPTs on the task as a whole.
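
Below is a minimal sketch of chaining subtasks, assuming the openai Python package (v1+); the particular decomposition (extract key points first, then summarize from them) and the sample document are just one illustrative workflow.

```python
# Sketch: chain two simple calls instead of asking for everything at once.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Run a single, focused subtask and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

long_document = (
    "Quarterly report: revenue grew 12% year over year, driven by new "
    "enterprise customers. Support costs rose 8% due to onboarding. "
    "Headcount increased from 40 to 55, mostly in engineering."
)

# Subtask 1: condense the document into key points.
key_points = ask(f"List the key points of the following text:\n\n{long_document}")

# Subtask 2: the output of subtask 1 becomes the input to a simpler follow-up.
summary = ask(
    "Write a one-paragraph executive summary based on these key points:\n\n"
    + key_points
)
print(summary)
```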

Allow GPTs sufficient processing time

Just as you might need a moment to work out the product of 17 and 28, GPTs make more reasoning errors when forced to answer immediately than when given room to work out an answer.

Asking for a chain of reasoning before the final answer lets GPTs reason their way toward correct answers more reliably.
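
A minimal sketch of asking for reasoning before the answer follows, assuming the openai Python package (v1+); the exact wording of the instruction and the arithmetic question are illustrative.

```python
# Sketch: ask for step-by-step reasoning first and the final answer last.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "First reason through the problem step by step. Then give the "
                "final answer on its own line, starting with 'Answer:'."
            ),
        },
        {"role": "user", "content": "What is 17 multiplied by 28?"},
    ],
)
reply = response.choices[0].message.content
print(reply)

# If only the answer is needed downstream, parse out the final line.
final_line = reply.strip().splitlines()[-1]
print(final_line)
```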

Utilize external tools

To overcome the limitations of GPTs, leverage the outputs of other tools to supplement their capabilities. For instance, incorporating a text retrieval system can provide GPTs with relevant documents, while a code execution engine can assist them in performing mathematical calculations and running code.

If a task can be done more reliably or efficiently by an external tool than by a GPT alone, offload it to the tool and you get the best of both worlds.
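
Below is a minimal sketch of one such tool, embeddings-based text retrieval, assuming the openai Python package (v1+); the documents, question, and embedding model name are illustrative, and the "take the single best match" policy is a simplification.

```python
# Sketch: retrieve the most relevant document with embeddings, then let GPT
# answer using that document as reference text.
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free for orders over 50 dollars.",
    "Support is available by email from 9am to 5pm on weekdays.",
]
question = "How long do customers have to return an item?"

def embed(texts):
    """Return one embedding vector per input string."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

doc_vectors = embed(documents)
query_vector = embed([question])[0]

# Pick the document most similar to the question.
scores = [cosine(query_vector, vector) for vector in doc_vectors]
best_doc = documents[scores.index(max(scores))]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": f"Reference text:\n{best_doc}\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```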

Systematically test modifications

Measuring performance is crucial for improving it effectively. Sometimes, altering a prompt may yield better results on a few specific examples, but it could lead to worse overall performance when tested against a more representative set of examples.

Therefore, to ensure that a modification has a net positive impact on performance, it is necessary to define a comprehensive test suite, also known as an “eval,” that allows for systematic evaluation of the changes.
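
A minimal sketch of a small eval loop follows, assuming the openai Python package (v1+); the test cases and the substring-match grading rule are illustrative, and a real eval would use a larger, more representative test set.

```python
# Sketch: score a prompt variant against a fixed set of test cases.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "Answer with a single word."  # the prompt variant under test

test_cases = [
    {"question": "What is the capital of France?", "expected": "Paris"},
    {"question": "What is the chemical symbol for gold?", "expected": "Au"},
    {"question": "What is the largest planet in the solar system?", "expected": "Jupiter"},
]

def run_eval(system_prompt: str, cases: list) -> float:
    """Return the fraction of cases the model answers correctly."""
    correct = 0
    for case in cases:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": case["question"]},
            ],
        )
        answer = response.choices[0].message.content.strip()
        # Grade by substring match; swap in whatever grading suits your task.
        if case["expected"].lower() in answer.lower():
            correct += 1
    return correct / len(cases)

print(f"Accuracy: {run_eval(SYSTEM_PROMPT, test_cases):.0%}")
```

Running the same loop over two candidate prompts and comparing the resulting scores gives a more trustworthy signal than eyeballing a handful of responses.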

Conclusion

For more inspiration, visit the OpenAI Cookbook, which contains example code as well as links to third-party resources. It is a valuable reference for exploring what you can build with OpenAI's models.

