GPT-4, GPT-5 Best Practices

This guide presents techniques and methods for getting better results from GPT models. The tactics described here can often be combined for greater effect, so we encourage you to experiment and find the approaches that work best for you.

Please note that some of the examples shown here currently work only with OpenAI's most capable model, GPT-4, and the upcoming GPT-5. If you don't yet have access to GPT-4, consider joining the GPT-4 waitlist. In general, if a less capable GPT model falls short on a task and a more capable model is available, it's often worth retrying with the more capable model.

Six strategies for getting better results with GPT

Ensure Clarity in Instructions

GPTs can't read your mind. If the generated outputs are too long, ask for brief replies. If the outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you'd prefer. The less GPTs have to guess at what you want, the more likely you are to get it.
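As a minimal sketch of this idea, the helper below assembles a prompt that states the desired length, audience, and output format, and demonstrates that format explicitly. The function name and wording are illustrative, not part of any API; the model call itself is omitted.

```python
def build_clear_prompt(article: str) -> str:
    """Combine explicit instructions, a format example, and the input text."""
    instructions = (
        "Summarize the article below in at most 3 bullet points "
        "for an expert audience. Use this exact format:\n"
        "- <key finding>\n"
        "- <key finding>\n"
        "- <key finding>"
    )
    # Triple-quote delimiters make it unambiguous where the input begins and ends.
    return f'{instructions}\n\nArticle:\n"""\n{article}\n"""'

prompt = build_clear_prompt("GPT models perform better with clear instructions.")
```

The same pattern works for any task: say what you want, show what it should look like, and clearly delimit the input.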

Include Reference Material

GPTs may sometimes produce incorrect or made-up responses, especially when dealing with obscure subjects or when asked for citations and URLs. Similar to how notes can aid a student in a test, giving GPTs reference text can result in more accurate responses and less fabrication.
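One way to supply reference text is to number the passages and instruct the model to answer only from them, with an explicit escape hatch when no answer is present. This is a hedged sketch of prompt construction only (the function and wording are assumptions, not a fixed API):

```python
def build_grounded_prompt(question: str, references: list[str]) -> str:
    """Ask the model to answer only from the supplied reference passages."""
    # Number the passages so the model can cite them.
    ref_block = "\n\n".join(
        f"[{i + 1}] {passage}" for i, passage in enumerate(references)
    )
    return (
        "Answer the question using only the reference passages below. "
        "Cite passages by number, and reply 'I could not find an answer' "
        "if the references do not contain one.\n\n"
        f"References:\n{ref_block}\n\nQuestion: {question}"
    )
```

Giving the model permission to say it doesn't know further reduces the chance of fabricated answers.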

Break Down Complex Tasks

Mirroring the practice in software engineering where complex systems are divided into modular components, GPTs perform better when large tasks are broken down into smaller, simpler ones. Complicated tasks tend to result in more errors compared to simpler ones. Moreover, intricate tasks can often be reframed as a sequence of simpler tasks, using the outputs of earlier tasks to shape the inputs of subsequent tasks.
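The pipeline below sketches this decomposition for long-document summarization: summarize each section separately, then combine the intermediate outputs in a final, simpler step. `call_model` is a placeholder for a real model call and here just echoes its input so the sketch runs.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g. via the OpenAI API).
    return f"<output for: {prompt[:40]}>"

def summarize_long_document(sections: list[str]) -> str:
    # Step 1: summarize each section independently (small, simple subtasks).
    section_summaries = [
        call_model(f"Summarize this section:\n{s}") for s in sections
    ]
    # Step 2: use the earlier outputs as input to a final, simpler task.
    combined = "\n".join(section_summaries)
    return call_model(f"Combine these summaries into one summary:\n{combined}")
```

Each subtask stays well within the model's comfort zone, and errors in one step are easier to spot and correct than errors buried in one monolithic request.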

Allow GPTs to ‘Think’

Just as you might need some time to multiply 17 by 28, GPTs too are more prone to reasoning errors when forced to give instant answers. Prompting GPTs to provide a sequence of reasoning before delivering an answer can help them derive more accurate responses.
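A simple way to do this is to ask for the reasoning first and the answer last, in a fixed format that is easy to parse. This is an illustrative sketch; the exact wording is an assumption, not a prescribed template:

```python
def build_reasoning_prompt(problem: str) -> str:
    """Ask for step-by-step reasoning before the final answer."""
    return (
        "Work through the problem step by step, showing your reasoning. "
        "Only after the reasoning, write a final line in the form "
        "'Answer: <result>'.\n\n"
        f"Problem: {problem}"
    )

prompt = build_reasoning_prompt("What is 17 * 28?")
```

Putting the answer on a known final line also makes it easy to extract programmatically after the model has "thought out loud".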

Leverage External Resources

Counterbalance the limitations of GPTs by incorporating outputs from other tools. For instance, a text retrieval system can inform GPTs about pertinent documents, while a code execution engine can aid GPTs in performing mathematical operations and running code. If a task can be executed more reliably or efficiently by another tool rather than a GPT, delegate it to ensure optimal results.
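For example, rather than asking a GPT to multiply numbers, an application can route arithmetic to code, which computes exactly. Below is a small, self-contained evaluator for basic arithmetic expressions using Python's `ast` module (restricted to the four binary operators, so it never executes arbitrary code):

```python
import ast
import operator

# Arithmetic is a task better delegated to code than to a language model.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression exactly and safely."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

print(safe_eval("17 * 28"))  # 476, computed exactly rather than guessed
```

The same delegation principle applies to retrieval, search, and any other task where a dedicated tool is more reliable than a model.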

Evaluate Changes Methodically

Measuring performance makes improvement easier. Sometimes, a prompt modification might enhance performance in a few isolated instances but reduce overall performance on a broader set of examples. Thus, to ensure a change truly enhances performance, it may be necessary to develop a comprehensive test suite, also referred to as an ‘eval’.
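In its simplest form, an eval is just a loop over question/expected-answer pairs that scores a candidate prompt. The sketch below assumes a placeholder `ask_model` function (stubbed here with fake answers so the example runs); a real eval would call the model and use a larger, representative test set.

```python
def ask_model(prompt: str, question: str) -> str:
    # Placeholder for a real model call; returns canned answers for the demo.
    return "476" if "17" in question else "unknown"

def run_eval(prompt: str, test_cases: list[tuple[str, str]]) -> float:
    """Return the fraction of test cases the prompt answers correctly."""
    correct = sum(
        1 for question, expected in test_cases
        if ask_model(prompt, question) == expected
    )
    return correct / len(test_cases)

cases = [("What is 17 * 28?", "476"), ("Capital of France?", "Paris")]
accuracy = run_eval("Answer concisely.", cases)
```

Comparing this score before and after a prompt change shows whether the change helps overall, not just on a handful of cherry-picked examples.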
