GPT-5 excels at following precise directions but struggles with ambiguity. When setting up your coding environment (.cursor/rules or AGENTS.md), avoid vague or contradictory guidelines.
Key practice: Review your configuration files to eliminate conflicting instructions that might confuse the model.
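As an illustration of what to look for (the rule wording below is invented, not drawn from any real project), a pair of instructions like the first two pull the model in opposite directions, while the rewrite beneath them gives one unambiguous rule:

```text
# Contradictory pair — the model cannot satisfy both:
- Always document every function with a comment block.
- Keep files comment-free; code should speak for itself.

# Resolved — a single, scoped rule:
- Write self-documenting code; comment only non-obvious logic.
```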
GPT-5 automatically applies reasoning to problems. Use high reasoning effort for architecture decisions and complex algorithms. Use medium or low effort for straightforward tasks like formatting or simple refactoring.
Watch for: If the model overthinks simple tasks, be more explicit in your instructions or dial down the reasoning effort.
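One lightweight way to apply this is to classify each task before sending the request and set the effort level accordingly. The sketch below is a minimal illustration: the `choose_effort` helper and the task categories are assumptions, not an official taxonomy, and the commented call shows where the value would plug into the OpenAI Responses API's `reasoning.effort` parameter:

```python
# Illustrative mapping from task type to reasoning effort.
# Categories and levels here are assumptions, not an official taxonomy.
EFFORT_BY_TASK = {
    "architecture": "high",
    "algorithm-design": "high",
    "refactor": "medium",
    "formatting": "low",
    "rename": "low",
}

def choose_effort(task_type: str) -> str:
    """Return the effort level for a task, defaulting to medium when unclassified."""
    return EFFORT_BY_TASK.get(task_type, "medium")

# With the OpenAI Responses API, the value would be passed roughly like:
# client.responses.create(
#     model="gpt-5",
#     reasoning={"effort": choose_effort("formatting")},
#     input=prompt,
# )
```

Keeping the mapping explicit also makes it easy to audit later when you notice the model overthinking a task category.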
GPT-5 processes structured context exceptionally well. Use XML-like tags to organize your coding guidelines:
<code_standards>
<core_principles>
- Prioritize modularity and reusability
- Write self-documenting code
</core_principles>
<tech_stack>
- Styling: TailwindCSS
- Framework: Next.js
</tech_stack>
</code_standards>
Previous models responded well to forceful instructions like "ALWAYS gather COMPLETE context before responding!" With GPT-5, this can backfire.
Better approach: Use measured language. The model will naturally be thorough without aggressive prompting.
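For a concrete contrast, here is the forceful wording from above next to one possible measured rewrite (the rewrite is an illustrative suggestion, not an official recommendation):

```text
# Overly forceful — can cause over-gathering with GPT-5:
ALWAYS gather COMPLETE context before responding!

# Measured alternative:
Gather the context you need to answer confidently, then respond.
Avoid re-reading files you have already inspected.
```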
For new applications, guide the model to think strategically before executing:
<planning_phase>
Define success criteria first
Develop evaluation rubric (5-7 categories)
Iterate internally until top marks achieved
Keep rubric internal, show only final result
</planning_phase>
GPT-5 tends toward comprehensiveness. Control this with explicit guidelines:
Tool usage budget: Specify how many tool calls the agent may make
Thoroughness levels: Define when to be exhaustive vs. efficient
Check-in points: Indicate when to confirm with you vs. proceed independently
Parallelization: Specify if tool calls should run simultaneously or sequentially
Example configuration:
<agent_behavior>
- Make reasonable assumptions and document them
- Don't ask for clarification mid-task
- Adjust based on results rather than seek approval
</agent_behavior>
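The budget, thoroughness, and parallelization knobs from the list above can be expressed in the same style. The tag name and numbers here are illustrative, not a fixed schema:

```text
<tool_budget>
- Maximum 10 tool calls per task; stop and summarize if the budget runs out
- Run independent file reads in parallel; apply edits sequentially
- Be exhaustive for security-sensitive code, efficient elsewhere
</tool_budget>
```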