D - Define Multiple Perspectives
- Instead of a single role, use multiple expert viewpoints collaborating (see the code sketch after this list)
 
- Example: "You are three experts: behavioral psychologist, copywriter, and data analyst"
 
- Prevents generic single-role outputs
 
- Creates richer, more nuanced responses
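
A minimal sketch of how this perspective instruction might be assembled as a reusable prompt fragment in Python; the expert roles and wording are illustrative, not a fixed formula:

```python
# Build a multi-perspective role instruction as a plain string.
# The expert roles are illustrative; swap in whichever viewpoints fit the task.
experts = ["behavioral psychologist", "copywriter", "data analyst"]

perspective_block = (
    f"You are {len(experts)} experts collaborating on this task: {', '.join(experts)}. "
    "Have each expert weigh in from their own discipline before writing the final answer."
)

print(perspective_block)
```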
 
E - Establish Success Metrics
- Define specific, measurable outcomes upfront
 
- Instead of "make it good" → "optimize for 40% open rate, 12% CTR, include 3 psychological triggers"
 
- No metrics = no optimization
 
- Gives the AI clear targets to hit (see the sketch after this list)
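
A minimal sketch of turning success criteria into an explicit block of the prompt; the metric names and target values below are illustrative examples, not benchmarks:

```python
# Spell out measurable targets instead of a vague "make it good".
# Metric names and target values are illustrative placeholders.
metrics = {
    "Open rate": "40%",
    "Click-through rate": "12%",
    "Psychological triggers": "at least 3",
}

metrics_block = "Success criteria:\n" + "\n".join(
    f"- {name}: {target}" for name, target in metrics.items()
)

print(metrics_block)
```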
 
P - Provide Context Layers
- Stack multiple context elements: business type, budget, audience, past performance (assembled in the sketch below)
 
- Example: "B2B SaaS, $200/mo product, targeting overworked founders, previous emails got 20% opens"
 
- Context dramatically improves relevance
 
- More detail = more targeted output
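
A minimal sketch of stacking context layers into one labeled block; the details mirror the example above and are purely illustrative:

```python
# Stack business, offer, audience, and performance context in one labeled block.
# The values mirror the example above and are purely illustrative.
context_layers = {
    "Business": "B2B SaaS",
    "Offer": "$200/mo product",
    "Audience": "overworked founders",
    "Past performance": "previous emails averaged 20% open rates",
}

context_block = "Context:\n" + "\n".join(
    f"- {label}: {detail}" for label, detail in context_layers.items()
)

print(context_block)
```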
 
T - Task Breakdown
- Break requests into sequential steps instead of one big ask (see the sketch after this list)
 
- Example: "Step 1: Identify pain points. Step 2: Create hook. Step 3: Build value. Step 4: Soft CTA"
 
- Prevents AI confusion and generic responses
 
- Creates logical flow in output
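
A minimal sketch of numbering the steps so the model works through them in order; the step wording expands the example above and can be adapted per task:

```python
# Turn one big ask into an ordered sequence of smaller steps.
# The steps expand the email example above; adjust them per task.
steps = [
    "Identify the reader's main pain points.",
    "Write a hook that speaks to those pain points.",
    "Build value with concrete benefits or proof.",
    "Close with a soft call to action.",
]

task_block = "Work through these steps in order:\n" + "\n".join(
    f"Step {i}: {step}" for i, step in enumerate(steps, start=1)
)

print(task_block)
```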
 
H - Human Feedback Loop
- Don't accept the first output - build in self-critique (sketched after this list)
 
- Example: "Rate your response 1-10 on clarity, persuasion, and actionability. Improve anything below 8"
 
- Self-critique consistently produces markedly better results
 
- Iterative improvement built into a single prompt
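
A minimal sketch of the self-critique instruction plus one way the five DEPTH pieces might be joined into a single prompt; the angle-bracket placeholders stand in for the fragments built in the earlier sketches:

```python
# Append a self-critique pass, then join all five DEPTH fragments into one prompt.
# The angle-bracket placeholders stand in for the blocks built in the earlier sketches.
critique_block = (
    "Before finalizing, rate your draft 1-10 on clarity, persuasion, and actionability. "
    "Revise anything scoring below 8, then show only the improved version."
)

depth_prompt = "\n\n".join([
    "<perspective block>",  # D - multiple expert viewpoints
    "<metrics block>",      # E - measurable success criteria
    "<context block>",      # P - stacked context layers
    "<task block>",         # T - step-by-step breakdown
    critique_block,         # H - self-critique built into the same prompt
])

print(depth_prompt)
```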
 
TL;DR
- Single-role prompts = generic outputs → Always use multiple perspectives

- No metrics = no optimization → Specify exact success criteria

- Context is king → More background = better relevance

- Break down tasks → Step-by-step prevents confusion

- Force self-evaluation → AI critiquing itself improves quality dramatically