Tag: prompt-engineering
One discussion in one post tagged "prompt-engineering".
AI Signal - March 31, 2026
- "I've been 'gaslighting' my AI models and it's producing insanely better results" (r/ClaudeAI, score: 2944)
The poster describes prompt techniques that exploit model behavior patterns: telling the model "you explained this yesterday" triggers consistency-seeking and yields deeper explanations; assigning it an arbitrary IQ score shifts response quality; and imposing fictional constraints elicits more creative solutions. While controversial, these techniques reveal interesting aspects of model behavior.
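The three framings described in the post can be sketched as simple prompt templates for A/B testing against a plain baseline. This is a hypothetical illustration, not code from the discussion: the function names, wording of the framings, and the example question are all assumptions, and no model API is actually called.

```python
# Hypothetical sketch of the three prompt framings from the discussion,
# built as variants of one baseline question so they can be compared.
# No real model API is called here; plug the strings into whatever client
# you use (that part is an assumption left to the reader).

BASELINE = "Explain how attention works in transformers."

def consistency_framing(question: str) -> str:
    # "You explained this yesterday" -- nudges consistency-seeking behavior.
    return f"You explained this to me yesterday, but I lost my notes. {question}"

def persona_framing(question: str, iq: int = 145) -> str:
    # Assigning a fictional capability score to the assistant.
    return f"You are an expert with an IQ of {iq}. {question}"

def constraint_framing(question: str) -> str:
    # A fictional constraint, intended to push toward more creative answers.
    return f"{question} You may not use the word 'matrix' in your answer."

variants = {
    "baseline": BASELINE,
    "consistency": consistency_framing(BASELINE),
    "persona": persona_framing(BASELINE),
    "constraint": constraint_framing(BASELINE),
}

for name, prompt in variants.items():
    print(f"{name}: {prompt}")
```

Whether any framing "works" is an empirical question; the only honest way to evaluate claims like these is to run each variant against the same model and compare outputs side by side.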