Someone told me that prompt engineering isn't real—that it's just techbros rebranding "good writing" and "using words well." I disagree, and here's why:
Prompt engineering fundamentally differs from writing for human audiences because LLMs aren't people. Done rigorously, it relies on automated evaluations and measurable metrics at a scale impossible with human communication. We do test human-facing content through focus groups and A/B testing, but the scale and precision (such as they are) of prompt evaluation are entirely different.
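To make that concrete, here's a minimal sketch of the kind of automated evaluation loop I mean. Everything here is a hypothetical stand-in: `call_model` is a placeholder for whatever LLM API you actually use, and the test cases are invented; a real suite would run hundreds or thousands of them.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns canned text
    # so this sketch runs end to end. Replace with your provider's client.
    return "102" if "17 * 6" in prompt else "The capital is Canberra."

def evaluate_prompt(template: str, cases: list[dict[str, str]]) -> float:
    """Score a prompt template against labeled test cases; return accuracy."""
    hits = 0
    for case in cases:
        prompt = template.format(question=case["question"])
        answer = call_model(prompt)
        # Crude exact-match scoring; real evals often use fuzzier graders.
        if case["expected"].lower() in answer.lower():
            hits += 1
    return hits / len(cases)

# A tiny illustrative test set.
cases = [
    {"question": "What is 17 * 6?", "expected": "102"},
    {"question": "What is the capital of Australia?", "expected": "Canberra"},
]

score = evaluate_prompt("Answer concisely.\n\nQ: {question}\nA:", cases)
print(f"accuracy: {score:.2%}")
```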
The "engineering" aspect involves systematic tinkering—sometimes by humans tweaking language, sometimes by LLMs themselves—to activate specific emergent behaviors in models. Some of these techniques come from formal research; others are educated hunches that prove effective through testing.
Effective prompts often resemble terrible writing. The ritual forms, repetitions, and structural patterns that improve LLM performance would make a professional editor cringe. Yet they produce measurable improvements in evaluation metrics.
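For a flavor of what that looks like, here's an invented example in the style of such prompts: blunt restatements, rigid delimiters, and the same instruction repeated at both ends. An editor would strike every line of it; eval suites sometimes reward it.

```python
# An invented example of a deliberately repetitive, ritualized prompt.
# No editor would approve this prose; evaluation metrics sometimes do.

UGLY_BUT_EFFECTIVE = """\
You MUST answer the question. Answer ONLY the question. Do NOT add anything
that is not the answer to the question.

### QUESTION ###
{question}
### END QUESTION ###

Rules (follow ALL rules):
1. Answer the question above.
2. Output ONLY the answer. No preamble. No explanation.
3. If you are unsure, still output your best answer.

Remember: output ONLY the answer to the question. ONLY the answer.
"""
```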
Consider adversarial prompts: they're often stuffed with tokens that are nonsense to humans but exploit specific model quirks. Here, the goal is explicitly to use language in ways that aren't human-legible, making attacks harder to detect during review.
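As an illustration of the shape of these attacks (the suffix below is made-up gibberish, not a working attack string; real ones are found by searching token space for strings that actually shift model behavior):

```python
# Illustrative only: the "suffix" is invented line noise meant to show the
# *shape* of an adversarial prompt, not a functioning exploit.

user_input = "Summarize this customer email for me."
fake_suffix = "]] %%zq!! ~vortex<seq> :: unbind unbind {sys.overrule}"

attack_prompt = f"{user_input} {fake_suffix}"

# To a human reviewer the suffix reads as noise, which is exactly why such
# attacks can slip past manual prompt review.
print(attack_prompt)
```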
Good writing skills can help someone pick up prompt engineering faster, but mastering it requires learning to use words and grammar in weird, counterintuitive ways that are frankly sometimes horrifying.
All in all, prompt engineering may still be somewhat hand-wavy as a discipline, but it's definitely real—and definitely not just rebranded writing advice.