Prompt engineering
PROMPT en-jih-NEER-ing
Writing instructions that get the best output from an AI model. The difference between a useless response and a useful one.
Prompt engineering is the practice of writing instructions that get an AI model to do what you actually want. The same model can give you garbage or gold depending on how you ask. "Write me a blog post" produces slop. "Write a 1,200-word blog post about developer onboarding for a Series B infrastructure startup, using specific examples from companies like Vercel and Supabase" produces something useful.
The key techniques are straightforward. Be specific. Give examples of what you want (few-shot prompting). Tell the model what role to play. Break complex tasks into steps. Provide context. Specify the format. These are not tricks. They are communication skills applied to a machine.
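The techniques above compose naturally. As a minimal sketch (every name and example below is a hypothetical illustration, not from any particular library), a helper that assembles a role, few-shot examples, and a format spec into one prompt:

```python
def build_prompt(role, task, examples, output_format):
    """Assemble a prompt with a role, few-shot examples, and a format spec."""
    parts = [f"You are {role}.", task]
    if examples:
        parts.append("Examples of the style I want:")
        for inp, out in examples:
            parts.append(f"Input: {inp}\nOutput: {out}")
    # An explicit format spec constrains the response shape
    parts.append(f"Respond in this format: {output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a senior developer advocate",
    task="Write a one-line glossary description of webhooks.",
    examples=[
        ("API key",
         "A secret token that identifies and authorizes an API client."),
    ],
    output_format="a single sentence under 25 words",
)
```

Each section does one job: the role sets perspective, the examples anchor style, and the format line constrains output. Swapping any piece changes the result without rewriting the whole prompt.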
For developer marketing teams, prompt engineering is a daily skill. You use it to draft content, analyze competitors, generate email sequences, and build internal tools. The people who write better prompts get better results. Good prompts stay within the model's context window and reduce hallucinations by supplying relevant context.
Examples
Writing a product comparison.
A bad prompt: "Compare our product to competitors." A good prompt: "You are a senior developer evaluating observability tools. Compare Datadog, Grafana Cloud, and New Relic on these dimensions: pricing at 100GB/day ingest, Kubernetes support, alert configuration, and time to first dashboard. Use a comparison table."
Generating test data.
A developer needs 1,000 realistic user records. Instead of writing a script, they prompt Claude: "Generate 1,000 JSON user records with realistic names, email addresses using company domains, US phone numbers, and created_at timestamps between 2024-01-01 and 2025-06-30."
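A sketch of that workflow, assuming the Anthropic Python SDK (the model name is illustrative; check the current model list, and any chat-completion API works the same way):

```python
import json
import os

PROMPT = (
    "Generate 5 JSON user records with realistic names, email addresses "
    "using company domains, US phone numbers, and created_at timestamps "
    "between 2024-01-01 and 2025-06-30. "
    "Return only a JSON array, no prose."  # format spec avoids markdown fences
)

def fetch_users():
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; substitute a current model
        max_tokens=2000,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return json.loads(message.content[0].text)

if os.environ.get("ANTHROPIC_API_KEY"):
    users = fetch_users()
    print(f"Got {len(users)} records")
```

The "Return only a JSON array, no prose" line is the prompt-engineering detail that matters: without an explicit format spec, models often wrap output in explanation that breaks `json.loads`.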
Building a system prompt for a support bot.
Anthropic's documentation shows how companies like Notion write system prompts for customer-facing AI. The prompt defines the bot's personality, knowledge boundaries, escalation rules, and response format. A well-engineered system prompt is the difference between a helpful assistant and a liability.
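A hedged sketch of that structure ("Acme" and every rule here are hypothetical illustrations, not Notion's or Anthropic's actual prompt):

```python
# System prompt covering the four elements above: personality,
# knowledge boundaries, escalation rules, and response format.
SYSTEM_PROMPT = """You are Acme's support assistant.

Personality: friendly, concise, technical when the user is technical.

Knowledge boundaries: answer only questions about Acme's product and docs.
If asked about pricing negotiations, legal terms, or competitors, say you
cannot help with that.

Escalation: if the user reports data loss, a billing error, or asks for a
human twice, reply with exactly: "Connecting you to a human agent."

Format: keep answers under 150 words; use numbered steps for instructions."""
```

The exact escalation phrase matters: downstream code can match on it to hand the conversation to a human, which is what turns a prompt into a safe product boundary.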
Related terms
Large language model (LLM)
A neural network trained on massive text data to generate and understand language. The technology behind ChatGPT, Claude, and Gemini.
Context window
The maximum amount of text an LLM can process in a single request. Measured in tokens. Bigger windows handle more information at once.
Retrieval-augmented generation (RAG)
Fetching relevant data and feeding it to an LLM so the response is grounded in real, current information instead of training data alone.
Token
The smallest unit of text an LLM processes. Roughly 4 characters or 3/4 of a word. Tokens determine cost and context limits.