2.1 Anatomy of a Prompt
AI-generated content may contain errors. Always verify against official sources.
Key Concepts: System prompt · User prompt · Roles · Instruction clarity
Official Docs: OpenAI Prompt Engineering Guide · Anthropic Prompt Engineering
The Chat Completion Format
Most modern LLM chat APIs accept a `messages` array in which each message carries one of three roles:
```python
messages = [
    {
        "role": "system",
        "content": "You are a senior Python engineer. Be concise and use type hints.",
    },
    {
        "role": "user",
        "content": "Write a function that validates an email address.",
    },
]
```
| Role | Purpose |
|---|---|
| `system` | Sets the model's persona, rules, and global context |
| `user` | The human's input or task |
| `assistant` | Prior model responses (real or injected as examples) |
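Because `assistant` messages can be injected rather than genuinely produced by the model, few-shot prompting is just a messages list with hand-written user/assistant pairs placed before the real query. A minimal sketch (the classification task and labels are illustrative):

```python
# Few-shot prompting: injected user/assistant pairs demonstrate the
# desired output format before the real query arrives.
def build_few_shot_messages(query: str) -> list[dict[str, str]]:
    return [
        {"role": "system", "content": "Classify sentiment as POSITIVE or NEGATIVE."},
        # Hand-written examples — the model never actually produced these.
        {"role": "user", "content": "I loved this film."},
        {"role": "assistant", "content": "POSITIVE"},
        {"role": "user", "content": "Terrible service, never again."},
        {"role": "assistant", "content": "NEGATIVE"},
        # The real query always goes last.
        {"role": "user", "content": query},
    ]

messages = build_few_shot_messages("The food was wonderful.")
```

The list can be passed directly as the `messages` argument of a chat completion call.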
The System Prompt
The system prompt is your contract with the model. It should specify:
- Persona — who the model is
- Task scope — what it should and should not do
- Output format — JSON, markdown, plain text
- Tone and constraints — formal/casual, word limit
Weak vs Strong System Prompt
```python
# ❌ Weak
"You are a helpful assistant."

# ✅ Strong
"""You are a code review assistant for a Python 3.12 codebase.
Always:
- Point out security issues first
- Suggest the simplest fix
- Keep responses under 200 words
- Use inline code blocks for all code
Never suggest third-party libraries unless asked."""
```
Instruction Clarity Principles
From the OpenAI prompt engineering guide:
1. Use delimiters to separate data from instructions
```python
prompt = f"""
Summarise the following article in exactly 3 bullet points.

ARTICLE:
\"\"\"
{article_text}
\"\"\"
"""
```
2. Specify the output format explicitly
```
Respond ONLY with valid JSON matching this schema:
{"sentiment": "positive|negative|neutral", "confidence": 0.0-1.0}
```
3. Give the model an escape hatch
```
If you cannot answer from the provided context, respond with:
{"answer": null, "reason": "insufficient context"}
```
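On the client side, a reply that may be either the sentiment schema or the escape-hatch object can be parsed defensively. A sketch assuming the two JSON shapes above (the function name and `status` field are illustrative):

```python
import json

def parse_sentiment_reply(raw: str) -> dict:
    """Parse a model reply that is either the sentiment schema or the
    escape hatch {"answer": null, "reason": "..."}."""
    data = json.loads(raw)
    if data.get("answer") is None and "reason" in data:
        # Escape hatch: the model declined to answer.
        return {"status": "declined", "reason": data["reason"]}
    if data.get("sentiment") not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected sentiment: {data.get('sentiment')!r}")
    return {
        "status": "ok",
        "sentiment": data["sentiment"],
        "confidence": float(data["confidence"]),
    }
```

Raising on an out-of-schema reply (rather than guessing) makes malformed model output visible early, where it can be retried.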
Multi-Turn Conversations
```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a Python tutor."}]

def chat(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("What is a list comprehension?"))
print(chat("Show me an example with filtering."))
```
Context Window Management
History grows with every turn. Implement a sliding window or summarisation strategy before the context fills up.
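A minimal sliding-window sketch that keeps the system prompt and drops the oldest turns (the window size is illustrative; production code should budget by token count using the model's tokenizer rather than by message count):

```python
def trim_history(
    history: list[dict[str, str]], max_messages: int = 6
) -> list[dict[str, str]]:
    """Keep all system messages plus only the most recent messages.

    A crude sliding window: the system prompt must survive trimming
    because it carries the persona and rules for every turn.
    """
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-max_messages:]
```

Call `history = trim_history(history)` before each API request; a summarisation strategy would instead compress the dropped turns into a single synthetic message.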
Key Takeaways
- System prompt = persona + rules + format constraints
- Use delimiters to cleanly separate data from instructions
- Always specify output format explicitly
- Manage history length to avoid context overflow