2.1 Anatomy of a Prompt


Key Concepts: System prompt · User prompt · Roles · Instruction clarity

Official Docs: OpenAI Prompt Engineering Guide · Anthropic Prompt Engineering


The Chat Completion Format

Chat-style LLM APIs from OpenAI, Anthropic, and most compatible providers accept a messages array built around three core role types:

messages = [
    {
        "role": "system",
        "content": "You are a senior Python engineer. Be concise and use type hints."
    },
    {
        "role": "user",
        "content": "Write a function that validates an email address."
    },
]
Role       Purpose
system     Sets the model's persona, rules, and global context
user       The human's input or task
assistant  Prior model responses (real or injected as examples)
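
The assistant role also lets you inject worked examples (few-shot prompting) by writing turns the model never actually produced. A minimal sketch; the ticket-classification task here is illustrative:

messages = [
    {"role": "system", "content": "Classify support tickets as bug, billing, or other."},
    # Injected example pair: the model never actually said this
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    # The real query then follows the demonstrated pattern
    {"role": "user", "content": "The app crashes when I upload a file."},
]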

The System Prompt

The system prompt is your contract with the model. It should specify:

  1. Persona — who the model is
  2. Task scope — what it should and should not do
  3. Output format — JSON, markdown, plain text
  4. Tone and constraints — formal/casual, word limit

Weak vs Strong System Prompt

# ❌ Weak
"You are a helpful assistant."

# ✅ Strong
"You are a code review assistant for a Python 3.12 codebase.
Always:
- Point out security issues first
- Suggest the simplest fix
- Keep responses under 200 words
- Use inline code blocks for all code
Never suggest third-party libraries unless asked."

Instruction Clarity Principles

From the OpenAI prompt engineering guide:

1. Use delimiters to separate data from instructions

prompt = """
Summarise the following article in exactly 3 bullet points.

ARTICLE:
"""
{article_text}
"""
"""

2. Specify the output format explicitly

Respond ONLY with valid JSON matching this schema:
{"sentiment": "positive|negative|neutral", "confidence": 0.0-1.0}

3. Give the model an escape hatch

If you cannot answer from the provided context, respond with:
{"answer": null, "reason": "insufficient context"}

Multi-Turn Conversations

from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a Python tutor."}]

def chat(user_input: str) -> str:
    # Record the user's turn so the model sees the full conversation
    history.append({"role": "user", "content": user_input})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    )
    answer = resp.choices[0].message.content
    # Persist the reply so later turns can refer back to it
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("What is a list comprehension?"))
print(chat("Show me an example with filtering."))
Context Window Management

History grows with every turn. Implement a sliding window or summarisation strategy before the context fills up.
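
A minimal sliding-window sketch; the keep_last cutoff is illustrative, and production code should count tokens with the model's tokenizer rather than counting messages:

def trim_history(history: list[dict], keep_last: int = 10) -> list[dict]:
    """Keep the system prompt plus only the most recent turns."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep_last:]

Call it before each request (history = trim_history(history)) so old turns drop off while the system prompt stays pinned.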


Key Takeaways

  • System prompt = persona + rules + format constraints
  • Use delimiters to cleanly separate data from instructions
  • Always specify output format explicitly
  • Manage history length to avoid context overflow

Next → 2.2 Core Prompting Techniques