2.2 Core Prompting Techniques

AI-Generated Content

AI-generated content may contain errors. Always verify against official sources.


Key Concepts: Zero-shot · Few-shot · Chain-of-thought · Step-by-step reasoning

Official Docs: OpenAI Prompt Engineering · Anthropic Prompt Library


1. Zero-Shot Prompting

Ask the model directly with no examples. Works well for common, well-understood tasks.

prompt = "Classify the sentiment of this review as positive, negative, or neutral:\n\n'The delivery was fast but the packaging was damaged.'"

Output: negative

When to use: Simple, common tasks — summarisation, translation, basic classification.
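As a minimal sketch, the zero-shot prompt above can be assembled programmatically before sending it to a model. The helper name `zero_shot_prompt` is our own, not from any SDK:

```python
def zero_shot_prompt(review: str) -> str:
    """Build a zero-shot sentiment prompt: the instruction plus the input, no examples."""
    return (
        "Classify the sentiment of this review as positive, negative, or neutral:\n\n"
        f"'{review}'"
    )

prompt = zero_shot_prompt("The delivery was fast but the packaging was damaged.")
print(prompt)
```

Because there are no examples, the model relies entirely on its pretrained understanding of the task, which is why this works best for common, well-specified tasks.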


2. Few-Shot Prompting

Provide 2–5 input → output examples before the actual query. This is the most reliable technique for domain-specific or non-standard tasks.

prompt = """
Classify each review as positive, negative, or neutral.

Review: "Incredible quality, will buy again." → positive
Review: "Stopped working after two days." → negative
Review: "It arrived on time." → neutral

Review: "Great value but the colour is off." →
"""

Best practices:

  • Use 3–5 examples that cover edge cases
  • Keep examples consistent in format
  • Place examples close to the actual query
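The practices above can be enforced by building the prompt from a list of labelled examples, which keeps the format consistent automatically. This builder (`few_shot_prompt` is a hypothetical helper, not a library function) reproduces the prompt shown earlier:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str,
                    instruction: str = "Classify each review as positive, negative, or neutral.") -> str:
    """Assemble a few-shot prompt: instruction, labelled examples, then the query."""
    lines = [instruction, ""]
    for review, label in examples:
        lines.append(f'Review: "{review}" → {label}')
    # Leave the final label blank so the model completes it.
    lines += ["", f'Review: "{query}" →']
    return "\n".join(lines)

examples = [
    ("Incredible quality, will buy again.", "positive"),
    ("Stopped working after two days.", "negative"),
    ("It arrived on time.", "neutral"),
]
prompt = few_shot_prompt(examples, "Great value but the colour is off.")
print(prompt)
```

Keeping examples in a data structure also makes it easy to swap in edge cases without touching the template.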

3. Chain-of-Thought (CoT)

CoT prompts the model to reason step by step before giving the final answer. This reliably improves performance on multi-step reasoning tasks.

Zero-Shot CoT — Add "Think step by step"

prompt = """
A basket has 3 apples. Alice adds 5 more. Bob takes half.
How many apples are left?

Think step by step.
"""
Model output:
Step 1: Start with 3 apples.
Step 2: Alice adds 5 → 3 + 5 = 8 apples.
Step 3: Bob takes half → 8 / 2 = 4 apples.
Answer: 4

Few-Shot CoT — Show the Reasoning

prompt = """
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many does he have?
A: Roger starts with 5. He buys 2 × 3 = 6 more. Total = 5 + 6 = 11.

Q: A school has 3 girls for every 2 boys. There are 30 girls. How many students total?
A:
"""

4. Self-Consistency

Generate multiple responses at a higher temperature and take the majority answer. Improves accuracy on reasoning tasks at the cost of n× the API calls.

from openai import OpenAI
from collections import Counter

client = OpenAI()

def majority_vote(prompt: str, n: int = 5) -> str:
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,  # higher temperature gives diverse samples
        )
        answers.append(resp.choices[0].message.content.strip())
    return Counter(answers).most_common(1)[0][0]
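The aggregation step can be exercised without any API calls. Given a list of sampled answers (the values below are illustrative, not real model output), `Counter.most_common` picks the modal answer:

```python
from collections import Counter

sampled = ["4", "4", "5", "4", "3"]  # e.g. five answers sampled at temperature 0.7
winner = Counter(sampled).most_common(1)[0][0]
print(winner)
```

Note that majority voting assumes answers can be compared exactly; in practice it helps to instruct the model to end with a single normalised final answer (e.g. "Answer: 4") and vote on that extracted value.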

Technique Comparison

Technique           Best for                         Relative Cost
Zero-shot           Common tasks                     Low
Few-shot            Domain-specific classification   Low
Chain-of-thought    Math, multi-step reasoning       Medium (longer outputs)
Self-consistency    High-stakes reasoning            High (n× calls)

Further Reading

Next → 2.3 Structured Output