Few-Shot Learning
Use examples to steer model behavior without fine-tuning. The most underrated prompting technique.
What Is Few-Shot Learning?
Few-shot learning means including examples of desired input/output pairs directly in your prompt. The model infers the pattern and applies it to new inputs — no fine-tuning required.
It's the fastest way to teach a model your specific style, format, or domain conventions.
Zero-Shot vs One-Shot vs Few-Shot
Zero-shot — No examples. Relies entirely on the model's training. Use for simple, well-defined tasks.
One-shot — One example. Useful when the pattern is clear and a single demonstration is enough.
Few-shot — Two to five examples. Use when the pattern is nuanced or when zero- and one-shot produce inconsistent results.
More than five examples rarely improve performance and only add token cost.
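The three regimes differ only in how many demonstration pairs you include before the new input. A minimal sketch of a prompt builder that covers all three (the task text, example pairs, and `Input:`/`Output:` labels are illustrative assumptions, not a required format):

```python
def build_prompt(task, examples, new_input):
    """Assemble a prompt with 0..n demonstration pairs.

    Zero-shot: examples = []; one-shot: one pair; few-shot: two to five pairs.
    """
    lines = [task]
    for inp, out in examples:  # each demonstration is an (input, output) pair
        lines.append(f"Input: {inp}\nOutput: {out}")
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(lines)

# Few-shot with two demonstrations:
prompt = build_prompt(
    "Convert each city name to its country.",
    [("Paris", "France"), ("Tokyo", "Japan")],
    "Nairobi",
)
```

Passing an empty list of examples turns the same builder into a zero-shot prompt, which makes it easy to A/B test how many demonstrations your task actually needs.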
Constructing Good Examples
Good few-shot examples are:
- Representative — Cover the typical case, not the easy case
- Diverse — Show variation in inputs without changing the pattern
- Exact — Your output examples should be exactly the format you want replicated
Bad few-shot examples give the model conflicting signals. If your examples aren't consistent, the output won't be either.
Few-Shot for Classification
Classify the sentiment of each review as POSITIVE, NEGATIVE, or NEUTRAL.
Review: "The product exceeded every expectation." → POSITIVE
Review: "Arrived broken. Terrible packaging." → NEGATIVE
Review: "It's fine, does what it says." → NEUTRAL
Review: "{new review}" →
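The classification prompt above can be assembled programmatically, which keeps the labeled examples in one place and guarantees every request uses the same pattern. A sketch (the example reviews are the ones shown above; the function name is hypothetical):

```python
# Demonstration pairs: (review text, label). Keep them representative and consistent.
EXAMPLES = [
    ("The product exceeded every expectation.", "POSITIVE"),
    ("Arrived broken. Terrible packaging.", "NEGATIVE"),
    ("It's fine, does what it says.", "NEUTRAL"),
]

def sentiment_prompt(new_review):
    """Build the few-shot classification prompt for a new review."""
    header = "Classify the sentiment of each review as POSITIVE, NEGATIVE, or NEUTRAL."
    shots = "\n".join(f'Review: "{r}" → {label}' for r, label in EXAMPLES)
    # Leave the arrow open so the model completes with only the label.
    return f'{header}\n{shots}\nReview: "{new_review}" →'
```

Ending the prompt mid-pattern (after `→`) nudges the model to emit just the label rather than a full sentence.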
Few-Shot for Format Control
When you need precise output structure (JSON, YAML, custom schema), show the model an example of the exact output. This is more reliable than describing the format in words.
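One way to apply this is to embed a literal example object in the prompt rather than a prose description of the schema. A sketch, assuming a hypothetical product-extraction task (the field names `title`, `price_usd`, and `in_stock` are illustrative, not a fixed schema):

```python
import json

# The exact output shape we want the model to replicate.
FORMAT_EXAMPLE = {
    "title": "Wireless Mouse",
    "price_usd": 24.99,
    "in_stock": True,
}

def extraction_prompt(product_text):
    """Show the model the exact JSON shape instead of describing it."""
    return (
        "Extract product data as JSON matching this example exactly:\n"
        + json.dumps(FORMAT_EXAMPLE, indent=2)  # serialize so the model sees valid JSON
        + "\n\nText: " + product_text
        + "\nJSON:"
    )
```

Serializing the example with `json.dumps` (instead of pasting hand-written JSON) guarantees the demonstration itself is valid, so the model never imitates a malformed example.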