Prompting Techniques

In-Context Learning

Quick Answer: A model's ability to learn new tasks or patterns from examples provided directly in the prompt, without any weight updates or fine-tuning.
In-context learning means the model adapts its behavior to the demonstrations it sees in the input context, effectively 'learning' at inference time even though no training takes place.

Example

You provide three examples of converting informal text to formal business language in your prompt. Without any training, the model picks up the pattern and correctly transforms the fourth informal text you give it, matching the style and tone of your examples.
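A prompt like the one described above can be assembled programmatically. The example pairs and layout below are illustrative, not taken from any particular product; the trailing "Formal:" cue is what invites the model to complete the pattern.

```python
# Hypothetical few-shot prompt for informal -> formal rewriting.
EXAMPLES = [
    ("hey, can u send that report asap?",
     "Could you please send the report at your earliest convenience?"),
    ("sorry, totally forgot about the meeting lol",
     "My apologies for missing the meeting."),
    ("this deal is kinda sketchy imo",
     "I have some reservations about this agreement."),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Assemble the demonstrations plus the new input into one prompt string."""
    lines = ["Rewrite each informal message in formal business language.", ""]
    for informal, formal in EXAMPLES:
        lines.append(f"Informal: {informal}")
        lines.append(f"Formal: {formal}")
        lines.append("")
    # Ending on "Formal:" leaves the completion slot for the model to fill.
    lines.append(f"Informal: {new_input}")
    lines.append("Formal:")
    return "\n".join(lines)

print(build_few_shot_prompt("gonna be late, traffic is nuts"))
```

The same string would then be sent to whatever completion API you use; only the prompt construction is shown here.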

Why It Matters

In-context learning is the foundation of prompt engineering. It's why few-shot prompting works and why prompt design matters so much. Understanding its strengths and limitations helps you decide when prompting alone is sufficient versus when you need fine-tuning.

How It Works

In-context learning is remarkable because the model isn't actually updating its weights. It's using the examples in the prompt as a kind of temporary program that guides its generation. The transformer architecture's attention mechanism allows it to identify patterns across the examples and apply them to new inputs, all within a single forward pass.

Research has shown that in-context learning has interesting properties. The format and ordering of examples matter more than you'd expect. The model can learn tasks it was never explicitly trained on, just from seeing a few examples. And sometimes, even the labels in the examples don't need to be correct; the model picks up the task structure from the input-output format alone (though correct labels obviously help).

The practical limits of in-context learning are tied to the context window. You can only include so many examples before running out of space. There's also a quality ceiling: for specialized tasks requiring deep domain knowledge, in-context learning with a general model will eventually underperform a fine-tuned model. The sweet spot is using in-context learning for rapid prototyping and then fine-tuning if you need higher accuracy or lower per-request costs.
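The context-window trade-off above can be made concrete with some back-of-the-envelope arithmetic. The ~4 characters-per-token heuristic and the numbers below are assumptions for illustration, not figures from this article:

```python
# Rough sketch: how many demonstrations fit in a context window?
CHARS_PER_TOKEN = 4  # common rough heuristic for English text

def max_examples(context_window_tokens: int,
                 avg_example_chars: int,
                 reserved_tokens: int) -> int:
    """Examples that fit once room for the task input and output is reserved."""
    example_tokens = avg_example_chars / CHARS_PER_TOKEN
    budget = context_window_tokens - reserved_tokens
    return max(0, int(budget // example_tokens))

# e.g. an 8k-token window, ~600-character examples, 2k tokens reserved:
print(max_examples(8_000, 600, 2_000))  # -> 40
```

Even when dozens of examples would fit, the advice in the next section still applies: a handful of diverse, high-quality examples usually beats a window packed to the brim.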

Common Mistakes

Common mistake: Providing too many examples that fill up the context window, leaving no room for the actual task

Use 3-5 high-quality, diverse examples. Quality beats quantity. Reserve most of the context window for the task input.

Common mistake: Using examples that are all too similar to each other

Include diverse examples covering different scenarios and edge cases. This helps the model generalize rather than memorize one pattern.

Common mistake: Assuming in-context learning works equally well on all models

In-context learning ability scales with model size. Smaller models may not pick up on subtle patterns that larger models handle easily.

Career Relevance

In-context learning is the single most important concept for prompt engineers. It's the mechanism that makes prompt engineering valuable. Understanding how and why it works lets you design more effective prompts and troubleshoot when few-shot approaches fail.
