
RACE Framework for Prompt Engineering: 10 Examples That Get Results

Master the RACE framework for prompt engineering with 10 real-world examples. Learn Role, Action, Context, Expectations to write better AI prompts.

Keyur Patel
March 13, 2026
12 min read
Prompt Engineering

RACE Framework for Prompt Engineering: The Complete Guide

RACE is the framework I use for roughly 80% of my prompts. If you have ever pasted a prompt into ChatGPT or Claude and gotten a response that felt generic, unfocused, or just off, the problem probably was not the AI. It was the prompt. The RACE framework for prompt engineering gives you a repeatable four-part structure (Role, Action, Context, Expectations) that eliminates guesswork and produces consistently useful outputs.

I have tested RACE across hundreds of real projects: marketing copy, code reviews, data analysis, hiring workflows, and more. This guide walks through exactly how each component works, shows you 10 full prompts you can copy and adapt, and compares RACE to other popular frameworks so you can pick the right tool for the job.

For a quick-reference version of the framework itself, see the RACE framework page. This tutorial goes deeper, focusing on practical examples, common mistakes, and when RACE outperforms alternatives.

What Is the RACE Framework?

RACE stands for Role, Action, Context, Expectations. Each letter represents one part of a well-structured prompt:

  • Role: Who should the AI become? A senior data analyst, a pediatric nurse, a tax attorney? The more specific the role, the more specialized the output.
  • Action: What exactly do you want done? Analyze, write, compare, debug, summarize? Use precise verbs.
  • Context: What background information does the AI need? Audience, constraints, data, industry, prior decisions.
  • Expectations: What should the output look like? Format, length, tone, quality criteria, structure.
The power of RACE is that it forces you to think before you type. Most weak prompts are missing at least two of these four components. When you fill in all four, the AI has enough signal to produce something genuinely useful on the first try.
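To make the four-part structure concrete, here is a minimal Python sketch of a prompt builder. The function name and field names are my own invention, not part of the framework; labeled sections are optional in practice, but they make the structure explicit:

```python
def build_race_prompt(role: str, action: str, context: str, expectations: str) -> str:
    """Assemble a prompt from the four RACE components, one labeled section each."""
    return "\n\n".join([
        f"Role: {role}",
        f"Action: {action}",
        f"Context: {context}",
        f"Expectations: {expectations}",
    ])

# Illustrative values only; swap in your own task details.
prompt = build_race_prompt(
    role="Senior growth marketer with 8 years of B2B SaaS experience",
    action="Write three subject lines for a cart-abandonment email sequence",
    context="Audience: trial users who reached checkout but did not finish upgrading",
    expectations="Under 60 characters each, conversational tone, no clickbait",
)
```

Forcing yourself to pass all four arguments is the point: the helper will not let you skip a component the way a blank chat box will.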

Why Four Components Instead of One Long Instruction?

Writing a single paragraph prompt works for simple questions. But for anything that matters (a deliverable for a client, a technical analysis, a content draft), ambiguity kills quality. RACE eliminates the three most common failure modes:

  • Vague identity: Without a Role, the AI defaults to a generic assistant voice. Giving it a specific professional identity activates domain-appropriate vocabulary, reasoning patterns, and standards.
  • Unclear task: Without a clear Action, the AI guesses what you want. "Help me with marketing" could mean a hundred things. "Write three subject lines for a cart-abandonment email sequence" is one thing.
  • Missing guardrails: Without Expectations, you get whatever format and length the AI decides. You end up re-prompting three times to get the structure you needed from the start.

How to Write Each RACE Component

Role: Be Specific About Expertise

Bad: "You are a marketing expert."

Better: "You are a senior growth marketer with 8 years of experience in B2B SaaS, specializing in product-led growth and free-to-paid conversion funnels."

The more specific the role, the better. Include years of experience, industry focus, methodology preferences, or certifications when relevant. This is not role-playing for fun; it is a mechanism that activates the right knowledge domain in the model.

Action: Use Precise Verbs

Avoid vague verbs like "help," "assist," or "do something about." Instead, use verbs that specify exactly what output you expect:

  • Analyze: Break down data, identify patterns, draw conclusions
  • Compare: Evaluate two or more options against specific criteria
  • Draft: Write a first version of a specific document
  • Audit: Review against a standard and flag issues
  • Prioritize: Rank items using stated criteria

Context: Give Only What Matters

Context is not "tell the AI everything." It is telling the AI the specific facts that change the answer. If you are asking for a marketing plan, the AI needs to know your budget, audience, and timeline, but it does not need your company's founding story.

Ask yourself: "If I gave this task to a human expert, what background would they need to do it well?" That is your Context.

Expectations: Define the Finish Line

This is where most people stop too early. Expectations should cover:

  • Format: Bullet points, numbered list, table, prose, code block
  • Length: Word count, number of items, level of detail
  • Tone: Technical, conversational, formal, persuasive
  • Quality criteria: What makes the output good vs. mediocre
  • Constraints: What to avoid, what to prioritize
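One way to stop skipping Expectations is to encode the checklist above as a small data structure. This is a hedged sketch (the class and field names are illustrative, not a standard): format, length, and tone are required, while quality criteria and constraints stay optional.

```python
from dataclasses import dataclass

@dataclass
class Expectations:
    format: str                 # e.g. "numbered list", "table", "prose"
    length: str                 # e.g. "under 300 words", "exactly 5 items"
    tone: str                   # e.g. "technical", "conversational"
    quality_criteria: str = ""  # optional: what separates good from mediocre
    constraints: str = ""       # optional: what to avoid or prioritize

    def render(self) -> str:
        """Render the filled-in fields as one Expectations line for a prompt."""
        parts = [f"Format: {self.format}", f"Length: {self.length}", f"Tone: {self.tone}"]
        if self.quality_criteria:
            parts.append(f"Quality criteria: {self.quality_criteria}")
        if self.constraints:
            parts.append(f"Constraints: {self.constraints}")
        return "; ".join(parts)
```

Because the first three fields have no defaults, constructing the object without a format, length, or tone raises an error immediately, which mirrors the "define the finish line" rule.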

10 Real-World RACE Examples

Here are 10 complete RACE prompts across different use cases. Each one is tested and ready to customize. For more prompt templates, check out our collections of best ChatGPT prompts and best Claude prompts.

Example 1: SaaS Landing Page Copy

Example 2: Technical Code Review

Example 3: Market Research Analysis

Example 4: Email Sequence for Onboarding

Example 5: Data Analysis Report

Example 6: Job Interview Questions

Example 7: Content Strategy Brief

Example 8: Bug Report Triage

Example 9: Financial Model Assumptions

Example 10: Technical Documentation

5 Common RACE Mistakes (and How to Fix Them)

After coaching dozens of people on structured prompting, I see the same mistakes repeatedly. Here is how to avoid them. For more on avoiding common pitfalls, see our guide on advanced prompt engineering techniques.

Mistake 1: Roles That Are Too Broad

"You are a marketing expert" tells the AI almost nothing. Marketing covers brand strategy, performance ads, email, SEO, social, content, PR, and a dozen other specialties. A "growth marketer specializing in PLG free-to-paid conversion" activates a much more specific knowledge set.

Fix: Add specialty, years of experience, and industry focus to every Role.

Mistake 2: Actions Without Measurable Output

"Help me improve my resume" is not an action; it is a wish. What does "improve" mean? Rewrite bullet points? Add metrics? Reformat for ATS scanning? Tailor for a specific role?

Fix: Use verbs that produce a specific deliverable: rewrite, audit, compare, draft, prioritize, calculate.

Mistake 3: Context Overload

Pasting your entire company wiki into the Context defeats the purpose. More context is not always better. Irrelevant context actually degrades output quality because the model has to decide what matters.

Fix: Include only the facts that would change the answer if they were different. Budget, audience, constraints, timeline, prior decisions: that is usually enough.

Mistake 4: Missing Expectations Entirely

This is the most common mistake by far. People craft a great Role, Action, and Context, then leave Expectations blank. The AI picks a random format and length. You spend another three prompts trying to reshape the output.

Fix: At minimum, specify format (bullets, table, prose), length (word count or number of items), and tone. Add quality criteria when the task is important.

Mistake 5: Treating RACE as a Rigid Template

Some people type "Role:" and "Action:" as literal labels in every prompt. You do not have to do that. RACE is a mental checklist, not a fill-in-the-blank form. A natural paragraph that covers all four elements works just as well as labeled sections.

Fix: Use labels when the prompt is complex (500+ words). For shorter prompts, weave the four elements into natural language.

RACE vs Other Prompt Frameworks

RACE is not the only structured prompting framework worth knowing. Here is how it compares to three popular alternatives. For a broader comparison, check out our guide on the best AI prompt frameworks in 2026.

RACE vs TAG

The TAG framework uses three components: Task, Action, Goal. TAG is simpler: it skips the explicit Role and folds context into the Task definition.

| Aspect | RACE | TAG |
| --- | --- | --- |
| Components | 4 (Role, Action, Context, Expectations) | 3 (Task, Action, Goal) |
| Best for | Complex, multi-step tasks | Quick, straightforward tasks |
| Role assignment | Explicit | Implicit |
| Output control | Detailed (format, length, criteria) | Basic (goal-oriented) |
| Learning curve | Moderate | Low |

When to use TAG instead: When your prompt is simple enough that a role is unnecessary and you just need to specify what and why. TAG is great for one-shot tasks like "Summarize this article in 3 bullet points for a non-technical audience."

When RACE wins: Anything where domain expertise matters: technical writing, financial analysis, code review, medical information, legal drafts. The explicit Role component consistently improves output quality for specialized tasks.

RACE vs ROSES

The ROSES framework stands for Role, Objective, Scenario, Expected Solution, Steps. It adds a Scenario component and breaks the output into Expected Solution and Steps, which makes it more prescriptive about the solution path.

| Aspect | RACE | ROSES |
| --- | --- | --- |
| Components | 4 | 5 |
| Best for | Flexible, high-quality outputs | Step-by-step procedural tasks |
| Flexibility | High: the AI decides the approach | Lower: you guide the approach |
| Prompt length | Moderate | Longer |
| Output structure | You define in Expectations | Built into the framework |

When to use ROSES instead: When you already know the general approach and want the AI to execute a specific workflow with defined steps. ROSES works well for standard operating procedures and process documentation.

When RACE wins: When you want the AI to bring its own expertise to the problem. RACE gives more room for creative or analytical solutions because the Expectations component defines what good output looks like without dictating how to get there.

RACE vs COSTAR

The COSTAR framework uses six components: Context, Objective, Style, Tone, Audience, Response. COSTAR is the most granular of the common frameworks, splitting what RACE handles in Expectations into Style, Tone, Audience, and Response as separate fields.

| Aspect | RACE | COSTAR |
| --- | --- | --- |
| Components | 4 | 6 |
| Best for | Balanced structure and flexibility | Content creation and writing tasks |
| Role handling | Explicit first component | Implicit via Style and Tone |
| Audience awareness | Part of Context | Dedicated component |
| Prompt length | Moderate | Longer |

When to use COSTAR instead: Content-heavy tasks where tone, style, and audience are critical dimensions: blog posts, marketing copy, social media content. The separate Audience component forces you to think carefully about who will read the output.

When RACE wins: Technical and analytical tasks where the professional Role matters more than writing style. If you are asking for a code review, a financial analysis, or a legal opinion, the Role component in RACE gives you more leverage than COSTAR's style-focused breakdown.

Quick Decision Guide

  • Simple, one-shot tasks: Use TAG
  • Technical or analytical work: Use RACE
  • Writing and content creation: Use COSTAR
  • Process and procedure documentation: Use ROSES
In practice, I use RACE as my default and switch to COSTAR only for content tasks where audience and tone require separate attention.

Tips for Getting More From RACE

Stack RACE With Follow-Up Prompts

RACE works best as the opening prompt in a conversation. Once you have established the Role, the AI maintains that persona for subsequent messages. Your follow-up prompts can be shorter; just add new Actions and Context as needed.
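The stacking pattern above can be sketched as a chat-style message list (the dict-with-role shape is a common chat API convention; the task details are invented for illustration). The first user message carries the full RACE structure; follow-ups stay short because the established Role persists in the conversation:

```python
# Opening message: full RACE structure. Follow-ups: new Action/Context only.
conversation = [
    {"role": "user", "content": (
        "You are a senior technical writer with 10 years of developer-docs "
        "experience. Audit the attached README for a first-time contributor "
        "audience. Output a numbered list of at most 8 issues, most severe first."
    )},
    {"role": "assistant", "content": "...the model's audit..."},
    # Short follow-up: no need to restate the Role or the audience.
    {"role": "user", "content": "Now rewrite the Quick Start section to fix issues 1 and 2."},
]
```

Note how the third message is a single sentence: it only adds a new Action, leaning on the Role and Context already in the thread.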

Adjust Detail Level by Task Importance

Not every prompt needs a 200-word RACE structure. For low-stakes tasks, a single sentence covering all four elements works fine. For example: "As a UX writer (Role), rewrite this error message (Action) for non-technical users of our mobile app (Context) in under 20 words, friendly tone (Expectations)."

For high-stakes deliverables, expand each component into its own paragraph with specific details.

Use RACE Across Different AI Models

RACE works with every major language model: GPT-4, Claude, Gemini, Llama, and others. The structure is model-agnostic because it addresses a fundamental problem (prompt ambiguity) rather than exploiting model-specific features. That said, Claude responds particularly well to the explicit Role component, and GPT-4 tends to follow detailed Expectations more precisely.

Start Using RACE Today

The fastest way to improve your AI outputs is to stop typing the first thing that comes to mind and start using a structure. RACE is simple enough to memorize (four letters, four components) but powerful enough to handle complex professional tasks.

Here is how to begin:

  • Pick one task you regularly use AI for: an email draft, a code review, a research summary.
  • Write a RACE prompt for that task using the examples above as a template.
  • Compare the output to what you normally get with unstructured prompts.
  • Save your best RACE prompts as templates for reuse.
Once you see the difference, you will not go back to unstructured prompting. For more frameworks and techniques, explore our RACE framework reference and advanced prompt engineering guide.


Written by Keyur Patel

AI Engineer & Founder

Keyur Patel is the founder of AiPromptsX and an AI engineer with extensive experience in prompt engineering, large language models, and AI application development. After years of working with AI systems like ChatGPT, Claude, and Gemini, he created AiPromptsX to share effective prompt patterns and frameworks with the broader community. His mission is to democratize AI prompt engineering and help developers, content creators, and business professionals harness the full potential of AI tools.

Prompt Engineering · AI Development · Large Language Models · Software Engineering
