PECRA Framework: Purpose, Expectation, Context, Request, Action
A purpose-first prompt engineering framework that aligns AI outputs with your goals by defining why before what, reducing misalignment on complex tasks.
Framework Structure
The key components of the PECRA framework
- Purpose: Define why you are making this request and the ultimate goal
- Expectation: Specify the type of response or result you want
- Context: Provide the background information the AI needs
- Request: State exactly what you want the AI to do
- Action: Define how the task should be executed or presented
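The five components can be assembled in order as labeled sections of a single prompt. A minimal Python sketch (the function and field names are illustrative, not part of the framework itself):

```python
def build_pecra_prompt(purpose: str, expectation: str, context: str,
                       request: str, action: str) -> str:
    """Assemble a PECRA prompt with Purpose first and Action last."""
    sections = [
        ("Purpose", purpose),          # why you are making the request
        ("Expectation", expectation),  # what a successful response looks like
        ("Context", context),          # background that calibrates the answer
        ("Request", request),          # the specific, measurable ask
        ("Action", action),            # deliverable format and structure
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in sections)
```

Keeping each component as a separate argument makes it easy to swap one out (for example, a different Action) without touching the others.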
Core Example Prompt
A practical template following the PECRA Framework structure
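One way to fill the five sections, using a vendor-evaluation scenario (all specifics are invented for illustration):

```
Purpose: I need to recommend a CRM platform to leadership by end of week, with clear justification for the budget increase.
Expectation: A comparison matrix with numerical scoring, followed by a narrative recommendation.
Context: Our mid-market sales team is evaluating a $500K investment; the audience is C-suite executives with limited technical background.
Request: Compare three CRM platforms on ease of integration, cost per seat, and customer support quality.
Action: Present findings as a scored comparison table, followed by a pros/cons list, then a one-paragraph recommendation in executive summary style.
```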
Usage Tips
Best practices for applying the PECRA framework
- ✓ Always start with Purpose to anchor the entire prompt around your actual goal
- ✓ Write Expectation before Context so the AI knows what format to aim for before processing details
- ✓ Keep Context focused on information that directly impacts the response; remove anything that would not change the output
- ✓ Make your Request specific and measurable so you can evaluate whether the AI delivered what you asked for
- ✓ Use Action to specify deliverable format, structure, and presentation style
Detailed Breakdown
In-depth explanation of the framework components
P.E.C.R.A. Framework
The P.E.C.R.A. framework (Purpose, Expectation, Context, Request, Action) is a five-component prompt engineering structure that puts your "why" at the front of every prompt. It was developed by Fabio Vivas as a purpose-first approach to AI interaction, and it tends to produce more aligned, actionable outputs than frameworks that lead with context or role assignment alone. Purpose-first prompting also aligns with the best practices recommended in Anthropic's prompt engineering documentation. By declaring your purpose before anything else, you give the AI a north star that shapes every subsequent decision in the response.
Introduction
PECRA is built on a simple insight: when an AI knows why you need something, it makes better choices about what to include and how to present it. This is the same principle that makes good project briefs effective. A team that understands the goal behind a task delivers better work than one that only received a list of instructions. The framework was created by Fabio Vivas, a prompt engineering researcher who identified that leading with purpose, rather than context or role, reduces misalignment on complex tasks.
This framework produces outputs that are:
- Purpose-Aligned - Every element of the response serves your stated goal
- Expectation-Matched - The format and depth match what you actually need
- Contextually Grounded - Background information calibrates specificity
- Request-Focused - The AI addresses exactly what you asked for
- Action-Oriented - The deliverable follows your specified structure
The framework works best for:
- Strategic planning and decision-support tasks
- Research briefs requiring specific deliverable formats
- Multi-stakeholder communications where purpose clarity matters
- Complex analysis where "why" shapes "what"
- Professional workflows where outputs feed into larger processes
Origin & Background
The PECRA framework was created by Fabio Vivas, a prompt engineering researcher and educator who documented the framework as part of his broader work on prompt structures for large language models. Vivas identified a recurring problem in prompt engineering: most frameworks start with what the user wants (role, task, context) without first establishing why they need it.
The purpose-first principle: PECRA's defining innovation is placing Purpose at the top of the prompt. This is not just a stylistic choice. When you lead with purpose, you create a decision filter that the AI applies to every part of its response. A research report written to "inform a board decision by Thursday" looks fundamentally different from one written to "explore emerging trends for a quarterly newsletter," even if the topic and context are identical.
Why five components work better than three or four: Simpler frameworks like APE (Action, Purpose, Expectation) cover the basics, but they collapse several distinct functions into fewer slots. PECRA separates "what you want" (Request) from "how to deliver it" (Action), and it separates "why you need it" (Purpose) from "what good looks like" (Expectation). This separation gives you finer control over each dimension of the response.
The briefing model: PECRA mirrors how effective professionals brief consultants or agencies. You explain the business objective first (Purpose), describe what success looks like (Expectation), share relevant background (Context), state your specific ask (Request), and define the deliverable format (Action). This sequence matches natural human communication patterns, which makes PECRA intuitive to learn.
Practical validation: Prompt engineers who adopt PECRA report fewer revision cycles, particularly on complex analytical and strategic tasks. The framework excels when the same prompt could reasonably produce very different outputs depending on the underlying goal.
How PECRA Compares to Other Frameworks
| Aspect | PECRA | ROSES | CARE |
|---|---|---|---|
| Complexity | Intermediate | Advanced | Intermediate |
| Components | 5 | 5 | 4 |
| Purpose-First | Yes (core feature) | No (role-first) | No (context-first) |
| Primary Use | Strategic planning, decision support | Strategic analysis, consulting | Action-oriented, practical tasks |
| Learning Time | 15-20 minutes | 25-30 minutes | 10-15 minutes |
| Best For | Purpose-driven analysis, vendor evaluations, research briefs | Role-based strategic consulting, case studies | Quick actionable outputs, content creation |
| Context Depth | High | Very High | Medium |
| Output Control | High (separate Request + Action) | High (via Style + Example) | Medium |
Use PECRA when:
- Your task requires the AI to understand why before deciding what
- You need outputs that serve a specific business objective or decision
- The same request could produce different results depending on the goal
- You want precise control over both content (Request) and format (Action)
- Strategic planning, vendor evaluations, and research briefs are your primary use cases
Consider an alternative when:
- For role-based expert consultation where persona matters most (use ROSES)
- For quick, practical outputs where purpose is obvious (use CARE)
- For simple tasks where five components add unnecessary overhead (use APE)
- When you need style examples as part of the prompt (use ROSES)
PECRA Framework Structure
1. Purpose
Define why you are making this request and the ultimate goal.
Purpose is the anchor of every PECRA prompt. It tells the AI what outcome you are working toward, which shapes the tone, depth, and focus of the entire response. Without a clear purpose, the AI defaults to generic assumptions about what you need.
Good examples:
- "I need to recommend a project management tool to leadership by end of week, with clear justification for the budget increase"
- "I am preparing a quarterly board presentation on our AI adoption progress and need to highlight ROI metrics"
- "I want to train new team members on our API integration process with a reference document they can follow independently"
Bad examples:
- "I need help with project management" (no stated goal)
- "Write something about AI" (no purpose or outcome)
- "I want information" (too vague to be useful)
2. Expectation
Specify the type of response or result you want.
Expectation describes what a successful response looks like before the AI starts generating. Think of it as the acceptance criteria for the output. By setting expectations early, you prevent the AI from making assumptions about format, depth, or tone.
Good examples:
- "A two-page executive summary with bullet-point recommendations and a risk assessment table"
- "A detailed comparison matrix with numerical scoring, followed by a narrative recommendation"
- "A step-by-step implementation guide with estimated timelines and resource requirements for each phase"
Bad examples:
- "Something comprehensive" (subjective and unmeasurable)
- "A good response" (no format or quality criteria)
- "Make it detailed" (unspecified depth)
3. Context
Provide the background information the AI needs.
Context supplies the situational details that calibrate the response to your specific circumstances. Include information that would change the output if it were different; leave out anything that would not affect the result.
Good examples:
- "Our company has 200 employees, processes 50,000 daily transactions, and must maintain PCI DSS compliance throughout any infrastructure change"
- "The target audience is C-suite executives with limited technical background who need to approve a $500K investment"
- "We launched the product 6 months ago and have 12,000 active users with a 4.2% monthly churn rate"
Bad examples:
- Providing the company's entire history when only current metrics matter (excessive)
- "We are a company" (insufficient)
- Including technical specifications irrelevant to the request (distracting)
4. Request
State exactly what you want the AI to do.
Request is the specific ask. It should be concrete enough that you can evaluate whether the AI delivered what you asked for. Think of it as a task description that could appear on a project ticket.
Good examples:
- "Compare three CRM platforms on ease of integration, cost per seat, and customer support quality for our mid-market sales team"
- "Identify the top five risks of migrating our payment processing to a microservices architecture and propose mitigation strategies for each"
- "Draft three email sequences for re-engaging churned users, each targeting a different churn reason"
Bad examples:
- "Tell me about CRM platforms" (too open-ended)
- "Help with our architecture" (no specific deliverable)
- "Write some emails" (lacks specificity)
5. Action
Define how the task should be executed or presented.
Action specifies the format, structure, and presentation of the deliverable. It answers "how should this look when it is done?" Separating Action from Request gives you independent control over content and presentation.
Good examples:
- "Present findings as a scored comparison table, followed by a pros/cons list, then a one-paragraph recommendation in executive summary style"
- "Structure the guide as numbered steps with estimated time per step, required tools, and common pitfalls highlighted in callout boxes"
- "Format as three separate email drafts, each under 150 words, with subject line, body, and CTA clearly labeled"
Bad examples:
- "Format it nicely" (subjective)
- "Make it professional" (undefined standard)
- "Present it well" (no structural guidance)
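Because Request and Action are separate components, the same Request can be paired with different Actions to get the same analysis in a different format. A small Python sketch (all strings are illustrative):

```python
# One Request, two Actions: the content of the ask stays fixed while
# the deliverable format changes.
request = "Compare AWS, Azure, and GCP for our migration."

action_matrix = ("Present findings as a scored matrix with weighted criteria, "
                 "followed by a phased migration timeline.")
action_summary = ("Format as a one-paragraph executive summary "
                  "with a single recommendation.")

prompt_matrix = f"Request: {request}\n\nAction: {action_matrix}"
prompt_summary = f"Request: {request}\n\nAction: {action_summary}"
```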
Best Use Cases for the PECRA Framework
1. Strategic Decision Support
- Vendor evaluations and comparison matrices
- Go-to-market strategy development
- Investment analysis and ROI projections
- Resource allocation recommendations
2. Research and Analysis
- Market research briefs with specific deliverable formats
- Competitive intelligence reports
- Technology assessment documents
- Industry trend analysis tied to business decisions
3. Professional Communications
- Board presentations and executive summaries
- Stakeholder proposals with clear justification
- Cross-functional project briefs
- Client-facing strategy documents
4. Process and Planning
- Project implementation roadmaps
- Content strategy and editorial calendars
- Team onboarding and training materials
- Workflow optimization recommendations
When NOT to Use PECRA
PECRA adds valuable structure for complex, goal-driven tasks, but it is not the right choice for every situation:
- Simple questions or quick lookups: If you just need a definition or a quick answer, five components are overkill. A plain question works fine.
- Creative brainstorming: When you want the AI to explore freely without constraints, the structure of PECRA can limit creative output. Use a lighter framework or no framework at all.
- Role-heavy tasks: If the task depends on the AI adopting a specific professional persona (e.g., "act as a senior data scientist"), ROSES gives you a dedicated Role component that PECRA lacks.
- Iterative conversations: For multi-turn dialogues where you refine outputs through back-and-forth, PECRA's full structure is better suited to the initial prompt. Subsequent turns can be more conversational.
Common Mistakes to Avoid
1. Skipping or Weakening Purpose
Problem: Writing a vague purpose like "I need help with marketing," or skipping it entirely and jumping straight to the request.
Why it matters: Without a clear purpose, the AI cannot make informed trade-offs. It will not know whether to prioritize speed, depth, cost analysis, or creative options, because it does not know what you are trying to achieve.
How to fix: State the specific outcome you need, who will use it, and any deadline or constraint that shapes the goal. "I need to present three marketing channel recommendations to the CMO on Wednesday, with projected cost-per-lead for each" gives the AI everything it needs to make smart decisions throughout the response.
2. Merging Request and Action
Problem: Combining what you want with how you want it presented in a single block, losing the modularity that makes PECRA effective.
Why it matters: Request defines the content (what information you need); Action defines the format and execution (how it should be delivered). When they are merged, you lose independent control over content and format: you might get the right analysis in the wrong format, or the right structure with shallow content, and you cannot adjust one without rewriting the other.
How to fix: Write the Request as a content-focused statement ("Compare AWS, Azure, and GCP for our migration") and the Action as a format-focused specification ("Present findings as a scored matrix with weighted criteria, followed by a phased migration timeline"). This separation lets you keep the same Request and swap out the Action when you need the same analysis in a different format.
3. Overloading Context
Problem: Including every piece of background information you can think of, burying the critical details in noise.
Why it matters: Excessive context dilutes focus. The AI may fixate on irrelevant details or miss the constraints that actually matter. Context should make the response more specific, not more confused.
How to fix: Apply a simple test to each piece of context: "If this detail were different, would the output change?" If yes, include it. If no, cut it. A 200-person fintech company has different needs than a 10-person startup, so company size matters. Your office carpet color does not.
4. Setting Vague Expectations
Problem: Writing the Expectation component as something subjective like "a thorough analysis" or "a comprehensive report" without defining what those terms mean.
Why it matters: "Thorough" means different things to different people. Without concrete expectations, you get whatever the AI decides, which is often either too broad or the wrong format entirely. You then spend time re-prompting to get the structure you actually needed.
How to fix: Replace subjective adjectives with specific deliverables. Instead of "a comprehensive market analysis," write "a comparison table scoring three vendors on five criteria (price, scalability, support, integration complexity, compliance), followed by a two-paragraph recommendation with supporting data." The more precise your Expectation, the less re-prompting you will need.
Copy-Paste Template
Use this template as a starting point for your PECRA prompts. Replace the bracketed text with your specific details:
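A minimal template assembled from the five components defined above:

```
Purpose: [Why you are making this request and the ultimate goal]
Expectation: [The type of response or result you want]
Context: [The background information the AI needs, and nothing that would not change the output]
Request: [Exactly what you want the AI to do, stated so you can verify it was delivered]
Action: [How the deliverable should be structured, formatted, and presented]
```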
Conclusion
The PECRA framework gives you a repeatable structure for prompts where the "why" matters as much as the "what." By placing Purpose at the top, you create a decision filter that shapes every element of the AI's response, from content selection to formatting choices.
Why PECRA consistently produces better results for strategic tasks:
- Purpose alignment prevents the AI from solving the wrong problem
- Separated Expectation and Action give you independent control over quality and format
- Five components provide enough granularity for complex tasks without becoming unwieldy
- The natural briefing sequence makes prompts intuitive to write and review
To get started:
- Start with tasks where you know the purpose clearly, such as preparing a specific deliverable for a specific audience
- Practice separating Request (what) from Action (how) until it becomes natural
- Build a library of Context blocks you can reuse across similar prompts
- Compare your PECRA results to unprompted outputs to see the difference purpose-first framing makes
Framework in Action: Before & After Examples
See how the PECRA framework transforms a basic prompt into a powerful instruction.
Before: Help me pick a cloud provider for our migration.
After (PECRA):
- Purpose: I need to recommend a cloud migration vendor to leadership, with clear justification for a risk-adjusted choice.
- Expectation: A weighted scoring matrix, a two-year cost projection, and a one-paragraph executive recommendation.
- Context: We are a fintech company; PCI DSS compliance is non-negotiable; we have a $400K budget, a 6-month timeline, and a 3-person DevOps team.
- Request: Evaluate the AWS, Azure, and GCP migration offerings on cost efficiency, PCI DSS compliance, migration tooling, post-migration support, and timeline feasibility.
- Action: Present a weighted scoring matrix, Year 1 and Year 2 cost projections, and an executive recommendation.
Response Comparison
Basic response:
# Cloud Migration Comparison
Here is a quick overview of the three providers: AWS is the most popular and has lots of tools. Azure integrates well with Microsoft products. GCP is good for data-heavy workloads. I would recommend AWS because it is the market leader. The costs vary depending on your usage. All three support PCI compliance in some form. Let me know if you need more details on any of these.
PECRA response:
# Cloud Migration Vendor Evaluation: Executive Comparison

## Weighted Scoring Matrix

| Criteria (Weight) | AWS Migration Hub | Azure Migrate | GCP Migration Center |
|---|---|---|---|
| Cost Efficiency (25%) | 7/10 | 8/10 | 9/10 |
| PCI DSS Compliance (30%) | 9/10 | 9/10 | 7/10 |
| Migration Tooling (20%) | 9/10 | 8/10 | 7/10 |
| Post-Migration Support (15%) | 8/10 | 8/10 | 7/10 |
| Timeline Feasibility (10%) | 8/10 | 7/10 | 7/10 |
| **Weighted Total** | **8.25** | **8.20** | **7.50** |

## Year 1 and Year 2 Cost Projection

| Cost Category | AWS | Azure | GCP |
|---|---|---|---|
| Migration Tooling | $45,000 | $38,000 | $32,000 |
| Compute (Year 1) | $186,000 | $174,000 | $168,000 |
| Compliance Monitoring | $24,000 | $22,000 | $28,000 |
| Support Tier (Enterprise) | $48,000 | $42,000 | $36,000 |
| **Year 1 Total** | **$303,000** | **$276,000** | **$264,000** |
| **Year 2 Projected** | **$258,000** | **$238,000** | **$232,000** |

## Executive Recommendation

Based on weighted scoring across five criteria, AWS Migration Hub scores highest at 8.25/10, driven by its superior PCI DSS compliance tooling and mature migration services that align with your 6-month timeline. While GCP offers the lowest cost at $264,000 in Year 1, the $39,000 difference versus AWS is offset by AWS's stronger compliance automation, which reduces your 3-person DevOps team's manual compliance workload by an estimated 40%. For a fintech company where PCI DSS compliance is non-negotiable, AWS provides the best risk-adjusted value. I recommend proceeding with AWS Migration Hub, allocating the remaining $97,000 of your $400K budget as a contingency buffer for the first two migration phases.
Key Improvements with the Framework
- Professional Structure: clear organization with logical sections
- Targeted Focus: precisely aligned with specific outcomes
- Enhanced Clarity: clear intent and specific requirements
- Actionable Output: concrete recommendations and detailed analysis