Skip to main content

System Prompts & Custom Instructions: The Complete Guide

How to write system prompts for ChatGPT, Claude, and APIs. 8 copy-paste templates, common mistakes, and advanced patterns for consistent AI behavior.

Keyur Patel
Keyur Patel
March 13, 2026
11 min read
Prompt Engineering

Every conversation with AI starts with invisible instructions. Whether you realize it or not, system prompts shape how ChatGPT, Claude, and every other AI model responds to you. Learning to write your own custom instructions is the single highest-leverage prompting skill you can develop. A well-crafted system prompt transforms a generic chatbot into a specialized assistant that consistently delivers exactly what you need.

System prompts are persistent instructions that run behind every message in a conversation. They define the AI's personality, capabilities, boundaries, and response format before a single user message arrives. Think of them as the DNA of your AI interaction; they determine behavior at a fundamental level.

You encounter system prompts in three main places: ChatGPT Custom Instructions, Claude Project Instructions, and API system messages. Each platform handles them slightly differently, but the core principles remain the same. By the end of this guide, you will have 8 ready-to-use templates and the knowledge to write your own from scratch.

System Prompts vs Regular Prompts

Understanding the difference between system prompts and regular prompts is essential before you start writing them.

A regular prompt is a one-shot instruction. You type "Summarize this article" and get a response. The AI has no memory of your preferences, no context about your role, and no guidelines for how to format the output. Every conversation starts from zero.

A system prompt is a persistent instruction that shapes every response in a conversation. It runs silently in the background, influencing tone, format, depth, and behavior across all interactions.

Here is a useful analogy: a system prompt is like a job description, while a regular prompt is like a task assignment. You would not hand someone a task without first telling them their role, what tools they have, and what standards you expect. The same logic applies to AI.

AspectSystem PromptRegular Prompt
PersistenceApplies to every messageOne-time instruction
PurposeDefine behavior and personalityAssign a specific task
ScopeConversation-wideSingle response
VisibilityHidden from conversation flowVisible in chat
AnalogyJob descriptionTask assignment

When you combine a strong system prompt with clear regular prompts, you get consistent, high-quality outputs without repeating yourself every time.

How to Set System Prompts on Each Platform

The mechanics differ by platform, but the underlying concept is identical: give the AI its instructions before the conversation begins.

ChatGPT Custom Instructions

OpenAI provides two fields under Settings > Personalization > Custom Instructions:

  • "What would you like ChatGPT to know about you?" - Your background, role, preferences
  • "How would you like ChatGPT to respond?" - Format, tone, length, constraints
These instructions apply to every new conversation automatically. You can toggle them on and off per conversation. For a deep dive into ChatGPT-specific prompts, check out our best ChatGPT prompts collection.

Claude Project Instructions

Anthropic's Claude uses Projects to organize system-level instructions:

  • Create a new Project in the Claude interface
  • Add your system prompt in the Project Instructions field
  • Every conversation within that project inherits the instructions
Claude's project system also lets you attach reference documents, which provides additional context alongside your system prompt. Explore more Claude-specific techniques to get the most out of this setup.

API System Messages

When calling the OpenAI or Anthropic APIs directly, you pass system prompts as the first message in the conversation:

For Anthropic's API, the system prompt is a top-level parameter rather than a message in the array. The effect is the same: the model reads it before processing any user input.

Custom GPTs

OpenAI's Custom GPTs let you build specialized chatbots with persistent system instructions, uploaded knowledge files, and custom actions. You define the system prompt during GPT creation under the Instructions field. For a complete walkthrough, see our Custom GPTs guide.

8 System Prompt Templates

These templates are ready to copy, paste, and customize. Each one targets a specific use case. For structuring complex system prompts, the RACE framework provides an excellent scaffolding approach: Role, Action, Context, Expectation.

1. Code Assistant

When to use: Daily coding work, code reviews, debugging sessions, and learning new languages.

2. Writing Assistant

When to use: Blog posts, emails, documentation, marketing copy, and any written communication. Pairs well with the COSTAR framework for creative content planning.

3. Data Analyst

When to use: Exploratory data analysis, dashboard creation, SQL query writing, and interpreting business metrics.

4. Customer Support Agent

When to use: Building chatbots, training support teams, or creating response templates for help desk software.

5. Research Assistant

When to use: Literature reviews, competitive analysis, due diligence research, and fact-checking. For simple research queries, the TAG framework offers a lightweight structure: Task, Action, Goal.

6. Meeting Summarizer

When to use: After any meeting where you need actionable notes. Paste in transcripts from Otter.ai, Fireflies, or your own rough notes.

7. Content Creator

When to use: Social media posts, newsletters, blog outlines, video scripts, and content calendars.

8. Technical Reviewer

When to use: Pull request reviews, architecture decision reviews, technical RFCs, and security audits.

Advanced Patterns

Once you have the basics down, these advanced techniques will take your system prompts to the next level. For a deeper exploration of these concepts and the theory behind them, see our guide on context engineering.

Role Stacking

Instead of assigning a single role, layer multiple perspectives to get richer responses:

Role stacking forces the model to consider trade-offs that a single-role prompt would miss.

Negative Instructions

Sometimes telling the model what NOT to do is more effective than listing what to do. This is especially useful for eliminating persistent bad habits:

Negative instructions work best when paired with positive alternatives. Tell the model what to do instead, not just what to avoid.

Dynamic Prompts with Variables

Structure your system prompts with placeholders that adapt to context:

This pattern is powerful in API integrations where you programmatically swap variables per request. It lets one system prompt template serve dozens of use cases.

Token Management

Long system prompts eat into your context window, leaving less room for conversation history. Keep your system prompts efficient:

  • Front-load critical rules - models pay more attention to the beginning
  • Use structured formats (numbered lists, headers) instead of prose
  • Remove redundancy - if two rules overlap, merge them
  • Test shorter versions - often a 200-token prompt works as well as a 500-token one
For API users, monitor token usage carefully. A system prompt that uses 1,000 tokens on every request adds up quickly at scale.

Multi-Turn Consistency

System prompts can include instructions for maintaining state across a conversation:

This pattern prevents the common problem of AI "forgetting" earlier decisions in long conversations. For more techniques on maintaining coherent multi-turn interactions, explore our advanced prompt engineering guide.

Common Mistakes

After writing and reviewing hundreds of system prompts, these are the patterns that consistently cause problems.

1. Making the prompt too long. A 2,000-word system prompt is almost always worse than a focused 300-word one. The model loses signal in the noise, and you waste context window space. Start short, then add rules only when you observe specific failures.

2. Contradictory instructions. "Be concise" followed by "Always provide detailed explanations" creates confusion. The model will oscillate between behaviors unpredictably. Review your prompt for conflicts before deploying.

3. Over-reliance on "don't." A prompt that is 80% negative instructions gives the model very little guidance on what to actually do. For every "don't," include a corresponding "do instead." Negative instructions are seasoning, not the main course.

4. Skipping adversarial testing. Your prompt works great for normal inputs. But what happens when a user asks something unexpected, provides contradictory information, or tries to override the system prompt? Test edge cases before shipping.

5. Never iterating. The first version of a system prompt is never the best version. Track where the AI fails, identify patterns, and update the prompt. Treat it like code: version it, test it, improve it.

Frequently Asked Questions

Can AI ignore system prompts?

Yes, partially. System prompts are strong suggestions, not hard constraints. A determined user can sometimes override system prompt behavior through creative prompting (known as "jailbreaking"). For production applications, combine system prompts with output validation, content filtering, and application-level guardrails. Never rely solely on a system prompt for safety-critical behavior.

How long should a system prompt be?

For ChatGPT Custom Instructions, keep it under 1,500 characters per field. For API system messages, aim for 200-500 tokens for most use cases. Longer prompts are justified for complex, specialized applications, but test whether the extra length actually improves output. Many people are surprised to find that cutting their system prompt in half produces identical results.

Do system prompts work the same across models?

No. Each model interprets system prompts differently. GPT-4 tends to follow system prompts more literally, while Claude often captures the "spirit" of instructions better but may deviate from exact formatting rules. Gemini models have their own nuances. Always test your system prompt on the specific model you plan to use, and expect to make model-specific adjustments.

System prompts are the foundation of consistent, high-quality AI interactions. Whether you are customizing ChatGPT for personal use, building a customer-facing chatbot, or orchestrating complex API workflows, the principles are the same: be specific, be structured, and iterate relentlessly.

Start with one of the 8 templates above, customize it for your use case, and refine it based on real results. The best system prompt is one you have tested, broken, and rebuilt.

Keyur Patel

Written by Keyur Patel

AI Engineer & Founder

Keyur Patel is the founder of AiPromptsX and an AI engineer with extensive experience in prompt engineering, large language models, and AI application development. After years of working with AI systems like ChatGPT, Claude, and Gemini, he created AiPromptsX to share effective prompt patterns and frameworks with the broader community. His mission is to democratize AI prompt engineering and help developers, content creators, and business professionals harness the full potential of AI tools.

Prompt EngineeringAI DevelopmentLarge Language ModelsSoftware Engineering

Explore Related Frameworks

Try These Related Prompts