AI Platform Comparison Report
2026 Edition

A comprehensive analysis of seven major AI distributions — pricing, privacy, data retention, training practices, regulatory history, and Claude-specific deep dives.

Published: April 2026
Platforms: 7
Sections: 8
Charts: 22+
Status: ● Live Data
The AI Landscape in 2026
Seven major platforms now dominate consumer and enterprise AI. This report compares them across cost, privacy, data practices, regulatory exposure, and capability.

The AI distribution market has consolidated around seven major players, each with distinct business models, safety philosophies, and privacy approaches. As of April 2026, pricing has largely converged at $20/month for first-tier paid access — but the differences in what you get, and what you give up, are substantial. This report is designed for researchers, procurement officers, and informed consumers who want a clear-eyed analysis beyond marketing claims.

Scope note: Performance discussions represent a broad average across all available models within each distribution, not specific model comparisons. Scores are researcher estimates based on published policies and third-party analyses. Privacy, retention, and training data practices should be verified against each provider's current terms of service before making enterprise decisions.
The Seven Distributions
Quick profiles of each platform: origin, primary audience, and key differentiator.
Claude
Anthropic · 2023
Constitutional AI, safety-first design. Leads on long-context analysis, coding, and privacy-by-default. Favored in regulated industries and research environments. PBC structure provides unique governance guarantees.
ChatGPT
OpenAI · 2022
Largest user base (~800M weekly). Most versatile across general tasks, creative writing, and multimodal work. Strong plugin/tool ecosystem. Sets the de facto industry standard for pricing.
Gemini
Google · 2023
Deep Google Workspace integration. Leads on multimodal (video, audio, images) and long-context (1M tokens). Best for users already embedded in Google ecosystem. Strong search-backed factuality.
Copilot
Microsoft · 2023
Embedded in Microsoft 365 suite. Best for enterprise users in Windows/Office environments. Powered by OpenAI models with enterprise compliance layers. Coding focus via GitHub Copilot integration.
Meta AI
Meta · 2024
Entirely free. Integrated into WhatsApp, Messenger, Instagram. Powered by Llama open-source models. No paid tiers — monetized through Meta's advertising ecosystem. Weakest privacy posture.
Perplexity
Perplexity AI · 2022
Search-native "answer engine." Uniquely routes queries across multiple frontier models (ChatGPT, Claude, Gemini) with real-time citations. Best for research, fact-checking, and current-event queries.
Grok
xAI / X.com · 2023
Integrated with X (Twitter). Real-time access to X data stream. Known for uncensored style and minimal content restrictions. Competitive SWE-bench scores. Weakest enterprise compliance posture.
Capability Comparison — Cross-Platform Overview
Relative scores (1–10) across key functional dimensions. Average across all model tiers per distribution.
Anthropic Claude — Tier Summary
A brief overview of Claude's subscription tiers and what distinguishes each level.
Free
$0
Per month

Access to Sonnet 4.6 and Haiku 4.5. Limited usage windows. No Opus access. Good for occasional use and evaluation. No Projects or Claude Code.

Pro
$20/mo
$17/mo annual

5× more usage than Free. Full Opus 4.6 + Sonnet 4.6 + Haiku 4.5 access. Claude Code included. Projects with Files, Memory, Instructions. 200K context window.

Max
$100–200/mo
5× or 20× usage vs Pro

Max 5× ($100/mo) or Max 20× ($200/mo). Same features as Pro with dramatically higher usage limits. 1M context in Claude Code. Designed for heavy daily users and power workflows.

Team
$25–100/seat/mo
Min. 5 seats

Standard ($25/seat) for collaboration. Premium ($100/seat) adds Claude Code. SSO, admin controls, central billing, M365/Slack integrations, enterprise search. Training OFF contractually.

Enterprise
Custom Pricing contact sales
Self-serve Enterprise also available — per seat + API usage billed separately

Full Enterprise unlocks: 500K context window, HIPAA readiness, SCIM provisioning, audit logs, role-based access control, custom data retention, enhanced security posture. Training disabled contractually (not just by settings). SOC 2 Type II certified. Targeted at regulated industries: healthcare, finance, legal, government. Enterprise customers see $13–$250/developer/month in Claude Code usage at API rates.

Key insight: Claude is the only major platform where data training is OFF by default at the consumer level (opt-in, not opt-out). At Team and Enterprise tiers, no-training is guaranteed contractually — not just a settings toggle that Anthropic could change unilaterally.
Feature Comparison Matrix
Side-by-side comparison of key features and capabilities across all seven distributions.
Feature / Capability | Claude | ChatGPT | Gemini | Copilot | Meta AI | Perplexity | Grok
Free Tier | ✓ Yes | ✓ Yes | ✓ Yes | ✓ Yes | ✓ All Free | ✓ Yes | ✓ Yes
Max Context Window | 1M tokens | 400K tokens | 1M tokens | 128K tokens | 128K tokens | 200K tokens | 256K tokens
Web Search | ✓ Built-in | ✓ Built-in | ✓ Google | ✓ Bing | ✓ Built-in | ✓ Native | ✓ X/Web
Code Generation | Excellent | Excellent | Good | Excellent | Moderate | Moderate | Excellent
Terminal/CLI Agent | ✓ Claude Code | ~ Codex | ✗ No | ✓ GitHub | ✗ No | ✗ No | ✗ No
Image Generation | ✗ No | ✓ DALL-E | ✓ Imagen | ✓ DALL-E | ✓ Meta | ✓ FLUX | ✓ Aurora
Video Understanding | ~ Limited | ✓ Yes | ✓ Best-in-class | ~ Limited | ✗ No | ✗ No | ~ Limited
Real-Time Data | ✓ Web search | ✓ Web search | ✓ Google search | ✓ Bing | ~ Limited | ✓ Native, cited | ✓ X live data
Projects / Workspaces | ✓ Advanced | ✓ GPT Projects | ✓ Gems | ✓ Pages | ✗ No | ✓ Spaces | ✗ No
Multi-Model Routing | ✗ No | ✗ No | ✗ No | ✗ No | ✗ No | ✓ GPT/Claude/Gemini | ✗ No
Enterprise SSO/SCIM | ✓ Team+ | ✓ Team+ | ✓ Workspace | ✓ M365 | ✗ No | ✓ Enterprise | ~ Limited
Training Off by Default | ✓ Opt-in only | ✗ Opt-out | ✗ Opt-out | ✗ Opt-out | ✗ No opt-out | ✗ Opt-out | ✗ No opt-out
HIPAA Ready | ✓ Enterprise | ✓ Enterprise | ✓ Enterprise | ✓ Enterprise | ✗ No | ~ Enterprise | ✗ No
EU GDPR Compliance | ✓ DPA available | ✓ DPA available | ✓ DPA available | ✓ DPA available | ~ Contested | ~ Limited | ✗ Contested
Open Source Model | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary | ✓ Llama | ✗ Proprietary | ~ Partial
Mobile App | ✓ iOS/Android | ✓ iOS/Android | ✓ iOS/Android | ✓ iOS/Android | ✓ WhatsApp/IG | ✓ iOS/Android | ~ Via X app
Desktop App | ✓ Mac/Win | ✓ Mac/Win | ~ Mac | ✓ Windows | ✗ No | ~ Mac | ✗ No
API Access | ✓ Yes | ✓ Yes | ✓ Yes | ✓ Via Azure | ✓ Meta API | ~ Limited | ✓ Yes
Cost Analysis — Pricing by Tier
Monthly USD pricing across all distributions and tiers. Custom/contact-sales tiers shown as typical estimates. Meta AI has no paid tiers ($0 for all).

The $20/month first-tier price point has become the industry standard, set by ChatGPT Plus in early 2023 and matched by nearly every competitor. The real differentiation lies in what you receive for that price — context window size, model access, usage limits, and privacy guarantees vary enormously. Power-user tiers have split into a $100–$200 range, with enterprise pricing remaining highly variable and negotiated.

All-Tier Pricing Comparison — Monthly USD (All Distributions)
Free, First Pay, Second Pay, Team, and Enterprise tiers shown side by side. Enterprise tiers = estimated typical rates. Meta AI shown at $0 — no paid tiers exist.
Free: Claude $0 (Sonnet/Haiku) · ChatGPT $0 (GPT-4o mini) · Gemini $0 (Gemini Flash) · Copilot $0 (Basic Copilot) · Meta AI $0 (full access) · Perplexity $0 (limited searches) · Grok $0 (X integrated)
First Pay: Claude Pro $20/mo ($17 annual) · ChatGPT Plus $20/mo · Gemini AI Pro $19.99/mo · Copilot Pro $20/mo · Meta AI N/A ($0 only) · Perplexity Pro $20/mo ($16.67 annual) · Grok X Premium $8/mo
Second Pay: Claude Max 5× $100/mo or Max 20× $200/mo · ChatGPT Pro $200/mo · Gemini AI Ultra ~$250/mo · Copilot M365 Business ~$22/user/mo · Meta AI N/A ($0 only) · Perplexity Max $200/mo ($166 annual) · Grok X Premium+ $40/mo
Team (per seat): Claude Standard $25 or Premium $100 · ChatGPT Team $30 · Gemini Workspace AI $30 · Copilot M365 Business $30 · Meta AI no team tier · Perplexity Enterprise $40 · Grok Business $30
Enterprise (per seat, est.): Claude custom (contact sales; self-serve ~$20 + API) · ChatGPT custom ~$60+ · Gemini Enterprise $30+ · Copilot M365 E3 + Copilot ~$42.50 · Meta AI no enterprise tier · Perplexity Enterprise $40–325 · Grok custom ~$50+
Value insight: At the $20 first-tier, Claude Pro uniquely includes Claude Code (CLI coding agent) and the largest context window (200K, 1M in Claude Code). ChatGPT Plus includes DALL-E image generation. Perplexity Pro routes queries across multiple frontier models simultaneously — unique at any tier. Grok's $8 entry point is the lowest among paid tiers but ties AI access to X (Twitter) platform features.
Consumer Privacy Scores by Tier
Overall privacy score (0–10, higher = better privacy) for each distribution at each pay tier. Scores derived from published policies, training opt-out mechanisms, retention periods, and third-party analyses.

Privacy practices differ dramatically not just between platforms, but between tiers on the same platform. Enterprise tiers generally offer contractual protections absent from consumer plans. The most significant jump in privacy protection occurs between consumer (Pro/Plus) and team/enterprise tiers, where Data Processing Agreements (DPAs) create legal accountability.

Scoring methodology: composite of data training default (30%), retention duration (25%), third-party sharing (20%), human review access (15%), and user control options (10%). Higher = better privacy. Scores represent researcher consensus estimates from published policies as of April 2026.
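The weighting above can be expressed directly as a weighted sum. A minimal sketch (the component values in the example are hypothetical, not the report's underlying data):

```python
# Weights from the stated methodology: training default 30%, retention 25%,
# third-party sharing 20%, human review 15%, user control 10%.
WEIGHTS = {
    "training_default": 0.30,
    "retention": 0.25,
    "third_party_sharing": 0.20,
    "human_review": 0.15,
    "user_control": 0.10,
}

def privacy_score(components: dict) -> float:
    """Weighted composite on a 0-10 scale; higher = better privacy."""
    assert set(components) == set(WEIGHTS)
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# Hypothetical platform: strong training default, weaker human-review controls.
example = {"training_default": 9, "retention": 8,
           "third_party_sharing": 7, "human_review": 6, "user_control": 8}
print(round(privacy_score(example), 1))  # → 7.8
```

Because the weights sum to 1.0, a platform's composite stays on the same 0–10 scale as its components.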
Tier 1 (First Pay) — Privacy Score Comparison
Higher score = better privacy. Claude leads due to opt-in training default. Meta AI scores lowest — no training opt-out available. Meta AI has no paid tier; consumer score shown for comparison.
Tier 2 (Second Pay) — Privacy Score Comparison
Second-tier plans generally offer marginally better controls. Claude Max maintains the highest score with opt-in training. Meta AI shown at consumer baseline (no paid tiers).
Tier 3 (Team) — Privacy Score Comparison
Team tiers show the biggest privacy improvement. DPAs create contractual protections. Claude Team's contractual no-training guarantee scores highest. Meta AI has no team offering.
Tier 4 (Enterprise) — Privacy Score Comparison
Enterprise tiers show the strongest privacy postures across the industry. Claude Enterprise and ChatGPT Enterprise both score highest. Meta AI and Grok lack enterprise-grade privacy frameworks. Meta AI shown as N/A (no enterprise tier).
Data Retention Risk
Worst-case data retention in months. Indefinite retention periods scored at 60 months. Lower bar = lower risk. Based on published policies as of April 2026.

Data retention risk measures how long your conversation data may be stored by the platform in the worst-case scenario. Indefinite retention (common among consumer-tier free products) is capped at 60 months for visualization purposes. Enterprise tiers with negotiated DPAs often offer configurable retention down to 30 days or less.

Key finding: Gemini's default 18-month retention (adjustable 3–36 months) is the most clearly documented among consumer tiers. Claude's "opt-in" model means if you opt in to training, data is retained up to 5 years (60 months); if you opt out, only 30 days. Meta AI retains data indefinitely across its platform ecosystem with no clear deletion guarantee.
Tier 1 — Data Retention Risk (Months, Worst Case)
Lower = less risk. 60 = indefinite. Claude: 60 if opted into training (5yr), or 1 month if opted out. ChatGPT Plus: indefinite by default unless history disabled. Grok: tied to X account indefinitely.
Tier 2 — Data Retention Risk (Months, Worst Case)
Second-tier plans offer minimal improvement in retention for most platforms. Gemini AI Ultra maintains configurable 36-month maximum. Meta AI remains indefinite.
Tier 3 — Team Data Retention Risk (Months, Worst Case)
Team tiers with DPAs show significant improvement. Claude Team: ~12 months. ChatGPT Team: ~1 month (30 days, excluded from training). Grok Business still lacks strong retention controls.
Tier 4 — Enterprise Data Retention Risk (Months, Worst Case)
Enterprise tiers with full DPAs offer the best retention controls. Claude Enterprise and ChatGPT Enterprise allow configuration down to 30 days. Grok lacks enterprise DPA framework.
Training Data Default
Whether consumer conversations train AI models without requiring user action. 1 = trains by default (red), 0.5 = partial/easy opt-out (amber), 0 = does not train by default (green).

Training data defaults reveal each platform's fundamental business model. Platforms that train on user data by default are subsidizing their service through your intellectual property. Platforms that require opt-in (like Claude) rely on paid subscriptions and API revenue rather than data monetization. Enterprise tiers across the industry have generally moved to no-training-by-default, recognizing enterprise customers will not accept training on sensitive organizational data.

Tier 1 — Training Data Default Status
Green (0) = safe by default. Amber (0.5) = easy opt-out. Red (1) = trains by default, requires effort to disable. Meta AI has no opt-out at any tier.
SAFE BY DEFAULT
Claude Pro — Opt-in only. Training off unless you explicitly enable it. Gold standard in the industry.
EASY OPT-OUT
ChatGPT Plus — Training on by default, but opt-out is findable in settings. "Memory" feature adds complexity.
NO OPT-OUT
Meta AI — No training opt-out at any tier. Grok X Premium — Training tied to X account, no disable mechanism.
PARTIAL / COMPLEX
Gemini — Training opt-out limited; disabling Activity Control reduces functionality. Perplexity — Data also routed to partner models.
Tier 2 — Training Data Default Status
Slight improvement at higher individual tiers, but the fundamental defaults do not change for most platforms. Claude Max maintains 0 (no training by default).
Tier 3 — Team Training Data Default Status
Team tiers show major improvement. Claude, ChatGPT, Gemini, and Copilot all exclude team data from training. Grok Business partial improvement. Meta AI has no team tier.
Tier 4 — Enterprise Training Data Default Status
Enterprise tiers across all platforms except Grok have eliminated training-by-default. DPAs contractually prohibit training on customer data. Grok Enterprise remains without a formal DPA framework.
Multi-Dimensional Privacy Profiles
Radar charts across five risk dimensions. Smaller filled area = lower risk (better). Scored 0 (safest) to 10 (highest risk). Meta AI is excluded from Team and Enterprise tiers (no such offerings exist).

Five dimensions capture the full privacy risk landscape: Data Training Risk (does the platform train on your data?), Retention Duration Risk (how long is data stored?), Third-Party Sharing Risk (who else sees your data?), Human Review Risk (can employees read your conversations?), and User Control Risk (can you prevent or undo data use?). A company with a small, tight radar polygon offers superior privacy protection.

Tier 1 — Multi-Dimensional Privacy Profile
Smaller area = better privacy. Claude (gold) has the smallest polygon. Meta AI (blue) has the largest — poorest privacy. Click legend items to toggle visibility.
Tier 2 — Multi-Dimensional Privacy Profile
Second-tier plans show marginal improvements. Claude Max maintains the tightest polygon. Meta AI baseline included for reference.
Tier 3 — Team Privacy Profile
Team tiers show dramatic overall improvement. Claude Team and ChatGPT Team show the tightest polygons. Grok Business lags significantly. Meta AI excluded (no team tier).
Tier 4 — Enterprise Privacy Profile
Enterprise tiers converge on strong privacy for the major platforms. Claude Enterprise and ChatGPT Enterprise nearly identical. Grok Enterprise without DPA remains an outlier. Meta AI excluded.
Regulatory Action History (2023–2026)
Significant regulatory actions, enforcement proceedings, and major legal events per platform. Regulatory risk is platform-level; higher-tier plans offer more contractual protections but cannot eliminate corporate-level regulatory exposure.

Regulatory scrutiny of AI platforms has intensified dramatically from 2023 to 2026. Key events include: Italy's ChatGPT suspension (April 2023) and subsequent €15M GDPR fine (December 2024), Texas's $1.375B biometric settlement against Google, multiple copyright lawsuits against OpenAI and Perplexity from major media organizations, the FTC's "Operation AI Comply" anti-deception enforcement, and a growing volume of state AG actions. Note that these are company-level events — higher tiers do not eliminate exposure, but enterprise DPAs provide contractual protections for individual organizational data.

Important context: Regulatory actions are corporate-level events that affect all tiers equally in terms of reputational and systemic risk. Higher-tier subscribers benefit from stronger DPAs, but cannot fully insulate themselves from a company-wide regulatory action. The chart below shows the same data for all tiers, as the underlying events are platform-wide.
Tier 1 — Platform Regulatory Actions 2023–2026
Count of significant regulatory actions, enforcement proceedings, or major legal events. Excludes routine compliance filings. Higher = greater regulatory risk exposure at platform level.
Tier 2 — Platform Regulatory Actions 2023–2026
Same platform-level data applies across all tiers. Second-tier subscribers face identical corporate regulatory risk exposure as first-tier users.
Tier 3 — Team Platform Regulatory Actions 2023–2026
Team subscribers benefit from DPAs that provide data protections, but platform-level regulatory risk (fines, investigations) remains unchanged. Shown with Enterprise DPA Coverage indicator.
Tier 4 — Enterprise Platform Regulatory Actions 2023–2026
Enterprise users have the strongest contractual protections. However, platform-level regulatory exposure is the same. Claude/Anthropic's lower count (2) reflects its newer market entry and PBC legal structure.

Key Regulatory Events Summary (2023–2026)

  • OpenAI/ChatGPT (7 events): Italy ChatGPT suspension & reinstatement (Apr 2023), FTC investigation opened (Jul 2023), Italy €15M GDPR fine (Dec 2024), NYT copyright lawsuit ongoing, Florida criminal investigation into school shooting conversations (2026), State AG "delusional outputs" letter (Dec 2025), EU AI Act GPAI compliance obligations.
  • Google/Gemini (5 events): Texas $1.375B biometric data settlement (2025), Italy Garante inquiry, NYT copyright lawsuit (2025), EU AI Act compliance scrutiny, August 2025 data breach exposing conversations in Google Search index.
  • Meta AI (5 events): EU GDPR training pause (Jun 2024), Italy investigation, EU blocked Meta AI launch requiring policy changes (2024), State AG letter (Dec 2025), multiple copyright lawsuits regarding Llama training data.
  • Microsoft/Copilot (4 events): NYT copyright lawsuit (2023), FTC scrutiny of AI practices, State AG letter (Dec 2025), EU AI Act compliance requirements for high-risk AI systems.
  • Perplexity (3 events): NYT copyright lawsuit (Dec 2025), Chicago Tribune trademark/copyright lawsuit (2026), State AG letter (Dec 2025).
  • Grok/xAI (3 events): EU AI Act/GDPR compliance investigations, xAI Colorado AI Act federal lawsuit (Apr 2026), State AG letter (Dec 2025).
  • Claude/Anthropic (2 events): State AG letter (Dec 2025), copyright case re: music lyrics transformation (Apr 2026). Notably, Anthropic's PBC status and Constitutional AI approach have largely insulated it from major enforcement actions to date.
Claude Version Comparison: Browser vs Desktop
Comparing the web-based and desktop-based Claude distributions — what each offers, where they differ, and when to choose one over the other.
🌐 Browser-Based Claude
claude.ai — No installation required

Access

Available in any modern browser (Chrome, Firefox, Safari, Edge) on any OS. No software installation. Access via claude.ai or the mobile apps (iOS/Android). Works on tablets and mobile devices with full feature parity with the web version.

Core Features

  • Full chat interface with conversation history
  • Projects (Files, Instructions, Memory, Share)
  • Artifact rendering (HTML, React, SVG, Code, Markdown)
  • Web search, deep research, file uploads
  • MCP app connections (Google Drive, Gmail, Miro, etc.)
  • Image and document upload/analysis
  • Voice input (mobile)

Limitations vs Desktop

  • No local file system access (browser sandbox prevents this)
  • No system-level integrations
  • Claude Code available but terminal runs in cloud, not locally
  • Notifications require browser to be open
🖥️ Desktop App (macOS/Windows)
Downloadable app — System integration layer

Access

Native desktop application for macOS and Windows. Installable from claude.ai/download. Behaves like a native app with dock/taskbar presence, system notifications, keyboard shortcuts, and background operation. Requires Pro plan or higher for full feature access.

Additional Desktop Features

  • Local file system integration — Claude can access files you explicitly share from your local drive
  • Screenshot/screen capture — Share what's on your screen directly with Claude
  • System-level Claude Code — Terminal integration runs Claude Code against your local machine's filesystem and development environment
  • App context — Claude can see context from apps you're using (Pro+ required)
  • Background processing with notifications
  • Native macOS/Windows keyboard shortcuts and accessibility APIs
  • Enterprise deployment via MDM/SCCM for IT teams

Best Use Cases for Desktop

  • Software development with local codebase integration (Claude Code)
  • Power users who want OS-level keyboard shortcuts
  • Teams using enterprise MDM deployment
  • Users who want Claude accessible without browser tab management
Browser vs Desktop Feature Availability
Feature parity rating 0–10 for each platform version across key capability categories.
Overall Claude Abilities
A comprehensive walkthrough of Claude's core features: chat, Projects, Claude Code, Customizations, and Design capabilities.

1. Claude Basic Chat

Claude's core interface is a conversation window where you interact with the model using natural language. Each conversation maintains full context within a session, and (with Pro+) previous conversations are searchable. Key chat capabilities include:

Text & Documents

  • Long-form analysis up to 200K tokens (1M in Claude Code)
  • Upload PDFs, Word docs, CSVs, images, code files
  • Deep research with web search
  • Multi-document synthesis and comparison

Code & Technical

  • Code generation in all major languages
  • Debugging, refactoring, documentation
  • SQL query generation and optimization
  • Architecture and system design

Artifacts

  • Generate interactive HTML/React applications
  • SVG diagrams and visualizations
  • Mermaid flowcharts and ERDs
  • Downloadable code files, reports, documents

2. Claude Projects

Projects are persistent workspaces that maintain context, files, and instructions across all conversations within that project. Think of a Project as a specialized AI workspace optimized for a specific domain, client, or workflow. All conversations within a Project share the same knowledge base, instructions, and memory.

📤 Share Capability

Projects can be shared with team members on Team and Enterprise plans. Sharing gives collaborators access to the project's instructions and knowledge base, plus the ability to start new conversations within the project's context. The project owner controls who can view vs. edit. Shared Projects enable consistent AI behavior across an entire team: every conversation in the Project is guided by the same instructions and draws on the same knowledge base.

⚙️ Available on: Team, Enterprise plans
🧠 Memory Section

Memory stores key facts and preferences Claude has learned about you across conversations. Claude generates memories automatically from your interactions, but you can view, edit, and delete individual memories. Use the Memory section to: review what Claude knows, correct inaccuracies, add specific preferences manually, or delete sensitive information. Memories supplement — not replace — context within a conversation. Best practice: periodically review your memories to ensure they reflect your current preferences and role.

⚙️ Available on: Pro, Max, Team, Enterprise
📋 Instructions Section

Instructions are the system prompt for your Project — they define Claude's role, behavior, tone, and constraints within that workspace, and they are applied automatically to every conversation in the Project. This is where you specify Claude's persona/role ("You are a senior data analyst specializing in Python"), response format preferences, things Claude should never do within this project, and domain-specific context Claude should always keep in mind. Instructions are delivered as a system prompt rather than appearing in the user turn.

Best practice: Write instructions in clear, direct language. Specify both what Claude SHOULD do and what it should AVOID. Include output format expectations (bullet points vs prose, specific headers, etc.).
⚙️ Available on: Pro, Max, Team, Enterprise
📁 Files (Knowledge Base)

The Files section of a Project is a persistent knowledge base that Claude can reference across all conversations in the Project. Upload documents, reference materials, SOPs, data dictionaries, company policies, or any text that Claude should know. Files are automatically included in Claude's context for every conversation in the Project, within the 200K token context limit. Accepted formats: PDF, TXT, DOCX, CSV, code files, MD. This replaces the need to re-upload reference materials in every conversation — upload once, use always.

Tip: Organize files strategically. A 200K context window is large but not infinite. For very large knowledge bases, prioritize the most frequently referenced documents and use Instructions to point Claude to specific files for specific query types.
⚙️ Available on: Pro, Max, Team, Enterprise

3. Claude Code

Claude Code is Anthropic's agentic coding tool — a CLI (command-line interface) application that runs in your terminal and operates directly on your local filesystem. Unlike chatting about code, Claude Code executes tasks: it reads files, writes changes, runs tests, uses git, and completes multi-step development tasks autonomously. Included with Pro plans and above.

Core Capabilities

  • Reads/writes files directly in your project
  • Executes shell commands and scripts
  • Runs test suites and interprets results
  • Makes git commits and manages branches
  • Multi-file refactoring at scale
  • 1M token context window (no size surcharge)

Agent Capabilities

  • Auto-accept mode: executes without confirmation
  • Agent Teams: spawns multiple Claude instances
  • Background processing for long tasks
  • GitHub Actions integration
  • MCP server connections for extended tooling
  • Web search for documentation lookup

Pricing Structure

  • Pro: Included (with usage limits)
  • Max 5×: $100/mo — ~$100-150/dev/mo typical
  • Max 20×: $200/mo — ~$150-250/dev/mo typical
  • Team Premium: $100/seat (CC included)
  • Enterprise: Seat fee + API tokens separately
  • API-only: $6/dev/day average, variable
# Install Claude Code
npm install -g @anthropic-ai/claude-code

# Launch in your project directory
cd /path/to/your/project
claude

# Example: ask Claude to implement a feature
> Add unit tests for the authentication module using Jest
# Claude reads your files, writes tests, runs them, fixes failures

# Run with auto-accept (no confirmation prompts)
claude --autoaccept

# Specify a model
claude --model claude-opus-4-6

4. Claude Customizations

🔌 Connecting Apps (MCP)

Claude supports connections to third-party services via the Model Context Protocol (MCP). Connect apps via Settings > Integrations. Connected apps allow Claude to: read/write data from external services, execute actions on your behalf, and access real-time data. Currently supported: Google Drive, Gmail, Google Calendar, Atlassian (Jira/Confluence), Miro, Slack, and more via the MCP registry. Once connected, Claude can (with your permission) search your Drive, read your emails, create calendar events, and interact with project management tools — all from within the conversation interface.

🔒 Security: App connections require explicit user authentication (OAuth). Claude can only access what the connected app's permissions allow. No automatic data access — Claude asks before using connected tools.
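On the desktop app, local MCP servers can also be registered through a JSON configuration file. A minimal sketch, following the format in Anthropic's published MCP quickstart (the server package is the reference filesystem server; the directory path is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}
```

After restarting the app, tools exposed by the server become available in conversations, gated by the same per-use permission prompts described above.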
⚙️ Create New Skills

Skills are reusable instruction sets that customize Claude's behavior for specific tasks. Think of them as saved, templated prompts with structured behaviors. Create skills for: writing in your company's brand voice, always generating code in a specific style, using specific tools or workflows for a domain, or maintaining a specific persona across interactions. Skills can be created in your Settings panel and applied per-conversation or set as defaults within Projects. Team and Enterprise plans allow skills to be shared across an organization, ensuring consistent AI behavior at scale.

Skills differ from Project Instructions in scope — Instructions are Project-specific, while Skills are portable across Projects and individual conversations. Skills are essentially reusable system-prompt components.

5. Claude Design Capabilities

Claude has powerful design and visual content generation capabilities that go well beyond text. Through its Artifact system, Claude can generate, render, and iterate on a wide range of visual and interactive design outputs. Below are the primary design capability categories:

🛠️ Prototypes

Claude generates fully interactive UI prototypes as HTML/CSS/JS or React artifacts that render directly in the interface. Request: "Build a login form with validation," "Create a dashboard with a chart," or "Prototype a mobile onboarding flow." Prototypes are interactive, downloadable, and can be iterated upon through conversation. Best for: product design validation, client presentations, rapid ideation, building MVPs. Claude can also build full single-page applications with persistent state using the storage API.

🎨 Design Systems

Claude can generate design system documentation, component libraries, and style guides. Ask Claude to: "Define a color palette for a fintech brand," "Create a component specification for our button library," or "Generate a typography scale for a healthcare application." Output can be Markdown documentation, JSON tokens, CSS variables, or Tailwind configurations. Claude understands design system methodologies including Atomic Design, Material Design, and custom systems. Useful for developers needing to translate designs into code specifications.

📊 Slide Decks

Claude generates presentation content in multiple formats. For text-based slides: structured Markdown outlines with speaker notes, ready for import into PowerPoint, Google Slides, or Keynote. For rendered slides: HTML-based presentations using CSS animations and slide transitions that work in-browser. Ask: "Create a 10-slide pitch deck for a SaaS product," or "Generate an executive summary presentation on our Q3 results." Claude can also generate PowerPoint (.pptx) files directly via its computer use capabilities for downloadable decks.

📐 From Template

Claude can start from specified templates or generate based on established design patterns. Provide a wireframe description, a reference to a design style ("make it look like Linear's dashboard"), or an existing structure ("use this existing form as the base"). Claude understands common UI patterns: CRUD interfaces, data tables, dashboard layouts, marketing landing pages, checkout flows, and admin panels. For code-based templates, Claude can import and adapt from component libraries (shadcn/ui, Material UI, etc.).

🗺️ Diagrams & Other

Beyond prototypes, Claude generates:

  • Mermaid flowcharts and ERDs (copy-paste into Mermaid.live or Notion)
  • SVG illustrations and icons
  • Data visualizations using Chart.js, D3.js, or Recharts
  • Architecture diagrams in Mermaid or ASCII
  • Infographics as styled HTML
  • Technical documentation with embedded visual explanations

Claude can also generate 3D scenes using Three.js for product visualization, generative art using p5.js or canvas, and animation sequences using CSS keyframes. All outputs are downloadable and code-editable.
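As a sketch of what the Mermaid output looks like in practice, the few lines of Python below assemble flowchart text of the kind described above (node names invented for illustration); the resulting string can be pasted into Mermaid.live or a Notion code block to render.

```python
# Assemble a minimal Mermaid flowchart as plain text. The node names and
# edges here are invented for illustration.
edges = [
    ("User", "Frontend"),
    ("Frontend", "API"),
    ("API", "Database"),
]

def to_mermaid(edges: list) -> str:
    """Build 'graph TD' flowchart text from a list of (source, target) pairs."""
    lines = ["graph TD"] + [f"    {src} --> {dst}" for src, dst in edges]
    return "\n".join(lines)

print(to_mermaid(edges))
```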

💡 Design Tips

  • Specify the target audience and use case upfront
  • Reference existing design systems (Material, Apple HIG, etc.)
  • Iterate conversationally — "make it darker," "add a mobile view"
  • Ask Claude to explain its design choices
  • Use Projects with Files to maintain brand assets/guidelines
  • For production use, review and refine Claude's output before deploying
Claude Model Summary: Opus, Sonnet & Haiku
A comparison of the three Claude model families available as of April 2026, plus a guide to the Adaptive Thinking toggle.
Note on model naming: As of April 2026, the current recommended models are Claude Opus 4.6, Claude Sonnet 4.6, and Claude Haiku 4.5; an "Opus 4.7" has not been publicly released as of this report's date. The analysis below covers Opus 4.6, the current flagship. Model versions update regularly; verify current availability at claude.ai or docs.anthropic.com.
Opus 4.6
Most Capable · Flagship
Pro+
Intelligence: 9.6 · Coding: 9.4 · Reasoning: 9.6 · Writing: 9.8 · Speed: 5.5

Best For

  • Complex multi-step reasoning tasks
  • Advanced code architecture and design
  • Long-form analysis and research synthesis
  • Nuanced writing requiring deep domain knowledge
  • Tasks where quality > speed

API: $5/M input · $25/M output · Context: 200K (1M in Code)
RECOMMENDED
Sonnet 4.6
Balanced · Best Value
All Plans
Intelligence: 8.8 · Coding: 9.0 · Reasoning: 8.6 · Writing: 8.7 · Speed: 7.8

Best For

  • Daily work tasks (emails, documents, analysis)
  • Software development (coding, debugging, code review)
  • Research assistance and summarization
  • Conversational AI applications
  • 90%+ of typical use cases at 40% less cost than Opus

API: $3/M input · $15/M output · Context: 200K (1M in Code)
Haiku 4.5
Fastest · Most Affordable
All Plans
Intelligence: 7.2 · Coding: 7.4 · Reasoning: 6.8 · Writing: 7.5 · Speed: 9.6

Best For

  • High-volume, latency-sensitive applications
  • Simple Q&A and classification tasks
  • Customer service chatbots (first response)
  • Content moderation at scale
  • Applications where speed is critical

API: $0.80/M input · $4/M output · Context: 200K
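The per-million-token rates above make request costs straightforward to estimate. A minimal sketch using the listed rates (actual billing, caching discounts, and batch pricing may differ; verify against official pricing):

```python
# Per-million-token API rates (USD) from the model cards above.
RATES = {
    "opus-4.6":   {"input": 5.00, "output": 25.00},
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
    "haiku-4.5":  {"input": 0.80, "output": 4.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request at the listed rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a 10K-token prompt producing a 2K-token response.
for model in RATES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At these rates a 10K-in / 2K-out request costs $0.10 on Opus, $0.06 on Sonnet, and $0.016 on Haiku, which is where Sonnet's "40% less than Opus" figure comes from.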
Model Capability Comparison — Opus vs Sonnet vs Haiku
Relative scores across 5 dimensions. Sonnet is the recommended default for most workloads given its balance of quality and cost.
🧠 The Adaptive Thinking Toggle (Extended Thinking)
Available on Sonnet 4.6 and Opus 4.6 · Pro plans and above

Adaptive Thinking (also known as Extended Thinking) is a mode where Claude is allowed to reason through a problem at length before generating its final response. When enabled, Claude produces an internal "thinking" step — a scratch pad where it works through the problem, considers alternatives, checks its own reasoning, and corrects errors before committing to a final answer.

When to Enable

  • Complex mathematical or logical problems
  • Multi-constraint optimization tasks
  • Advanced code debugging where the bug is subtle
  • Legal, medical, or technical analysis requiring careful reasoning
  • Tasks where "chain of thought" improves accuracy
  • Research questions with ambiguous or conflicting evidence

Trade-offs

  • Slower response time (thinking takes tokens and time)
  • Uses more tokens (thinking tokens billed at output rates)
  • May be overkill for simple factual questions
  • Can be toggled on/off per-conversation
  • Budget for thinking can be controlled via API
  • Final answer quality often significantly higher for hard problems
How it works: With Adaptive Thinking enabled, Claude generates a <thinking> block (visible or hidden depending on the interface) where it reasons step by step. It may revise its reasoning multiple times before the final response. This process can take additional seconds but yields measurably better performance on complex problems — particularly math, code, and multi-step analysis. Enable it by clicking the "Extended Thinking" or "Adaptive Thinking" toggle in the model selection interface, or pass thinking: {type: "enabled", budget_tokens: N} in the API.
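In the API, the toggle is just a field on the request. The sketch below builds the request body with the stdlib only; in practice you would send it via the Anthropic SDK or an HTTP client, and the model name and token budget shown are illustrative values, not recommendations.

```python
import json

# Request body for the Messages API with Extended Thinking enabled.
# Model name and budget_tokens are illustrative values.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 4096,
    "thinking": {"type": "enabled", "budget_tokens": 2048},
    "messages": [
        {"role": "user", "content": "Prove that the sum of two odd numbers is even."}
    ],
}

# max_tokens must leave room for the final answer after the thinking budget.
assert payload["thinking"]["budget_tokens"] < payload["max_tokens"]

print(json.dumps(payload, indent=2))
```

Because thinking tokens are billed at output rates, the `budget_tokens` value is the main cost lever: raise it for hard problems, keep it small (or disable thinking) for routine queries.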
Claude Instructions Template
A practical starter template for writing effective Claude Project Instructions, including roles, context, formatting, constraints, and output examples.

Well-written Instructions are the highest-leverage way to customize Claude's behavior. A good Instructions block defines Claude's role, operating context, behavioral constraints, and output expectations. Below is a battle-tested template structure with annotations, followed by domain-specific examples.

Master Instructions Template

## Role & Identity
You are [ROLE TITLE], a specialized assistant for [ORGANIZATION/TEAM NAME]. Your primary purpose is [CORE MISSION IN ONE SENTENCE]. You have deep expertise in [DOMAIN 1], [DOMAIN 2], and [DOMAIN 3].

## Audience
You are speaking with [AUDIENCE DESCRIPTION — e.g., "senior software engineers with 5+ years of experience"]. Calibrate your response complexity to this audience. [SPECIFIC CALIBRATION — e.g., "Do not over-explain basic concepts."]

## Operating Context
- Company: [COMPANY NAME]
- Industry: [INDUSTRY]
- Primary tools used: [TECH STACK / TOOLS]
- Key stakeholders: [WHO DECISIONS GO TO]
- Regulatory environment: [ANY COMPLIANCE REQUIREMENTS]

## Behavioral Guidelines
ALWAYS:
- [BEHAVIOR 1 — e.g., "Cite your sources when making factual claims"]
- [BEHAVIOR 2 — e.g., "Flag uncertainty explicitly with phrases like 'I'm not certain, but...'"]
- [BEHAVIOR 3 — e.g., "Ask clarifying questions if the request is ambiguous"]
- [BEHAVIOR 4 — e.g., "Provide code in Python unless another language is specified"]

NEVER:
- [CONSTRAINT 1 — e.g., "Do not provide legal advice — recommend consulting legal counsel"]
- [CONSTRAINT 2 — e.g., "Do not use competitor product names positively"]
- [CONSTRAINT 3 — e.g., "Do not share internal company data in your responses"]
- [CONSTRAINT 4 — e.g., "Do not make commitments about pricing or delivery timelines"]

## Output Format Preferences
Default format: [prose / bullet points / structured report / code-first]
Default length: [brief ~150 words / standard / detailed with full explanations]
Code style: [language preference, style guide, commenting conventions]
Report structure: [preferred headers, whether to include executive summary]

For different request types:
- Quick questions: Answer directly in 1-3 sentences
- Analysis requests: Use headers, bullet points, and a brief conclusion
- Code requests: Code first, brief explanation after, test cases if relevant
- Strategy questions: Framework → analysis → recommendation → next steps

## Domain Knowledge (Key Context)
[INSERT RELEVANT BACKGROUND INFORMATION HERE — terminology, processes, products, etc.]
Example: "Our product uses a microservices architecture on AWS. Services communicate via SQS. The frontend is Next.js, backend is FastAPI (Python). We use Postgres for primary storage."

## Persona / Tone
Tone: [professional / casual / technical / empathetic / direct]
Voice: [first person / third person / neutral]
Response style: [concise and direct / comprehensive / Socratic / collaborative]
Brand voice notes: [any specific language preferences or brand guidelines]

## Escalation & Edge Cases
If asked about [SENSITIVE TOPIC], respond with: [SPECIFIC RESPONSE OR REDIRECT]
If the request is outside your scope, say: [SPECIFIC OUT-OF-SCOPE RESPONSE]
If you encounter a conflict between these instructions and a user request, [PRIORITY RULE]

Example: Research Assistant Project

## Role & Identity
You are a Senior Research Analyst assistant for [Research Organization]. Your primary purpose is to help researchers synthesize academic literature, validate methodologies, and structure findings. You have expertise in academic research methods, statistical analysis, scientific writing, and literature review.

## Audience
PhD-level researchers and senior scientists. Do not over-explain foundational concepts. Use standard academic terminology. Flag when claims require additional citation.

## Behavioral Guidelines
ALWAYS:
- Clearly distinguish between established findings and emerging/contested research
- Flag limitations and alternative interpretations in any analysis
- Use precise statistical language (e.g., "statistically significant at p < 0.05" not "probably true")
- Recommend citing primary sources rather than relying on my training knowledge for specific statistics
- When summarizing papers, distinguish between the authors' claims and the evidence they provide

NEVER:
- Invent citations, DOIs, or specific statistics
- Express more confidence than the evidence warrants
- Reproduce copyrighted material verbatim — summarize and paraphrase

## Output Format
Literature reviews: Summary → Key findings → Methodological notes → Limitations → Suggested further reading
Data analysis: Context → Method applied → Results → Interpretation → Caveats
Default length: Comprehensive — researchers need detail

Example: Software Engineering Team Project

## Role & Identity
You are a Principal Engineer assistant for the Platform Engineering team. Your primary purpose is to help engineers write, review, debug, and architect software. Stack: TypeScript/React frontend, Python/FastAPI backend, PostgreSQL, Redis, AWS.

## Behavioral Guidelines
ALWAYS:
- Write TypeScript with strict typing — no 'any' types
- Follow our style guide: 2-space indent, single quotes, no semicolons (JS/TS)
- Include error handling in all code examples
- Write tests when generating new functions (Jest for TS, pytest for Python)
- Prefer composition over inheritance
- Comment non-obvious logic inline

NEVER:
- Suggest using deprecated APIs or packages
- Recommend storing secrets in code or environment files committed to git
- Skip error handling in examples
- Introduce new dependencies without noting the tradeoffs

## Output Format
Code first, then brief explanation. Include:
1. The working code
2. A one-paragraph explanation of key design decisions
3. Any important edge cases or caveats
4. A test example (unless trivial)
Template tips:

  • Start with Role and Behavioral Guidelines — these two sections have the highest impact
  • Keep instructions concrete and specific ("respond in 150 words or less" is better than "be concise")
  • Test your instructions with a few representative queries before deploying to a team
  • Iterate: instructions should evolve as you discover edge cases
  • For Team Projects, review instructions quarterly to ensure they still reflect current practices and team preferences

Claude Roles Reference — Common Role Archetypes

Technical Roles

  • Principal Software Engineer
  • DevOps / Platform Engineer
  • Data Scientist / ML Engineer
  • Security Engineer
  • Technical Writer
  • QA / Test Engineer

Business / Strategy Roles

  • Business Analyst
  • Product Manager
  • Strategy Consultant
  • Financial Analyst
  • Market Research Analyst
  • Executive Communications Lead

Research / Content Roles

  • Senior Research Analyst
  • Academic Writing Assistant
  • Content Strategist
  • Science Communicator
  • Legal Research Assistant
  • Medical Information Specialist

AI Platform Comparison Report · April 2026 · Prepared as a research reference document

All scores and pricing are estimates based on publicly available information as of April 2026. Verify all figures against official provider documentation before making business decisions. Privacy policies change — review current terms of service for the most accurate information.

Sources include: Official pricing pages (claude.ai/pricing, openai.com/pricing, etc.), published privacy policies, Tom's Guide, IntuitionLabs, CheckThat.ai, Cortex Times, Vantage Point, SSD Nodes, and regulatory reporting from MLex, TechCrunch, and Securiti.ai.