7 AI prompts to enhance your UX workflow

Artificial intelligence has fundamentally transformed how UX professionals work.

What once required hours of manual analysis and documentation can now be accelerated with the right prompts. From distilling research findings to crafting accessible interfaces, AI assists at every stage of the design process.

However, AI is a collaborator, not a replacement. These prompts are starting points to enhance your workflow, not automate away your role. Use them thoughtfully, iterate on outputs, and always validate results against real user needs.

1. Find Evidence for Problem Statements

When validating design decisions, you need concrete evidence that a problem actually exists. AI can quickly scan research documents, interview transcripts, or even search online sources to find supporting data for your hypotheses. This saves hours of manual review and helps you build stronger cases for stakeholders.

Keep in mind that AI may miss nuanced context or misinterpret qualitative data. Always review the sources it references and verify claims, especially when presenting to leadership or using findings to justify major design changes.

The Prompt

**Context**
- Goal: Validate or refute the problem statement using the attached documents or, if none are provided, by searching the web (with a focus on Reddit).
- Input:
  - Problem statement: [Paste your problem statement]
  - Target users/segments: [Paste your target audience]

**Your Tasks**

1) **Understand and break down the problem**
   - Restate the problem in your own words.
   - Extract key claims/hypotheses and note assumptions.

2) **Review and extract evidence**
   - If documents are provided:
     - Extract relevant quotes, note how each relates to the key claims, cite the source document and location, and rate the strength of each piece of evidence.
   - If no documents are provided:
     - Search Reddit (and other relevant platforms if necessary) for discussions, user reports, or data related to the problem statement.
     - Use relevant subreddits (e.g., r/AskReddit, r/technology, r/UX, r/userexperience, r/your_target_audience) and consider using search filters (e.g., limiting results to the past year for recency).
     - Extract quotes, how each relates to the claims, sources (Reddit post URLs), and a strength-of-evidence rating.

3) **Evaluate evidence strength**
   - Rate each item: strong / moderate / weak, based on source credibility, recency, relevance to the claim, and how many independent sources corroborate it.

4) **Synthesize and assess**
   - Provide an overall verdict: Supported / Partially supported / Not supported, with a brief rationale and a confidence score (0–100).
   - Identify gaps, risks, and assumptions that remain unvalidated.
   - If evidence is lacking or contradictory, recommend concrete next steps for research.

**Output Format**
- Present results as a structured list: each claim, its supporting or refuting evidence (quote + source), and its strength rating, followed by the overall verdict. Adjust the citation format for web search findings if applicable.

**Additional Guidance for Web Search**
- When using Reddit, consider the subreddit's size, activity, and relevance to your target audience.
- Be mindful of potential biases in user-generated content (e.g., self-selection, echo chambers).
- Use additional platforms (e.g., Twitter, forums, blogs) if Reddit doesn't yield sufficient results, but note their limitations and biases as well.
- If searching the web, prioritize recent data (last 1–2 years) and consider the source's credibility.

**If Inputs Are Missing**
- If the problem statement is not provided, ask clarifying questions before proceeding.
- If no documents are provided, explicitly state that you'll be conducting a web search (with Reddit as the primary source) to gather evidence.

Claude Sonnet 4.5 provides excellent document analysis and nuanced reasoning.

2. Create a Comprehensive User Journey

User journeys often focus solely on in-app interactions, missing the critical real-world context that impacts the experience. Users get interrupted by calls, run out of battery, switch between devices, or abandon tasks to handle life events. AI can help you map more realistic journeys that account for these off-device moments and environmental factors.

While AI can identify common interruptions and context switches, it may not know your specific user's unique circumstances. Use the output as a framework, then validate and enrich it with actual user research.

The Prompt

**Objective**
Create a thorough, realistic user journey for [describe the user goal or task], highlighting what happens on-screen and off-screen, why users proceed or pause, and where to improve the end-to-end experience.

**Inputs**
Provide or infer the following:
- Target user: [persona, demographics, domain knowledge, accessibility needs]
- Goal/outcome: [what “success” looks like, constraints for “done”]
- Scenario: [when/where the journey starts, key trigger(s)]
- Scope: [channels/devices to include, geographic/locale constraints, connectivity]
- Constraints: [time, budget, policy/compliance, device limits, account access]
- Business/KPIs: [primary metrics, guardrails]
- Known risks: [privacy/security, safety, trust]

If any input is missing, ask up to 5 clarifying questions. If still unknown, state assumptions explicitly and proceed with 2–3 plausible variants.

**Output Requirements**

1) Journey overview (1–2 paragraphs)
- Who the user is, their motivation, success criteria, and key risks.
- Entry trigger(s) and expected end state.

2) Phased journey map (end-to-end)
Break the journey into 5–9 phases. For each phase, list stages/steps. For every step, include:
- Context: location, time-of-day, device/channel, connectivity, environmental factors (noise, lighting, one‑handed use), social setting.
- Trigger: what prompts the step (internal/external).
- User actions: on-device and off-device (calls, searching, waiting, fetching documents, travel).
- System responses: UI, latency, content, errors; include backstage processes (APIs, verification, support).
- Thoughts: the user’s internal monologue (“I think…,” “I wonder…”).
- Feelings: emotional valence and intensity (e.g., −2 to +2).
- Motivation: JTBD/job story or intent (“When I…, I want to…, so I can…”).
- Time & effort: estimated time-on-task, cognitive load, steps/taps.
- Pain points & friction: usability, trust, policy, access, learnability.
- Interruptions/context switches: phone calls, notifications, low battery, poor network, switching devices, permission prompts, real‑world obligations.
- Recovery paths: save state, resume mechanisms, cross-device handoff.
- Accessibility: potential barriers and needed accommodations.
- Data & privacy: data captured/used, consent, user control.
- Metrics: which KPIs this step affects and how.
- Opportunities: “How might we…” and quick wins vs. bigger bets.
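If you capture journey maps in a structured doc or spreadsheet, the per-step fields above translate to a simple record. A minimal Python sketch (class and field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class JourneyStep:
    """One step in a phased journey map; mirrors the per-step fields above."""
    context: str                 # location, device, connectivity, social setting
    trigger: str                 # internal/external prompt for the step
    user_actions: list[str]      # on-device and off-device actions
    system_responses: list[str]  # UI, latency, errors, backstage processes
    thoughts: str = ""           # internal monologue
    feeling: int = 0             # emotional valence, -2 to +2
    pain_points: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)

step = JourneyStep(
    context="commuting, one-handed, intermittent connectivity",
    trigger="push notification about an abandoned draft",
    user_actions=["opens app", "resumes draft"],
    system_responses=["restores saved state"],
    feeling=-1,
)
```

Keeping steps in a structure like this makes it easier to filter by pain point or emotion when prioritizing later.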

3) Context variants
Describe how the journey differs in at least three contexts:
- At home (stable Wi‑Fi, longer sessions)
- Commuting (one‑handed, intermittent connectivity, time‑boxed)
- At work (policy/security constraints, shared devices, interruptions)
Optionally add: low-vision or screen reader use, non-native language, rural low bandwidth, high-noise environment.

For each variant, specify: key deviations, added risks, different triggers, altered emotions, unique failure modes, and tailored opportunities.

4) Abandonment and pause analysis
- Top reasons users stall or exit at each phase (with likelihood if possible).
- Early warning signals and recovery strategies (reminders, drafts, gentle nudges, alternative channels, human assist).
- Re‑entry experience: how state is preserved and trust is rebuilt.

5) Service blueprint (concise)
- Frontstage: visible touchpoints by channel.
- Backstage: teams/systems/processes enabling the step.
- Support: help content, assisted channels, SLAs.
- Dependencies: integrations, compliance checks, data flows.

6) Prioritized opportunities
- 5–10 recommendations labeled by impact/effort (e.g., P1 high-impact/low-effort), mapped to KPIs and user pain.
- Experiments: hypotheses and success metrics.

7) Assumptions and open questions
- List key assumptions made and the top unanswered questions to validate next.

**Formatting and Depth**

- Length target: concise but complete; ~800–1500 words for the narrative + bullet detail per step.
- Use numbered phases and bullet points per step. Keep language clear and specific.
- Include realistic timings, content examples (sample copy, notification text), and visual states described in words.
- If data is sparse, provide 2–3 plausible branches and note what would confirm/refute each.
- Call out critical compliance, safety, or ethical considerations where relevant.

**Quality Bar Checklist (use before finalizing)**
- Does it describe what happens between digital interactions?
- Are physical, environmental, and social factors explicit?
- Are abandonment reasons and recovery paths concrete?
- Are differences across contexts clearly contrasted?
- Are accessibility and privacy implications covered?
- Are opportunities tied to pain points and KPIs?
- Are assumptions and open questions transparent?

**Template Starters**

- Persona:
  “[Name] is a [role/experience level] who wants to [goal] in order to [outcome]. They have [constraints], use [devices], and care about [motivations].”
- Job stories:
  “When [situation], I want to [motivation], so I can [expected outcome].”
- Opportunity framing:
  “How might we reduce [specific friction] during [phase] to improve [KPI] without increasing [risk]?”

GPT-5 and GPT-4o excel at comprehensive scenario thinking.

3. Improve Error Messages

Error messages are critical UX touchpoints that often get neglected. Poor error copy frustrates users and increases support tickets. AI can help you review and refine existing error messages, transforming technical jargon into clear, actionable guidance that follows consistent patterns across your product.

While AI excels at rewriting for clarity, it may not understand your product's specific technical constraints or brand voice. Review outputs to ensure accuracy and alignment with your style guide, and test messages with real users when possible.

The Prompt

**Objective**
Rewrite the following error messages to be clear, empathetic, and action-oriented, while staying concise (ideally under 20 words) and consistent with our brand and UI constraints.

**Inputs**
Provide or infer:
- Audience and context: [who the user is, their familiarity level, task they’re doing, device/surface: toast, inline, modal, email, SMS]
- Brand voice: [tone keywords, formality, use of contractions, first/second person]
- Constraints: [max words/characters, line limits, space for buttons/links, platform guidelines (iOS/Android/Web)]
- Severity and responsibility: [info/warning/error; our fault vs. user input vs. third party]
- Required action: [what the user should do, alternative paths, support channels]
- Compliance/privacy guardrails: [no PII, no internal codes on-screen, legal copy if needed]
- Localization/accessibility: [languages, reading level target, screen reader considerations]

If inputs are missing, ask up to 5 clarifying questions. If unknown, state assumptions and proceed.

**What to deliver for each message**

1) Original
- Paste the original string (and where it appears).

2) Diagnosis (brief)
- What’s wrong: jargon, vague, no action, blamey, too long/short, scary, inconsistent, accessibility issues.

3) Improved message (primary)
- ≤20 words, plain language, empathetic, tells the user exactly what to do next.
- Use active voice and second person.
- Include only necessary detail; avoid blame and technical codes.
Example structure: “We couldn’t [what]. Try [specific action].” or “Enter [requirement].” or “Check [setting], then [action].”

4) Action details (if needed)
- Buttons/links: 1–2 clear labels (e.g., “Try again,” “Update payment,” “Contact support”).
- Short helper line (optional, ≤15 words) if the primary line alone isn’t sufficient.

5) Variants by surface (optional)
- Toast/snackbar (≤12 words).
- Inline field error (paired with hint).
- Modal/dialog (primary line + helper line + buttons).

6) Prevention tip (optional)
- One sentence on avoiding the issue next time (only if helpful and not scolding).

7) Notes
- Accessibility: avoids ALL CAPS, ensures screen-reader clarity, no color-only meaning.
- Localization: no idioms; placeholders clearly marked (e.g., {email}); neutral date/number formats.
- Compliance/privacy: no PII or internals; apology only when we’re at fault.
- Telemetry: internal error code for logs (not shown to user).

**Rewrite rules and guardrails**

Do
- Lead with the user’s goal and path forward.
- Be specific: what failed, why it matters, what to do now.
- Use familiar words: “sign in,” not “authenticate”; “save,” not “persist.”
- Keep blame out; take responsibility when it’s on us.
- Keep to one idea per message; details to “Learn more.”

Don’t
- Don’t show stack traces, IDs, or codes to users.
- Don’t use scary or vague terms: “fatal,” “invalid,” “error 0x800…,” “something went wrong.”
- Don’t imply fault without certainty: avoid “you did…”; use “we couldn’t…” or neutral phrasing.
- Don’t overload toasts; move steps to modals or help articles when lengthy.

Formatting
- Use sentence case.
- Contractions allowed if on-brand.
- Avoid exclamation marks unless required by brand voice.
- Keep reading level ~Grade 6–8.
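To sanity-check the Grade 6–8 target, you can run copy through a quick Flesch–Kincaid estimate. A rough sketch (the syllable counter is a naive vowel-group heuristic, so treat scores as directional, not authoritative):

```python
import re

def fk_grade(text: str) -> float:
    """Rough Flesch-Kincaid grade level for a short string of UI copy."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    def syllables(word: str) -> int:
        # Naive heuristic: count runs of vowels as syllables.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * total_syllables / len(words)
            - 15.59)

print(round(fk_grade("We couldn't save your changes. Try again."), 1))
```

Short, plain error messages should land well under grade 8; a high score is a cue to simplify, not a pass/fail gate.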

**Quality bar checklist (use before finalizing)**
- Under 20 words (primary), plain language, clear next step.
- Empathetic tone, no blame, no jargon or codes.
- Action is feasible in-context (buttons/links align with UI).
- Accessibility and localization considered.
- If our fault, includes a brief apology.
- Optional variants provided for different surfaces.
- Diagnosis explains why the original fell short.

**Template to fill per message**

- Original:
- Context/surface:
- Diagnosis (1–3 bullets):
- Improved (primary, ≤20 words):
- Buttons/links (if any):
- Helper line (optional, ≤15 words):
- Variant – Toast (≤12 words):
- Variant – Inline field:
- Prevention tip (optional):
- Notes: Accessibility | Localization | Compliance
- Internal: Error code/cause (not shown)

**Examples (for reference; adjust to your brand)**

- Original: “Invalid phone number.”
  Diagnosis: Vague, jargon (“invalid”), no action.
  Improved: “Enter a 10-digit phone number, including area code.”
  Buttons/links: —
  Toast: “Enter a 10-digit phone number.”
  Inline: “Use 10 digits, no spaces or dashes.”
  Notes: Keep number format locale-aware.

- Original: “Authentication token expired.”
  Diagnosis: Jargon, no user action.
  Improved: “You were signed out. Please sign in again to continue.”
  Buttons: “Sign in”
  Toast: “Session ended. Sign in again.”
  Notes: Apologize only if our timeout was too short.

**Usage**
- Paste your error messages under “Original.”
- Provide any inputs you know; I’ll ask clarifying questions if needed.
- I’ll return rewrites in the template above, with optional variants per surface.

Claude Sonnet 4.5 provides excellent document analysis and nuanced reasoning for this task.

4. Build a Persona Based on Interview Notes

Personas synthesize research into actionable user archetypes, but creating them from raw interview notes can be time-consuming. AI can help you quickly draft personas by identifying patterns, goals, pain points, and behaviors across multiple interviews, giving you a solid foundation to refine.

Remember that AI-generated personas are only as good as the input data. Always validate findings against your research, ensure the persona represents real user needs (not stereotypes), and involve your team in refining the output to maintain accuracy and buy-in.

The Prompt

**Objective**
Synthesize a clear, evidence-based user persona from the interview notes, highlighting patterns, contradictions, and actionable implications. Cite which insights came from which interviews.

**Inputs**
Provide or infer:
- Interview materials: [notes/transcripts], with participant IDs (e.g., I1, I2), role, date, and context of use.
- Product/service context: [what the persona is for], key tasks, platforms/devices.
- Audience scope: [geography/language], accessibility needs, environment (home/office/field).
- Business decisions this persona should inform: [prioritization, messaging, features, onboarding, support].
- Research status: [sample size, recruitment bias, stage], any quantitative data to triangulate.
- Constraints: [privacy/PII redactions, timeline, required format/length].

If inputs are missing, ask up to 5 clarifying questions. If still unknown, state assumptions and proceed.

**Synthesis Approach (how to reason)**
- Normalize notes: extract behaviors, goals, pain points, context, quotes, workarounds, triggers.
- Cluster into themes; flag contradictions and variability across participants.
- Threshold for inclusion: only include attributes supported by 2+ interviews or clearly critical to the product; mark confidence levels.
- If patterns imply multiple segments, create 2–3 personas and include a brief segment comparison.

**Output Requirements**

1) Persona snapshot
- Name (human and memorable), short tagline (their “job to be done” in one line).
- Brief demographic overview only if relevant to behavior (avoid stereotyping).
- Quote that captures their perspective (verbatim or lightly edited).

2) Background and context
- Role, experience level, responsibilities.
- Environment and constraints (tools, connectivity, noise, privacy, policies).
- Tech comfort and device ecosystem.

3) Goals and motivations
- Primary outcomes they seek; why they care.
- Triggers that start their tasks; success criteria.

4) Behaviors and habits
- Typical workflow, information sources, collaboration patterns.
- Current tools and workarounds; frequency and time constraints.

5) Pain points and frustrations
- Usability, trust, policy, access, latency, support gaps.
- What makes them stall, switch tools, or abandon.

6) Needs and expectations
- Functional needs, content needs (tone/level), support needs, accessibility accommodations.
- Decision criteria when choosing tools.

7) Jobs-to-be-Done (job stories)
- “When [situation], I want to [motivation], so I can [outcome].” (2–4 job stories)

8) Day-in-the-life vignette (concise)
- A short scenario showing context, interruptions, and cross-device behavior.

9) Implications and opportunities
- Product/UX opportunities tied to pains and goals.
- Onboarding, education, and content tone guidance.

10) Evidence map and confidence
- Inline citations per bullet using [Ix], e.g., “Prefers email summaries [I2, I5].”
- Confidence per section: High/Medium/Low with rationale and N interviews.

11) Assumptions, contradictions, and open questions
- What we assumed due to missing data.
- Conflicting signals across interviews and how to resolve them.
- Validation plan: next 3 research steps and success criteria.

Context variants (optional if relevant)
- How this persona’s behavior changes at home vs. commuting vs. at work.
- Accessibility scenario (e.g., screen reader, low vision) or low-bandwidth context.

**Formatting and Depth**

- Length target: 700–1200 words per persona; concise bullets + a brief narrative vignette.
- Use sentence case, plain language, and avoid unnecessary demographics.
- Cite sources inline with [Ix]; include a Source key at the end mapping IDs to participants.
- Redact PII and avoid direct quotes that reveal identity unless consented.

**Quality Bar Checklist (use before finalizing)**

- Are claims backed by 2+ sources or clearly marked Low confidence?
- Do citations [Ix] appear on each key attribute?
- Are environment, constraints, and interruptions explicit?
- Are needs tied to pains and goals, with clear implications?
- Is the quote memorable and representative (not an outlier)?
- Are assumptions and contradictions called out with a validation plan?
- Is the persona free of stereotypes and PII?
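The citation check in the list above is easy to automate when the persona draft uses Markdown-style bullets. A hypothetical lint sketch (the [Ix] pattern follows the citation convention described earlier):

```python
import re

def uncited_bullets(persona_md: str) -> list[str]:
    """Return persona bullets that lack an interview citation like [I2] or [I1, I3]."""
    bullets = [line.strip("- ").strip()
               for line in persona_md.splitlines()
               if line.strip().startswith("-")]
    citation = re.compile(r"\[I\d+(,\s*I\d+)*\]")
    return [b for b in bullets if b and not citation.search(b)]

draft = "- Prefers email summaries [I2, I5].\n- Uses templates daily.\n"
print(uncited_bullets(draft))  # flags the uncited bullet for review
```

Anything the function flags either needs a citation added or should be marked Low confidence explicitly.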

**Fillable Template**

Persona name and tagline
- Name:
- Tagline (one line):
- Brief demographic overview (only if behaviorally relevant):
- Quote:

Background and context
- Role and responsibilities:
- Experience level/tenure:
- Environment and constraints (tools, policies, connectivity, privacy/noise):
- Tech comfort and devices:

Goals and motivations
- Primary goals:
- Motivations/why it matters:
- Triggers and success criteria:

Behaviors and habits
- Typical workflow:
- Information sources and channels:
- Tools and workarounds:
- Time constraints and rhythms:

Pain points and frustrations
- —
- —
- —

Needs and expectations
- Functional needs:
- Content/tone needs:
- Support and accessibility needs:
- Decision criteria:

Jobs-to-be-Done (2–4)
- When [situation], I want to [motivation], so I can [outcome].
- …

Day-in-the-life vignette (5–8 sentences)
- Brief scenario that shows context, interruptions, and cross-device moments.

Implications and opportunities
- Product/UX opportunities:
- Onboarding/education guidance:
- Content tone guidelines:

Evidence map and confidence
- Key attributes with citations (e.g., “Relies on templates [I1, I3, I4] — Confidence: High (3/5)”).
- Section confidence summaries: Background | Goals | Behaviors | Pain points | Needs.

Assumptions, contradictions, open questions
- Assumptions:
- Contradictions to explore:
- Validation plan (next 3 research actions + success criteria):

Context variants (optional)
- At home:
- Commuting:
- At work:
- Accessibility/low-bandwidth (optional):

Source key
- I1: [participant role, date, context]
- I2: …
- I3: …

Claude Sonnet 4.5 provides excellent document analysis and nuanced reasoning for this task.

5. Identify Key Points in User Testing

After conducting user tests, you're often left with pages of notes, timestamps, and observations that need to be distilled into insights. AI can help you quickly identify patterns, critical issues, and notable quotes across multiple test sessions, creating summaries that are easy to share with stakeholders.

While analyzing full session videos is often impractical due to file size limitations, transcripts work exceptionally well for this purpose; just ensure you supplement them with your observational notes about tone and behavior.

The Prompt

**Objective**
Synthesize a concise, stakeholder-ready summary from usability test notes. Highlight what matters, why it matters, and what to do next. Include severity, frequency, and confidence for each issue.

**Inputs**
Provide or infer:
- Study context: product/feature, stage (concept/prototype/live), device(s), environment (remote/in-person), prototype fidelity.
- Test script/objectives: key tasks, success criteria, hypotheses.
- Participants: count (N), IDs (P1, P2…), segments, accessibility needs, recruitment method.
- Data sources: notes/transcripts, recordings with timestamps, screens/flows tested, any metrics (time-on-task, completion).
- Constraints: privacy/PII redactions, timeline, audience for the report (execs, PMs, Eng, Design).
- KPIs/business goals: what success impacts (activation, conversion, retention, CSAT, support volume).

If inputs are missing, ask up to 5 clarifying questions. If unknown, state assumptions and proceed.

**Synthesis approach (how to reason)**

- Code observations per task; cluster into themes.
- Assign severity using the rubric below; capture frequency as n/N and a label (All/Most/Some/Few).
- Note confidence (High/Medium/Low) based on N, consistency, and data quality.
- Trace each finding to evidence (timestamps, quotes, or multiple participant mentions).
- Distinguish root cause vs. symptom; separate usability from feature gaps or policy constraints.

Severity rubric
- P0 Blocker: cannot complete the task or data loss.
- P1 High: major friction; risky workaround; likely to abandon.
- P2 Medium: confusing but recoverable; slows users.
- P3 Low: minor polish; doesn’t block success.

Frequency labels
- All (≈100%), Most (≥60%), Some (20–59%), Few (<20%). Always include n/N.
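The frequency labels map mechanically from counts, so they can be computed rather than eyeballed. A small sketch (function name is illustrative):

```python
def frequency_label(n: int, total: int) -> str:
    """Map an issue's occurrence count (n of total participants) to a label
    per the rubric: All (~100%), Most (>=60%), Some (20-59%), Few (<20%)."""
    if total <= 0:
        raise ValueError("total participants must be positive")
    pct = n / total
    if pct >= 1.0:
        return "All"
    if pct >= 0.60:
        return "Most"
    if pct >= 0.20:
        return "Some"
    return "Few"

# Reports should cite both forms, e.g., "4/5, Most".
print(f"4/5, {frequency_label(4, 5)}")
```

Including both the label and the raw n/N keeps the summary honest at small sample sizes, where "Most" may mean only three people.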

**What to deliver**

1) Executive summary (5–8 bullets)
- Who we tested, what we learned, and the top 3 actions.
- One-line outcome per primary objective (met/not met) with why.

2) Key findings (3–5)
- For each: insight statement, why it matters (tie to KPI), evidence [n/N, P#], confidence.

3) Usability issues (prioritized)
For each issue:
- Title + severity (P0–P3) + frequency (n/N, label) + confidence.
- Affected tasks/flows and impact on goals.
- Evidence: brief examples, quotes with [P#], timestamps if available.
- Likely root cause.
- Recommendation(s): specific change(s), expected impact, effort (S/M/L), owner/team.

4) Positive moments (what worked)
- Features or content that enabled success or delight; evidence [n/N].

5) Behavioral patterns
- Recurring strategies, mental models, expectations, and workarounds across participants.

6) Notable quotes
- 5–8 short quotes with participant IDs and context. Redact PII.

7) Task outcomes and metrics
- Per task: completion (n/N), common failure points, typical time-on-task (if available), error types.

8) Recommendations and next steps (prioritized)
- 6–10 actions labeled by priority (P0 immediate, P1 next sprint, P2 backlog), effort (S/M/L), and linkage to findings.
- Experiments or follow-ups (A/Bs, microcopy tests, usability retests) with success criteria.

9) Risks, dependencies, and open questions
- Known constraints (legal, tech, data), decisions needed, and what to validate next.

10) Appendix (optional, for the UX team)
- Method and sample snapshot, session-by-session notes, observation log by task.

**Formatting and style**

- Use clear headings and bullet points; avoid tables.
- Tag critical items at the start of bullets: [P0], [Most], [High confidence].
- Keep language plain and concrete; include examples and short snippets.
- Include counts (n/N) and participant IDs (P1, P2…); no PII.

**Quality bar checklist (use before finalizing)**

- Are top findings tied to evidence with n/N and [P#]?
- Does each issue have severity, frequency, confidence, and a specific recommendation?
- Are business impacts and KPIs explicit?
- Are quotes concise, anonymized, and illustrative?
- Are root causes distinguished from symptoms?
- Are next steps prioritized by impact and effort?
- Are limitations and open questions clearly stated?

**Fillable template**

Study info
- Title and date:
- Product/feature and stage:
- Devices/environments:
- Participants: N, segments (P1–Pn):
- Objectives/hypotheses:
- KPIs/business goals:

Executive summary (5–8 bullets)
- —
- —
- —

Key findings (3–5)
- Finding 1: [Insight]. Why it matters: [KPI/impact]. Evidence: [n/N; P#]. Confidence: [H/M/L].
- Finding 2: …
- Finding 3: …

Usability issues (prioritized)
- [P0 | n/N, Most | Confidence H/M/L] Issue title
  - Impact/tasks:
  - Evidence: [P# quote or observation; timestamp if any]
  - Root cause:
  - Recommendation: [specific change], Effort [S/M/L], Owner [team]
- [P1 | …] Issue title
  - …

Positive moments
- [n/N; P#] What worked and why it helped.

Behavioral patterns
- Pattern: description [n/N; P#].

Notable quotes
- “[quote]” — P#, context

Task outcomes and metrics
- Task 1: Completion [n/N]; common failure at [step]; time-on-task [range/median if available].
- Task 2: …

Recommendations and next steps
- [P0, Effort S] Action, linked to Issue #, expected impact on [KPI].
- [P1, Effort M] …

Risks, dependencies, open questions
- Risk/Dependency:
- Open question:
- Validation plan: next research/experiments and success criteria.

Claude Sonnet 4.5 provides nuanced analysis and clear documentation for this task.

6. Create a UX Writing Style Guide

Building a comprehensive UX writing style guide ensures consistency across your product. AI can create a solid first draft by analyzing your existing content.

By feeding it examples of your interface copy, error messages, and onboarding text, you can quickly generate a guide that reflects your current voice and tone.

The Prompt

**Objective**
Analyze the provided text samples from [Project Name] to identify patterns in voice, tone, and style. Produce a foundational UX writing style guide with clear principles, component rules, inclusive language standards, and ready-to-use examples. Call out inconsistencies, open decisions, and governance.

**Inputs**
Provide or infer:
- Project context: [product domain], [platforms/surfaces: web, iOS, Android, email, SMS, push, support docs], [stage: concept/prototype/live].
- Audience: [primary segments, expertise level], [geography/locales/languages], [accessibility needs], [regulated context: finance/health/education/government].
- Brand and goals: [brand attributes/personality], [desired outcomes: adoption, trust, conversion, retention], [reading level target].
- Existing guidance: [brand voice doc], [design system], [legal/compliance requirements].
- Constraints: [string length limits, character caps], [UI space constraints], [content tokenization/variables], [localization infrastructure].
- Content samples: Paste diverse UI copy with context and IDs (e.g., S1 Button “Get started” — Onboarding, web) across:
  - Buttons/CTAs, links, navigation
  - Forms: labels, help, placeholders, validation errors
  - System messages: errors, warnings, success, toasts
  - Onboarding: tooltips, checklists, walkthroughs
  - Empty states, confirmations, modals
  - Notifications: in-app, email, SMS, push
  - Settings, permissions, payments/security
- Research inputs (optional): top user questions, VOC terms, support tickets.

If inputs are missing, ask up to 5 clarifying questions. If still unknown, state assumptions and proceed with 2–3 plausible options where relevant.

**Synthesis approach (how to reason)**
- Audit and cluster samples by component and scenario; note voice/tone patterns and inconsistencies.
- Derive 3–5 voice principles; define tone “lanes” by scenario (e.g., success vs. error).
- Normalize terminology; propose preferred terms and “avoid” list with rationale.
- Codify mechanics: capitalization, punctuation, numbers, dates, units, contractions, links.
- Add accessibility, inclusivity, and localization guardrails.
- Provide before/after examples using real samples; flag open decisions and risks.

**Output Requirements**

1) Executive summary (5–7 bullets)
- Who we’re writing for, voice pillars, and top 5 rules to adopt now.
- Key inconsistencies found and high-impact fixes.

2) Voice and tone principles (3–5)
For each principle:
- Name + 1–2 sentence description.
- Do/Don’t bullets.
- Example and counterexample using your samples.

3) Tone-by-scenario matrix
Define tone, intent, and microcopy patterns for:
- Onboarding/education
- Success/confirmation
- Progress/loading/empties
- Errors/validation/recovery
- Warnings/permissions/security
- Payments/billing
- Notifications (in-app, email, SMS/push)
For each: tone lane (e.g., calm/confident/empathetic), verbs to favor/avoid, template pattern (What happened + Why + What next), and examples.

4) Grammar and mechanics
- Capitalization: sentence case for UI (buttons, menus), Title Case for proper nouns; exceptions.
- Punctuation: serial comma policy, exclamation use, ellipses, slashes, ampersands, quotations.
- Numbers/dates/units: numerals vs words, ranges, currency, time, time zones, relative dates (“today,” “in 2 minutes”).
- Contractions and person: encourage second person (“you”), active voice.
- Link and button copy: verbs first; avoid “click here.”
- Abbreviations/acronyms: first-use expansion, casing rules.
- Length guidance: target character counts per component; truncation strategy.
- Variables/placeholders: syntax, fallbacks, and test strings to prevent broken grammar.
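The fallback-and-test-string guidance above can be enforced in code. A hypothetical sketch that substitutes {variable} placeholders, applies neutral fallbacks for empty values, and leaves unknown tokens visible so QA can catch them:

```python
def render(template: str, values: dict, fallbacks: dict) -> str:
    """Fill {variable} placeholders; empty values fall back to a neutral
    default so missing data never yields broken grammar like 'Hi , welcome'."""
    class _Safe(dict):
        def __missing__(self, key: str) -> str:
            # Leave unresolved tokens visible rather than crashing or blanking.
            return "{" + key + "}"
    merged = _Safe({**fallbacks, **{k: v for k, v in values.items() if v}})
    return template.format_map(merged)

print(render("Hi {name}, your {plan} plan renews on {date}.",
             {"name": "Ada", "plan": "", "date": "May 3"},
             {"name": "there", "plan": "current"}))
```

Running every template with all-empty values is a cheap test that the fallback set is complete.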

5) Terminology and naming decisions
- Preferred vs. avoid list (e.g., “Sign in” not “Log in”; “Workspace” not “Project”).
- Definitions and when to use; cross-surface consistency.
- Words to avoid (jargon, idioms, blamey language).
- Open decisions with proposed options and recommendation.

6) Component-specific guidelines
For each component include rules + 2–3 examples and 1 anti-example:
- Buttons/CTAs
- Links and navigation labels
- Form labels, help text, placeholders, validation errors
- Empty states (with primary action and supportive guidance)
- Success toasts and confirmations
- Modals (destructive vs. safe actions)
- Tooltips and inline hints
- Banners/alerts
- Tables/lists (empty, truncated, overflow)
- Search, filters, and sorting
- Settings and permissions
Include: intent, structure, verb choice, length limits, accessibility notes, and sample copy drawn from your content.

7) Error and recovery standards
- Structure: what happened, why it matters, how to fix, reassurance; add support path if blocked.
- Tone per severity: soft for validation, firm for security.
- Copy patterns for: invalid input, timeouts, connectivity, permission denied, payment failed, data loss warnings.
- Examples using your samples; anti-patterns to avoid.

8) Accessibility and inclusive language
- Plain language and reading level; avoid idioms and metaphors.
- Person-first vs identity-first choice (state which and apply consistently).
- Gender-neutral and culturally sensitive terms.
- Screen reader guidance: ARIA label phrasing, error associations, link text.
- Visual + text pairing: never rely on color alone; include text alternatives.
- Capitalization/punctuation that improves TTS output.

9) Localization and internationalization
- Avoid concatenation; provide full sentences.
- Placeholder order and grammatical agreement.
- Date/time/number/currency format guidance by locale.
- Text expansion allowance; right-to-left considerations.
- Glossary handoff for translators; non-translatable brand terms.

10) Examples gallery (before/after)
- 10–15 representative strings with:
  - Context/screen
  - Before (from sample)
  - After (edited per guidelines)
  - Rationale (principles invoked)

11) Governance, measurement, and maintenance
- Review workflow: who approves what, when, and where it lives (Figma, CMS, repo).
- Style checklist for PRD/design reviews.
- Quality metrics: readability target, task success, error rate, support tickets, trust indicators.
- Changelog section and process for updates.

12) Assumptions, risks, and open questions
- What we assumed due to missing inputs.
- Risks (legal, security, brand) and mitigations.
- Top open questions + proposed experiments/research to resolve.

Formatting and depth
- Length target: 1200–2000 words plus examples; prioritize clarity and scannability.
- Use clear headings, bullets, and “Good/Avoid” labels on examples.
- Pull concrete examples from your samples; if coverage is thin, propose options and mark Low confidence.
- Tag sample references inline like [S3 Button], [S12 Error] for traceability.

Quality bar checklist (use before finalizing)
- Do principles map to consistent patterns in the samples?
- Are tone lanes defined per scenario with examples and anti-examples?
- Do mechanics cover capitalization, punctuation, numbers/dates, and variables?
- Are terminology decisions explicit with “prefer vs avoid” and rationale?
- Are accessibility and localization implications covered with actionable rules?
- Are examples realistic, concise, and testable in UI constraints?
- Are open decisions and governance steps clearly stated?

Fillable template (paste and complete)

Project overview
- Project name/domain:
- Platforms/surfaces:
- Audience/locales:
- Brand attributes:
- Reading level target:
- Constraints (length, tokens, legal):

Voice and tone principles
- Principle 1: Name — Description
  - Do:
  - Don’t:
  - Example:
  - Anti-example:
- Principle 2: …

Tone-by-scenario matrix
- Scenario: [e.g., Error/validation]
  - Tone lane:
  - Pattern:
  - Good verbs:
  - Avoid:
  - Examples:

Grammar and mechanics
- Capitalization:
- Punctuation:
- Numbers/dates/units:
- Contractions/voice:
- Links/buttons:
- Acronyms:
- Length guidance:
- Variables/placeholders:

Terminology
- Prefer → Avoid → Definition/rationale:
- Open decisions:

Component-specific guidelines
- Buttons/CTAs:
  - Rules:
  - Good:
  - Avoid:
- Forms (labels/help/placeholders/errors):
  - …
- Empty states:
- Success/confirmations:
- Modals:
- Tooltips/inline help:
- Banners/alerts:
- Navigation:
- Tables/lists:
- Search/filters/sort:
- Settings/permissions:

Error and recovery standards
- Structure:
- Severity tones:
- Patterns and examples:

Accessibility and inclusive language
- Guidelines:
- Screen reader notes:
- Link text patterns:

Localization and internationalization
- Guidelines:
- Glossary handoff:

Examples gallery (before/after)
- Context:
  - Before:
  - After:
  - Rationale:

Governance and measurement
- Review workflow:
- Checklist:
- Metrics:
- Changelog process:

Assumptions, risks, open questions
- Assumptions:
- Risks:
- Open questions + next steps:

GPT-5 excels at synthesizing patterns from varied inputs and generating well-structured documents.
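The localization guidance in the prompt ("avoid concatenation; provide full sentences" and "placeholders with fallbacks") is easy to illustrate in code. A minimal sketch, assuming hypothetical message keys, locales, and a made-up `⟦…⟧` fallback marker:

```javascript
// Hypothetical message catalog: each locale owns a full sentence with named
// placeholders, never fragments glued together (word order varies by locale).
const messages = {
  en: { itemsLeft: "{name}, you have {count} items left in {folder}." },
  de: { itemsLeft: "{name}, in {folder} sind noch {count} Elemente übrig." },
};

// Substitute named placeholders; fall back to a visible marker so missing
// values surface during QA instead of silently producing broken grammar.
function format(locale, key, values = {}) {
  const template = messages[locale]?.[key] ?? messages.en[key];
  return template.replace(/\{(\w+)\}/g, (_, name) =>
    name in values ? String(values[name]) : `⟦${name}⟧`
  );
}
```

Because each locale holds the complete sentence, translators can reorder placeholders freely, and the visible fallback marker doubles as a test string for catching unfilled variables.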

7. Perform an Early Accessibility Check

Integrating accessibility from the start of a project is far more effective than trying to fix issues later. Before you've even created a wireframe, you can use AI to perform a preliminary check on your project concept.

By describing a feature or user flow, you can prompt the AI to identify potential barriers for people with different disabilities. This helps you build a more inclusive design brief and bake accessibility into your core strategy.

Remember, this is a conceptual exercise to surface potential issues, not a technical audit. It does not replace WCAG compliance checkers, manual testing with assistive technology, or including people with disabilities in your testing.

The Prompt

*Objective*
Act as an accessibility consultant to evaluate a new digital feature against WCAG 2.1 AA and inclusive design best practices. Identify risks, root causes, and actionable recommendations, with priorities, test cases, and acceptance criteria.

*Inputs*
Provide or infer the following:
- Project context: [feature name], [product area], [user goals], [business goals/KPIs].
- Platforms/surfaces: [web (desktop/mobile)], [iOS/Android], [email/SMS], [embedded/webview], [kiosk].
- Technology stack: [framework (React/Vue/Native)], [design system], [routing], [state mgmt], [media/streaming], [data viz libs], [auth/MFA].
- Audience and locales: [primary users], [assistive tech usage], [locales/languages], [reading level target].
- Regulatory scope: [ADA, AODA, EN 301 549, Section 508], [org policies], [security restrictions].
- Content types and interactions: [forms], [tables], [charts/maps], [modals/toasts], [drag-and-drop], [carousels], [live updates], [video/audio], [infinite scroll], [virtualized lists], [file uploads], [payments], [real-time collab].
- Constraints: [timeline], [string length caps], [telemetry], [legacy components], [performance targets], [offline mode].
- Project description: paste detailed user flows, screens, and states.

If inputs are missing, ask up to 5 clarifying questions. If unknown, state assumptions and proceed with 2–3 plausible variants where relevant.

*Analysis approach (how to reason)*
- Map user flows to interactions and content types; identify AT touchpoints (focus, reading order, announcements).
- Evaluate per disability category and cross-cutting themes (focus, semantics, color/contrast, motion, error recovery).
- Tie each issue to WCAG 2.1 criterion (ID, level), user impact, and business risk.
- Prioritize by severity (P0–P3) and compliance risk (A/AA violation vs. best practice).
- Provide specific code/content patterns (e.g., roles, ARIA, labeling) and acceptance criteria.

*Severity and risk labels*
- Severity: P0 Blocker (task impossible/crashes/health risk), P1 High (likely abandonment/workarounds), P2 Medium (slows/confuses), P3 Low (polish).
- Compliance risk: WCAG‑AA violation, WCAG‑A violation, Best practice (non‑blocking), Policy risk.
- Confidence: High/Medium/Low (based on detail, precedents, data).

*What to deliver*

1) Executive summary (6–8 bullets)
- Top 3–5 risks, mapped to WCAG criteria and user impact.
- Where to focus first (P0/P1), expected KPI impact, and quick wins vs. structural fixes.

2) Findings by disability category
For each category, list 2–5 findings with recommendations, each including WCAG mapping, severity, risk, and acceptance criteria.

A) Visual impairments (blindness, low vision, color vision deficiency)
- Screen reader and semantics: landmarks, headings, roles, names/states/values; dynamic updates (ARIA live).
- Focus and reading order: logical DOM order; skip links; visible focus indicators.
- Contrast and scaling: text/non‑text contrast, 200%–400% zoom, reflow, high contrast modes, color‑independent cues.
- Recommendations: concrete patterns (e.g., use <button> over div, aria-expanded, aria-describedby for errors).

B) Motor disabilities (limited dexterity, keyboard/switch, touch target)
- Keyboard support: all actions operable without a mouse; no keyboard traps; logical tab order.
- Target sizes and spacing: minimum touch target (e.g., 44×44 CSS px where feasible).
- Timing and gestures: avoid complex gestures; provide alternatives for drag‑and‑drop; extend/disable timeouts.
- Recommendations: focus management on modal open/close, key bindings, escape routes, device rotation support.

C) Auditory impairments (deaf/hard of hearing)
- Captions and transcripts: for prerecorded/live audio/video; speaker identification and non‑speech info.
- Audio‑only cues: parallel visual and programmatic cues; avoid auto‑play audio, or provide a pause/stop control if it plays for more than 3 seconds (1.4.2).
- Notifications: provide text alternatives for sounds; vibration/haptic alternatives on mobile.
- Recommendations: caption specs, transcript placement, player controls.

D) Cognitive and neurological (dyslexia, ADHD, autism, memory, seizure disorders)
- Plain language and predictability: consistent navigation, clear labels, chunked content, progressive disclosure.
- Error prevention and recovery: inline validation, clear instructions and examples, undo/confirm for destructive actions.
- Motion/animation: respect reduced‑motion preferences; avoid content that flashes more than three times per second (2.3.1); limit parallax/auto‑advancing carousels.
- Load and distractions: minimize simultaneous stimuli; concise copy; save/resume.
- Recommendations: content patterns, loading states, form guidance.

For each finding include:
- Issue: concise statement
- Why it matters: user impact with example
- WCAG mapping: [e.g., 1.3.1 Info and Relationships (A)]
- Severity and compliance risk: [P1 | AA violation | Confidence: Medium]
- Recommendation: specific design/engineering steps
- Acceptance criteria (Given/When/Then)
- Test method: manual steps, AT to use, devices/browsers

3) Cross-cutting checks (component/system level)
- Semantics and ARIA: roles for nav/main/aside/footer; name/role/value; avoid ARIA when native controls suffice.
- Focus management: on route changes and modal/toast open/close; restore focus on dismiss.
- Forms: labels vs placeholders; error association (aria-describedby); required indicators; helpful hints; autofill; input purpose (1.3.5).
- Validation and errors: timing, non‑color cues, summary at top, link to fields; live regions for async errors.
- Interactive patterns: menus, tabs, accordions, tooltips, dialogs—reference WAI‑ARIA Authoring Practices.
- Media: player controls, captions, transcripts, sign language (if required), audio description.
- Data visualizations: text alternatives, table fallback, discernible colors/patterns, keyboard panning/zoom.
- Notifications and live updates: polite/assertive announcements; user controls to pause/stop/hide.
- Responsive and zoom: reflow (1.4.10), text spacing (1.4.12), orientation (1.3.4).
- Authentication and MFA: alternatives to cognitive function tests (WCAG 2.2, 3.3.8 Accessible Authentication), device‑agnostic OTP entry, copy/paste allowed.
- Security and CAPTCHAs: provide non‑visual alternatives; avoid puzzle CAPTCHAs; use accessible turnstiles.
- Performance: ensure AT responsiveness; avoid content shifts; skeletons with semantics.
Provide a brief status for each (Compliant/Risk/Unknown) and top fixes.

4) Platform and AT coverage plan
- Browsers/OS: Chrome, Firefox, Safari, Edge on latest OSs.
- Screen readers: NVDA + Chrome/Firefox; JAWS + Chrome/Edge; VoiceOver (macOS + Safari), iOS VoiceOver; Android TalkBack.
- Other AT/modes: keyboard only, switch control, dictation/voice control, zoom/magnifier, high contrast mode, reduced motion, dark mode.
- Test matrix: prioritized combos relevant to your user base.

5) Prioritized recommendations and roadmap
- 8–12 actions with Priority (P0–P3), Effort (S/M/L), Owner (Design/Eng/Content), and KPI impact.
- Quick wins (next sprint) vs structural changes (component refactor, design system).
- Dependencies and risks.

6) Example acceptance criteria (ready to paste into tickets)
- Dialogs: Given a modal opens, focus moves to its first focusable element; When it closes, focus returns to the triggering control; Then background content is inert and not reachable via Tab; SR announces title and role.
- Form errors: Given an invalid email, When the user blurs the field, Then the field is marked invalid with aria-invalid="true" and error text linked by aria-describedby; SR announces the error immediately.
- Live updates: Given a notification arrives, When it is inserted, Then it’s announced via aria-live="polite" with role="status" and remains dismissible by keyboard; focus is not stolen.
- Reduced motion: Given prefers‑reduced‑motion is enabled, When page loads, Then non‑essential animations are disabled; carousels do not auto‑advance.

7) Sample test cases (manual)
- Keyboard: Tab through all interactive elements without traps; visible focus; logical order; Esc closes overlays.
- Screen reader: Navigate by headings, landmarks, and links; verify names/states/values; confirm dynamic announcements.
- Contrast: Verify text and non‑text contrast ratios; validate focus indicator contrast.
- Zoom/Reflow: At 200% and 400% zoom and 320px width, content reflows without horizontal scroll (except data tables where necessary).
- Media: Enable captions; verify transcript presence and accuracy; test auto‑play behavior.

8) Assumptions, limitations, and open questions
- List assumptions made due to missing inputs.
- Known limitations (e.g., third‑party widgets) and proposed mitigations.
- Open questions and decisions needed; research or audits to schedule.

*Formatting and style*
- Use clear headings and bullets; tag critical items at the start of bullets: [P0], [AA], [High confidence].
- Reference WCAG by ID and level; explain user impact in plain language.
- Include concrete examples, code or attribute patterns where relevant.
- Keep language concise and actionable.

*Quality bar checklist (use before finalizing)*
- Are top risks prioritized with severity and WCAG mapping?
- Does every recommendation include user impact and acceptance criteria?
- Are cross‑cutting patterns (focus, semantics, color, motion, forms) addressed?
- Is the platform/AT test plan realistic for your audience?
- Are quick wins vs structural fixes clearly separated?
- Are assumptions and dependencies transparent?

*Fillable template (paste and complete)*

Project overview
- Feature name:
- Platforms/tech:
- Audience/locales:
- Regulations/policies:
- Content types/interactions:
- Constraints/dependencies:

Executive summary
- —
- —
- —

Findings by disability type
- Visual
  - [P? | WCAG X.X.X (A/AA) | Confidence ?] Issue:
    - Why it matters:
    - Recommendation:
    - Acceptance criteria (G/W/T):
    - Test method:
  - …
- Motor
  - …
- Auditory
  - …
- Cognitive/neurological
  - …

Cross‑cutting checks (status + top fix)
- Semantics/landmarks:
- Focus management:
- Forms/validation:
- Media:
- Data viz:
- Notifications/live regions:
- Responsive/zoom:
- Auth/MFA/CAPTCHA:
- Performance/CLS:

Platform and AT coverage
- Priority combos:
- Out‑of‑scope (if any):

Prioritized recommendations
- [P0, Effort S, Owner Eng] …
- [P1, Effort M, Owner Design] …

Assumptions and open questions
- Assumptions:
- Open questions:
- Next steps (audits/tests):

Claude 3 Opus is known for its deep knowledge of accessibility guidelines like WCAG and for nuanced, safety-conscious recommendations.
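Some of the checks this prompt requests are mechanical enough to script yourself. The contrast verification in the visual-impairment and test-case sections follows the WCAG 2.1 relative-luminance formula; here is a minimal sketch in plain JavaScript (hex parsing assumes a `#rrggbb` string):

```javascript
// WCAG 2.1 relative luminance: linearize each sRGB channel, then
// weight the channels (0.2126 R + 0.7152 G + 0.0722 B).
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05); ranges 1:1 to 21:1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA requires 4.5:1 for normal text, 3:1 for large text (>= 18pt / 14pt bold).
const passesAA = (fg, bg, large = false) =>
  contrastRatio(fg, bg) >= (large ? 3 : 4.5);
```

For instance, `#767676` is roughly the lightest gray that still passes AA against white. Automated checks like this catch obvious failures; they don't replace verifying focus-indicator contrast or testing with real assistive technology.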


As we've seen, AI offers powerful ways to streamline your UX process, from initial research to final polish. Integrating these prompts into your workflow can save hours of manual effort, giving you more time to focus on strategic thinking and creative solutions that truly serve your users.

As you experiment, you'll notice that different models excel at different tasks: the same prompt might produce a more vivid persona in Claude and a more rigorously structured style guide in GPT-5. Managing multiple tools and subscriptions to access these specialized strengths can be cumbersome.

This is where Mono simplifies your toolkit. It integrates all the leading AI models, including those mentioned here, into a single, cohesive interface under one affordable subscription.

Of course, you can always work with each model's native UI separately. Mono's goal is simply to remove the friction (and cost), allowing you to choose the best tool for the job at hand without juggling tabs or multiple subscriptions.

Whichever path you choose, the key is to start. Adapt these prompts, find what works for you, and discover how AI can become your most powerful creative collaborator.