Use this when you want breadth before depth. Asking for options and tradeoffs helps you choose a path consciously instead of defaulting to the first idea.

Why it works

  • Surfaces alternative approaches you may not know to ask for.
  • Makes tradeoffs explicit (effort, risk, complexity, cost, UX impact).
  • Sets up clearer validation plans because the assistant must justify choices.

How to prompt

  • Request 2–3 approaches, each with pros, cons, and assumptions.
  • Ask the assistant to recommend one option and explain why, including validation steps.
  • Invite it to flag information gaps that could change the recommendation.
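The three asks above can be folded into one reusable prompt builder. A minimal sketch in Python; the function name `options_prompt` and its parameters are illustrative, not part of any particular API:

```python
def options_prompt(goal: str, n: int = 3) -> str:
    """Build a breadth-first prompt: n approaches, tradeoffs, a pick, and gaps."""
    return (
        f"Propose {n} approaches for: {goal}.\n"
        "For each approach, list pros, cons, and assumptions.\n"
        "Recommend one option, explain why, and include validation steps.\n"
        "Flag any information gaps that could change your recommendation."
    )

print(options_prompt("collaborative editing"))
```

Keeping the builder in one place makes it easy to tweak the ask (e.g. bump `n` for riskier decisions) without rewriting the prompt each time.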

Example prompts

  • Feature approach: “We need collaborative editing. Propose 3 approaches, discuss tradeoffs (latency, conflict resolution, cost), recommend one, and then iterate on the plan with me.”
  • Bug fix paths: “The export job stalls for large datasets. Suggest multiple fixes, rank by effort and impact, and explain how you’d validate each.”
  • Design validation: “Is server-side rendering the best move for the new marketing site? Compare SSR vs. static vs. hybrid, lay out pros/cons, and recommend with reasoning.”

Quick templates

  • Option set + pick: “Propose 3 options for <goal>. For each: approach, pros/cons, effort, risks, validation plan. Recommend one and say what would change your pick.”
  • Component-first design/build: “For <page/flow>, propose component-level builds (hero, feature grid, CTA, etc.). Compare building whole-page vs. component-first. Recommend the best path and sequence.”
  • Fix path shortlist: “For <bug/perf issue>, suggest 2–3 fix paths. Include confidence, blast radius, and how to test quickly. Recommend the safest/fastest.”
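The “rank by effort and impact” ask in the fix-path template can also be applied on your side, once the assistant returns its shortlist. A minimal sketch, assuming you score each path yourself on simple 1–5 scales; the `FixPath` class and `rank` function are hypothetical helpers, not from any library:

```python
from dataclasses import dataclass

@dataclass
class FixPath:
    name: str
    effort: int       # 1 (trivial) .. 5 (major project)
    impact: int       # 1 (marginal) .. 5 (fixes it outright)
    confidence: float  # 0..1, how sure you are the fix works

def rank(paths: list[FixPath]) -> list[FixPath]:
    """Sort fix paths by impact-per-effort, breaking ties on confidence."""
    return sorted(paths, key=lambda p: (p.impact / p.effort, p.confidence),
                  reverse=True)

paths = [
    FixPath("add result cache", effort=2, impact=4, confidence=0.7),
    FixPath("rewrite export pipeline", effort=5, impact=5, confidence=0.9),
]
for p in rank(paths):
    print(p.name)
```

Even a crude scoring pass like this keeps the final pick explicit instead of defaulting to whichever option the assistant listed first.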

Bad vs. good (options)

  • Bad: “Make the dashboard faster.”
  • Good: “Make the dashboard faster. Give 3 approaches (caching, query optimization, render slicing). Compare effort/impact/risk, propose tests, and pick one.”

Validation steps to request

  • What to measure (latency, error rate, UX metric) and how.
  • Minimal test plan per option (unit, integration, manual checks).
  • Rollback/mitigation if the chosen path underperforms.
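These three asks make a natural checklist you can enforce before accepting an option. A minimal sketch, assuming you record the plan as a plain dict; the keys and the `is_complete` helper are illustrative conventions, not a standard:

```python
REQUIRED_KEYS = ("measure", "tests", "rollback")

def is_complete(plan: dict) -> bool:
    """True only if the validation plan covers measurement, tests, and rollback."""
    return all(plan.get(key) for key in REQUIRED_KEYS)

caching_plan = {
    "measure": "p95 dashboard load time, before/after, via load test",
    "tests": ["unit: cache invalidation", "integration: stale-read check",
              "manual: dashboard smoke test"],
    "rollback": "feature flag off; fall back to uncached query path",
}

print(is_complete(caching_plan))
```

Rejecting any option whose plan fails this check is a cheap way to make the assistant justify its recommendation, per the “How to prompt” guidance above.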