How to Describe What You Actually Want: A Customer’s Guide to Getting Better Recommendations (Everywhere)

Published Thursday, March 12, 2026

by Olivier

Most bad recommendations happen because your request is underspecified: the other side can’t see your outcome, constraints, or trade-offs. This guide gives you a simple “Recommendation Brief” you can use with AI tools, vendors, friends, and your team to get fewer mismatches and faster, better decisions.

17 min read · 3,332 words

This guide shows you a simple way to describe what you actually want so you get better results from people and systems: AI tools, vendors, recruiters, agencies, app stores, even your own team. You’ll learn how to communicate outcomes, constraints, context, and trade-offs in a way that makes great recommendations almost inevitable.

For example, imagine you're a founder choosing a helpdesk tool for a small team. You'd describe the outcome (reduce response time), the constraints (no added headcount, budget, integrations), and the trade-off (speed to implement over advanced customization), and you'd get a shortlist that fits your reality instead of a generic “top tools” list.

If you’re a founder, operator, or builder, this is a leverage skill: clearer requests mean less back-and-forth, fewer mismatches, and faster decisions.

Why do recommendations fail? (It’s not your fault)

Recommendations fail when the “request” doesn’t contain enough decision-making information. The recommender (a person or an algorithm) has to guess your priorities, and their guesses are usually wrong because they don’t live in your context.

Here are the most common failure modes:

  • Vague goal: “Need a CRM” or “Looking for a marketing agency.” There’s no success definition.
  • Missing constraints: Budget, timeline, technical stack, compliance needs, or team capacity aren’t stated, so options are unrealistic.
  • No context: The recommender can’t tell what stage you’re at, what you’ve tried, or what’s already in place.
  • Hidden trade-offs: You say “best,” but you really mean “fast,” “cheap,” “secure,” “simple,” or “lowest risk.”
  • Proxy requests: You ask for a tool when you need a workflow, or you ask for a feature when you need an outcome.

The fix isn’t to “be more detailed” in a random way. The fix is to give the right kind of detail—information that helps someone narrow the search space and rank the options.

"The quality of a recommendation is bounded by the quality of the constraints and trade-offs you provide—without them, even experts are guessing." - Priya Desai, Product Strategy Lead at SignalWorks

What is the “Recommendation Brief” and how does it work anywhere?

When you want a good recommendation, you’re effectively asking someone to solve a small decision problem. The fastest way to help them is to provide a short brief with a few specific elements.

Use this as a template (you can copy/paste it into an email, chat, or prompt):

  1. Outcome: What you want to be true after this works.
  2. Context: Where you are now and what you’re working with.
  3. Constraints: Non-negotiables (budget, time, tools, policy).
  4. Preferences: Nice-to-haves and style/fit.
  5. Trade-offs: What you’ll prioritize when you can’t have everything.
  6. Examples: A couple of “like this” and “not like this” references (with why).
  7. Decision process: How you’ll choose and by when.
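
If you use this template often, it can help to keep it as a structure rather than a loose paragraph. Here is a minimal Python sketch of the seven elements above; the class and field names are illustrative, not a real library:

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationBrief:
    """Hypothetical container for the seven brief elements."""
    outcome: str
    context: str
    constraints: list[str]
    preferences: list[str] = field(default_factory=list)
    tradeoffs: list[str] = field(default_factory=list)
    examples: list[str] = field(default_factory=list)
    decision_process: str = ""

    def to_message(self) -> str:
        """Render the brief as a skimmable message for an email, chat, or prompt."""
        lines = [
            f"Outcome: {self.outcome}",
            f"Context: {self.context}",
            "Constraints: " + "; ".join(self.constraints),
        ]
        # Optional sections are only included when filled in,
        # so the message stays short and skimmable.
        if self.preferences:
            lines.append("Preferences: " + "; ".join(self.preferences))
        if self.tradeoffs:
            lines.append("Trade-offs: " + "; ".join(self.tradeoffs))
        if self.examples:
            lines.append("Examples: " + "; ".join(self.examples))
        if self.decision_process:
            lines.append(f"Decision process: {self.decision_process}")
        return "\n".join(lines)

brief = RecommendationBrief(
    outcome="Cut customer response time to under 4 hours",
    context="5-person support team, no dedicated admin",
    constraints=["under $500/month", "must integrate with HubSpot"],
    tradeoffs=["fast implementation over advanced customization"],
)
print(brief.to_message())
```

The point isn't the code; it's that the same seven fields work whether the recipient is a colleague, a vendor, or an AI assistant.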

This looks like more work, but it usually saves time because it prevents the first two rounds of clarifying questions and eliminates most mismatches.

Step 1: Start with the outcome (not the object)

People often request an object: a tool, a vendor, a hire, a book, a strategy. Strong recommendations start with the outcome—the measurable or observable change you want.

How to express outcomes clearly

  • Use verbs: “Reduce,” “increase,” “automate,” “standardize,” “de-risk,” “learn.”
  • Add a time horizon: “In 30 days,” “this quarter,” “before launch.”
  • Define success: Metrics if possible, otherwise observable criteria.

Example (vague): “I need a helpdesk tool.”

Example (outcome-driven): “I want to reduce customer response time from ~24 hours to under 4 hours during business days, without adding headcount, and I need basic reporting on top issues.”

Notice how the second version makes it easier to recommend: it points toward automation, routing, SLAs, and reporting—not just “a tool with tickets.”

Step 2: Add context so the recommendation can fit your reality

Context is the “why this is hard” part. It tells the recommender what environment the solution must survive in.

Useful context often includes:

  • Your stage: Pre-product-market fit, scaling, regulated enterprise, etc.
  • Your users: Who they are, what they care about, and their sophistication.
  • Your current setup: Tools, workflows, team structure, data sources.
  • What you’ve tried: And what broke (or felt wrong).

Mini-scenario: Two startups both ask for “a data warehouse recommendation.” One is a two-person team with simple Stripe metrics; the other has multiple products, event streams, and compliance obligations. If they don’t share context, they’ll get the same generic answers—and one will be wildly wrong.

Step 3: Name constraints (your real “no” list)

Constraints turn a long list of possible options into a short list of plausible ones. If you don’t state constraints, the recommender will either:

  • Suggest “best in class” options that are too expensive or complex, or
  • Over-correct and suggest lightweight options that can’t meet your needs.

Common constraints to state explicitly:

  • Budget: A range is fine (“$200–$500/month,” “under $20k project”).
  • Timeline: When you need results, not just when you’ll start.
  • Integration requirements: “Must work with HubSpot,” “must support SSO,” “must have an API.”
  • Security/compliance: SOC 2, HIPAA, data residency, vendor risk review.
  • Team capacity: “No one to maintain infra,” “one engineer part-time.”

Plain-language tip: If something is a deal-breaker, say so. “We can’t use tools without SSO” is more helpful than “SSO would be nice.”

Step 4: Convert tastes into criteria (so “good” becomes actionable)

Preferences are not fluff. They’re the difference between a recommendation that’s technically correct and one you’ll actually adopt.

The trick is to translate taste into criteria:

  • “Easy to use” → “Our non-technical ops team can configure it without engineering.”
  • “Modern” → “Clean UI, strong docs, fast onboarding, active roadmap.”
  • “Reliable” → “Proven in production, clear SLAs, observability, good support.”
  • “Flexible” → “Custom fields, workflows, API access, exportability.”

Analogy: Saying “I want a good car” is hard to act on. Saying “I want a car that’s safe, cheap to repair, and comfortable for long drives; I don’t care about acceleration” is a map.

Step 5: State your trade-offs (this is the recommendation superpower)

A trade-off is what you’re willing to give up to get what you want most. Great recommendations require trade-offs because most options are bundles of pros and cons.

If you don’t state trade-offs, the recommender has to guess your ranking. That guess is usually wrong.

Common trade-offs to choose between

  • Speed vs. polish: “We need something workable in 2 weeks, even if it’s not perfect.”
  • Cost vs. capability: “We’ll pay more to avoid building/maintaining.”
  • Flexibility vs. simplicity: “Prefer opinionated workflows over endless configuration.”
  • Vendor support vs. self-serve: “We need hands-on onboarding.”
  • Best-of-breed vs. all-in-one: “Prefer fewer tools over peak performance.”

Example trade-off statement: “We’ll trade advanced customization for fast implementation and low admin overhead.”

This single sentence prevents a whole category of wrong recommendations.

Step 6: Use examples the right way (“like X” plus “because”)

Examples are powerful, but they can backfire if they become the only instruction. “We want something like Notion” is ambiguous: do you mean the UI, the flexibility, the templates, the community, the pricing, or the vibe?

Use this pattern:

  • Like: X, because Y.
  • Not like: A, because B.

Example:

  • Like: “Linear, because it’s fast, opinionated, and keeps workflows simple.”
  • Not like: “Jira, because it becomes a configuration project and we don’t have admin bandwidth.”

Now the recommender understands the underlying values (speed, simplicity, low admin) rather than fixating on brand names.

How will you decide? (Share your process so recommendations fit it)

Even great options can stall if the decision process is unclear. When you tell people how you’ll evaluate, they can recommend in a form you can actually use.

Decision process details that help:

  • Evaluation criteria: “We’ll pick based on onboarding time, total cost, and API quality.”
  • Stakeholders: “Security needs to approve,” “sales team must test.”
  • Next step: “Recommend 3 options, then we’ll do 30-minute demos.”
  • Deadline: “We need to decide by Friday.”

Example: “Please send 2–3 options with estimated total cost and the fastest path to a working pilot. We’ll choose one to trial next week.”

Putting it together: a few complete recommendation briefs

1) Asking an AI assistant for a vendor shortlist

Brief: “I’m choosing an email marketing platform for a B2B SaaS (10k contacts now, aiming for 50k in 12 months). Outcome: improve onboarding activation with 3 automated sequences and basic segmentation. Constraints: must integrate with HubSpot, support EU data residency, and stay under $800/month. Preferences: simple UI for a non-technical marketer, good deliverability reputation, strong templates. Trade-off: I’ll trade advanced customization for reliability and speed to launch. Like: Customer.io because event-based automation feels straightforward. Not like: overly complex enterprise tools. Recommend 3 options and explain the best fit for our constraints.”

Why this works: it gives the assistant a clear search space (B2B SaaS, scale, integrations, EU), a target outcome (activation sequences), and a budget ceiling.

2) Asking a friend for restaurant recommendations

Brief: “I’m looking for a place for a quiet conversation on Thursday at 7. Outcome: relaxed vibe, easy to hear each other, good food. Constraints: near downtown, vegetarian-friendly, under $40 per person. Preferences: interesting small plates, not too formal. Trade-off: I’ll trade ‘trendiness’ for comfort and low noise. Not like: places with loud music. Any 2–3 suggestions?”

Same structure, different domain.

3) Asking a consultant/agency for a proposal

Brief: “We need help improving our inbound demo conversion. Outcome: raise landing page-to-demo conversion from 1.2% to 2.0% within 60 days and set up a repeatable experimentation process. Context: traffic is ~40k/month, mix of paid search and content; current stack is Webflow + GA4 + HubSpot; we can implement changes weekly. Constraints: $15k/month budget, no full redesign, must keep brand guidelines. Preferences: strong hypothesis-driven approach, clear reporting, and someone who can work with our in-house designer. Trade-off: prioritize speed and learning over perfect creative. Please propose an 8-week plan with deliverables and what you need from us.”

This filters out vague proposals and invites a concrete plan.

4) Asking your team for a hiring recommendation

Brief: “We’re hiring our first ML engineer. Outcome: in 90 days, have a production-ready evaluation pipeline and a v1 model deployed behind an API with monitoring. Context: Python stack, AWS, small team, existing data in Postgres + S3; no dedicated MLOps yet. Constraints: must be comfortable owning end-to-end, remote-friendly, salary band $X–$Y. Preferences: pragmatic, ships, strong communication. Trade-off: we’ll trade cutting-edge research for solid engineering and reliability. Please recommend what profile to hire (titles/backgrounds), and what interview loop would best test for it.”

Even internally, this prevents “wishlist role” confusion.

A quick diagnostic: why your last request didn’t work

If you got bad recommendations recently, ask which of these was missing:

  • Outcome: Did you define what success looks like?
  • Context: Did you share your current setup and stage?
  • Constraints: Did you state the deal-breakers?
  • Trade-offs: Did you rank what matters most?
  • Examples with reasons: Did you explain “because”?

Fixing just one missing piece often improves the next round dramatically.
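
The checklist above can even be run mechanically before you hit send. This is a deliberately naive sketch (a case-insensitive label check, nothing more); the function and labels are hypothetical:

```python
REQUIRED_PIECES = ["outcome", "context", "constraints", "trade-offs", "examples"]

def missing_pieces(request_text: str) -> list[str]:
    """Naive diagnostic: flag brief elements whose label never
    appears in the request text (substring check, lowercased)."""
    lowered = request_text.lower()
    return [piece for piece in REQUIRED_PIECES if piece not in lowered]

# A request that only states an outcome and constraints:
request = "Outcome: faster support replies. Constraints: no new headcount."
print(missing_pieces(request))  # → ['context', 'trade-offs', 'examples']
```

A real check would look at substance, not labels, but even this crude version catches the most common omission: sending a request with no stated trade-offs at all.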

How to ask for recommendations from AI specifically (without getting generic answers)

AI systems tend to default to broad, “cover all bases” suggestions when your prompt is underspecified. Give it a role, a brief, and an output format.

A practical prompt pattern

  • Role: “Act as a pragmatic ops advisor for an early-stage SaaS.”
  • Brief: Use the Recommendation Brief elements.
  • Output format: “Give 3 options in a table: best for, risks, estimated cost, time to implement, why it fits. Then ask 3 clarifying questions.”

Example prompt: “Act as a security-conscious IT advisor. Here’s my brief: [outcome/context/constraints/preferences/trade-offs]. Recommend 3 tools. Output a comparison table and then list the top 5 questions that would change your recommendation.”

That last line is important: it forces the model to surface uncertainty rather than hide it.
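
If you send prompts like this often, the role / brief / output-format pattern is easy to templatize. A small sketch, with illustrative text only (the function name and wording are assumptions, not a standard API):

```python
def build_prompt(role: str, brief: str, output_format: str) -> str:
    """Assemble the role / brief / output-format pattern into one prompt,
    always ending with a request to surface uncertainty."""
    return (
        f"Act as {role}.\n\n"
        f"Here is my brief:\n{brief}\n\n"
        f"{output_format}\n"
        "Then list the top questions that would change your recommendation."
    )

prompt = build_prompt(
    role="a pragmatic ops advisor for an early-stage SaaS",
    brief="Outcome: improve onboarding activation. "
          "Constraints: HubSpot integration, under $800/month. "
          "Trade-off: reliability over customization.",
    output_format="Recommend 3 options in a table: best for, risks, "
                  "estimated cost, time to implement, why it fits.",
)
print(prompt)
```

Baking the clarifying-questions line into the template means you never forget the part that forces the model to show its uncertainty.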

How to ask for recommendations from people (and get their best thinking)

With humans, the goal is to make it easy for them to help you—without turning your ask into homework.

What to do

  • Keep it skimmable: 6–10 lines plus bullets beats a long essay.
  • Ask for a short list: “2–3 options” yields higher-quality reasoning than “everything you know.”
  • Invite disconfirmation: “Tell me what I’m not considering” or “what would you avoid?”
  • Offer your trade-offs: People can tailor advice fast if they know your priorities.

What to avoid

  • Fishing expeditions: “Any thoughts?” is hard to answer well.
  • Overfitting to one example: “Just like X” can blind you to better options.
  • Hidden constraints: Surprising someone later with “we can’t share data externally” wastes cycles.

Mini-script you can reuse: “Can I give you a quick brief and get 2–3 recommendations? I’ll tell you what success looks like, constraints, and what I’m optimizing for.”

Conclusion: the small skill that upgrades everything

Better recommendations come from better inputs. If you share (1) the outcome, (2) the context, (3) the constraints, and (4) the trade-offs, you stop getting generic answers and start getting tailored options you can act on.

Use a short Recommendation Brief, include “like/not like” examples with reasons, and state how you’ll decide. It takes minutes—and it saves hours.

FAQ

How much detail is “enough” without overwhelming someone?

Enough detail is whatever lets the recommender narrow to 2–3 plausible options and explain the fit. If your message can be skimmed in 30–60 seconds and still includes outcome, constraints, and one key trade-off, you’re in a strong place. You can always offer, “Happy to answer questions,” instead of front-loading every nuance.

What if I don’t know what I want yet?

Say that explicitly and shift the request from “recommend a solution” to “help me clarify the decision.” Share your context and constraints, then ask for options framed as trade-offs: “If I optimize for speed, what should I do? If I optimize for cost, what changes?” Good recommenders can help you discover preferences, but they still need your boundaries.

Is it okay to give brand examples, or does that bias the recommendation?

Brand examples are helpful if you include the “because.” Without the reason, people may copy the brand rather than the qualities you’re after. Always pair examples with what you’re trying to emulate (or avoid): UI simplicity, integrations, support quality, pricing model, and so on.

How do I describe trade-offs if I’m not sure what matters most?

Start with a provisional ranking: “My guess is speed matters more than customization, but I’m not 100% sure.” Then ask for recommendations under two different priority sets and compare how the answers change. Seeing the consequences of each priority often makes your true preference obvious.

Why do I keep getting recommendations that ignore my budget?

Usually because the budget was missing, stated too softly (“ideally cheap”), or not anchored to a range. Give a clear ceiling or range and say whether it’s flexible: “Under $500/month, not flexible” versus “Under $500/month, can stretch if ROI is clear.” That one line changes the entire candidate set.

How can I tell if someone is recommending what’s best for me versus what they happen to know?

Ask them to explain the fit in terms of your constraints and trade-offs, and request at least one alternative they’d choose if a key condition changed. Strong recommendations include risks and “watch-outs,” not just positives. If they can’t articulate why it fits your specific brief, you’re likely getting a generic or familiarity-based suggestion.