Most “bad recommendations” aren’t really bad—they’re a reasonable answer to a vague question. When you say “What tool should we use?” or “What should I read?” you’re asking someone to guess your constraints, goals, and taste.
This article gives you a simple way to describe what you want so other people (and AI assistants) can recommend things that actually fit. You’ll get a practical checklist, a reusable template, and examples you can copy.
If you’re building a startup, running a team, or making product and AI decisions under time pressure, this skill saves hours, reduces back-and-forth, and leads to higher-quality choices.
Why do recommendations fail (even from smart people)?
A recommendation is a match between an option and a situation. If the situation is unclear, the match will be random.
Here are the most common failure points:
- Unstated goal: You say “best,” but you mean “best for this outcome.”
- Missing constraints: Budget, timeline, integration needs, legal rules, and team skills aren’t mentioned.
- Undefined audience: “For our users” could mean enterprise admins, consumers, developers, or procurement.
- No success criteria: Without a definition of “good,” you can’t evaluate answers.
- No trade-offs: Every option has costs. If you don’t say what you’re willing to sacrifice, the recommender chooses for you.
Think of it like ordering coffee by saying “coffee.” You might get a black drip coffee, which is correct—but if you wanted an oat-milk latte with one extra shot, the barista needed more information. Recommendations work the same way.
What is the Recommendation Brief (and why does it work everywhere)?
When you want a good recommendation, give a short “brief”—a compact description of your situation that makes good options obvious and bad options easy to exclude.
You can write this as a paragraph or bullets. The key is to include the right categories.
1) The decision (what you’re choosing)
Be specific about the category and scope.
- Bad: “What AI should we use?”
- Better: “We need an LLM to power a customer-support chatbot for a B2B SaaS product.”
2) The job-to-be-done (the outcome you want)
“Job-to-be-done” is just a plain-English way of saying: what are you trying to accomplish in the real world? Not features—results.
- “Reduce time-to-first-response from 6 hours to under 10 minutes.”
- “Help users self-serve answers without escalating to a human.”
- “Generate first drafts that our team can approve quickly.”
3) Context (who, where, and what’s already true)
Context turns generic advice into relevant advice.
- Company stage (startup, SME, enterprise), team size, and skill level
- Current stack (e.g., HubSpot, Zendesk, Slack, AWS, GCP, Azure)
- Domain constraints (healthcare, finance, education, government)
- Volume and scale (tickets/day, users, documents)
4) Constraints (hard limits you can’t ignore)
Constraints are the fastest way to filter. If you skip them, you’ll get recommendations that are “great” but unusable.
- Budget: monthly, annual, or per-seat limits
- Timeline: “We need this in 2 weeks” vs “We can invest 3 months”
- Compliance: SOC 2, HIPAA, GDPR, data residency, on-prem requirements
- Integration: must work with existing tools, SSO, API availability
- Operational limits: small team, no dedicated ML engineer, limited DevOps
5) Preferences (what you like, but could compromise on)
Preferences improve the “fit” after constraints filter out the impossible. The trick is to label them as preferences, not requirements.
- “Prefer open-source, but not at the cost of reliability.”
- “Prefer a managed service; we don’t want to self-host.”
- “Prefer tools with strong docs and a fast developer experience.”
6) Success criteria (how you’ll judge the recommendation)
Give measurable criteria when you can, and qualitative criteria when you can’t.
- Quantitative: “Improve onboarding conversion by 15%.”
- Operational: “Support team should maintain it without engineering help.”
- Quality: “Answers must cite sources from our internal docs.”
7) Trade-offs you accept (and ones you refuse)
This is where recommendations get dramatically better. If you don’t state trade-offs, you’ll get safe, generic answers.
- “We’ll accept higher cost if it reduces implementation risk.”
- “We refuse vendor lock-in that prevents export of our data.”
- “We’ll accept slightly worse UX if the security model is stronger.”
8) Examples and non-examples (your taste, made concrete)
One of the highest-leverage moves is to provide examples of what you mean.
- Examples: “We like tools like Linear and Notion: fast, opinionated, clean UI.”
- Non-examples: “We don’t want something like Jira: heavy configuration and admin overhead.”
This doesn’t just convey preference—it anchors the recommender’s mental model to something real.
"The quality of a recommendation is bounded by the clarity of the constraints and the definition of success." - Alex Chen, Product Lead at Example Organization
A 90-second checklist you can run before asking
If you only remember one thing, remember this: state your goal, your constraints, and your definition of “good.”
- What is the decision? (category + scope)
- What outcome do I want? (job-to-be-done)
- What constraints are non-negotiable? (budget, timeline, compliance, integrations)
- What does success look like? (metrics or clear qualitative criteria)
- What trade-off am I willing to make? (cost vs speed vs quality vs control)
Answering these five questions takes less time than reading three irrelevant recommendations.
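If you keep briefs as structured notes, you can even automate this check. Here’s a minimal sketch in Python, assuming the brief lives in a plain dictionary; the field names are illustrative, not a standard schema.

```python
# Pre-flight check: flag the checklist items a brief doesn't answer yet.
# Field names are illustrative, not a standard schema.
REQUIRED_FIELDS = [
    "decision",          # category + scope
    "goal",              # job-to-be-done
    "constraints",       # non-negotiables
    "success_criteria",  # how you'll judge answers
    "trade_offs",        # what you'll sacrifice
]

def missing_fields(brief: dict) -> list[str]:
    """Return the checklist items the brief leaves blank."""
    return [field for field in REQUIRED_FIELDS if not brief.get(field)]

brief = {
    "decision": "Product analytics for a B2B SaaS web app",
    "goal": "Understand activation and retention",
    "constraints": ["GDPR", "budget < $500/mo"],
}
print(missing_fields(brief))  # ['success_criteria', 'trade_offs']
```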
How to be specific without writing a novel
Many people avoid specificity because they don’t want to “overwhelm” the recommender. The goal isn’t length—it’s signal.
Use this rule:
- Lead with the filters (constraints) and the goal.
- Then add a small number of priorities (2–4) that matter most.
- Finish with context only where it changes the answer.
If you’re unsure what context matters, include it briefly and ask the recommender to ignore what’s irrelevant.
Example phrasing: “Not sure if this matters, but our team is 2 engineers and no DevOps. If that changes your recommendation, optimize for low maintenance.”
Make your preferences legible: rank them
When people say “We care about security, speed, cost, and quality,” they’re usually saying “we care about everything.” That forces the recommender to guess your ranking.
Instead, rank your priorities explicitly:
- Must-have: deal-breakers
- Should-have: important, but not fatal
- Nice-to-have: only matters if options are close
Even better: use a trade-off statement.
“We will pay more to reduce risk and implementation time. We will not accept a solution that requires storing customer data outside the EU.”
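To see why the ranking helps, here’s a minimal sketch of the logic a recommender (human or AI) effectively runs: must-haves filter options out, should-haves rank the survivors, nice-to-haves break ties. The option data below is hypothetical.

```python
# Must-haves filter; should-haves and nice-to-haves rank the survivors.
# Option data is hypothetical.
options = [
    {"name": "Tool A", "eu_residency": True,  "managed": True,  "open_source": False},
    {"name": "Tool B", "eu_residency": False, "managed": True,  "open_source": True},
    {"name": "Tool C", "eu_residency": True,  "managed": False, "open_source": True},
]

def must_have(option):   # deal-breaker: violating it removes the option
    return option["eu_residency"]

def score(option):       # should-haves weigh more than nice-to-haves
    return (2 if option["managed"] else 0) + (1 if option["open_source"] else 0)

viable = [o for o in options if must_have(o)]   # Tool B drops out here
ranked = sorted(viable, key=score, reverse=True)
print([o["name"] for o in ranked])              # ['Tool A', 'Tool C']
```

Notice that no amount of nice-to-haves can rescue an option that fails a must-have; that is exactly the behavior you want your ranking to communicate.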
Ask for the format you want back
People often ask for a recommendation and then get a wall of text. You can prevent that by requesting a response structure.
Useful formats to request:
- Top 3 options + why: quick scan
- Decision matrix: criteria vs options with notes
- One recommended default + two alternatives: clear action
- Risks and mitigations: especially for tools, vendors, and architectures
Example: “Give me three options. For each: best for, trade-offs, estimated setup time, and what could go wrong.”
Concrete examples you can copy
Below are mini-scenarios showing how a vague ask becomes a high-quality brief. Notice how the improved version makes it easy to recommend—and easy to decline unsuitable options.
Example 1: Choosing an analytics tool (startup/SME)
Vague: “What analytics tool should we use?”
Better brief:
- Decision: Product analytics for a B2B SaaS web app
- Goal: Understand activation and retention; identify drop-offs in onboarding
- Context: 15-person team, 2 engineers; stack is React + Node, Postgres; hosting on AWS
- Constraints: Must support GDPR; prefer EU data residency; budget < $500/mo initially
- Success criteria: Funnel + cohort analysis, event tracking with minimal engineering overhead
- Trade-offs: We’ll accept less customization if setup is fast and maintenance is low
- Response format: Recommend 1 default + 2 alternatives, with setup steps and ongoing cost
Now a recommender can filter out enterprise-only tools, suggest privacy-friendly options, and tailor to your team capacity.
Example 2: Asking for a book/course recommendation (personal but practical)
Vague: “What should I learn about management?”
Better brief:
- Goal: Improve 1:1s and feedback; reduce misalignment on priorities
- Context: First-time manager, 6-person team, remote, high autonomy
- Preferences: Practical frameworks > theory; examples of tough conversations; can do 20 minutes/day
- Non-examples: Not looking for “inspirational leadership” content
- Success criteria: I can apply one technique per week immediately
This helps someone recommend the right material (and the right difficulty level) instead of a generic “best leadership books” list.
Example 3: Getting better recommendations from an AI assistant
AI tools are excellent at generating options, but they rely heavily on what you provide. Treat your prompt like a brief.
Prompt template:
We need a recommendation for: [decision].
Goal/outcome: [job-to-be-done].
Context: [company stage, users, stack].
Constraints (non-negotiable): [budget, timeline, compliance, integrations].
Preferences: [nice-to-haves].
Success criteria: [how we’ll evaluate].
Trade-offs: [what we accept/refuse].
Return format: [top 3 + pros/cons + risks + next step].
Filled-in example (LLM vendor choice):
We need a recommendation for: which LLM API to use for an internal knowledge assistant.
Goal/outcome: answer employee questions using internal docs with citations.
Context: SME (~200 employees), Google Workspace, Confluence, Slack; small platform team.
Constraints (non-negotiable): EU data residency; avoid training on our data by default; budget $2k/month; must have reliable uptime.
Preferences: strong tool calling/function calling; good eval tooling; clear pricing.
Success criteria: >80% helpfulness rating in pilot; reduces internal support tickets by 25%.
Trade-offs: willing to pay more for privacy controls and reliability; not willing to manage self-hosted inference.
Return format: recommend 1 default + 2 alternatives; include risks, mitigations, and a 2-week pilot plan.
This kind of prompt reliably produces answers you can act on, not just a list of popular names.
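If you’re calling a model programmatically, the same template is a few lines of Python. Here’s a minimal sketch that just assembles the prompt string; no particular vendor client is assumed, and the field names simply mirror the template above.

```python
# Build the prompt from a brief dictionary. Nothing vendor-specific:
# send the resulting string with whatever client you already use.
TEMPLATE = """\
We need a recommendation for: {decision}.
Goal/outcome: {goal}.
Context: {context}.
Constraints (non-negotiable): {constraints}.
Preferences: {preferences}.
Success criteria: {success_criteria}.
Trade-offs: {trade_offs}.
Return format: {return_format}."""

def build_prompt(brief: dict) -> str:
    return TEMPLATE.format(**brief)

print(build_prompt({
    "decision": "which LLM API to use for an internal knowledge assistant",
    "goal": "answer employee questions using internal docs with citations",
    "context": "SME (~200 employees), Google Workspace, Confluence, Slack",
    "constraints": "EU data residency; budget $2k/month",
    "preferences": "strong tool calling; clear pricing",
    "success_criteria": ">80% helpfulness rating in pilot",
    "trade_offs": "pay more for privacy; no self-hosting",
    "return_format": "1 default + 2 alternatives, with risks",
}))
```

Keeping the brief as data also means you can reuse it across tools and teammates instead of retyping it each time.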
Two techniques that instantly sharpen your request
Technique 1: “Compare A vs B given my context”
If you already have two candidates, ask for a comparison grounded in your constraints. This prevents generic pros/cons lists.
“Given we’re a 10-person startup with no data engineer, and we need GDPR compliance, compare Tool A vs Tool B for setup time, ongoing maintenance, and ability to answer product questions.”
Technique 2: “Recommend with a default, then show the edge cases”
Good recommenders pick a default and explain when it fails. Ask for that explicitly.
“What’s the default choice for us, and in what situations would you pick something else?”
What to do when you don’t know what you want yet
Sometimes you genuinely don’t know your preferences or constraints. You can still get great recommendations—just change the request from “pick for me” to “help me discover what matters.”
Use a two-step approach:
- Exploration: Ask for a small menu of distinct approaches with trade-offs.
- Convergence: After you react, ask for a refined recommendation.
Exploration question you can copy:
“Give me 3–4 fundamentally different approaches. For each, tell me what kind of team it fits, what it optimizes for, and what I’d be giving up.”
This works particularly well for architecture choices, go-to-market tactics, hiring profiles, and AI adoption strategies.
How to evaluate the recommendations you receive
Even a well-formed brief won’t guarantee perfect answers. But it will make evaluation much easier.
Use these checks:
- Fit check: Does the recommendation clearly align with your constraints and goals, or does it ignore them?
- Trade-off honesty: Does it mention downsides and failure modes, or only benefits?
- Implementation reality: Does it account for your team’s capacity and timeline?
- Next step clarity: Does it propose a small test (pilot, prototype, trial) rather than a leap of faith?
If a recommendation fails these checks, that’s not a dead end—it’s a prompt to tighten your brief or ask a sharper follow-up question.
A reusable one-page template (copy/paste)
Here’s a compact template you can keep in your notes. Fill it in whenever you want better recommendations from teammates, advisors, vendors, or AI tools.
Decision:
Goal (job-to-be-done):
Context (who/what/where):
Constraints (non-negotiable):
Preferences (nice-to-have):
Success criteria (how we’ll judge):
Trade-offs (accept/refuse):
Examples / non-examples:
Response format requested:
When you use this consistently, you’ll notice something else: you start making better decisions even before anyone replies, because you’ve clarified the problem.
Conclusion: the shortest path to better recommendations
To get recommendations that fit, don’t aim for perfect wording—aim for a clear brief. State the decision, the outcome you want, the constraints you can’t break, and how you’ll judge success. Rank your priorities and name your trade-offs so the recommender doesn’t have to guess.
When you add a couple of examples and request a response format, you turn “What should I do?” into an answerable question—and you’ll get better recommendations every time.
FAQ
How much detail is enough?
Enough detail to filter out incompatible options: your goal, 2–5 hard constraints, and 2–4 priorities. If you’re writing more than a few short paragraphs, compress it into bullets and rank what matters. Specific beats long.
What if I don’t know my constraints or preferences yet?
Say that explicitly and ask for distinct approaches instead of a single “best” answer. Then react to the trade-offs you hear (what feels risky, expensive, slow, or annoying). Your reaction becomes the input for a second, more precise round.
Won’t being specific bias the recommendations?
It will bias recommendations toward what you actually need—which is the point. If you’re worried you’re prematurely narrowing the field, ask for one “within constraints” recommendation and one “if we relaxed X” alternative. That keeps you open-minded without losing relevance.
How do I ask for recommendations without sounding demanding?
Frame it as helping them help you: “To save time, here are the constraints and what success looks like.” Most people appreciate a clear brief because it reduces guesswork. Adding “tell me what you’d need to know if this isn’t enough” invites collaboration.
What’s the fastest way to improve recommendations from AI tools?
Provide your context and constraints, and request a structured output (top options, trade-offs, risks, next steps). AI is very good at generating plausible lists; your job is to provide the filters and the definition of success. Treat your prompt like a mini spec.
How do I compare recommendations from different people?
Normalize them against your success criteria and constraints: put options into a simple matrix and score (even loosely) on what you care about most. When recommenders disagree, the disagreement often reveals a hidden assumption—ask each person what assumption drives their choice. That usually surfaces the real decision.
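That matrix doesn’t need a spreadsheet. Here’s a minimal sketch, with hypothetical criteria weights and 1–5 scores:

```python
# Weighted decision matrix: score each option 1-5 per criterion,
# multiply by how much you care, and sum. All numbers are hypothetical.
weights = {"fits_constraints": 3, "setup_time": 2, "ongoing_cost": 1}

scores = {
    "Tool A": {"fits_constraints": 5, "setup_time": 3, "ongoing_cost": 2},
    "Tool B": {"fits_constraints": 3, "setup_time": 4, "ongoing_cost": 5},
}

totals = {
    name: sum(weights[c] * criterion_scores[c] for c in weights)
    for name, criterion_scores in scores.items()
}
for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(name, total)  # Tool A 23, Tool B 22
```

Even loose scores like these make disagreements concrete: if someone ranks Tool B first, they’re implicitly using different weights, and now you can talk about that directly.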
