Conjoint analysis: a practical guide for market research

If you've ever asked customers what they want, you've probably run into a familiar problem: they tell you everything is important. Price, speed, quality, support, features, brand, color, size – all "must-haves."
Conjoint analysis is the antidote.
Instead of asking people to rate features in isolation, you show realistic trade-offs and measure what they're willing to give up to get what they really value.
Done well, it turns preference research into numbers you can use for product and pricing research, product positioning research, and market simulations.
Here's what we'll cover in this article:
- What is conjoint analysis
- Why market researchers use it for "real or hidden drivers" of choice
- The main types of conjoint analysis and when to use each
- How to do conjoint analysis – a step-by-step guide
- How to build a conjoint analysis survey that respondents can actually finish
- A worked conjoint analysis example for a SaaS plan bundle
- How to interpret conjoint analysis results
What is conjoint analysis?
At its core, conjoint analysis is a market research method that measures how survey respondents value the different parts of an offer by forcing them to make trade-offs.
You define:
- Attributes – the things that vary, such as price, warranty, delivery time, support tier, and color
- Attribute levels – the options within each attribute (e.g., price: $19, $29, $39)
- Profiles – also called product concepts, these are the bundles of attribute levels shown together
In a conjoint analysis experiment, respondents evaluate multiple profiles. Depending on the conjoint analysis methods you choose, they might pick one option (a choice task), rank options, rate them, or allocate points across them.
The logic behind conjoint measurement is that each level carries a part-worth utility (a utility value), and the total utility of a profile is the sum of the utilities of its levels. That's the engine behind preference scores, attribute importance (often called relative importance), and market simulator outputs.
A quick mini-scenario
You're testing three TV streaming plans. Each plan varies by:
- Price
- Ads vs. no ads
- Resolution
- Number of devices
Instead of asking, "How important is resolution?" you ask people to choose among bundles like:
- $9.99 – Ads – HD – 1 device
- $13.99 – No ads – HD – 2 devices
- $17.99 – No ads – 4K – 4 devices
That's conjoint analysis in action: quantifying perceived value through concrete choices.
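To show the part-worth logic in code, here's a minimal sketch (in Python, with made-up utility numbers purely for illustration) of how level utilities sum to a total utility for a streaming bundle:

```python
# Hypothetical part-worth utilities -- illustrative only; real values come
# from estimating a conjoint model on survey responses.
part_worths = {
    "price":      {"$9.99": 0.8, "$13.99": 0.2, "$17.99": -1.0},
    "ads":        {"Ads": -0.6, "No ads": 0.6},
    "resolution": {"HD": -0.3, "4K": 0.3},
    "devices":    {"1 device": -0.4, "2 devices": 0.1, "4 devices": 0.3},
}

def total_utility(profile: dict) -> float:
    """Total utility of a profile = sum of its levels' part-worths."""
    return sum(part_worths[attr][level] for attr, level in profile.items())

mid_plan = {"price": "$13.99", "ads": "No ads", "resolution": "HD", "devices": "2 devices"}
print(total_utility(mid_plan))  # 0.2 + 0.6 - 0.3 + 0.1 = 0.6
```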
What is the purpose of conjoint analysis?
The purpose of conjoint analysis is to measure consumer preferences in the way people actually make decisions.
Direct "importance" survey questions – like "How important is feature X?" – are useful for directional input, but they tend to cause respondents to overstate everything, meaning it's challenging to understand what actually provides the most value.
Conjoint studies help you estimate:
- Which product features drive choice and which are just nice-to-haves
- How sensitive people are to price changes (i.e., price sensitivity)
- The relative value of features when compared with each other
- The optimal product configuration for a target market segment
- Likely market share under different product configurations
- Needs-based segmentation and how value drivers differ by segment
Because the method of conjoint analysis relies on statistical analysis of repeated trade-offs, it often surfaces decision drivers that respondents can't easily articulate – the "real or hidden drivers" behind what they choose.
Conjoint analysis in marketing and market research
Conjoint analysis in marketing research sits at the decision end of the customer journey. You can run focus groups to explore language, motivations, and unmet needs, then use a conjoint survey to quantify how big those needs are and what people will pay to solve them.
Common use cases for market researchers and product teams include:
- Pricing research and conjoint analysis pricing – Estimate willingness to pay, test price ladders, and run conjoint pricing analysis for packaging decisions.
- Packaging and bundling – Decide what goes into Basic vs. Pro vs. Enterprise plans, and what to gate behind upgrades.
- Feature prioritization – Avoid building too many product features by identifying what moves decision-making, not just what sounds good.
- Product positioning research – Learn which combinations best fit different market segments.
- Market entry and competitive sets – Use market simulations to compare your concept against current options and predict market shares.
- Optimal product configuration – Combine utilities with business constraints, such as cost and feasibility (sometimes using linear programming techniques) to shortlist configurations that balance appeal and margin.
In each of these cases, conjoint analysis gets there with choice scenarios and statistical techniques rather than a single importance question.
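For the optimal product configuration use case above, here's a minimal sketch of the idea (hypothetical numbers, with brute-force filtering standing in for a formal linear program): shortlist the configurations that clear a margin floor, ranked by appeal.

```python
# Hypothetical candidate configurations with estimated total utility and unit margin.
candidates = [
    {"name": "Basic",      "utility": 1.1, "margin": 12.0},
    {"name": "Pro",        "utility": 1.8, "margin": 18.0},
    {"name": "Pro Plus",   "utility": 2.1, "margin": 9.0},
    {"name": "Enterprise", "utility": 2.4, "margin": 25.0},
]

MARGIN_FLOOR = 10.0  # business constraint: minimum acceptable margin

# Shortlist configurations that meet the margin floor, ranked by appeal.
shortlist = sorted(
    (c for c in candidates if c["margin"] >= MARGIN_FLOOR),
    key=lambda c: c["utility"],
    reverse=True,
)
for config in shortlist:
    print(config["name"], config["utility"], config["margin"])
```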
Types of conjoint analysis and how to choose one
There are several types of conjoint analysis. The right pick depends on how realistic you need the decision to be, how many attributes you have, and how much survey time you can spend.
Choice-based conjoint (CBC)
Choice-based conjoint analysis (CBC) is the most common modern approach. It's built around a repeated choice task: show a small set of alternatives and ask respondents to choose the one they would buy. In academic and policy contexts, you may see it called a discrete choice experiment.
Because it mirrors real shopping behavior, CBC is a strong default when you care about predicting market shares, price sensitivity, and market share simulations.
When CBC is a good fit
- You want realistic decisions, not just attitudes
- You need a market simulator for scenario planning
- You plan to test pricing, bundles, or product configurations
Adaptive conjoint analysis (ACA)
Adaptive conjoint analysis changes what each respondent sees based on their earlier answers. The idea is to focus on the most relevant comparisons, which can reduce the burden on respondents when there are many attributes.
When ACA is a good fit
- You have many attributes
- You're doing early-stage preference research
- You're willing to trade some realism for efficiency
Full profile conjoint analysis
Full profile conjoint analysis is closer to classic conjoint designs: respondents rate or rank complete profiles. It can work well when the decision is hard to frame as a pick-one purchase, but you still want respondents to react to full concepts.
When full profile conjoint analysis is a good fit
- You have a small number of attributes and can show the whole concept at once without overwhelming survey respondents
- You want feedback on complete product concepts, not just individual features
- The decision is closer to evaluating or ranking than making an immediate purchase
- You need something simpler to explain to stakeholders than choice-based models, and some loss of realism is acceptable
Menu-based conjoint analysis
Menu-based conjoint analysis is useful when respondents build a product by selecting add-ons or features from a menu. It's a natural fit for configurable offers, such as subscriptions, insurance riders, or enterprise software add-ons.
When menu-based conjoint analysis is a good fit
- The product is naturally configurable, and buyers "build" an option from add-ons
- You're testing bundling, packaging, and upgrade paths where respondents choose specific features from a menu
- You want to understand the impact of offering too many product features, and which ones people actively select vs. ignore
- You need outputs that translate cleanly into optimal product configuration decisions for product management
Volumetric conjoint analysis
Volumetric conjoint analysis extends the idea beyond "which option would you pick?" into "how many would you buy?" or "how much would you consume?" It's useful when demand volume matters as much as share.
When volumetric conjoint analysis is a good fit
- You care about how much people would buy, not just which option they'd pick
- The outcome you need is demand forecasting or revenue planning, not just preference
- Small changes in price sensitivity could materially affect total demand, not merely market share
- You want market simulations that estimate market share + volume together
Ranking, rating, constant-sum, and self-explicated
These are lighter-weight conjoint analysis techniques that can be easier on survey respondents:
- Ranking – order options from best to worst
- Rating – score each option
- Constant-sum – allocate points across features
- Self-explicated – respondents directly state desirability and importance
They can be helpful as simplification strategies, but they're also more vulnerable to "everything is important" answers.
When ranking, rating, constant-sum, and self-explicated are a good fit
- You want to keep the survey as simple as possible for respondents
- You're only presenting customers with a handful of options or features
A quick decision guide
- Choice-based conjoint analysis (CBC) – you need realistic purchase behavior and market simulations
- Adaptive conjoint analysis – you have more attributes than respondents can comfortably handle in CBC and can't reduce your scope
- Menu-based conjoint analysis – your product is naturally configurable
- Volumetric conjoint analysis – you need volume, not just share
- Simpler rating/ranking methods – the research is exploratory and you need speed

How to do conjoint analysis: a step-by-step guide
Doing conjoint analysis may sound like a technical challenge, but the workflow is actually straightforward. The hard part is making good choices early, because those choices determine whether your conjoint analysis results are actionable.
Step 1: Define the decision you need to make
Start with a single, testable question, like:
- Which bundle should we launch for SMBs?
- What price range maximizes revenue without killing adoption?
- Which features drive upgrade intent?
Conjoint data is most useful when it points to a real decision: a package, a price, or a configuration you can ship.
Step 2: Choose your audience and sampling plan
Be clear about who you're modeling. Your "market" isn't everyone – it's the people who could realistically buy.
At this stage, many teams also plan segmentation: market segments by use case, budget, or current solution. If your goal is to develop needs-based segmentation, you'll want enough respondents per segment to compare utilities.
Step 3: Select attributes and attribute levels
This is where most conjoint studies win or lose.
Pick attributes that are:
- Actionable – you can change them
- Understandable – people can evaluate them
- Realistic – levels exist in the real world
Decide how many attributes you can include. There's no magical number, but more attributes mean more cognitive load. If you include too many product features, respondents stop trading off and start guessing.
Here's a practical approach:
- Start broad with internal stakeholders and focus groups
- Narrow to the few attributes that matter most for the decision
- Keep levels realistic – especially price levels
Prioritize what's important while also reflecting the product features and functionality you offer.
Step 4: Pick a conjoint model and estimation approach
A few common statistical techniques used in conjoint analysis include:
- Multinomial logit (MNL) – A baseline choice model
- Hierarchical Bayes (HB) – Produces individual-level utilities and works well for CBC
- Latent class – Finds segments with distinct preference patterns, making it useful for needs-based segmentation
- Interaction models – Explore interaction effects when the value of one feature depends on another
You don't need to do the math by hand, but you do want to understand what the software is estimating, and what assumptions it makes.
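As a peek under the hood, here's a stripped-down multinomial logit estimation on a tiny made-up dataset (a sketch only – production tools handle proper level coding, design diagnostics, and typically Hierarchical Bayes on top of this):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: 4 choice tasks, each with 3 alternatives described by 3 dummy-coded
# attribute columns; y holds the index of the chosen alternative in each task.
X = np.array([
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 1, 0], [0, 0, 1], [1, 0, 1]],
    [[0, 1, 1], [1, 0, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
], dtype=float)
y = np.array([0, 2, 1, 1])

def neg_log_likelihood(beta):
    """Multinomial logit: choice probability = softmax of each alternative's utility."""
    utilities = X @ beta                                # shape: (tasks, alternatives)
    utilities -= utilities.max(axis=1, keepdims=True)   # for numerical stability
    probs = np.exp(utilities)
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(y)), y]).sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(X.shape[2]))
print(fit.x)  # estimated coefficients (part-worths for the dummy-coded levels)
```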
Step 5: Design the conjoint analysis experiment
For CBC, design choices include how many alternatives per task (often 2–4 plus an opt-out), how many tasks per respondent, whether to include a "none" option, and which constraints to apply to avoid impossible combinations.
Your design also affects sample size. A widely used rule of thumb from Richard Johnson and Bruce Orme links the minimum sample size (N) to the number of tasks per respondent (t), the number of alternatives per task (a), and the largest number of levels in any attribute (c), often expressed as:
N > 500c / (t × a)
Use this formula as a starting point, then sanity-check it against your expected segmentation cuts and the precision you need.
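As a back-of-the-envelope check (a sketch of the rule of thumb above, not a substitute for a proper design test), you can compute the minimum sample size like this:

```python
import math

def minimum_sample_size(levels_max: int, tasks: int, alternatives: int) -> int:
    """Johnson/Orme rule of thumb: N > 500c / (t * a)."""
    return math.ceil(500 * levels_max / (tasks * alternatives))

# Example: the largest attribute has 4 levels, each respondent sees
# 10 choice tasks with 3 alternatives per task.
print(minimum_sample_size(levels_max=4, tasks=10, alternatives=3))  # 67
```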
Step 6: Pilot, then field
Pilot the conjoint analysis survey with a small sample first. Look for:
- Completion time
- Drop-off points
- Confusing attribute wording
- Straightlining or random clicking
If everything looks good, launch to your full sample; if not, fix the issues before going live.
Step 7: Clean and analyze survey data
Before you interpret conjoint analysis results, clean the survey data by removing speeders, checking for inconsistent responses, and confirming that respondents understood the task.
Once it's clean, analyze the survey data: estimate utilities, calculate relative importance, and run market simulations.
Step 8: Simulate scenarios and make a recommendation
It's at this stage that conjoint analysis starts to pay off. Use a market simulator to test:
- Price moves
- Feature swaps
- Bundle design
- Competitor matching scenarios
End with a clear decision: which configuration, which price, and which segment to prioritize for your product offering.
How to make a conjoint analysis survey
A good conjoint survey looks deceptively simple. Under the hood, it's careful wording, realistic levels, and a structure that keeps people engaged.
Here's a practical build checklist for a conjoint analysis survey:
Choose attributes you can act on
If you can't change it, don't test it. Conjoint analysis is for decisions, not trivia.
A tight set of attributes also helps you avoid respondent fatigue. If you're tempted to include more attributes, ask: will this change what we build, price, or position?
Define realistic attribute levels
Levels should feel plausible. If your pricing research tests prices far outside reality, your price sensitivity estimates won't generalize.
Also watch for hidden complexity: "24/7 support" sounds clear, but support quality can mean response time, channel, and expertise. Keep levels concrete.
Design choice tasks that feel human
When you're designing surveys for CBC, do the following:
- Keep each task readable on one screen
- Avoid dense text blocks
- Highlight what changes across options
Many teams include a short training task so respondents learn how the choice-based conjoint flow works.
Decide on opt-out logic
Opt-outs can improve realism, but they change interpretation. If you include "none," be ready to explain what "none" represents – e.g., no purchase, a delayed purchase, sticking with a current solution, or no need for a solution at all.
Keep branding consistent
Branding isn't just aesthetics. A cohesive survey experience improves trust and completion rates, especially for longer conjoint analysis surveys.
This is where tooling matters. In Checkbox, you can build complex surveys with logic, randomization, and consistent styling, then pair them with reporting workflows through your analytics stack.
Conjoint analysis example: picking the best SaaS plan bundle
Let's walk through a concrete conjoint analysis example you can adapt. This is a choice-based conjoint analysis for a SaaS product.
The scenario
You're launching a new analytics tool for small and mid-sized teams. You need to decide:
- Which plan bundles to offer
- What to include in each plan
- Where to set price points
This is classic product and pricing research.
Step 1: Choose attributes and levels
Here's a manageable design with five attributes and realistic levels:
- Price – $29, $49, or $79 per month
- Seats – 5, 10, or 25
- Support – Email, priority email, or chat + priority
- Integrations – 3, 10, or unlimited
- Data retention – 30 days, 12 months, or unlimited
These are concrete features respondents can picture. They also map directly to costs and packaging decisions.
Step 2: Build a choice task
A simple CBC choice task might look like:
Ask the question "Which plan would you choose?"
Option A
- $29 / month
- 5 seats
- Email support
- 3 integrations
- 30-day retention
Option B
- $49 / month
- 10 seats
- Priority email
- 10 integrations
- 12-month retention
Option C
- $79 / month
- 25 seats
- Chat + priority
- Unlimited integrations
- Unlimited retention
Optionally, you could add "None of these" depending on whether you want an opt-out as well – just make sure it's clear what people are selecting.
Respondents repeat this across multiple tasks. The design algorithm rotates combinations so you can estimate utility scores for each level.
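As a rough illustration of that rotation (a naive random design for clarity – real conjoint tools use balanced or D-efficient designs and can apply constraints), here's how tasks could be generated:

```python
import itertools
import random

# Attribute levels from the SaaS example
attributes = {
    "price": ["$29", "$49", "$79"],
    "seats": ["5", "10", "25"],
    "support": ["Email", "Priority email", "Chat + priority"],
    "integrations": ["3", "10", "Unlimited"],
    "retention": ["30 days", "12 months", "Unlimited"],
}

# Full factorial: every possible profile (3^5 = 243 combinations here)
profiles = [dict(zip(attributes, combo)) for combo in itertools.product(*attributes.values())]

def build_tasks(num_tasks: int = 10, alternatives_per_task: int = 3, seed: int = 7):
    """Draw distinct profiles for each choice task shown to a respondent."""
    rng = random.Random(seed)
    return [rng.sample(profiles, alternatives_per_task) for _ in range(num_tasks)]

for i, task in enumerate(build_tasks(), start=1):
    print(f"Task {i}: {[p['price'] for p in task]}")  # peek at the rotation
```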
Step 3: What insights should you expect?
After analysis, you'll typically get:
- Utility values for each attribute level, or part-worth utilities
- Relative importance of each attribute
- Preference scores for specific product configurations
- Willingness to pay estimates, often derived from the price coefficient
- Market simulations that predict market shares among bundles
Step 4: Example outputs (simplified)
Imagine your model estimates utility values for each level. A few takeaways jump out:
- People strongly dislike the top price – price sensitivity is real here.
- Retention and integrations carry a lot of relative value.
- Support matters, but less than you might expect – a perceived value vs. stated importance gap.
Now you can test bundles:
- Bundle 1 – $29, 5 seats, email, 3 integrations, 30 days
- Bundle 2 – $49, 10 seats, priority email, 10 integrations, 12 months
- Bundle 3 – $79, 25 seats, chat + priority, unlimited, unlimited
A market simulator would turn those totals into predicted choice shares. That's how you move from conjoint data to deciding what you should launch.
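Here's a minimal sketch of that last step, using hypothetical part-worths (illustrative numbers only, directionally consistent with the takeaways above) and the standard logit share-of-preference rule:

```python
import numpy as np

# Hypothetical part-worth utilities (illustrative only -- your estimates will differ)
part_worths = {
    "price":        {"$29": 0.9, "$49": 0.2, "$79": -1.1},
    "seats":        {"5": -0.2, "10": 0.1, "25": 0.1},
    "support":      {"Email": -0.1, "Priority email": 0.0, "Chat + priority": 0.1},
    "integrations": {"3": -0.5, "10": 0.2, "Unlimited": 0.3},
    "retention":    {"30 days": -0.6, "12 months": 0.2, "Unlimited": 0.4},
}

bundles = {
    "Bundle 1": {"price": "$29", "seats": "5",  "support": "Email",           "integrations": "3",         "retention": "30 days"},
    "Bundle 2": {"price": "$49", "seats": "10", "support": "Priority email",  "integrations": "10",        "retention": "12 months"},
    "Bundle 3": {"price": "$79", "seats": "25", "support": "Chat + priority", "integrations": "Unlimited", "retention": "Unlimited"},
}

# Total utility = sum of each bundle's level part-worths
totals = {name: sum(part_worths[attr][lvl] for attr, lvl in spec.items())
          for name, spec in bundles.items()}

# Logit (share-of-preference) rule: softmax of total utilities
exp_totals = np.exp(np.array(list(totals.values())))
shares = exp_totals / exp_totals.sum()
for name, share in zip(totals, shares):
    print(f"{name}: total utility {totals[name]:+.2f}, predicted share {share:.1%}")
```

In this made-up example, Bundle 2 wins on predicted share because its feature gains outweigh its price penalty – exactly the kind of trade-off the simulator is meant to surface.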
Interpreting conjoint analysis results
Conjoint analysis results can look intimidating if you're new to the outputs. Most reports boil down to four things.
1. Utilities (part-worths) and utility scores
Utilities are the model's estimate of how much each level contributes to preference. They're sometimes called part-worth utilities, preference scores, or utility values.
A few practical rules:
- Utilities are relative, not absolute
- Bigger positive numbers mean a level is more preferred
- Negative numbers mean a level is less preferred
- You sum them to get the total utility score for a profile
This workflow of converting perceptions into numbers is the heart of conjoint analysis.
2. Relative importance (attribute importance)
Relative importance tells you how much an attribute influences choices compared to others.
It's usually based on the utility range within each attribute (max minus min), scaled to 100%. That's why choosing realistic levels matters – if your price range is huge, price will dominate.
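A quick sketch of that calculation with hypothetical part-worths (illustrative numbers only):

```python
# Hypothetical part-worths per attribute (illustrative only)
part_worths = {
    "price":        {"$29": 0.9, "$49": 0.2, "$79": -1.1},
    "integrations": {"3": -0.5, "10": 0.2, "Unlimited": 0.3},
    "support":      {"Email": -0.1, "Priority email": 0.0, "Chat + priority": 0.1},
}

# Importance = an attribute's utility range (max - min) as a share of all ranges
ranges = {attr: max(vals.values()) - min(vals.values()) for attr, vals in part_worths.items()}
total = sum(ranges.values())
for attr, r in ranges.items():
    print(f"{attr}: {r / total:.0%} relative importance")
```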
Use relative importance as a directional guide, not a law of physics, as small differences aren't always meaningful.
3. Willingness to pay and pricing insights
In conjoint analysis for pricing, you often translate utilities into dollar values using the price coefficient. This is where pricing conjoint analysis gets practical:
- What's the implied value of "Unlimited retention"?
- How much are people willing to pay for 10 more integrations?
- Which feature upgrades justify a price increase?
These are the insights product management teams use to defend packaging decisions.
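One common simplification – assuming a roughly linear price utility so the price coefficient acts as "utility per dollar" – turns a utility gap into an implied dollar value. A hypothetical sketch:

```python
# Hypothetical estimates (illustrative only)
price_utility_per_dollar = -0.04   # e.g., estimated slope of the price attribute
retention_upgrade_gain = 0.6       # utility lift from 12-month to unlimited retention

# Implied willingness to pay = utility gained / utility lost per extra dollar
wtp = retention_upgrade_gain / abs(price_utility_per_dollar)
print(f"Implied WTP for unlimited retention: about ${wtp:.0f}/month")  # ~$15/month
```

Treat these conversions as directional; willingness-to-pay estimates are sensitive to how price enters the model.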
4. Market simulations and predicted market shares
A market simulator combines utilities across configurations and estimates choice probabilities. That's where you can:
- Predict market shares across bundles
- Test hypotheses like, "What if we raise the price by $10?"
- Model market share shifts when features change
If you're presenting to stakeholders, market simulations are often the most persuasive output because they connect directly to strategic insights and planning.
Best practices (and pitfalls) to avoid in conjoint studies
Even market research survey experts get tripped up by conjoint when the design is rushed. Here's what to watch.
Don't include too many attributes
"How many attributes" can you include? Enough to represent the real decision, but not so many that respondents shut down.
Warning signs you have too many:
- Long completion times
- Rising drop-off mid-survey
- Respondents choosing randomly
- Flat utilities that don't make sense
If you truly need more attributes, use simplification strategies:
- Merge overlapping attributes
- Reduce levels
- Use adaptive conjoint analysis
- Split research into two studies (core offer vs. add-ons)
Avoid unrealistic combinations
If your design allows impossible bundles, you'll estimate preferences for products that can't exist. Use constraints, especially when testing actual features with business rules (e.g., "Unlimited integrations" might only exist with higher tiers).
Get the price levels right
This is the biggest driver of bad pricing research. If levels are too low, you'll overestimate demand. If levels are too high, you'll overestimate price sensitivity.
Ground price levels in reality: current pricing, competitor ranges, and the price points you'd genuinely consider.
Plan for segmentation up front
If you want market segments or needs-based segmentation, plan sample size accordingly. Otherwise, you end up with beautiful overall results and weak segment readouts.
Treat small differences carefully
Conjoint outputs are estimates. Don't over-interpret tiny gaps in preference scores, especially if you haven't tested statistical stability.
Use a sanity check question
Add a simple holdout task or direct check to confirm the results make sense. If the model says people prefer what you believe to be the worst option, something is off in wording, sampling, or data quality.
Build the story, not just the spreadsheet
The best conjoint analysis results aren't just charts – they're a clear narrative:
- What people value
- What they trade off
- What you should build, price, and position
- What happens under different market simulations
Creating a detailed analysis, with examples and visualizations of the data, is what will help persuade stakeholders.
Conjoint analysis software
A lot of teams search for conjoint analysis tools because they assume the hardest part is the model. In practice, the survey build and the data workflow can cause more headaches than the math.
When evaluating a conjoint analysis tool, look for:
- Survey design flexibility – complex logic, randomization, quotas, and piping
- Brand control – white-label capabilities and consistent styling
- Data control – especially if you need on-premise or strict governance
- Export options – clean exports for your preferred conjoint analysis methods
- Reporting – dashboards for sharing results internally
If you're constrained by security requirements, customization needs, or data sovereignty, you may need a different setup from the standard platforms, such as on-premise research software like Checkbox.
Why run your conjoint survey in Checkbox?
Conjoint studies live or die on execution: clean survey flow, reliable data collection, and outputs your team can actually use.
Checkbox is built for research-driven teams that need flexibility and control – especially when surveys need to meet strict IT, governance, or branding requirements.
Here's what you can do faster and more reliably in Checkbox:
- Build a polished conjoint analysis survey with advanced logic and consistent branding
- Scale distribution without worrying about respondent limits
- Keep control of where data lives, including deployment options that support data sovereignty
- Export survey data cleanly for your preferred statistical techniques and conjoint model workflows
- Share results using built-in analytics and reporting features
If conjoint is part of your broader research program, it also pairs naturally with market research platform capabilities for ongoing studies and experimentation, as well as voice of the customer tools when you want conjoint insights alongside continuous VoC signals.
Final thoughts
Conjoint analysis is one of the most practical ways to quantify consumer preferences. It helps you move beyond "people like it" into measurable trade-offs: what features matter, how much they matter, and what people will pay for them.
If you're planning a conjoint experiment – or you want to upgrade how you run product and pricing research – Checkbox can help you build and field a high-quality conjoint survey with the flexibility, security, and branding control serious research demands.
Ready to put conjoint analysis to work? Request a demo to see how quickly you can launch your next study.
Conjoint analysis FAQs
What do teams use conjoint analysis for?
Most teams use conjoint studies to support decisions like:
- Optimal product configuration (which bundle to launch)
- Price sensitivity and willingness to pay (conjoint analysis for pricing)
- Attribute importance and relative importance of features
- Predicting market shares using a market simulator
- Needs-based segmentation (different utilities by segment)
- Market simulations for "what if" scenario planning
When should you use conjoint analysis?
Use conjoint analysis when you need to make a decision involving trade-offs – especially pricing, packaging, product configurations, or feature sets. It's most useful when direct closed-ended questions like "What's most important?" aren't giving you clear prioritization.
What is conjoint analysis in marketing research?
Conjoint analysis in marketing research is a survey-based method that measures how people value attributes of a product or service by making trade-offs. It's commonly used for pricing research, bundle design, feature prioritization, and product positioning research.


