Mintel Reports – analysis techniques

Learn about the analysis techniques our expert analysts employ, and how they do statistical forecasting for Mintel Reports.

Mintel employs numerous quantitative and qualitative data analysis techniques to enhance the value of our consumer research. The techniques used vary from one report to another. This article will give you an overview of the most common techniques used by our expert analysts to create their reports.

📌 Note: Learn more about our research methods by reading this article.

💡 Tip: As a Mintel Reports subscriber you can find all reports within your subscription by applying the Report content type filter on Insight Home.


Repertoire Analysis

This technique is used to create consumer groups based on reported behaviours or attitudes. Consumer responses across a list of survey items are tallied into a single repertoire variable. The repertoire variable summarises the number of occurrences across the list of survey items.

📝 Example: A repertoire of brand purchasing might produce groups of those who purchase 1-2 brands, 3-4 brands, and 5 or more brands. Each subgroup should be large enough (ie N=75+) to analyse.
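The tallying step can be sketched in a few lines of Python; the purchase data below is hypothetical:

```python
import numpy as np

# Hypothetical 0/1 purchase matrix: rows = respondents, columns = brands
rng = np.random.default_rng(0)
purchases = rng.integers(0, 2, size=(300, 6))

# Repertoire variable: number of brands each respondent purchases
repertoire = purchases.sum(axis=1)

# Bucket respondents into the groups described above (0 purchases excluded)
groups = {
    "1-2 brands": np.sum((repertoire >= 1) & (repertoire <= 2)),
    "3-4 brands": np.sum((repertoire >= 3) & (repertoire <= 4)),
    "5+ brands": np.sum(repertoire >= 5),
}
```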

Cluster Analysis

This segmentation technique assigns individual respondents to groups, called clusters, based on their survey responses, so that respondents within the same cluster are more similar to one another than to respondents in other clusters.


Mintel typically uses a 2-step approach to clustering.

  1. Factor analysis to reduce a set of attitudinal survey questions down to a condensed number of factors.

  2. K-means cluster analysis to group respondents into segments based on their response patterns across the factors.
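The two steps above can be sketched with scikit-learn on hypothetical attitudinal data (the statement count, factor count, and segment count are illustrative, not Mintel's actual settings):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Hypothetical Likert-scale responses: 500 respondents x 12 attitude statements
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(500, 12)).astype(float)

# Step 1: factor analysis reduces the 12 statements to 3 underlying factors
factors = FactorAnalysis(n_components=3, random_state=1).fit_transform(responses)

# Step 2: k-means on the factor scores groups respondents into 4 segments
segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(factors)
```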

Correspondence Analysis

This statistical visualisation method is used to display the associations between rows (eg image attributes) and columns (eg brands, products) of a two-way contingency table in a perceptual map that is easy to understand by interpreting proximities.

The significance of the relationship between a brand and its associated attribute, for example, is measured using the Chi-square test. If two brands show similar patterns regarding their associated attributes, they are assigned similar scores on underlying dimensions and will be displayed close to each other in the perceptual map.
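A minimal sketch, assuming a hypothetical brand-by-attribute contingency table: the Chi-square test measures the overall association, and a singular value decomposition of the standardised residuals yields the coordinates plotted on the perceptual map.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical brand (rows) x attribute (columns) table of association counts
table = np.array([[120, 40, 60],
                  [30, 110, 55],
                  [50, 45, 130]], dtype=float)

# Chi-square test of the brand/attribute relationship
chi2, p, dof, expected = chi2_contingency(table)

# Correspondence analysis: SVD of the standardised residuals
P = table / table.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
residuals = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
row_coords = (U * s) / np.sqrt(r)[:, None]     # brand positions on the map
col_coords = (Vt.T * s) / np.sqrt(c)[:, None]  # attribute positions on the map
```

Brands with similar attribute patterns receive similar row coordinates and therefore sit close together on the map.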

CHAID Analysis

A CHAID analysis (Chi-squared Automatic Interaction Detection) is used to highlight key target groups in a sample by identifying which sub-group is most likely to show a particular characteristic (eg interest in trying a new product).

It divides the sample into a series of subgroups that share similar characteristics (eg age, gender) and allows us to identify which combination of characteristics shows the highest response rate for the target variable (eg interest in trying a new product). The first predictor category on which the sample is split (eg age) is the one most associated with the response variable, ie it gives the most differentiating groups of respondents. Each subgroup is then split further until no significantly discriminating predictor remains.

The output is a tree whose branches are the predictor variables that split the sample into discriminating groups.
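The first split can be sketched as follows, using hypothetical predictors: the predictor whose chi-square test against the target gives the smallest p-value is chosen first. (A full CHAID implementation would also merge predictor categories and recurse into each subgroup.)

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 400
# Hypothetical predictors and target ("interested in trying the new product");
# interest is constructed to be much higher among 16-34s
age_group = rng.choice(["16-34", "35-54", "55+"], size=n)
gender = rng.choice(["female", "male"], size=n)
interested = (age_group == "16-34") & (rng.random(n) < 0.7) | (rng.random(n) < 0.2)

def split_p_value(predictor, target):
    """Chi-square p-value for splitting the sample on this predictor."""
    cats = np.unique(predictor)
    table = np.array([[np.sum((predictor == c) & target),
                       np.sum((predictor == c) & ~target)] for c in cats])
    return chi2_contingency(table)[1]

# The first split uses the predictor most associated with the response
p_values = {"age": split_p_value(age_group, interested),
            "gender": split_p_value(gender, interested)}
first_split = min(p_values, key=p_values.get)
```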

Key Driver Analysis

A Key Driver Analysis is used to identify and prioritise different factors that can impact consumer attitudes and behaviours (eg customer satisfaction, likelihood to recommend a brand or switch providers) by assessing their relative importance. This can be achieved by using either a logistic regression or correlation analysis.

Logistic regression is a predictive analysis used to identify the relationship between one dependent variable (eg customer satisfaction) and one or more independent variables (eg quality of customer service, product range).

Correlation analysis describes the strength and nature of the relationship between a dependent variable of interest (eg overall customer satisfaction) and one or more independent variables (eg satisfaction with customer service, product range).
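A correlation-based sketch on hypothetical satisfaction data: drivers are ranked by the strength of their correlation with the outcome variable.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Hypothetical satisfaction ratings (1-10) with individual service aspects
service = rng.integers(1, 11, size=n).astype(float)
product_range = rng.integers(1, 11, size=n).astype(float)
price = rng.integers(1, 11, size=n).astype(float)

# Overall satisfaction is constructed to be driven mainly by service quality
overall = 0.7 * service + 0.2 * product_range + 0.1 * price + rng.normal(0, 1, n)

# Key driver analysis: rank drivers by |correlation| with the outcome
drivers = {"service": service, "product range": product_range, "price": price}
importance = {name: np.corrcoef(x, overall)[0, 1] for name, x in drivers.items()}
ranked = sorted(importance, key=lambda k: abs(importance[k]), reverse=True)
```

The logistic regression variant works the same way, except that the outcome is a binary variable (eg would/would not recommend) and importance is read from the fitted coefficients.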

TURF Analysis

A TURF (Total Unduplicated Reach & Frequency) analysis identifies the mix of features, attributes, or messages that will attract the largest number of unique respondents. It is typically used when the number of features or attributes must be limited, but the goal is still to reach the widest possible audience. By identifying the Total Unduplicated Reach, it is possible to maximise the number of people who find one or more of their preferred features or attributes in the product line.


The resulting output from TURF is additive, with each additional feature increasing the total reach. The chart is read from left to right, with each arrow indicating the incremental change in total reach when adding a new feature or attribute. The final bar represents the maximum reach of the total population when all shown features or attributes are offered.
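A common way to compute TURF is a greedy search that repeatedly adds the feature giving the largest gain in unduplicated reach; a sketch on hypothetical appeal data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical appeal matrix: rows = respondents, columns = candidate features;
# True means the respondent finds that feature appealing
appeal = rng.random((200, 6)) < 0.3

def greedy_turf(appeal, n_features):
    """Greedily add the feature that lifts unduplicated reach the most."""
    chosen, reached = [], np.zeros(appeal.shape[0], dtype=bool)
    for _ in range(n_features):
        gains = [(np.sum(reached | appeal[:, j]), j)
                 for j in range(appeal.shape[1]) if j not in chosen]
        best_reach, best_j = max(gains)
        chosen.append(best_j)
        reached |= appeal[:, best_j]  # these respondents are now reached
    return chosen, reached.mean()

features, reach = greedy_turf(appeal, 3)  # best 3-feature line-up and its reach
```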

Price Sensitivity Analysis

Price sensitivity analysis is a way of measuring how the price of a product affects consumer purchasing behaviour. The analysis helps identify the ideal price, as well as a range of acceptable prices, for a specific good or service among consumers. Four key price points are identified.

  • Point of Marginal Cheapness (PMC) is the price point at which perception of the product quality starts to decline. Pricing below this value may be detrimental to the product line sales.

  • Point of Marginal Expensiveness (PME) is the price point at which consumers question the value of the product given the costs. Pricing above this point may also be detrimental to the product line sales.

  • Optimal Price Point (OPP) is the price point at which an equal number of consumers feel that the price exceeds either their upper or lower cost limits.

  • Range of Acceptable Prices (RAP) is the range of prices that matches consumers’ expectations of what a product or service should cost.

The aggregate price points are plotted onto Price Maps to indicate the high/low price thresholds as well as the Optimal Price Point (OPP).
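The OPP calculation can be sketched as follows, using hypothetical "too cheap"/"too expensive" thresholds: it is the price at which the share of consumers finding the price too cheap equals the share finding it too expensive.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
# Hypothetical survey answers: each respondent's "too cheap" and
# "too expensive" price thresholds for the product, in GBP
too_cheap = rng.normal(3.0, 0.8, n)
too_expensive = rng.normal(7.0, 1.2, n)

# Share of respondents rejecting each candidate price at either end
prices = np.linspace(1, 10, 181)
share_too_cheap = np.array([(too_cheap >= p).mean() for p in prices])
share_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

# Optimal Price Point: where the two rejection curves cross, ie equal
# numbers feel the price falls outside their lower or upper limit
opp = prices[np.abs(share_too_cheap - share_too_expensive).argmin()]
```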

Statistical Forecasting

Statistical modelling

For the majority of reports for the United States, Canada, the United Kingdom, Germany and China, Mintel produces five-year central forecasts based on 'regression with ARIMA errors' which is a combination of two simple yet powerful statistical modelling techniques: regression and ARIMA (Auto-Regressive Integrated Moving Average). Regression allows us to model, thus predict, market sizes using exogenous information (eg GDP, unemployment). ARIMA allows us to model market sizes using endogenous information (lagged values). To estimate this type of model, Mintel uses the software R.

Historical market size data feeding into each forecast are collated in Mintel’s own market size database and supplemented by macro- and socio-economic data sourced from organisations such as the Economist Intelligence Unit and the Office for Budget Responsibility.

Within the forecasting process, we analyse relationships between actual market sizes and a selection of key economic and demographic determinants (independent variables) in order to identify those predictors that have the most influence on the market.

Factors used in a forecast are stated in the relevant report section alongside an interpretation of their role in explaining the development in demand for the product or market in question.

Qualitative insight

At Mintel we understand that historical data is limited in its capacity to predict the future state of markets on its own. Rich qualitative insights from industry experts regarding future events that might impact various markets therefore play an invaluable role in our post-statistical-modelling evaluation process.

As a result, the Mintel forecast complements a rigorous statistical process with in-depth market knowledge and expertise to allow for additional factors or market conditions outside of the capacity of the statistical forecast.

Graphic: Statistical Modelling + Qualitative Insight = Mintel Forecast.

The Mintel fan chart

Forecasts of future economic outcomes are always subject to uncertainty. In order to raise awareness amongst our clients and to illustrate this uncertainty, Mintel displays market size forecasts in the form of a fan chart.

The fan chart shows the actual market size for the past 5 or 6 years, in some cases a current year estimate, a 5-year or 6-year horizon central forecast (resulting from statistical modelling and qualitative insight), and the forecast’s prediction intervals (resulting from statistical modelling).

The prediction intervals represent the range of values within which the actual future market size will fall with a specified probability.

In general, based on our current knowledge of the historic market size data and the projections for key macro- and socio-economic measures used to create the forecast, we can say that the future actual market size will fall within the shaded fan with a probability of 95%. There is a small probability of 5% that it will fall outside these boundaries.

Since 95% is, in most applications, the threshold that defines whether we can accept or reject a statistical result, the outer limits of the 95% prediction interval can be seen as the forecast’s best and worst cases.
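Assuming normally distributed forecast errors, the 95% prediction interval is simply the central forecast plus or minus about 1.96 standard deviations of the forecast error (all figures below are hypothetical):

```python
from scipy.stats import norm

# Hypothetical central forecast for a market size (£m) and the standard
# deviation of its forecast error
central_forecast = 1000.0
forecast_sd = 40.0

# 95% prediction interval: the actual value falls in this range with p = 0.95
z95 = norm.ppf(0.975)  # ~1.96
lower = central_forecast - z95 * forecast_sd
upper = central_forecast + z95 * forecast_sd
```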

Weather analogy

To illustrate uncertainty in forecasting with an everyday example, let us assume the following weather forecast was produced based on the meteorologists’ current knowledge of weather conditions over the last few days, atmospheric observations, incoming weather fronts, etc.

A visualisation of a weather forecast for the upcoming week.

Now, how certain can we be that the temperature on Saturday will indeed be 15°C?

To state that the temperature in central London on Saturday will rise to exactly 15°C is possible, but one cannot be 100% certain of it.

To say the temperature on Saturday will be between 14°C and 17°C is a broader statement and much more probable.

In general, we can say that based on the existing statistical model, one can be 95% certain that the temperature on Saturday will be between 14°C and 17°C, and only 50% certain it will be between about 14.5°C and 15.5°C. Finally, there is a small probability of 5% that the actual temperature on Saturday will fall outside these boundaries, ie below 14°C or above 17°C.
