Mintel Reports – analysis techniques

Learn about the analysis techniques our expert analysts employ, and how they do statistical forecasting for Mintel Reports.

Updated this week

Mintel employs numerous quantitative and qualitative data analysis techniques to enhance the value of our consumer research. The techniques used vary from one report to another. This article will give you an overview of the most common techniques used by our expert analysts to create their reports.

📌 Note: Learn more about our research methods by reading this article.

💡 Tip: As a Mintel Reports subscriber, you can find all reports within your subscription by applying the Report content type filter on clients.mintel.com.


Repertoire Analysis

This is used to create consumer groups based on reported behaviour or attitudes. Consumer responses of the same value (or list of values) across a list of survey items are tallied into a single variable. The repertoire variable summarises the number of times that value (or those values) appears across the list of items.

📝 Example: A repertoire of brand purchasing might produce groups of those who purchase 1-2 brands, 3-4 brands and 5 or more brands. Each subgroup should be large enough (ie N=75+) to analyse.
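To make the tallying step concrete, here is a minimal sketch in Python (using pandas) with entirely hypothetical purchase data; Mintel's own processing may differ.

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, one column per brand,
# with 1 = "purchased in the last 12 months" and 0 = not purchased.
responses = pd.DataFrame({
    "brand_a": [1, 0, 1, 1, 0],
    "brand_b": [1, 1, 0, 1, 0],
    "brand_c": [0, 1, 0, 1, 1],
    "brand_d": [0, 0, 0, 1, 0],
    "brand_e": [1, 0, 0, 1, 0],
})

# Tally the target value (1 = purchased) across the list of survey items
# into a single repertoire variable per respondent.
repertoire = responses.eq(1).sum(axis=1)

# Band the counts into the groups used for analysis.
groups = pd.cut(repertoire, bins=[0, 2, 4, len(responses.columns)],
                labels=["1-2 brands", "3-4 brands", "5+ brands"])
print(groups.value_counts())
```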

Cluster Analysis

This technique assigns individual respondents to groups, called clusters, on the basis of one or more responses, so that respondents within the same cluster are in some sense closer or more similar to one another than to respondents in other clusters.
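As an illustration only, the sketch below applies k-means clustering (one common clustering algorithm, via scikit-learn) to hypothetical attitude ratings; the clustering method and settings used in a given report may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical attitude statements rated on a 1-5 agreement scale,
# one row per respondent.
rng = np.random.default_rng(42)
ratings = rng.integers(1, 6, size=(500, 8)).astype(float)

# Standardise so every statement contributes equally to the distance measure.
scaled = StandardScaler().fit_transform(ratings)

# Group respondents into k clusters so that members of a cluster answer
# more similarly to one another than to members of other clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)
print(np.bincount(kmeans.labels_))  # respondents per cluster
```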

Correspondence Analysis

This is a statistical visualisation method for picturing the associations between the rows (image attributes, attitudes) and columns (brands, products, segments, etc) of a two-way contingency table. It allows us to display the brand images (and/or consumer attitudes towards brands) associated with each brand covered in a survey in a joint space that is easy to understand. The significance of the relationship between a brand and its associated image is measured using the Chi-square test. If two brands have similar response patterns regarding their perceived images, they are assigned similar scores on the underlying dimensions and are displayed close to each other in the perceptual map.
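The sketch below shows the general mechanics on a hypothetical brand-by-image contingency table: a Chi-square test of the association, followed by a simple correspondence analysis (an SVD of the standardised residuals) to obtain the map coordinates. It is illustrative only, not Mintel's production code.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical brand x image contingency table: counts of respondents
# associating each image attribute with each brand.
table = np.array([
    [120,  40,  60],   # Brand A
    [ 30, 150,  45],   # Brand B
    [ 55,  50, 130],   # Brand C
])

# Chi-square test of the brand/image association.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.4f}")

# Simple correspondence analysis: an SVD of the standardised residuals
# gives the coordinates used to draw the perceptual map.
n = table.sum()
P = table / n
r, c = P.sum(axis=1), P.sum(axis=0)
residuals = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
row_coords = (U * s) / np.sqrt(r)[:, None]     # brand positions on the map
col_coords = (Vt.T * s) / np.sqrt(c)[:, None]  # image positions on the map
print(row_coords[:, :2])  # first two dimensions for a 2D perceptual map
```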

CHAID Analysis

CHAID (Chi-squared Automatic Interaction Detection), a type of decision-tree analysis, is used to highlight key target groups in a sample by identifying which sub-groups are more likely to show a particular characteristic. The analysis subdivides the sample into a series of subgroups that behave similarly with respect to a specific response variable, allowing us to identify which combinations of characteristics have the highest response rates for the target variable. It is commonly used to understand and visualise the relationship between a variable of interest, such as interest in trying a new product, and other characteristics of the sample, such as its demographic composition.
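As a rough illustration of the idea, the sketch below finds the first CHAID-style split on hypothetical data by testing each candidate predictor against the target with a chi-square test; a full CHAID implementation also merges similar categories and keeps splitting each subgroup recursively.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical sample: demographic characteristics plus the target response
# ("interested in trying the new product": yes/no).
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "age_group": rng.choice(["16-24", "25-34", "35-44", "45-54", "55+"], 1000),
    "gender": rng.choice(["female", "male"], 1000),
    "region": rng.choice(["north", "south", "east", "west"], 1000),
})
# Make younger respondents more likely to be interested, so there is a signal to find.
p_yes = np.where(df["age_group"].isin(["16-24", "25-34"]), 0.6, 0.3)
df["interested"] = np.where(rng.random(1000) < p_yes, "yes", "no")

# Core idea behind a CHAID split: cross-tabulate each candidate predictor
# against the target, test the association with a chi-square test, and let
# the predictor with the smallest p-value define the first subdivision.
def best_split(data, target, predictors):
    p_values = {}
    for col in predictors:
        table = pd.crosstab(data[col], data[target])
        chi2, p, dof, expected = chi2_contingency(table)
        p_values[col] = p
    return min(p_values, key=p_values.get), p_values

split_var, p_values = best_split(df, "interested", ["age_group", "gender", "region"])
print(split_var, p_values)
print(pd.crosstab(df[split_var], df["interested"], normalize="index"))
```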

Key Driver Analysis

Key Driver Analysis can be a useful tool for prioritising between the different factors that may impact key performance indicators (KPIs), such as satisfaction, likelihood to switch providers and likelihood to recommend a brand. Using correlation analysis or regression analysis, we can understand which factors or attributes of a market have the strongest association with positive performance on the KPIs. This allows us to identify which factors or attributes are relatively more critical in a market category than others, and to ensure that often limited resources are focused on the main market drivers.
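For illustration, the sketch below runs both a simple correlation view and a multiple regression on hypothetical attribute ratings and a satisfaction KPI; the exact modelling approach used in a report may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical ratings data: attribute scores plus an overall satisfaction KPI.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 11, size=(400, 4)),
                  columns=["value_for_money", "customer_service",
                           "product_range", "ease_of_use"]).astype(float)
df["satisfaction"] = (0.5 * df["customer_service"] + 0.3 * df["value_for_money"]
                      + rng.normal(0, 1, 400))

# Key driver view 1: correlation of each attribute with the KPI.
print(df.corr()["satisfaction"].drop("satisfaction").sort_values(ascending=False))

# Key driver view 2: multiple regression, whose coefficients indicate the
# relative weight of each attribute while controlling for the others.
X = sm.add_constant(df.drop(columns="satisfaction"))
model = sm.OLS(df["satisfaction"], X).fit()
print(model.params.sort_values(ascending=False))
```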

TURF Analysis

TURF (Total Unduplicated Reach & Frequency) analysis identifies the mix of features, attributes or messages that will attract the largest number of unique respondents. It is typically used when the number of features or attributes must or should be limited, but the goal is still to reach the widest possible audience. By identifying the Total Unduplicated Reach, it is possible to maximise the number of people who find one or more of their preferred features or attributes in the product line. The resulting output from TURF is additive, with each additional feature increasing total reach. The chart is read from left to right, with each arrow indicating the incremental change in total reach when a new feature is added. The final bar represents the maximum reach of the total population when all of the features shown are offered.
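The sketch below illustrates the idea with a greedy TURF pass on hypothetical flavour-appeal data: at each step it adds the item that reaches the most respondents not already reached, reporting the incremental and cumulative unduplicated reach. (Exhaustive TURF evaluates all combinations; the greedy version shown here is only an approximation.)

```python
import numpy as np

# Hypothetical data: 1 if the respondent finds the flavour appealing, 0 otherwise.
rng = np.random.default_rng(7)
appeal = rng.integers(0, 2, size=(1000, 8))          # 1,000 respondents x 8 flavours
flavours = [f"flavour_{i}" for i in range(8)]

# Greedy TURF: repeatedly add the flavour that brings in the most respondents
# not already reached, tracking incremental and total (unduplicated) reach.
reached = np.zeros(len(appeal), dtype=bool)
chosen = []
for _ in range(3):                                    # build a 3-item line-up
    gains = [np.sum(~reached & (appeal[:, j] == 1)) if j not in chosen else -1
             for j in range(appeal.shape[1])]
    best = int(np.argmax(gains))
    chosen.append(best)
    reached |= appeal[:, best] == 1
    print(f"add {flavours[best]}: +{gains[best]} respondents, "
          f"total reach {reached.mean():.1%}")
```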

Price Sensitivity Analysis

Price sensitivity analysis shows consumer expectations about the pricing of a finished product. Consumers are asked to provide price points for the finished product. The aggregated price points are then plotted onto Price Maps to indicate the Point of Marginal Cheapness (PMC), the Point of Marginal Expensiveness (PME) and the Optimal Price Point (OPP).
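PMC, PME and OPP are typically derived from a Van Westendorp-style Price Sensitivity Meter, in which respondents give "too cheap", "cheap", "expensive" and "too expensive" price points; whether Mintel uses exactly this question set is an assumption here. The sketch below, on hypothetical data and using one common convention for the curve intersections, shows how the three points can be located.

```python
import numpy as np

# Hypothetical responses (prices in £): each respondent's "too cheap",
# "cheap", "expensive" and "too expensive" price points.
rng = np.random.default_rng(0)
too_cheap = rng.normal(2.0, 0.5, 500)
cheap = too_cheap + rng.uniform(0.5, 1.0, 500)
expensive = cheap + rng.uniform(0.5, 1.5, 500)
too_expensive = expensive + rng.uniform(0.5, 1.5, 500)

grid = np.linspace(0, 10, 1001)  # price grid on which the curves are evaluated

# Cumulative curves: share of respondents rating each grid price as...
pct_too_cheap = (too_cheap[:, None] >= grid).mean(axis=0)          # falls as price rises
pct_too_expensive = (too_expensive[:, None] <= grid).mean(axis=0)  # rises with price
pct_not_cheap = 1 - (cheap[:, None] >= grid).mean(axis=0)
pct_not_expensive = 1 - (expensive[:, None] <= grid).mean(axis=0)

def crossing(a, b):
    """Grid price where the two curves are closest, approximating their crossing."""
    return grid[np.argmin(np.abs(a - b))]

pmc = crossing(pct_too_cheap, pct_not_cheap)           # Point of Marginal Cheapness
pme = crossing(pct_too_expensive, pct_not_expensive)   # Point of Marginal Expensiveness
opp = crossing(pct_too_cheap, pct_too_expensive)       # Optimal Price Point

print(f"PMC £{pmc:.2f}  PME £{pme:.2f}  OPP £{opp:.2f}")
```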

Statistical Forecasting

Statistical modelling

For the majority of reports for the United States, Canada, the United Kingdom, Germany and China, Mintel produces five-year central forecasts based on 'regression with ARIMA errors', a combination of two simple yet powerful statistical modelling techniques: regression and ARIMA (Auto-Regressive Integrated Moving Average). Regression allows us to model, and thus predict, market sizes using exogenous information (eg GDP, unemployment). ARIMA allows us to model market sizes using endogenous information (their own lagged values). To estimate this type of model, Mintel uses the software R.

Historical market size data feeding into each forecast are collated in Mintel’s own market size database and supplemented by macro- and socio-economic data sourced from organisations such as the Economist Intelligence Unit and the Office for Budget Responsibility.

Within the forecasting process, we analyse relationships between actual market sizes and a selection of key economic and demographic determinants (independent variables) in order to identify those predictors that have the most influence on the market.

Factors used in a forecast are stated in the relevant report section alongside an interpretation of their role in explaining the development in demand for the product or market in question.
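Mintel estimates these models in R. Purely as an illustration of the idea, the sketch below fits a regression with ARIMA errors to hypothetical annual data in Python using statsmodels (SARIMAX with exogenous regressors), then produces a five-year central forecast from assumed projections of the predictors.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical annual data: market size (the series to forecast) plus
# exogenous predictors such as GDP and unemployment.
years = pd.period_range("2010", "2024", freq="Y")
rng = np.random.default_rng(3)
gdp = np.linspace(100, 140, len(years)) + rng.normal(0, 2, len(years))
unemployment = np.linspace(8, 5, len(years)) + rng.normal(0, 0.3, len(years))
market_size = 50 + 0.8 * gdp - 1.5 * unemployment + rng.normal(0, 3, len(years))

exog = pd.DataFrame({"gdp": gdp, "unemployment": unemployment}, index=years)
y = pd.Series(market_size, index=years)

# Regression with ARIMA errors: the exogenous regressors capture the
# macroeconomic drivers, while the ARIMA(1,1,1) term models the remaining
# autocorrelation in the market size series itself.
results = SARIMAX(y, exog=exog, order=(1, 1, 1)).fit(disp=False)

# Five-year central forecast, using assumed projections for the predictors.
future_exog = pd.DataFrame({"gdp": np.linspace(142, 150, 5),
                            "unemployment": np.linspace(4.9, 4.5, 5)},
                           index=pd.period_range("2025", "2029", freq="Y"))
forecast = results.get_forecast(steps=5, exog=future_exog)
print(forecast.predicted_mean)
```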

Qualitative insight

At Mintel we understand that historic data alone cannot determine the future state of markets. Rich qualitative insights from industry experts regarding future events that might impact various markets therefore play an invaluable role in our post-statistical-modelling evaluation process.

As a result, the Mintel forecast complements a rigorous statistical process with in-depth market knowledge and expertise, allowing for additional factors or market conditions that fall outside the scope of the statistical model.

[Graphic: Statistical Modelling + Qualitative Insight = Mintel Forecast]

The Mintel fan chart

Forecasts of future economic outcomes are always subject to uncertainty. In order to raise awareness amongst our clients and to illustrate this uncertainty, Mintel displays market size forecasts in the form of a fan chart.

The fan chart shows the actual market size for the past 5 or 6 years, in some cases a current-year estimate, a central forecast over a 5- or 6-year horizon (resulting from statistical modelling and qualitative insight), and the forecast's prediction intervals (resulting from statistical modelling).

The prediction intervals represent the range of values within which the actual future market size will fall with a specific probability.

As a general conclusion: based on our current knowledge of the historic market size data, as well as the projections for key macro- and socio-economic measures used to create the forecast, we can say that the actual future market size will fall within the shaded fan with a probability of 95%. There is a small, 5% probability that the actual future market size will fall outside these boundaries.

Since 95% is, in most applications, the threshold that determines whether a statistical result is accepted or rejected, the outer limits of the 95% prediction interval can be seen as the forecast's best and worst cases.
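As a purely illustrative sketch of where such intervals come from, the snippet below fits a simple ARIMA model to hypothetical market size data and extracts the 95% and 50% prediction intervals that would form the outer and inner bands of a fan chart.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical market size history (index values) for the past ten years.
y = pd.Series([88, 90, 93, 96, 100, 103, 101, 106, 110, 113],
              index=pd.period_range("2015", "2024", freq="Y"))

results = SARIMAX(y, order=(1, 1, 1)).fit(disp=False)
forecast = results.get_forecast(steps=5)

# Prediction intervals for the fan chart: the actual future market size is
# expected to fall inside the 95% band with 95% probability, and inside the
# narrower 50% band with 50% probability.
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))   # 95% interval (outer edge of the fan)
print(forecast.conf_int(alpha=0.50))   # 50% interval (darker inner band)
```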

Weather analogy

To illustrate uncertainty in forecasting with an everyday example, let us assume the following weather forecast was produced based on the meteorologists' current knowledge of weather conditions over the last few days, atmospheric observations, incoming weather fronts, etc.

[Image: a weather forecast for the upcoming week, showing 15°C for Saturday]

Now, how certain can we be that the temperature on Saturday will indeed be 15°C?

To state that the temperature in central London on Saturday will rise to exactly 15°C is possible, but one cannot be 100% certain of that.

To say the temperature on Saturday will be between 14°C and 17°C is a broader statement and much more probable.

In general, we can say that based on the existing statistical model, one can be 95% certain that the temperature on Saturday will be between 14°C and 17°C, and only 50% certain it will be between about 14.5°C and 15.5°C. Finally, there is a small, 5% probability that the actual temperature on Saturday will fall outside these boundaries and thus be below 14°C or above 17°C.
