In management, it is common to rely on market research and statistical analysis to support the most important decisions in a company's strategy.
Although top management does not have to be expert in statistics, they must know how to identify the most common mistakes in Market Research studies.
1st Mistake: Correlation doesn't prove a cause-effect relationship between the correlated variables
When a correlation is identified between two variables, you cannot automatically conclude that one reacts to the other, that is, that there is a cause-effect relationship between them.
Although it is possible that one of the variables has a direct impact on the other, correlation analysis alone is not enough to support that claim.
Example: Consider that there is a negative correlation between umbrella usage and beach towel usage in the summer.
Despite this negative correlation, you cannot conclude that if you force the population to use beach towels on a rainy day, they will automatically give up their umbrellas on that same rainy day.
In this example, the correlation actually results from a third variable not identified in the original analysis: the summer weather. The weather directly impacts both of the original variables, umbrella usage and beach towel usage, creating a negative correlation between them even though neither causes the other.
2nd Mistake: Misinterpreting the “Confidence Interval” of a Market Research Study
When a market research report is presented, it is common to rely only on the confidence level (often reported as the “Confidence Interval”) to evaluate whether the study's conclusions are reliable.
That is a big mistake, because the confidence level is only meaningful when presented together with the “Sampling Error” (the margin of error).
Why? It is easier to explain with examples:
Consider a study with a 95% confidence level and a 4% sampling error. Provided the sampling method was appropriate, such a study is usually very reliable, and the interpretation of those figures is:
If I repeated the same study with 100 other samples from the same universe, I would expect to get the same results, within a 4% margin of error, in about 95 of those 100 samples.
On the other hand, if the study has the same 95% confidence level but a 20% sampling error, the results are usually not reliable, because the interpretation becomes:
If I repeated the same study with 100 other samples from the same universe, I would expect to get the same results only within a 20% margin of error in about 95 of those 100 samples.
So, even with the same confidence level, the sampling error is critical to evaluating the quality of a Market Research study.
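The link between sampling error and study quality comes down to sample size. A small Python sketch using the standard margin-of-error formula for a proportion (assuming simple random sampling and the worst case p = 0.5; the 95% confidence level corresponds to z ≈ 1.96) shows why a 20% sampling error should raise suspicion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an estimated proportion p with sample size n,
    at roughly a 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample_size(moe, p=0.5, z=1.96):
    """Smallest sample size that achieves the given margin of error."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(required_sample_size(0.04))  # ~601 respondents for a 4% margin
print(required_sample_size(0.20))  # only ~25 respondents for a 20% margin
```

A study quoting a 20% sampling error at 95% confidence was likely run on a few dozen respondents, which is why its results are far less trustworthy than the same headline confidence level paired with a 4% margin.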