The important distinction between an ANOVA and a t-test is scope: a t-test compares the means of exactly two groups, while an ANOVA lets you compare three or more groups, or several factors at once. With an ANOVA, if you want to test the significance of an independent variable, you look at how much of the variation in the scores it explains relative to the other factors and the leftover noise.

Let us say that you're looking at the effects of race on the scores in your study. If you want to tell whether race, gender, education, and/or income make a difference to your NPS results, the right tool depends on the shape of the question: comparing two groups on a numeric score calls for a t-test, while comparing counts in categories (promoters versus detractors, say) calls for a chi-square test.

Chi-square tests, unlike t-tests, work on counts rather than means: they compare the frequencies you observed in each category against the frequencies you would expect if group membership made no difference. For instance, if one group produced noticeably more promoters than detractors, a chi-square test tells you how unlikely a split that lopsided would be under pure chance. (It wouldn't be the right tool for comparing mean scores across racial groups; that's a job for a t-test or an ANOVA.)
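
As a rough sketch of the mechanics, here is a chi-square statistic computed by hand for a 2×2 table. All counts are invented for illustration; a real analysis would use your own survey tallies.

```python
# Hypothetical counts: promoters vs. detractors in two survey groups.
observed = [[30, 20],   # group A: 30 promoters, 20 detractors
            [18, 32]]   # group B: 18 promoters, 32 detractors

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected cell count under independence: row total * column total / grand total.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

# df = (rows - 1) * (cols - 1) = 1; the 5% critical value for df = 1 is about 3.841.
print(round(chi_sq, 3), "significant" if chi_sq > 3.841 else "not significant")
```

With these made-up counts the statistic lands above the 3.841 cutoff, so the split between the groups would be called significant at the 0.05 level.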

What about a t-test? A t-test tells you whether the difference between the means of the two data sets you're comparing is statistically significant. In the case of race, it would tell you whether the mean score of your white respondents differs significantly from the mean score of your black respondents, or whether a gap of that size could plausibly be down to sampling noise.

Likewise, if you were looking at the effect of sex on the scores, a t-test would tell you whether there is a significant difference between the female respondents' mean and the male respondents' mean (say, if females tend to score higher).
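
The sex comparison above can be sketched as a Welch's t statistic computed by hand. The 0-to-10 scores here are invented for illustration:

```python
import math
from statistics import mean, variance

# Hypothetical likelihood-to-recommend scores (0-10) for two groups.
female = [9, 8, 10, 7, 9, 8, 9, 10]
male   = [7, 6, 8, 9, 7, 6, 8, 7]

# Welch's t: difference in means divided by the combined standard error.
se = math.sqrt(variance(female) / len(female) + variance(male) / len(male))
t_stat = (mean(female) - mean(male)) / se
print(round(t_stat, 2))
```

With roughly 14 degrees of freedom, a two-sided 5% test needs |t| above about 2.145, so a statistic like this one would count as a significant difference between the groups.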

The thing is, a t-test doesn't just describe a difference between these groups; it asks whether that difference could have arisen by chance. Under the null hypothesis, meaning no real trend, you would expect the male and female data sets to have roughly the same mean once you control for income and the like, so any gap you observe has to be weighed against that baseline.

If the test does find a difference, that points to an association between the grouping variable and your scores. But significance alone doesn't tell you how big that association is: a statistically significant difference can still be a small effect in practical terms.
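
One common way to put a number on the size of such a difference, separately from its significance, is Cohen's d: the gap between the means in units of pooled standard deviation. The scores here are invented for illustration:

```python
import math
from statistics import mean, variance

# Hypothetical 0-10 scores for two equally sized groups.
group_a = [9, 8, 10, 7, 9, 8, 9, 10]
group_b = [7, 6, 8, 9, 7, 6, 8, 7]

# Pooled standard deviation (equal group sizes, so a simple average of variances).
pooled_sd = math.sqrt((variance(group_a) + variance(group_b)) / 2)
cohens_d = (mean(group_a) - mean(group_b)) / pooled_sd
print(round(cohens_d, 2))
```

By the usual rules of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, so the same p value can sit on top of very different effect sizes depending on your sample size.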

That, roughly, is what happens when you use an ANOVA in the race example above. You compare the variation between the group means (white respondents, black respondents, and any other groups in your data) against the variation within each group. If the differences between the group means are large relative to the spread inside the groups, the F statistic will be large, and you'll expect the result to come out statistically significant.
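
The between-versus-within comparison can be sketched as a one-way ANOVA F statistic computed by hand. The three groups and their scores are invented for illustration:

```python
from statistics import mean

# Hypothetical 0-10 scores for three groups (e.g., three education levels).
groups = [
    [8, 9, 7, 9, 8],
    [6, 7, 7, 5, 6],
    [9, 10, 8, 9, 10],
]

all_scores = [x for g in groups for x in g]
grand_mean = mean(all_scores)

# Between-group sum of squares: squared deviations of group means, weighted by size.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: squared deviations inside each group.
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1               # 2
df_within = len(all_scores) - len(groups)  # 12
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 2))
```

The 5% critical value of F with (2, 12) degrees of freedom is about 3.89, so an F statistic of this size would mean the group means differ far more than the within-group noise can explain.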

The conventional threshold for significance is a p value of 0.05. If your p value falls below it, you conclude that a difference this large would turn up less than 5% of the time by chance alone, and you call the result significant. If it falls above the threshold, you haven't shown that there is no trend; you've simply failed to find evidence of one.

And significance is not the whole story: a result can clear the 0.05 bar while the underlying association is still weak. In many cases, the association between the race variable and the outcome turns out to be negligible even when the p value looks good.

If you are going to use an ANOVA in a case like this, it is worth checking whether a chi-square test fits your data better, since counts of categorical outcomes are its home turf. An ANOVA is better used for comparing the means of a numeric dependent variable across three or more groups, or across several independent factors at once.