**Significance**: The
strength of evidence for an effect, measured by a *P*-value associated with the *F*-ratio
from analysis of variance. A significant effect has a small *P*-value, indicating a low probability of
making a Type I error. For example, *P*
< 0.05 means a less than 5% chance of mistakenly rejecting a true null
hypothesis. For many tests this would be considered a reasonable level of
safety for rejecting the null hypothesis of no effect, in favour of the model
hypothesis of a significant effect on the response. The significance of an
effect is not directly informative about the size of the effect. Thus an effect
may be statistically highly significant as a result of low residual variation,
yet have little biological significance as a result of a small effect size in
terms of the amount of variation between sample means or the slope of a
regression. A non-significant effect should be interpreted with reference to
the Type II error rate, which depends on the power of the test to detect
significant effects.
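The distinction between statistical and biological significance can be illustrated with a minimal sketch in Python (assuming `scipy` is available; the data are hypothetical). A mean difference of only 0.1 units becomes highly significant purely because residual variation is low:

```python
# Sketch: statistical significance despite a small effect size.
# Two hypothetical samples differ by only 0.1 units, but within-sample
# (residual) variation is very low, so the one-way ANOVA F-ratio is large.
from scipy import stats

group_a = [10.00, 10.02, 9.98, 10.01, 9.99, 10.03, 9.97, 10.00]
group_b = [x + 0.1 for x in group_a]  # same data shifted by 0.1 units

f_ratio, p_value = stats.f_oneway(group_a, group_b)
print(f"F = {f_ratio:.1f}, P = {p_value:.2e}")
# P is far below 0.05, so the null hypothesis is rejected; yet the
# 0.1-unit difference between means may be biologically trivial.
```

Here the *P*-value quantifies only the strength of evidence against the null hypothesis; whether a 0.1-unit effect matters is a separate, biological judgement.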

Doncaster, C. P. & Davey, A. J. H. (2007) *Analysis of Variance and Covariance: How to
Choose and Construct Models for the Life Sciences*. Cambridge: Cambridge
University Press.

http://www.southampton.ac.uk/~cpd/anovas/datasets/