This material is published in Environmental and Ecological Statistics (2014, 21: 239–261), the only definitive repository of the content that has been certified and accepted after peer review. doi: 10.1007/s10651-013-0253-4

**Prospective evaluation of designs for analysis of variance without knowledge of effect sizes**

Estimation of design power requires knowledge of treatment effect size and error variance, which are often unavailable for ecological studies. In the absence of prior information on these parameters, investigators can compare an alternative design to a reference design for the same treatment(s) in terms of its precision at equal sensitivity. This measure of relative performance is the fractional error variance permitted to the alternative design if it is to just match the power of the reference.
Although first suggested as a design tool in the 1950s, it has received little
analysis and no uptake by environmental scientists or ecologists. We calibrate
relative performance against the better known criterion of relative efficiency,
in order to reveal its unique advantage in controlling sensitivity when
considering the precision of estimates. The two measures differ strongly for
designs with low replication. For any given design, relative performance at
least doubles with each doubling of effective sample size. We show that
relative performance is robustly approximated by the ratio of reference to
alternative *α* quantiles of the *F* distribution, multiplied by
the ratio of alternative to reference effective sample sizes. The proxy is easy
to calculate, and consistent with exact measures. Approximate or exact
measurement of relative performance serves a useful purpose in enumerating
trade-offs between error variance and error degrees of freedom when considering
whether to block random variation or to sample from a more or less restricted
domain.
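The proxy described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it assumes a balanced one-way ANOVA with *k* treatment levels and *n* replicates per level (so numerator df = *k* − 1 and error df = *k*(*n* − 1)), takes "effective sample size" to be the replication *n*, and interprets the "*α* quantile" as the upper-*α* critical value of the *F* distribution. The function name and parameterization are hypothetical.

```python
from scipy.stats import f  # F distribution quantiles

def rp_proxy(k, n_ref, n_alt, alpha=0.05):
    """Approximate relative performance of an alternative design versus a
    reference design, per the abstract's proxy:

        RP ~ (F_crit(reference) / F_crit(alternative))
             * (effective n of alternative / effective n of reference)

    Assumes a balanced one-way ANOVA with k treatment levels and n
    replicates per level, so numerator df = k - 1 and error df = k*(n - 1).
    The 'alpha quantile' is taken here as the upper-alpha critical value.
    """
    df1 = k - 1
    f_ref = f.ppf(1 - alpha, df1, k * (n_ref - 1))  # reference critical F
    f_alt = f.ppf(1 - alpha, df1, k * (n_alt - 1))  # alternative critical F
    return (f_ref / f_alt) * (n_alt / n_ref)

# Example: doubling replication from n = 2 to n = 4 with k = 4 treatments.
# Consistent with the abstract, the proxy at least doubles with each
# doubling of effective sample size (the critical-F ratio exceeds 1).
print(rp_proxy(k=4, n_ref=2, n_alt=4))
```

Because the reference design has fewer error degrees of freedom, its critical *F* value is larger, so the quantile ratio exceeds one and the proxy grows faster than the sample-size ratio alone.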

The full article is available from Environmental and Ecological Statistics.