
DESIGNED EXPERIMENTS: DO YOU KNOW WHAT POPULATION YOU ARE SAMPLING FROM?

Published online by Cambridge University Press:  21 June 2018

MARCIN KOZAK*
Affiliation:
Department of Botany, Warsaw University of Life Sciences – SGGW, Nowoursynowska 159, 02-776 Warsaw, Poland
Department of Qualitative and Quantitative Studies, University of Information Technology and Management in Rzeszow, Sucharskiego 2, 35-225 Rzeszów, Poland
HANS-PETER PIEPHO
Affiliation:
Biostatistics Unit, Institute of Crop Science, University of Hohenheim, Fruwirthstrasse 23, 70593 Stuttgart, Germany
*Corresponding author. Email: [email protected]

Summary

Consider a field experiment laid out in a randomized complete block design in which you study three fertilizer treatments for two winter wheat cultivars. One year, one location – the experiment is not repeated. You design it and then spend a lot of time and money conducting it. You cultivate the soil and take care of the plants; you worry about them; you never know what can happen, so you cannot wait for the crop to be harvested. And that day finally comes. The crop is harvested, and everything is fine. Here you are: everything went well, you have the data in hand, and now only a simple thing remains – analyse them. Well, yes, the experiment was conducted in one year, and you are aware that you cannot be sure the outcome would be the same next year or elsewhere, but never mind – it suffices to treat the conclusions as preliminary and get on with the interpretation. Why should you not? It was a properly designed experiment that took samples from the underlying infinite populations of the two winter wheat cultivars under the three fertilizer treatments studied. Statistics is here to help you out, isn't it? Well, it is not. Statistics will not help you if the experiment was poorly designed. The agricultural science literature seldom explains what populations are studied and what types of samples are taken in designed experiments. To fill this gap, we discuss various aspects of the sampling process in designed experiments. In doing so, we draw on survey sampling methodology, a statistical framework for studying finite populations; we do this because survey sampling has developed an advanced theory of sampling processes, and this background can help us understand the intrinsic aspects of sampling in designed experiments.
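To make the opening scenario concrete, the sketch below (not taken from the paper) shows one way the randomized complete block layout described above could be generated in Python. The number of blocks (four), the treatment labels and the cultivar names are hypothetical placeholders; within each block, every fertilizer-by-cultivar combination appears exactly once in an independently randomized order.

```python
# Minimal sketch, assuming a hypothetical 4 complete blocks:
# randomize the 3-fertilizer x 2-cultivar factorial of the example
# in a randomized complete block design (RCBD).
import itertools
import random

fertilizers = ["F1", "F2", "F3"]            # three fertilizer treatments (placeholders)
cultivars = ["CultivarA", "CultivarB"]      # two winter wheat cultivars (placeholders)
treatments = list(itertools.product(fertilizers, cultivars))  # 6 combinations

n_blocks = 4          # assumed number of complete blocks (replicates)
random.seed(2018)     # fixed seed so the layout is reproducible

layout = []
for block in range(1, n_blocks + 1):
    plots = treatments.copy()
    random.shuffle(plots)                   # fresh randomization within each block
    for plot, (fert, cult) in enumerate(plots, start=1):
        layout.append((block, plot, fert, cult))

for row in layout:
    print("block %d, plot %d: %s on %s" % row)
```

The randomization is done separately within each block, which is what distinguishes the RCBD from a completely randomized design; the code only illustrates the layout and says nothing about which population such an experiment samples from, which is the question the paper addresses.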

Type: Review
Copyright © Cambridge University Press 2018

