The independence assumption, although reasonable when examining cross-sectional data from single-factor experimental designs, is seldom verified by investigators. A Monte Carlo simulation experiment was designed to examine the relationship between the true Type I and Type II error probabilities of six multiple comparison procedures. Several factors were varied, including the pattern of means, the type of hypotheses tested, and the degree of dependence among the observations. The results show that, when independence is violated, none of the procedures controls α using the error rate per comparison. At the same time, as the correlation increases, so does the per-comparison power.
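As a rough illustration of the kind of simulation described, the sketch below estimates the empirical per-comparison Type I error rate of all pairwise t tests when observations within each group are dependent. It is not the study's actual design: the dependence structure (a shared within-group component producing equicorrelated observations), the number of groups, the sample size, and the nominal α = .05 are all assumed here for demonstration only.

```python
import numpy as np
from scipy import stats
from itertools import combinations

def simulate_groups(rng, k=6, n=10, rho=0.3, sigma=1.0):
    """Draw k groups of n equicorrelated observations with equal means (H0 true)."""
    # Dependence is induced by a shared group-level component plus independent noise,
    # giving a within-group correlation of rho.
    shared = rng.normal(0.0, np.sqrt(rho) * sigma, size=(k, 1))
    noise = rng.normal(0.0, np.sqrt(1.0 - rho) * sigma, size=(k, n))
    return shared + noise

def per_comparison_error(rho, k=6, n=10, alpha=0.05, reps=5000, seed=0):
    """Proportion of pairwise t tests rejecting H0 when all group means are equal."""
    rng = np.random.default_rng(seed)
    rejections = 0
    total = 0
    for _ in range(reps):
        data = simulate_groups(rng, k=k, n=n, rho=rho)
        for i, j in combinations(range(k), 2):
            _, p = stats.ttest_ind(data[i], data[j])  # test assumes independence
            rejections += p < alpha
            total += 1
    return rejections / total

for rho in (0.0, 0.2, 0.4):
    print(f"rho = {rho:.1f}: empirical per-comparison error = {per_comparison_error(rho):.3f}")
```

Under this assumed structure, the pooled within-group variance no longer reflects the true variability of the difference in group means, so the empirical error rate drifts away from the nominal α as rho grows, consistent with the abstract's claim that the procedures fail to control α when independence is violated.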