Tests of statistical significance are often used by investigators in reporting the results of clinical research. Although such tests are useful tools, significance levels are not appropriate indices of the size or importance of differences in outcome between treatments. In small studies, a lack of "statistical significance" can be misinterpreted as evidence that no important difference exists. Confidence intervals are important but underused supplements to tests of significance for reporting the results of clinical investigations. Their usefulness is discussed here, and formulas are presented for calculating confidence intervals for the types of data commonly encountered in clinical trials.
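
As one illustration of the kind of interval such formulas produce, the sketch below computes a large-sample (Wald) confidence interval for the difference between two response proportions, a comparison commonly reported in clinical trials. It is a minimal sketch, not the article's own formula: the function name, the data, and the use of the standard normal-approximation interval are assumptions introduced here for illustration.

```python
from math import sqrt
from scipy.stats import norm

def proportion_diff_ci(x1, n1, x2, n2, level=0.95):
    """Large-sample (Wald) confidence interval for p1 - p2,
    where p1 = x1/n1 and p2 = x2/n2 are the observed response
    rates in two treatment groups. (Hypothetical helper for
    illustration; not the article's exact formula.)"""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Standard error of the difference under the normal approximation
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = norm.ppf(1 - (1 - level) / 2)  # e.g. about 1.96 for a 95% interval
    return diff - z * se, diff + z * se

# Hypothetical small trial: 15/50 responses on treatment A vs 9/50 on treatment B
low, high = proportion_diff_ci(15, 50, 9, 50)
print(f"95% CI for the difference in response rates: ({low:.3f}, {high:.3f})")
```

With these hypothetical counts the interval runs from roughly -0.05 to 0.29: the difference is not "statistically significant" at the 0.05 level, yet the interval shows the data remain compatible with a clinically important advantage, which is precisely the information a significance level alone conceals.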