Much clinical research is concerned with the extent to which one or more factors affect the occurrence of an outcome. The factor may be dichotomous, in which case there is only one increment, or continuous, with multiple increments. Results of a study may be expressed as the comparative risk for occurrence of the outcome with incremental change in the factor. The most common expressions of comparative risk in the medical literature are the risk ratio and the odds ratio. The risk ratio is a ratio of probabilities, which are themselves ratios. The numerator of a probability is the number of cases with the outcome, and the denominator is the total number of cases. The risk ratio lends itself to direct intuitive interpretation. For example, if the risk ratio equals X, then the outcome is X‐fold more likely to occur in the group with the factor compared with the group lacking the factor. The odds ratio is a ratio of odds, which are also, themselves, ratios. The numerator of the odds is the same as that of a probability: the number of cases with the outcome. However, the denominator differs; it is the number of cases without the outcome, not the total number of cases. There is no simple quantitative interpretation for the odds ratio, except to the extent that it approximates the risk ratio.
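The distinction between probability and odds described above can be made concrete with a short Python sketch (the function names are ours, chosen for illustration):

```python
def probability_to_odds(p):
    """Convert a probability (cases with outcome / total cases)
    to odds (cases with outcome / cases without outcome)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Invert the conversion: probability = odds / (1 + odds)."""
    return odds / (1 + odds)

# For rare outcomes, odds and probability nearly coincide ...
print(probability_to_odds(0.01))   # ~0.0101
# ... but they diverge sharply as the outcome becomes common.
print(probability_to_odds(0.50))   # 1.0
print(probability_to_odds(0.90))   # ~9.0
```

Note that odds range from 0 to +∞ as the probability ranges from 0 to 1, which is the mathematical convenience referred to below.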

Despite the intuitive difficulty of the odds ratio, it frequently appears as a measure of risk in multivariable analysis because of convenient mathematical properties of odds (ranging from 0 to +∞) compared with probabilities (limited to the interval between 0 and 1).^{1} For the reader trying to understand the magnitude of an effect, the divergence between the odds ratio and the risk ratio can be important. It can be shown that this divergence is particularly large when the outcome is common in the study population. There are methods to estimate risk ratios from odds ratios reported in cross‐sectional, cohort, and randomized studies.^{2,3} However, many readers are unfamiliar with these methods and may be led to an exaggerated impression of the risk.

This study has three objectives: 1) to estimate the frequency with which odds ratios are used in two major journals of obstetrics and gynecology, 2) to determine the proportion of evaluable studies in which an odds ratio and the estimated risk ratio differ substantially (greater than 20%), and 3) to identify misinterpretation of results due to confusion between the odds ratio and the risk ratio.

#### METHODS

The *American Journal of Obstetrics and Gynecology* and *Obstetrics & Gynecology* were searched for the years 1998 and 1999. Articles were selected using the search term “odds ratio” within PubMed (National Library of Medicine; http://www.ncbi.nlm.nih.gov/PubMed/). Each article was evaluated and classified by study type: cohort, case‐control, randomized clinical trial, cross‐sectional, or meta‐analysis. The statistical method used to generate the odds ratios was classified as logistic regression or other method. We then used the following procedure to choose the key odds ratio in each study: 1) pick the odds ratio associated with the main study hypothesis; 2) in the absence of a unique odds ratio indicated by 1), pick the odds ratio in the abstract demonstrating the largest effect, whether direct or inverse.

We used the method of Zhang and Yu to estimate the risk ratio from the reported odds ratio.^{3} The term “estimated risk ratio” is used to emphasize that the value is calculated from the odds ratio (or adjusted odds ratio) and not from raw data. This method is suitable for use with cohort, cross‐sectional, and randomized trial data, and with univariate or multivariable analysis. To apply the method, it is necessary to have information about the frequency of the outcome among those lacking the factor (ie, in the unexposed group). Articles in which this information was missing could not be used for risk ratio estimates. Also, case‐control and meta‐analysis studies were excluded because the method is not suitable for these study types. Estimated risk ratios were plotted against the reported odds ratios. The percentage difference between the odds ratio and the estimated risk ratio was calculated as follows: 100% × (odds ratio − estimated risk ratio) ÷ odds ratio. The proportion in which this difference exceeded 20% was determined. We considered this large a difference to be potentially clinically significant.
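The estimation and comparison steps just described can be sketched in Python; `estimate_rr` implements the Zhang–Yu formula (RR = OR ÷ [1 − P0 + P0 × OR]), and the function names are ours:

```python
def estimate_rr(odds_ratio, p0):
    """Estimate the risk ratio from an odds ratio using the
    Zhang-Yu formula; p0 is the frequency of the outcome in
    the unexposed group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

def percent_difference(odds_ratio, p0):
    """100% x (OR - estimated RR) / OR, as defined in Methods."""
    rr = estimate_rr(odds_ratio, p0)
    return 100.0 * (odds_ratio - rr) / odds_ratio

# With a common outcome (30% baseline risk), an odds ratio of 2.5
# overstates the estimated risk ratio by more than the 20%
# threshold used in this study.
print(round(estimate_rr(2.5, 0.3), 2))        # 1.72
print(round(percent_difference(2.5, 0.3)))    # 31
```

With a rare outcome (small p0) the same odds ratio of 2.5 would differ from the estimated risk ratio by only a few percent.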

Articles in which the authors concluded an “X‐fold risk,” or synonymous phrase, based on an odds ratio of X, were identified. The percentage difference between the odds ratio and the estimated risk ratio was calculated in these specific cases.

#### RESULTS

During 1998–1999, 77 articles in the *American Journal of Obstetrics and Gynecology* and 77 articles in *Obstetrics & Gynecology* contained “odds ratio” when searched with PubMed. Three articles were excluded. One used the term “odds ratio” to describe a prevalence ratio; another did not contain an odds ratio computation; and the third was a review article with no information about how the odds ratio was computed. One hundred (66.2%) of the remaining 151 articles were cohort studies; 29 (19.2%) were case‐control studies; five (3.3%) were randomized clinical trials; seven (4.6%) were cross‐sectional studies; and ten (6.6%) were meta‐analyses. Logistic regression was used in 104 (68.9%) of the studies, and the remainder used other methods. Of the 151 articles, 107 (70.9%) were suitable and contained sufficient information to calculate an estimated risk ratio.

The plot of estimated risk ratio versus odds ratio for the studies we analyzed is shown in Figure 1. It is clear that the odds ratio indicates a numerically greater effect size (farther from one) than the risk ratio. The difference between the odds ratio and the estimated risk ratio was greater than 20% in 47 (44%) of the articles in which a risk ratio could be estimated. The difference exceeded 50% in 18 (17%) of those articles. In 39 of 151 total articles (26%), quantitative conclusions of the form, “There is an X‐fold risk…” based on an odds ratio, were asserted by the authors. Twenty‐one of these were in *Obstetrics & Gynecology*, and 18 were in the *American Journal of Obstetrics and Gynecology*. Among the 30 of these 39 articles with calculable risk ratio estimates, the odds ratio differed from the estimated risk ratio by greater than 20% in half of them. The distribution of percentage differences between the odds ratio and the estimated risk ratio in these 30 cases is shown in Figure 2.

Figure 1. Estimated risk ratio versus reported odds ratio.
Figure 2. Distribution of percentage differences between the odds ratio and the estimated risk ratio.

In only one study during the 2‐year interval did we find a statement justifying interpretation of the odds ratio as a close approximation of the risk ratio. The authors stated, “With the rare disease assumption, odds ratios were reported as relative risks.”^{4} In fact, their key odds ratio was 0.09, the risk in the unexposed group was 0.0104, and the estimated risk ratio was 0.091. An example of an erroneous conclusion based on misinterpretation of the odds ratio is in a study of the familial occurrence of dystocia.^{5} The authors conclude that “the risk is increased more than 20‐fold (odds ratio 24.0, 95% interval 1.5 to 794.5) if one twin sister had dystocia…” If a twin sister had no dystocia, the baseline risk for dystocia was 11%. Setting aside the issue of the large confidence interval, common sense would lead one to question a 20‐fold increase in an 11% risk. The estimated risk ratio is actually 6.75.
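Both numerical checks in this paragraph can be reproduced with the Zhang–Yu formula (a Python sketch; the function name is ours, and the small discrepancy from the 6.75 quoted above presumably reflects rounding of the 11% baseline risk):

```python
def estimate_rr(odds_ratio, p0):
    """Zhang-Yu estimate: RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# Rare outcome: the odds ratio closely approximates the risk ratio.
print(round(estimate_rr(0.09, 0.0104), 3))   # 0.091

# Common outcome (11% baseline risk): an odds ratio of 24
# corresponds to roughly a 7-fold, not 20-fold, increase in risk.
print(round(estimate_rr(24.0, 0.11), 1))
```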

#### DISCUSSION

The odds ratio appears so often in clinical research reports because of its useful mathematical properties. The coefficients of logistic regression models convert readily into odds ratios, and the odds ratio is the measure of association derived from case‐control data. There are situations (eg, comparing two proportions, both of which are close to one) when the odds ratio makes more “sense” than the risk ratio. However, in everyday situations we think more often in terms of risk ratios. We found that misinterpretation of an odds ratio as a risk ratio is common and can substantially affect estimates of association between a risk factor and an outcome. Because decisions made by physicians and patients often depend on quantifying risk, it is important to avoid an error that inflates the apparent magnitude of risk.

Data from an article by Kurkinen‐Räty et al provide a simple illustration of the difference between the odds ratio and the risk ratio.^{6} In this study, the occurrence of chorioamnionitis was compared between patients with premature rupture of membranes (PROM) and a matched control group. Forty of 78 patients in the PROM group had chorioamnionitis; 38 did not. Twenty‐three of 78 patients in the control group had chorioamnionitis; 55 did not. The odds of chorioamnionitis in the PROM group are 40/38, and in the control group, 23/55. The odds ratio is (40/38)/(23/55) = 2.5, as the authors indicated. The probability of chorioamnionitis in the PROM group is 40/78, and in the control group, 23/78. The risk ratio is (40/78)/(23/78) = 1.7. The odds ratio differs from the risk ratio by 31% in this case. Had the authors concluded that there was a 2.5‐fold greater risk of chorioamnionitis in the PROM group, this would have been erroneous. These authors made no such claim.
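The arithmetic in this example is easy to verify directly from the 2 × 2 table (a Python sketch):

```python
# 2x2 table from the Kurkinen-Raty example:
#   PROM group:    40 with chorioamnionitis, 38 without (n = 78)
#   control group: 23 with chorioamnionitis, 55 without (n = 78)
odds_ratio = (40 / 38) / (23 / 55)
risk_ratio = (40 / 78) / (23 / 78)
pct_diff = 100 * (odds_ratio - risk_ratio) / odds_ratio

print(round(odds_ratio, 1))   # 2.5
print(round(risk_ratio, 1))   # 1.7
print(round(pct_diff))        # 31
```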

Zhang and Yu recently proposed a formula to estimate the risk ratio from the odds ratio in cohort and cross‐sectional studies with univariate and multivariable analyses.^{3} They validated their method with a simulation incorporating two confounding variables. The risk ratio estimates proved to be close approximations to the true risk ratio. The formula they use is:

RR = OR ÷ [(1 − P_{0}) + (P_{0} × OR)]  (1)

where *RR* is the estimated risk ratio, *OR* is the odds ratio, and P_{0} is the proportion of nonexposed individuals (ie, those lacking the factor) who experience the outcome.

From inspection of formula 1, it is clear that as P_{0} approaches zero, the denominator approaches one and RR approaches OR. This is in accord with the “rare disease assumption” (ie, for rare outcomes, the odds ratio closely approximates the risk ratio).^{7} As P_{0} approaches one, RR approaches one regardless of the value of OR, accounting for large differences between RR and OR when outcomes are common. When OR equals one, then RR also equals one, regardless of the value of P_{0}. Finally, for all P_{0} greater than zero and less than one, RR is less than OR when OR exceeds one, and RR is greater than OR when OR is less than one. In other words, the estimated risk ratio is always closer to one than the odds ratio. Interpreting the odds ratio as a risk ratio leads to an exaggerated assessment of the association between exposure and outcome. This pattern is demonstrated in Figure 1.
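This limiting behavior can be demonstrated numerically (a Python sketch of formula 1; the function name is ours):

```python
def estimate_rr(odds_ratio, p0):
    """Formula 1: RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

OR = 4.0
for p0 in (0.001, 0.01, 0.1, 0.5, 0.9, 0.999):
    # As p0 -> 0, RR approaches OR; as p0 -> 1, RR approaches 1.
    print(p0, round(estimate_rr(OR, p0), 3))

# A null odds ratio stays null regardless of baseline risk.
print(estimate_rr(1.0, 0.37))
```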

Translation of an odds ratio of X to an “X‐fold risk,” without explicit justification, was common among the articles we reviewed. Under some experimental conditions (eg, when the outcome is rare) the distinction between odds ratio and risk ratio may be quantitatively inconsequential. However, consider that with a modest 10% occurrence of the outcome in the unexposed group (ie, P_{0} = 0.1), and an odds ratio of 10, the estimated risk ratio is only 5.3. Furthermore, as a consequence of the mathematical behavior of the odds ratio, estimates of the odds ratio may vary among studies of differing designs, even if the actual risk ratio is constant. A helpful rule of thumb is the “rule of five.” If the odds ratio is greater than one but no greater than five, and the occurrence of the outcome in the unexposed group is no greater than 5%, then the odds ratio exceeds the risk ratio by less than 20%.
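The “rule of five” can be checked exhaustively over a grid of admissible values (a Python sketch; the grid resolution is our choice):

```python
def estimate_rr(odds_ratio, p0):
    """Zhang-Yu formula: RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

worst = 0.0
for i in range(1, 401):            # odds ratios from 1.01 to 5.00
    or_ = 1.0 + i * 0.01
    for j in range(1, 51):         # p0 from 0.001 to 0.050
        p0 = j * 0.001
        diff = 100 * (or_ - estimate_rr(or_, p0)) / or_
        worst = max(worst, diff)

# The worst case occurs at OR = 5 and p0 = 0.05, and stays
# comfortably under the 20% threshold.
print(round(worst, 1))   # 16.7
```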

Some authors of the articles we reviewed referred to an X‐fold increase in *odds* based on an odds ratio. Strictly speaking, this is accurate, and we did not include these studies among those displayed in Figure 2. However, for the reader unfamiliar with the differing behaviors of odds and probabilities, such a statement may be misinterpreted. We believe it is best to avoid quantitative statements about odds ratios.

The full topic of epidemiological measures of association is complex and well beyond the scope of our study.^{8} We were motivated by the frequent use of one measure, the odds ratio, and the particular difficulties associated with its interpretation. We confirmed that comparative risk is often expressed as an odds ratio in at least two journals widely read by obstetricians and gynecologists. Because of the mathematical properties of odds, we have shown that drawing quantitative conclusions based on the odds ratio may be hazardous. This is a risk not always appreciated by readers and authors alike.