Statistical Resource

Do you have a standard way of interpreting the standard deviation? A narrative review

Darling, H. S.

Cancer Research, Statistics, and Treatment 5(4):p 728-733, Oct–Dec 2022. | DOI: 10.4103/crst.crst_284_22



Honest representation of data is as important as ethical data collection and deriving unbiased results from any clinical investigation. Shri Guru Nanak Dev Ji, the first Sikh guru, said, “Sachahu orai sabh ko upar sach āchār. ||5||” This statement (in the Gurmukhi script) translates to, “Truth is higher than everything, but higher still is true living (stanza 5)”.[1] The standard deviation is a measure of the spread of the data around the mean. It is used to specify whether or not the numbers in a dataset are clustered. A low standard deviation implies that all the data points are close to the average, whereas a high standard deviation suggests that the data points are widely scattered. It is used on “normally distributed” data to determine how much the data differ from the mean value. A normal distribution, also known as a Gaussian distribution, is a symmetrical probability distribution that describes the distribution of the values of a random variable. It has been quoted in various papers and provides the foundation for many statistical calculations.[2]

This narrative review aims to provide practicing clinicians with a basic understanding of the standard deviation as a statistical tool.


We conducted an online literature search of databases including PubMed, Embase, and Cochrane, as shown in Figure 1. Search phrases such as “standard deviation,” “biostatistics,” and “clinical studies” were used to retrieve data from the past 10 years. Of 992 available citations, 932 were non-duplicates. After 501 citations were excluded on the basis of the relevance of their titles and abstracts, 431 publications remained. Of these, 375 articles were eliminated after full-text review, and a further 48 were excluded during data extraction. Finally, eight papers containing important data and illustrations were included in this review.

Figure 1:
Flow diagram depicting the search and selection process of the articles selected for inclusion in the review article on the standard deviation



The standard deviation is a measure of the dispersion of normally distributed data. In other words, the standard deviation is a measure of the representativeness of the mean of the sample data.[3]


Numerical data from any research study are tested for normal distribution, which determines the statistical tests that can be applied. Numerical data are usually presented along with measures of central tendency (mean, median, and mode), as well as the standard deviation, range, and interquartile range.[4] Clinical trials test a hypothesis in a sample that is representative of the population. An adequate sample size with an appropriate randomization technique makes it more likely that the sample data will be normally distributed. Presenting the sample mean along with the standard deviation can then correctly represent the variation in the data.[3] The normal distribution shows a bell-shaped curve, but data are often left-skewed or right-skewed. Figure 2 shows normal and skewed distributions. In a normally distributed dataset, the mean, median, and mode coincide. Many powerful statistical tests, e.g., the Student t-test and linear regression, rest on the assumption that the data are normally distributed.[4] Although the normal distribution is the most desired, it is seldom found in reality. There are numerous other kinds of distributions; e.g., Hodgkin’s lymphoma has a bimodal age distribution, i.e., the disease is seen in young adulthood and then again in old age. The assumptions and analyses for non-normally distributed data are beyond the scope of this article. In non-normal distributions, the range and interquartile range are reported along with the median and mode.[4]

Figure 2:
(a) Negative values for the skewness indicate data that are skewed left. (b) The skewness for a normal distribution is zero, and any symmetric data should have a skewness near zero. (c) Positive values for the skewness indicate data that are skewed right

Calculating the standard deviation

Suppose the mean weight of 106 men in a sample population is 162.8 pounds. The sample mean is denoted x̄ (pronounced “x bar”), and Σ (sigma) denotes the sum of multiple terms.

Variance = Σ(xᵢ − x̄)²/(n − 1) = 45.095 pounds²

Standard deviation = √45.095 = 6.7 pounds[5]
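In code, the same calculation can be sketched as a minimal Python version of the formula above (the five weights are hypothetical illustration values, not the 106-man dataset from the text):

```python
import math

def sample_variance(values):
    """Sample variance: sum of squared deviations from the mean, divided by n - 1."""
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** 2 for x in values) / (n - 1)

def sample_sd(values):
    """Sample standard deviation: square root of the sample variance."""
    return math.sqrt(sample_variance(values))

# Hypothetical weights in pounds (mean = 162)
weights = [158, 170, 156, 165, 161]
print(round(sample_variance(weights), 1))  # 31.5
print(round(sample_sd(weights), 1))        # 5.6
```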

Using Excel to calculate the standard deviation

If the data have been entered in an Excel spreadsheet, a built-in function can compute the standard deviation without the data having to be entered again. Any change in the data triggers the function, and the standard deviation is automatically recalculated. Typing “=STDEV” into a blank cell brings up six variants of the standard deviation formula. Once the desired formula has been selected, pressing “Enter” (or clicking away from the cell) makes Excel calculate the standard deviation for the entered data. To change the formula or its inputs, double-click the cell to display the syntax; clicking on a cell that contains a formula also shows the syntax in the formula bar at the top of the page, where changes can be made.

Suppose the following 10 sample values are entered in a Microsoft Excel spreadsheet: A1: 69; A2: 80; A3: 91; A4: 82; A5: 78; A6: 85; A7: 81; A8: 75; A9: 80; A10: 79. Typing the formula =STDEV.S(A1:A10) in a blank cell beneath the entries computes the standard deviation. Alternatively, the spreadsheet cells can be skipped entirely and the values entered directly into the formula as =STDEV.S(69,80,91,82,78,85,81,75,80,79). Excel computes a standard deviation of 5.79. This standard deviation is low, implying that the values are close to the average of 80.
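The same result can be reproduced outside Excel; for example, Python’s standard library computes the sample standard deviation with the same n − 1 denominator as STDEV.S:

```python
import statistics

# The ten sample values from the Excel example above
values = [69, 80, 91, 82, 78, 85, 81, 75, 80, 79]

# statistics.stdev uses the n - 1 (sample) denominator, like Excel's STDEV.S
print(round(statistics.mean(values), 1))   # 80.0
print(round(statistics.stdev(values), 2))  # 5.79
```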


The standard deviation denotes the dispersion of the data points around the mean. The interval spanning one standard deviation above and below the mean (±1 SD) includes 68.2% of the data; ±2 SD covers 95.4%, and ±3 SD covers 99.7%. Suppose that at a recruitment rally the mean chest circumference of adult males was 90 cm, with a standard deviation of 5 cm. One standard deviation below the mean was 90 − 5 = 85 cm; one standard deviation above was 90 + 5 = 95 cm. Therefore, 68.2% of the participants would have a chest circumference between 85 and 95 cm (±1 SD); 95.4%, between 80 and 100 cm (±2 SD); and 99.7%, between 75 and 105 cm (±3 SD). In another similar sample, if the standard deviation were only 3 cm, the data would clearly have a smaller spread than in the first example [Figure 3]: 68.2% of the cohort would have a chest circumference between 87 and 93 cm; 95.4%, between 84 and 96 cm (±2 SD); and 99.7%, between 81 and 99 cm (±3 SD).[2]

Figure 3:
Shapes of two populations with datapoints that have a large spread (high standard deviation) and a lesser spread (low standard deviation)
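The 68.2/95.4/99.7% coverage figures quoted above follow from the normal distribution itself and can be checked numerically. A small Python sketch, using the chest-circumference example (mean 90 cm, SD 5 cm) and the error function to compute the exact coverage:

```python
import math

def normal_coverage(k):
    """Fraction of a normal distribution lying within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

mean, sd = 90, 5  # chest-circumference example from the text
for k in (1, 2, 3):
    lo, hi = mean - k * sd, mean + k * sd
    print(f"±{k} SD: {lo}-{hi} cm covers {normal_coverage(k):.1%}")
```

The exact values are 68.3%, 95.4%, and 99.7%; the 68.2% quoted in the text is the customary rounding of 68.27%.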

In statistics, a probability distribution assigns to the values of a variable their corresponding probabilities. The normal distribution is also called the Gaussian distribution, after the mathematician Carl Friedrich Gauss. When repeated measurements are taken, such as in imprecision studies, the resulting data are often normally distributed. If a normally distributed dataset is divided into quartiles, or four equal sections, the quartile cut-offs fall approximately 0.67 standard deviations above and below the mean. In a normal distribution, the interval extending 1.96 standard deviations on either side of the mean contains 95% of the values. This indicates that most data values should fall within about two standard deviations of the mean. It is important to understand these features of the normal distribution to correctly interpret parametric test results. When a variable has a skewed distribution, the mean will not accurately reflect the center of the dataset; rather, it will be a biased estimate of the data’s center, as seen in Figure 2.[6]
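These normal-distribution landmarks — quartile cut-offs at about ±0.674 SD (sometimes rounded to 0.68) and 95% of values within ±1.96 SD — can be verified with Python’s standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution: mean 0, SD 1

# Quartile cut-offs sit about 0.674 SD either side of the mean
print(round(z.inv_cdf(0.75), 3))             # 0.674

# The interval mean ± 1.96 SD contains 95% of the values
print(round(z.cdf(1.96) - z.cdf(-1.96), 2))  # 0.95
```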

There are various methods to determine whether or not a continuous variable is normally distributed. Many parameters, such as height, weight, and blood pressure, are normally distributed in the community, but they may not be normally distributed in a select group or a small cohort. An informal check for normality is to determine whether the mean and median are close to each other; a large difference between them indicates that the variable being analyzed does not follow a normal distribution. Another rough-and-ready test is to double the standard deviation, subtract it from the mean to obtain the lower limit of the range, and add it to the mean for the upper limit. This yields the expected range within which 95% of the results should fall, and it should lie just within the actual range of the data values. A variable whose standard deviation is greater than one-half of its mean is non-normally distributed, assuming that negative values cannot occur.[6]
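The informal checks just described can be sketched in Python; the function name and the handling of the variable’s minimum are illustrative, not a standard API:

```python
def looks_non_normal(mean, sd, minimum=0.0):
    """Rough screening checks from the text, for a variable that cannot fall
    below `minimum`: (1) mean - 2*SD should stay within the possible range;
    (2) an SD greater than half the mean suggests a skewed distribution."""
    lower_95 = mean - 2 * sd
    return lower_95 < minimum or sd > mean / 2

# Hospital-stay example from later in this review: mean 12 days, SD 9 days
print(looks_non_normal(mean=12, sd=9))  # True: 12 - 18 = -6 days is impossible

# Chest-circumference example: mean 90 cm, SD 5 cm
print(looks_non_normal(mean=90, sd=5))  # False: passes both rough checks
```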


The standard error of the mean (SEM) is a statistical inference based on the sampling distribution.[3] The SEM is the standard deviation of the sampling distribution, i.e., the theoretical distribution of sample means. In normally distributed data, the mean represents the central tendency of the data points. However, the shape of a dataset’s distribution cannot be explained by the mean alone; the standard deviation and the SEM are therefore often provided along with the mean. When samples of the same size are drawn repeatedly at random from a particular population, their means will differ from one another because of sampling variation, forming a distribution of sample means. This sampling distribution follows a normal distribution pattern, so its standard deviation can be calculated; this value is the SEM. The SEM measures the proximity of the sample mean to the population mean. In practice, only one sample is drawn from the population; as a result, the SEM is estimated from the standard deviation and the sample size.[3]

Performing any parametric statistical test requires a normally distributed dataset.[3] The standard deviation of the normal distribution is also the measurement’s standard uncertainty. Normality tests, such as the Kolmogorov-Smirnov and Shapiro-Wilk tests, are used to verify the normal distribution. The probability that repeated measurements will be distributed around the mean is not fixed; as a result, the data are not uniformly dispersed around the mean.[7] The conventional methodology for a meta-analysis of continuous outcomes, without access to individual participant data, requires the mean and either the standard deviation, variance, or standard error for each treatment group. When the distribution of a continuous outcome is skewed, the quartiles or range are reported instead of the standard deviation, and the median instead of the mean. Simply omitting such studies from the meta-analysis causes bias and a loss of precision; instead, another method should be used to estimate the missing mean or standard deviation from the data provided.[8] The standard deviation should be used only when the data are normally distributed; however, means and standard deviations are frequently misapplied to data that are not. A basic test for normal distribution is to add and subtract two standard deviations from the mean and check whether the derived values lie within the variable’s possible range. Consider the length of hospital stay for a cohort of patients, with a mean stay of 12 days and a standard deviation of 9 days:

2 × SD = 2 × 9 = 18 days

mean − 2 × SD = 12 − 18 = −6 days

As a stay of −6 days is impossible, the data cannot be normally distributed, and the mean and standard deviation should not be used here.[2]


The SEM is smaller than the standard deviation because the SEM is calculated by dividing the standard deviation by the square root of the sample size. Accordingly, researchers often describe their samples using the SEM. If the sample sizes of two groups are equal, either the SEM or the standard deviation can be used to compare them; nevertheless, the sample size must be provided to convey the correct information. If a population has a lot of variation, the standard deviation of a sample drawn from it will also be large; the SEM, however, can be made small simply by increasing the sample size. In such instances, using the SEM in descriptive statistics could lead to a misinterpretation of the population’s variability.[3]
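The point about sample size can be seen directly from the formula SEM = SD/√n: the standard deviation stays the same while the SEM shrinks as n grows. A short Python sketch with a hypothetical SD of 10:

```python
import math

def sem(sd, n):
    """Standard error of the mean: SD divided by the square root of the sample size."""
    return sd / math.sqrt(n)

sd = 10.0  # hypothetical sample standard deviation; unchanged by sample size
for n in (25, 100, 400):
    print(f"n = {n:>3}: SEM = {sem(sd, n):.1f}")  # 2.0, then 1.0, then 0.5
```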

Role of the standard deviation in systematic reviews and meta-analyses

To combine data from various studies, particularly when preparing a systematic review or a meta-analysis, the results should be in a comparable format. When the outcome is a quantitative variable, means and standard deviations should be provided. Occasionally, trial outcomes are reported as medians with ranges or interquartile ranges. If at all possible, reviewers should obtain the individual patient-level data so that means and standard deviations can be calculated. However, it is not always possible to contact the study authors, and the raw data may not be accessible. Reviewers may be forced to exclude these studies from the quantitative section of the review, or to rescue what they can from publications that provide only very brief summaries of the data. Reviewers may occasionally be able to determine the means and standard deviations by back-calculation from the confidence intervals or P values.[9]
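One common back-calculation, assuming the reported 95% confidence interval is a normal-based interval for a mean (half-width = 1.96 × SD/√n), can be sketched as follows; the reported numbers are hypothetical:

```python
import math

def sd_from_ci(lower, upper, n, z=1.96):
    """Back-calculate the SD from a normal-based 95% CI for a mean.
    CI half-width = z * SD / sqrt(n), so SD = sqrt(n) * (upper - lower) / (2 * z)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

# Hypothetical report: mean 12.0, 95% CI (10.0, 14.0), n = 96
print(round(sd_from_ci(10.0, 14.0, 96), 1))  # 10.0
```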

When comparing datasets statistically, researchers estimate each population from its sample and then determine whether the populations are identical. To estimate the population mean, the SEM is employed rather than the standard deviation, which captures sample variation. From this, researchers can deduce whether the sample in their study accurately represents the population within the error range set by the pre-specified significance level.[3] Despite the use of transparent systematic approaches to reduce bias and random variability in intervention evaluations, selective or incomplete trial reporting introduces imprecision and bias into meta-analyses.[8] To convey the generalizability of the results, all research trial reports must describe the study sample or study groups; statistics that describe the data’s center and spread are ideal for this. Accordingly, for normally distributed values, the mean and standard deviation are presented, while for non-normally distributed data, the median and interquartile range are reported. Indicators of the precision of normally distributed variables, such as the standard error and 95% confidence intervals, are useful for comparing groups or assessing differences between groups. When evaluating journal articles, one may need to convert a measure of spread (standard deviation) to a measure of precision (SEM), or vice versa, to compare the results of one study with those of others. The SEM estimates the precision of the sample mean as a reflection of the population mean. Because the formula is simple, calculating the standard deviation from an SEM, or vice versa, is straightforward:

SEM = Standard Deviation/√n (where n is the sample size)
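Rearranging the formula gives the reverse conversion, SD = SEM × √n, which is the direction usually needed when a paper reports only the SEM. A minimal sketch with hypothetical reported values:

```python
import math

def sd_from_sem(sem, n):
    """Recover the standard deviation from a reported SEM and sample size."""
    return sem * math.sqrt(n)

# Hypothetical report: SEM = 1.2 with n = 64
print(round(sd_from_sem(1.2, 64), 1))  # 9.6
```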

The standard deviation is ineffective for describing the spread of non-normally distributed data, and parametric tests should not be used to compare such groups. If the lower limit of the estimated 95% interval is implausibly low, the mean will overestimate the median; if it is too high, the mean will underestimate the median. In such situations, the median and interquartile range provide more accurate estimates of the data’s center and spread, and non-parametric tests should be applied to compare the groups.[6]


Researchers should report the original trial data in full, to avoid situations that necessitate attempts to recover missing statistical values such as means or standard deviations. The methods discussed in this review are nevertheless important because they allow researchers planning systematic reviews or meta-analyses to incorporate as much information as possible from completed trials that have already been reported, avoiding the omission of trials with missing means or standard deviations from the meta-analysis.[8]

Questions to be considered at the time of a critical appraisal

When evaluating the results of published trials, the following questions should be asked:

  • Have the appropriate normality tests been conducted and reported?
  • Have the proper statistics been utilized to describe the data’s center and spread?
  • Do the mean ± 2 standard deviation values define a realistic 95% confidence interval?
  • Has the mean of either group been under- or over-estimated if the distribution is skewed?
  • Have the median and interquartile range been provided if the data are skewed?[6]


The standard deviation depicts the variation in normally distributed data, whereas the standard error of the mean represents the variation among sample means in a sampling distribution. It is therefore appropriate to use the standard deviation (along with a test of normality) to describe the features of a sample; if the sample size is specified, the standard error of the mean or the confidence interval can serve the same purpose. When presenting statistical results, the combination of the standard error of the mean and the sample size allows an intuitive comparison of the estimated populations via graphs or tables.[3]

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.


1. Sri Granth: Sri Guru Granth Sahib. 2022. Available from: https://www. [Last accessed on 2022 Dec 19].
2. Harris M, Taylor G. Medical Statistics Made Easy. CRC Press; 2003.
3. Lee DK, In J, Lee S. Standard deviation and standard error of the mean. Korean J Anesthesiol 2015;68:220–3.
4. Bensken WP, Pieracci FM, Ho VP. Basic introduction to statistics in medicine, Part 1: Describing data. Surg Infect (Larchmt) 2021;22:590–6.
5. Peacock JL, Peacock PJ. Oxford Handbook of Medical Statistics. 1st ed. Oxford: Oxford University Press; 2011.
6. Barton B, Peat J. Medical Statistics: A Guide to SPSS, Data Analysis and Critical Appraisal. 2nd ed. BMJ Books/Wiley Blackwell; 2014.
7. Statistical distributions commonly used in measurement uncertainty in laboratory medicine. Biochemia Medica. 29. Available from: [Last accessed on 2022 Dec 12].
8. Weir CJ, Butcher I, Assi V, Lewis SC, Murray GD, Langhorne P, et al. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: A systematic review. BMC Med Res Methodol 2018;18:25.
9. Martin B. Estimating mean and standard deviation from the sample size, three quartiles, minimum, and maximum. Int J Stat Med Res 2014;4:57–64.

Keywords: Measures of central tendency; normal distribution; standard deviation

Copyright: © 2023 Cancer Research, Statistics, and Treatment