I keep six honest serving-men;

(They taught me all I knew)

Their names are What and Why and When

And How and Where and Who.

Rudyard Kipling (1902), “The Elephant’s Child” from Just So Stories

Descriptive statistics are the specific methods used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures.^{1–3}

As insightfully observed by Grimes and Schulz,^{4} “Descriptive studies often represent the first scientific toe in the water in new areas of inquiry. A fundamental element of descriptive reporting is a clear, specific, and measurable definition of the disease or condition in question. Like newspapers, good descriptive reporting answers the 5 basic W questions: who, what, why, when, where…and a sixth: so what?” This basic statistical tutorial discusses the following fundamental concepts about descriptive statistics and their reporting:

Total study sample size versus study group sizes
Study sample point estimate
Frequency, percentage, ratio, and proportion
Measures of the central tendency of data
Measures of the variability or dispersion of data
Confidence interval (CI) as measure of the precision of a point estimate
Graphically displaying different types of data
TOTAL STUDY SAMPLE SIZE VERSUS STUDY GROUP SIZES
An understanding of statistics is predicated on the key distinction between a study sample and the population from which it is selected: conventional inferential statistics allows one to extrapolate from the descriptive statistics of a sample to draw inferences and conclusions about the study population.^{3,5–7}

For example, in a clinical study like a randomized controlled trial, the statistical analysis of the descriptive data from a random sample of patients is used to make inferences and conclusions about the larger population and other similar, future patients.^{5}

Prospectively determining the appropriate study sample size and the study group or subsample sizes is crucial in conducting a valid and ethical quantitative research study.^{8–10} The mechanics of doing so will be the subject of a future statistical tutorial.

When reporting the descriptive findings of a research study, it is very important to provide (a) the total sample size (ie, the total number of subjects sampled) and (b) the study group sizes (ie, the number of subjects sampled in each study group or subsample) in the text, tables, and figures.^{11,12}

SAMPLE POINT ESTIMATE
Based on a sample of data that is randomly collected at a given point in time, an estimated value can be generated. Such a sample point estimate is a single value that can be used to validly estimate the corresponding population parameter.^{3,13–15} A series of repeated, random (unbiased) samples of data collected from the same underlying population would be expected to generate different yet equally valid point estimates of the corresponding population parameter.^{13–15}
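This repeated-sampling behavior is easy to demonstrate. The following minimal sketch, using Python's standard `random` and `statistics` modules and a purely hypothetical simulated "population" of blood pressure values, draws 3 repeated random samples and shows that each yields a different, equally valid point estimate of the population mean:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical "population" of 10,000 simulated systolic blood pressures (mm Hg)
population = [random.gauss(120, 15) for _ in range(10_000)]

# Three repeated random samples of n = 100 yield different,
# yet equally valid, point estimates of the population mean
estimates = [statistics.mean(random.sample(population, 100)) for _ in range(3)]
print(estimates)
```

Each printed estimate hovers near the simulated population mean of 120, yet no two runs of the sampling process agree exactly.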

FREQUENCY, PERCENTAGE, RATIO, AND PROPORTION
As discussed in the previous statistical tutorial, some demographic and clinical characteristics can be parsed into and described using separate, discrete categories. Such categorical data can be either dichotomous (2 categories) or polytomous (more than 2 categories).^{2,16–19}

Dichotomous and polytomous categorical data can be described as (a) the raw counts or absolute frequencies (eg, “50”) of the categories or (b) the percentages or relative frequencies (eg, “50%”) of the categories. Relative frequencies can also be reported as ratios (eg, “50:50”), proportions (eg, “50/100”), or decimals (eg, “0.50”).
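These equivalent presentations of the same categorical data can be sketched with Python's standard `collections.Counter`, here applied to a hypothetical dichotomous outcome for 10 patients:

```python
from collections import Counter

# Hypothetical dichotomous outcome recorded for 10 patients
outcomes = ["nausea", "no nausea", "no nausea", "nausea", "no nausea",
            "no nausea", "nausea", "no nausea", "no nausea", "no nausea"]

counts = Counter(outcomes)          # absolute frequencies (raw counts)
n = len(outcomes)
proportion = counts["nausea"] / n   # proportion, eg, 3/10
percentage = 100 * proportion       # relative frequency as a percentage
print(f'{counts["nausea"]}/{n} = {proportion} = {percentage:.0f}%')  # 3/10 = 0.3 = 30%
```

Note that the printed line supplies the numerator and denominator alongside the percentage, as recommended below.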

Provide Numerators and Denominators for All Reported Percentages or Proportions
When reporting any observed percentage or proportion in the abstract, text, and/or tables of a manuscript, the authors should provide the corresponding numerator and denominator.

MEASURES OF THE CENTRAL TENDENCY OF DATA
Figure 1.: A hypothetical example of a histogram displaying discordant mean, median, and mode values for a skewed, nonnormally (non-Gaussian) distributed data set.

The mean, median, and mode are 3 measures of the center or central tendency of a set of data.^{5,14,20–22} When a set of data is truly normally distributed, its distribution forms or follows a bell-shaped curve, and its mean, median, and mode all lie at the same point, at the center of the distribution.^{23} However, this is seldom the case; instead, these 3 values typically differ (Figure 1).

Mean
The mean is likely the most widely known descriptive statistic. Dating back to their earlier days as high-achieving students, promising future researchers were aware of their average score in a class and their grade-point average, as well as their score versus the average score on a standardized aptitude test. All of these average values were an arithmetic mean, which is defined as the total sum of the values divided by the number of observations (sample size).^{21–24}

The symbol for the mean of a sample is x̄, which is pronounced “x bar.” The symbol for the mean of a population is the Greek letter μ, which is pronounced “mew” and spelled “mu” (in English).^{20}

The mean is typically reported for continuous (interval or ratio) data that have a normal (Gaussian) distribution.^{19} The mean can be greatly affected by or is very sensitive to outlying values (“outliers”), especially if they are extreme.^{21}
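The definition and the outlier sensitivity of the mean can both be illustrated with a minimal sketch using Python's standard `statistics` module and hypothetical test scores:

```python
import statistics

# Hypothetical test scores
scores = [70, 72, 75, 78, 80]
print(statistics.mean(scores))           # 75: the sum (375) divided by n (5)

# A single extreme outlier pulls the mean sharply upward
print(statistics.mean(scores + [200]))   # about 95.8
```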

Median
The median is the middle number or value in a sample or population. It is the number above and below which there are equal numbers of data points. For example, it is the midpoint in a set of test scores, marking the 50th percentile of the data distribution. If there is an even number of values, the median is the average of the 2 middle ones.^{20–24} While there is no single agreed-on symbol for the median, it can be notated as P_{50} or Mdn.^{20,23}

The median is typically reported for ordinal data or continuous data that do not have a normal (Gaussian) distribution.^{19} The median is not unduly affected by outlying values (“outliers”), unless they are excessive.^{21}
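A small sketch with the same hypothetical scores used above shows both the even-count averaging rule and the median's robustness to an outlier:

```python
import statistics

# Hypothetical test scores
scores = [70, 72, 75, 78, 80]
print(statistics.median(scores))          # 75, the middle of the 5 sorted values

# With an even number of values, the median averages the 2 middle ones;
# note how little an extreme outlier moves it (compare the mean, about 95.8)
print(statistics.median(scores + [200]))  # 76.5
```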

Mode
The mode is the value that occurs most commonly or frequently in the data set. Most data distributions have only 1 mode and are thus referred to as unimodal. If a data distribution has 2 modes (2 “frequency peaks”), it is referred to as bimodal. There is no conventional symbol for the mode.^{20–24}
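Python's standard `statistics` module provides both the single mode and, for bimodal data, every peak; the scores below are hypothetical:

```python
import statistics

# Hypothetical 0-10 pain intensity scores
pain_scores = [3, 5, 5, 6, 5, 7, 3]
print(statistics.mode(pain_scores))           # 5 occurs most often (unimodal)

# multimode returns every peak, eg, for a bimodal distribution
print(statistics.multimode([3, 3, 5, 5, 7]))  # [3, 5]
```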

MEASURES OF THE VARIABILITY OR DISPERSION OF DATA
In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread).^{25} In simplest terms, variability is how much the individual recorded scores or observed values differ from one another.^{26}

The most general measure of variability is the total range, namely, the absolute difference between the minimum (lowest, smallest) and the maximum (highest, largest) recorded values.^{22,25,26}
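In code, the total range is simply the maximum minus the minimum of the recorded values (the values below are hypothetical):

```python
# Hypothetical recorded values
values = [4, 9, 2, 7, 10, 3]
total_range = max(values) - min(values)
print(total_range)  # 8, ie, 10 - 2
```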

Of note, this observed total range of recorded scores or observed values should be distinguished from the possible range of scores of values for an applied test or measurement instrument (eg, 0–10 on the 11-point numerical rating scale for unidimensional pain intensity). Authors should always provide the possible range of values for any applied test or measurement instrument when describing it in the methods section of their manuscript.

Standard Deviation Accompanies a Mean
The standard deviation (SD) is typically reported for a mean. The SD represents the average amount of variability in a set of values or scores, and practically or roughly speaking, the SD is the average distance from the mean.^{25,26}

The larger the SD, the larger the average distance each data point is from the mean of the distribution of the overall data set, and hence, the more dispersed or spread apart the entire data set.^{26}

The sample SD is calculated with n − 1 (rather than n) in its denominator. Note that it is the standard error of the mean (SEM = SD/√n), not the SD itself, that is inversely related to the square root of the sample size: the larger the sample size, the smaller the SEM.^{25–27} If the data have a normal (Gaussian) distribution, 68.3% of the recorded and analyzed values fall within ±1 SD of the corresponding mean, 95.4% within ±2 SD of that mean, and 99.7% within ±3 SD of that mean.^{28}
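The 68.3% figure can be checked empirically. This sketch simulates hypothetical normally distributed data with Python's standard library and counts the fraction of values falling within ±1 SD of the mean:

```python
import random
import statistics

random.seed(7)
# 10,000 hypothetical values simulated from a normal distribution (mean 100, SD 10)
data = [random.gauss(100, 10) for _ in range(10_000)]

m = statistics.mean(data)
sd = statistics.stdev(data)  # sample SD, computed with n - 1 in the denominator

# Fraction of values within +/-1 SD of the mean: close to 68.3% for normal data
within_1sd = sum(m - sd <= x <= m + sd for x in data) / len(data)
print(round(within_1sd, 3))
```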

The standard deviation is typically abbreviated as “SD” or “s” for a sample variable and as the Greek letter σ for a population parameter. The SD is expressed in the same units as the primary data.^{27} The SD can be reported with the sample mean as its plus/minus value (eg, 77.2 ± 12.5) or as its absolute value (eg, 77.2 [12.5]).

As an aside, the SD is technically the square root of the variance, another measure of the variability of a data set, but the variance is not conventionally reported, mainly because its units are expressed as the square (x^{2}) of the primary data.^{25–27}
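The square-root relationship between the SD and the variance is direct to verify with a small hypothetical data set:

```python
import statistics

# Hypothetical recorded values
data = [2, 4, 4, 4, 5, 5, 7, 9]
var = statistics.variance(data)  # sample variance, in squared units
sd = statistics.stdev(data)      # SD = square root of the variance
print(round(var, 3), round(sd, 3))  # 4.571 2.138
```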

Interquartile Range Accompanies a Median
The interquartile range (IQR) is another commonly reported measure of the variability or dispersion (spread) of the values in a data set. The IQR is typically reported for a median.

As its name implies, the IQR includes the range of values between the 25th percentile (the first quartile, Q_{1}) and the 75th percentile (the third quartile, Q_{3}).^{22,25,27} Operationally, all of the individual recorded scores or observed values are divided into 4 equally sized quartiles (“buckets”), and the IQR includes the 2 middle quartiles.^{25}

The interquartile range is typically abbreviated as “IQR.” The IQR is expressed in the same units as the primary data.^{27} The specific values for the 25th percentile (Q_{1}) and the 75th percentile (Q_{3}) themselves are reported (eg, “IQR: 35, 65”), rather than their difference.
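Python's standard `statistics.quantiles` returns the quartile cut points directly; the lengths of stay below are hypothetical, and the exact cut points depend on the interpolation method (the default, "exclusive", is used here):

```python
import statistics

# Hypothetical lengths of stay (days) for 12 patients
los_days = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 7, 9]
q1, q2, q3 = statistics.quantiles(los_days, n=4)  # quartile cut points
print(f"median {q2}; IQR: {q1}, {q3}")  # median 3.5; IQR: 2.25, 5.75
```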

Authors can also report the absolute range of the recorded scores or observed values for a variable, especially if it highlights an important aspect and/or provides additional insight.

CI AS MEASURE OF THE PRECISION OF A POINT ESTIMATE
Testing for statistical significance,^{29–31} calculating the observed treatment effect size (or the strength of the association between an exposure and an outcome),^{32} and generating a corresponding CI^{33,34} are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics.^{7} A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent CIs.^{35,36} The important concept of the CI will be introduced here and expanded on in a future statistical tutorial.

As noted above, a series of repeated, random samples of data collected from the same underlying population would be expected to generate different yet equally valid point estimates (eg, sample means x̄_{1}, x̄_{2}, x̄_{3}, etc) of the corresponding true population parameter (eg, the population mean, μ).^{13–15} Likewise, a series of clinical trials would each be expected to generate a different observed treatment effect (eg, the reduction in the frequency or incidence of postoperative myocardial infarction). Which of these sample point estimates is the most accurate (“correct”) estimate of the corresponding population parameter? One can never be certain, but this is where a CI comes into play.

A CI can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design.^{37,38} Specifically, a CI can be calculated for a single observed mean or proportion.^{39,40} It can also be calculated for an observed difference between 2 subgroup or subsample means or proportions, which is referred to in a clinical trial as the observed treatment difference or effect.^{41}

Generally speaking, in a clinical trial, the CI is the range of values within which the true treatment effect in the population likely resides.^{42,43} In an observational study, the CI is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides.^{44,45}

The CI is formally defined statistically as follows: “if the level of confidence is set at 95%, it means that if data collection and analysis could be replicated many times, the CI should include within it the correct value of the measure 95% of the time.”^{44} Stated another way, if one performed the data sampling process from the same population 100 separate times, generating 100 distinct sample point estimates and calculating 100 corresponding CIs, 95% of these intervals would be expected to contain the true (correct) value.^{15,43}
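This frequentist definition can be demonstrated by simulation. The sketch below, a simplification that uses hypothetical simulated data and the normal-approximation interval (mean ± 1.96 × SD/√n) rather than an exact t-based interval, repeats the sampling experiment 100 times and counts how many intervals capture the true mean:

```python
import random
import statistics

random.seed(1)
TRUE_MEAN, SD, N = 50, 10, 100  # hypothetical population parameters

covered = 0
for _ in range(100):  # 100 repeated sampling experiments
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.mean(sample)
    # normal-approximation 95% CI: mean +/- 1.96 x SD / sqrt(n)
    half_width = 1.96 * statistics.stdev(sample) / N ** 0.5
    if m - half_width <= TRUE_MEAN <= m + half_width:
        covered += 1

print(covered)  # roughly 95 of the 100 intervals contain the true mean
```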

The confidence interval is typically abbreviated as “CI.” A CI can be calculated for any degree of confidence. While the 95% CI is traditionally and most commonly reported, a 90% CI or a 99% CI may be more relevant and instead reported. No matter its level (0%–100%) and its range or width, a CI has a lower boundary (limit) and an upper boundary (limit).^{39,44}

Counterintuitively, for the same data set and its given sample size, the lower and upper boundaries (limits) of a higher-confidence CI (eg, a 99% CI versus a 95% CI) are farther apart: greater confidence comes at the price of a wider, less precise interval.^{7,43,46}

There is an inverse relationship between the width of the CI and the sample size: the larger the sample size, the narrower and more precise the CI.^{7,43,46} Thus, a prime motivator for conducting a systematic review and then combining (“pooling”) the identified individual study data via a meta-analysis is to create a larger effective sample size and thereby generate a more precise pooled estimate of the treatment effect.^{42}
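The shrinking width is easy to see by simulation. This sketch, again with hypothetical simulated data and the normal-approximation interval, compares CI widths for a 100-fold difference in sample size:

```python
import random
import statistics

random.seed(3)

def ci_width(n, mean=50, sd=10):
    """Width of a normal-approximation 95% CI for a sample mean of size n."""
    sample = [random.gauss(mean, sd) for _ in range(n)]
    return 2 * 1.96 * statistics.stdev(sample) / n ** 0.5

w_small, w_large = ci_width(25), ci_width(2500)
print(round(w_small, 2), round(w_large, 2))
```

Because the width scales with 1/√n, a 100-fold larger sample yields a roughly 10-fold narrower interval.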

GRAPHICALLY DISPLAYING DIFFERENT TYPES OF DATA
There are many ways to graphically display or illustrate different types of data; indeed, “a picture is really worth a thousand words.”^{47} While there is often latitude in the choice of format, ultimately, the simplest and most comprehensible format is preferred.

Common examples of graphics include a stem-and-leaf plot, histogram (Figure 2), bar chart (Figure 3), pie graph (Figure 4), line graph (Figure 5), scatter plot (Figure 6), and box-and-whisker plot (Figure 7).^{22,47,53,54} Of note, with few exceptions, pie charts are generally suboptimal because the same information can be better presented visually with a bar chart.^{56}

Figure 2.: Example of a histogram. Histogram of timing of postoperative stroke. Reproduced with permission from Hsieh et al.^{48}

Figure 3.: Example of a bar chart. The distribution of subjects by year of age. Reproduced with permission from Spielberg et al.^{49}

Figure 4.: Example of a pie graph. The most common outpatient pediatric pain locations or diagnoses. CRPS indicates complex regional pain syndrome. Data were derived from Vetter.^{50}

Figure 5.: Example of a line graph. The figure depicts the model estimated mean (with error bars) changes from baseline for both study groups at each time point. NRS indicates numerical rating scale. Reproduced with permission from Stocki et al.^{51}

Figure 6.: Example of a scatter plot. Relationship between the distance between the vocal cords and carina (length VC–C) and corrected age in 46 children (r = 0.84). VC–C indicates vocal cords–carina. Reproduced with permission from Sirisopana et al.^{52}

Figure 7.: Example of a box-and-whisker plot (box plot). Box plots of pain scores on movement assessed using an 11-point numerical rating scale (0 = no pain and 10 = the worst possible pain) over time after cesarean delivery (transversus abdominis plane block). TAP indicates transversus abdominis plane. Reproduced with permission from Tawfik et al.^{55}

The box-and-whisker plot is specifically intended to graphically display nonnormally (non-Gaussian) distributed continuous (interval or ratio) data.^{19,24} The box-and-whisker plot displays the following: (a) the median (Mdn) value (solid line within the box); (b) the IQR (2 ends of the box); (c) either the 5th–95th percentile values or the smallest and largest observations (2 lines with caps extending from the box, the so-called whiskers); and (d) any extreme outliers located beyond the so-called lower fence (Q_{1} − 1.5 × IQR) or upper fence (Q_{3} + 1.5 × IQR).^{22,24}
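The fence rule itself is simple arithmetic on the quartiles. This sketch, with hypothetical recorded values and Python's default ("exclusive") quantile method, flags any values beyond the fences as outliers:

```python
import statistics

# Hypothetical recovery times (hours) for 11 patients
times = [2, 3, 3, 4, 4, 5, 5, 5, 6, 7, 12]
q1, _, q3 = statistics.quantiles(times, n=4)
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = [x for x in times if x < lower_fence or x > upper_fence]
print(q1, q3, outliers)  # 3.0 6.0 [12]
```

Here Q_{1} = 3 and Q_{3} = 6, so the fences lie at −1.5 and 10.5, and only the value 12 is flagged as an outlier.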

The reader is referred to the most current version of the publication and style manuals of the American Medical Association^{12} and the American Psychological Association^{11,57,58} for in-depth discussions of how to display or illustrate research data in tables or figures.

CONCLUSIONS
Who? What? Why? When? Where? How? How Much? These are 7 questions kids learn in grade school or when first learning a foreign language. Like other baby boomers, no matter how rusty my trusty French may be, I take comfort that I can still remember these 7 basic words: Qui, quoi, pourquoi, quand, où, comment, and combien. They cover the basics and help you understand most situations and contexts.^{59} And so it is with descriptive statistics!

DISCLOSURES
Name: Thomas R. Vetter, MD, MPH.

Contribution: This author wrote and revised the manuscript.

This manuscript was handled by: Jean-Francois Pittet, MD.

REFERENCES
1. Salkind NJ. Statistics or sadistics? It’s up to you. In: Statistics for People Who (Think They) Hate Statistics. 2016:6th ed. Thousand Oaks, CA: Sage Publications, 5–18.

2. Urdan TC. Introduction to social science research principles and terminology. Statistics in Plain English. 2017:4th ed. New York, NY: Routledge, Taylor & Francis Group, 1–12.

3. Mendenhall W, Beaver RJ. An invitation to statistics. Introduction to Probability and Statistics. 2013:Boston, MA: CL-Wadsworth, 1–6.

4. Grimes DA, Schulz KF. Descriptive studies: what they can and cannot do. Lancet. 2002;359:145–149.

5. Motulsky H. From sample to population. In: Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 22–28.

6. Salkind NJ. Significantly significant. In: Statistics for People Who (Think They) Hate Statistics. 2016:6th ed. Thousand Oaks, CA: Sage Publications, 177–196.

7. Urdan TC. Statistical significance, effect size, and confidence intervals. Statistics in Plain English. 2017:4th ed. New York, NY: Routledge, Taylor & Francis Group, 73–91.

8. Motulsky H. Choosing the sample size. In: Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 216–229.

9. Schulz KF, Grimes DA. Sample size calculations in randomised trials: mandatory and mystical. Lancet. 2005;365:1348–1353.

10. Pandis N, Polychronopoulou A, Eliades T. Sample size estimation: an overview with applications to orthodontic clinical trial designs. Am J Orthod Dentofacial Orthop. 2011;140:e141–e146.

11. American Psychological Association. Publication Manual of the American Psychological Association. 2009.6th ed. Washington, DC: American Psychological Association.

12. Iverson C; American Medical Association. American Medical Association Manual of Style: A Guide for Authors and Editors. 2007.10th ed. New York, NY; Oxford: Oxford University Press.

13. Brown S. Estimating population parameters. Stats Without Tears. 2017. Available at: https://brownmath.com/swt/chap09.htm. Accessed July 7, 2017.

14. Brown S. Statistics! Stats Without Tears. 2017. Available at: https://brownmath.com/swt/chap01.htm. Accessed July 7, 2017.

15. Mendenhall W, Beaver RJ. Large sample estimation. Introduction to Probability and Statistics. 2013:Boston, MA: CL-Wadsworth, 281–323.

16. Field A. Why is my evil lecturer forcing me to learn statistics? Discovering Statistics Using IBM SPSS Statistics: And Sex and Drugs and Rock ‘n’ Roll. 2013:Los Angeles, CA: Sage, 1–39.

17. Campbell MJ, Swinscow TDV. Data display and summary. Statistics at Square One. 2009:Chichester, UK; Hoboken, NJ: Wiley-Blackwell/BMJ Books, 1–13.

18. Hulley SB, Newman TB, Cummings SR. Planning the measurements: precision, accuracy, and validity. In: Hulley SB, Cummings SR, Browner WE, Grady DG, Newman TB, eds. Designing Clinical Research. 2013:4th ed. Philadelphia, PA: Wolters Kluwer Health/Lippincott Williams & Wilkins, 32–42.

19. Vetter TR. Fundamentals of research data and variables: the devil is in the details. Anesth Analg. 2017;125:1375–1380.

20. Brown S. Numbers about numbers. Stats Without Tears. 2017. Available at: https://brownmath.com/swt/chap03.htm. Accessed July 7, 2017.

21. Salkind NJ. Means to an end: computing and understanding averages. Statistics for People Who (Think They) Hate Statistics. 2016:6th ed. Thousand Oaks, CA: Sage Publications, 177–196.

22. Mendenhall W, Beaver RJ. Describing data with numerical measures. Introduction to Probability and Statistics. 2013:Boston, MA: CL-Wadsworth, 50–93.

23. Urdan TC. Measures of central tendency. Statistics in Plain English. 2017:4th ed. New York, NY: Routledge, Taylor & Francis Group, 13–20.

24. Motulsky H. Graphing continuous data. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 61–71.

25. Urdan TC. Measures of variability. Statistics in Plain English. 2017:4th ed. New York, NY: Routledge, Taylor & Francis Group, 21–32.

26. Salkind NJ. Vive la différence: understanding variability. In: Statistics for People Who (Think They) Hate Statistics. 2016:6th ed. Thousand Oaks, CA: Sage Publications, 43–57.

27. Motulsky H. Quantifying scatter. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 77–84.

28. Motulsky H. The Gaussian distribution. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 85–89.

29. Woolson RF, Kleinman JC. Perspectives on statistical significance testing. Annu Rev Public Health. 1989;10:423–440.

30. Hayat MJ. Understanding statistical significance. Nurs Res. 2010;59:219–223.

31. Pocock SJ, Ware JH. Translating statistical findings into plain English. Lancet. 2009;373:1926–1928.

32. McGough JJ, Faraone SV. Estimating the size of treatment effects: moving beyond p values. Psychiatry (Edgmont). 2009;6:21–29.

33. Fethney J. Statistical and clinical significance, and how to use confidence intervals to help interpret both. Aust Crit Care. 2010;23:93–97.

34. Akobeng AK. Confidence intervals and p-values in clinical decision making. Acta Paediatr. 2008;97:1004–1007.

35. Altman D. Estimating with confidence. In: Gardner M, Altman D, eds. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. 2000:2nd ed. London: BMJ Books, 3–5.

36. Altman D. Confidence intervals in practice. In: Altman D, Machin D, Bryant T, Gardner M, eds. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. 2000:2nd ed. London: BMJ Books, 6–14.

37. Vetter TR. Magic mirror, on the wall-which is the right study design of them all?-part II. Anesth Analg. 2017;125:328–332.

38. Vetter TR. Magic mirror, on the wall-which is the right study design of them all?-part I. Anesth Analg. 2017;124:2068–2073.

39. Motulsky H. Confidence interval of a proportion. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 29–42.

40. Motulsky H. Confidence interval of a mean. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking. 2014:New York, NY: Oxford University Press, 95–103.

41. Sedgwick P. Randomised controlled trials: inferring significance of treatment effects based on confidence intervals. BMJ. 2014;349:g5196.

42. Montori VM, Kleinbart J, Newman TB, et al.; Evidence-Based Medicine Teaching Tips Working Group. Tips for learners of evidence-based medicine: 2. Measures of precision (confidence intervals). CMAJ. 2004;171:611–615.

43. Altman D. Confidence intervals rather than P values. In: Altman D, Machin D, Bryant T, Gardner M, eds. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. 2000:2nd ed. London: BMJ Books, 15–27.

44. Rothman KJ. Random error and the role of statistics. Epidemiology: An Introduction. 2012:2nd ed. Oxford: Oxford University Press, 148–163.

45. Rothman KJ, Greenland S, Lash TL. Precision and statistics in epidemiologic studies. In: Rothman KJ, Greenland S, Lash TL, eds. Modern Epidemiology. Revised 2012:3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins, 148–167.

46. Daly L. Confidence intervals and sample sizes. In: Altman D, Machin D, Bryant T, Gardner M, eds. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. 2000:2nd ed. London: BMJ Books, 139–152.

47. Salkind NJ. A picture is really worth a thousand words. In: Statistics for People Who (Think They) Hate Statistics. 2016:6th ed. Thousand Oaks, CA: Sage Publications, 59–79.

48. Hsieh JK, Dalton JE, Yang D, Farag ES, Sessler DI, Kurz AM. The association between mild intraoperative hypotension and stroke in general surgery patients. Anesth Analg. 2016;123:933–939.

49. Spielberg DR, Barrett JS, Hammer GB, et al. Predictors of arterial blood pressure control during deliberate hypotension with sodium nitroprusside in children. Anesth Analg. 2014;119:867–874.

50. Vetter TR. A clinical profile of a cohort of patients referred to an anesthesiology-based pediatric chronic pain medicine program. Anesth Analg. 2008;106:786–794.

51. Stocki D, Matot I, Einav S, Eventov-Friedman S, Ginosar Y, Weiniger CF. A randomized controlled trial of the efficacy and respiratory effects of patient-controlled intravenous remifentanil analgesia and patient-controlled epidural analgesia in laboring women. Anesth Analg. 2014;118:589–597.

52. Sirisopana M, Saint-Martin C, Wang NN, Manoukian J, Nguyen LH, Brown KA. Novel measurements of the length of the subglottic airway in infants and young children. Anesth Analg. 2013;117:462–470.

53. Mendenhall W, Beaver RJ. Describing data with graphs. Introduction to Probability and Statistics. 2013:Boston, MA: CL-Wadsworth, 7–49.

54. Mendenhall W, Beaver RJ. Describing bivariate data. Introduction to Probability and Statistics. 2013:Boston, MA: CL-Wadsworth, 94–122.

55. Tawfik MM, Mohamed YM, Elbadrawi RE, Abdelkhalek M, Mogahed MM, Ezz HM. Transversus abdominis plane block versus wound infiltration for analgesia after cesarean delivery: a randomized controlled trial. Anesth Analg. 2017;124:1291–1297.

56. Hickey W. The worst chart in the world. Business Insider: Markets. 2013. Available at: http://www.businessinsider.com/pie-charts-are-the-worst-2013-6. Accessed July 27, 2017.

57. Nicol AAM, Pexman PM. Displaying Your Findings: A Practical Guide for Creating Figures, Posters, and Presentations. 2013.6th ed. Washington, DC: American Psychological Association.

58. Nicol AAM, Pexman PM. Presenting Your Findings: A Practical Guide for Creating Tables. 2010. Washington, DC: American Psychological Association.

59. 7 key questions: who, what, why, when, where, how, how much? Consultants Mind: Thinking Through the Problem. 2013. Available at: http://www.consultantsmind.com/2013/03/07/7-key-questions-who-what-why-when-where-how-how-much/. Accessed July 7, 2017.