
Survey Research

Alderman, Amy K. M.D., M.P.H.; Salem, Barbara M.S.W., M.S.

Plastic & Reconstructive Surgery: October 2010 - Volume 126 - Issue 4 - pp 1381-1389
doi: 10.1097/PRS.0b013e3181ea44f9
Special Topics: EBM Special Topics/Outcomes Articles

Summary: Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences, and its data can be collected from a large, population-based sample. Specifically, in plastic surgery, survey research can provide patients and providers with accurate and reproducible information to assist with medical decision-making. When using survey methods in research, researchers should develop a conceptual model that explains the relationships of the independent and dependent variables. The items of the survey are of primary importance; collected data are only useful if they accurately measure the concepts of interest. In addition, administration of the survey must follow basic principles to ensure an adequate response rate and representation of the intended target sample. In this article, the authors review some general concepts important for successful survey research and discuss the many advantages this methodology has for obtaining a wealth of valuable information.

Ann Arbor, Mich.

From the Section of Plastic Surgery, Department of Surgery, The University of Michigan Medical Center; and Division of General Medicine, Department of Internal Medicine, University of Michigan.

Received for publication November 9, 2009; accepted January 15, 2010.

Disclosure: The authors have no conflicts of interest to declare.

Amy K. Alderman, M.D., M.P.H., Plastic Surgery; University of Michigan; 2130 Taubman Center; 1500 East Medical Center Drive; Ann Arbor, Mich. 48109-0340;

The basic concept of survey research involves capturing beliefs, attitudes, or outcomes that can be generalized to a population from which the sample was selected. The sample can include any population of interest, such as physicians, health care administrators, patients, and patients' significant others. The outcomes of interest in surgery can range from national epidemiological trends in surgical care to physicians' beliefs about the surgical management of a disease to patient-reported surgical outcomes. For example, suppose that you are interested in rheumatologists' attitudes toward hand surgery for rheumatoid arthritis. A random sample of several hundred rheumatologists in the United States could be mailed a survey that addresses the specific attitudes you want to study. The responses from the rheumatologists are then coded into a standardized form that can be recorded electronically in a quantitative manner. Responses are then subjected to an aggregated analysis to describe the attitudes of the rheumatologists toward hand surgery and determine correlations among different responses. Conclusions reached are then generalized to the entire U.S. population of rheumatologists.1–3

Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences. Successful outcomes in plastic surgery are often measured by improvement in a patient's quality of life rather than by mortality rates, which are used by other surgical areas. As such, survey research is a valuable tool for plastic surgeons to consider when interested in understanding the impact of surgery on patient-reported outcomes. With the large variety of surgical techniques available in plastic surgery, choosing the best procedure for a particular patient can be a daunting task, even for experienced surgeons and highly educated patients. Patient-reported outcome measures collected through survey research can provide patients and physicians with reliable and useful information to assist in this decision-making process. In particular, patient satisfaction and health-related quality of life data offer patients a means of evaluating and comparing options based on previous patients' experiences and perspectives. Furthermore, survey research is a reliable approach to obtaining data on what is effective, not just efficacious, by examining patient-reported outcomes in a large sample of patients treated by many different surgeons in different treatment settings.

Our purpose is to describe the following regarding survey research: (1) the clinical questions that are suitable for survey research, (2) key elements for conducting a high-quality survey, (3) advantages and disadvantages of this methodology, (4) how to report survey results, and (5) how to interpret and derive evidence from survey research.

Clinical Questions Suited for Survey Research

Clinical questions best suited for survey research include questions that are: (1) descriptive, (2) explanatory, or (3) explorative.3 One use for survey research is to make descriptive assertions about a population. In this case, the clinical question is not directed at why an observed distribution exists but rather what the distribution is.3 For example, you may be interested in knowing how many mastectomy-treated breast cancer patients receive breast reconstruction. In this example, one is not interested in the patient- and system-level factors influencing the use of postmastectomy breast reconstruction; rather, the research question is limited to describing epidemiological trends in receipt of surgical treatment.

Survey research can also be used to make explanatory assertions about a population. This research design almost always requires the use of multivariate analysis, which is the simultaneous examination of two or more variables.3 An example would be if you were interested in knowing why the use of postmastectomy breast reconstruction across the United States was low with large geographical variations. A survey could be designed to study general surgeons' referral patterns for postmastectomy breast reconstruction. A multivariate analysis could be performed that looked at the association between surgeon and treatment facility characteristics and referral practices for reconstruction.4
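The kind of association described above would ordinarily be estimated with multivariable regression; as a simplified, bivariate illustration of the same idea, the sketch below computes an odds ratio from a 2 × 2 table. The practice-setting variable and all counts are invented for demonstration and are not data from the cited study.

```python
# Hypothetical illustration of an explanatory (association) analysis:
# does cancer-center affiliation relate to whether a general surgeon
# routinely refers patients for breast reconstruction?
# All counts below are invented for demonstration.

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 table of counts."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Invented counts: surgeons at cancer-center-affiliated facilities
# (40 refer routinely, 10 do not) vs. other facilities (30 vs. 30).
or_estimate = odds_ratio(40, 10, 30, 30)
print(round(or_estimate, 1))  # 4.0
```

In a real analysis, a multivariable logistic regression would be used instead, so that the association can be adjusted for other surgeon and facility characteristics simultaneously.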

Survey research can also be explorative, for which the study question is less defined than in the previous examples. In this case, it can be used to initiate an inquiry into a particular area. A loosely structured survey can be designed in the initial stages of the development of a research project, and the results can be used to inform and substantially revise the research design and survey instrument.3 An example would be if you were interested in better understanding outside factors that influence a patient's decision regarding postmastectomy breast reconstruction. A loosely structured survey could be designed that asked about a variety of influences on the surgical decision, including health care providers, significant others, friends, and the news media. The survey could be piloted on a small sample and the feedback used to refine the measures for the final survey instrument.

Key Elements of a High-Quality Survey

1. Conceptual Model

The most important step in survey research is the development of the survey, which should be based on a conceptual framework that explains the relationships of the independent and dependent variables. The conceptual model is a working strategy containing the major concepts and their relationships, and it frames the research questions and hypotheses.5 Researchers can modify preexisting conceptual models or develop their own. An example would be a researcher who is interested in studying the patient decision-making process for postmastectomy breast reconstruction. The first step is to design a conceptual model to help explain the decision-making process. In this case, the researcher could use a conceptual model from social science research called the Transtheoretical Model, Stages of Change Construct (Fig. 1).6 This model proposes that patients move through a series of steps or stages when making decisions and taking actions about breast reconstruction: precontemplation (lack of knowledge of or desire for breast reconstruction); contemplation (thinking about reconstruction); preparation (action-oriented activities, such as presurgical consultation); and action (receipt of reconstruction). Each box in the model represents a concept that must be measured in the survey, such as (1) knowledge of and attitudes toward reconstruction; (2) external influences, such as family and friends; and (3) enabling factors, such as health insurance. Some concepts, such as knowledge and attitudes toward breast reconstruction, do not have previously validated measures; in this case, ad hoc questions must be designed. When available, however, validated measures should be used, such as when assessing patients' decisional satisfaction,7 decisional regret,8 and health-related quality of life.9 Issues related to measure validation are discussed in the next section.


2. Survey Items

The questionnaire items are of primary importance in survey research. Collected data are only useful if they accurately measure the concepts of interest. In other words, a good question-and-answer process is one that produces answers that provide meaningful information about what the researcher is trying to describe.10 In addition, the measurement process must produce consistent results.10 When developing a survey research project, a researcher is confronted with a set of abstract concepts that are thought to help explain a clinical area of interest. These abstract concepts must be converted into questions in a survey instrument to collect empirical data that can help the researcher better understand the clinical question.3 Accurately designing an instrument is a rigorous process that involves several stages, such as item generation, item reduction, pretesting, field management, and attribute testing.11–13 For more information on survey development, we recommend reading Health Measurement Scales: A Practical Guide to Their Development and Use by David Streiner and Geoff Norman (Oxford Medical Publications, 2003). This thorough process of instrument development helps ensure measurement quality by testing the reliability (i.e., consistent responses) and validity (i.e., correspondence to the "true value") of the items.3,10 When possible, it is best to use measures that have been previously validated, with proven reliability, validity, and responsiveness to change.14–17 Examples of validated patient-reported measures include the Michigan Hand Questionnaire for hand function18 and the BREAST-Q for quality of life and patient satisfaction after breast surgery19 (see Table 1 for additional examples20–33).

It is important to realize that these instruments lose validity if they are changed in any way. If modifications are deemed necessary, the instrument must undergo repeat validation testing. Investigators should resist the temptation to use in-house, nonvalidated instruments for assessing patient-reported outcomes. This practice decreases the research quality and eliminates the ability to compare the results with other studies.11 Sometimes the researcher is interested in understanding concepts that do not have previously validated measures. In these cases, ad hoc questions can be designed by the researcher but must be carefully reviewed by the research team along with pilot testing to ensure that the questions accurately measure the concept of interest.
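As one concrete example of the reliability testing discussed above, internal consistency across a scale's items is often summarized with Cronbach's alpha. The sketch below computes it from a handful of invented item responses (rows are respondents, columns are items on a 1-to-5 scale); the data are illustrative only.

```python
# Cronbach's alpha: a common internal-consistency reliability statistic.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Invented responses: four respondents answering three related items.
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
]
print(round(cronbach_alpha(responses), 2))  # 0.99
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, although the threshold depends on the intended use of the scale.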


3. Question Construction

Several guidelines can be helpful when developing questions in cases where previously validated instruments are not available. For a thorough discussion of effective survey questions, we recommend reading Improving Survey Questions: Design and Evaluation by Floyd J. Fowler, Jr. (Sage Publications, 1995), and The Survey Kit, a series of books edited by Arlene Fink (Sage Publications, 1995; especially "How to Conduct Self-Administered and Mail Surveys" by Linda B. Bourque and Eve P. Fielder). We will address a few of the more salient points here. When possible, closed-ended questions are preferred because they provide greater uniformity of responses and are more easily analyzed. Open-ended questions can be very difficult to code for data entry and analysis.3 When using closed-ended questions, however, the responses must be both exhaustive and mutually exclusive.3 Double-barreled questions should also be avoided.3 Look for this situation when the word "and" is present in the question. An example would be: "A patient with stage 4 breast cancer should be treated with a mastectomy and should delay breast reconstruction." Survey items should be clear, short statements without bias. Negative items should also be avoided.3 Avoid questions that ask about more than one thing at a time. For example, rather than asking, "Did you get information about treatments such as radiation and chemotherapy?" ask two separate questions. Questions that ask respondents to report on other people's beliefs or experiences should also be avoided, as respondents do not tend to be accurate reporters for others. Researchers should also avoid questions that ask respondents to assign causality to a particular situation. Instead, the components to be studied should be asked about separately, and the researcher can then calculate whether there is an association between the various items. For example, rather than asking whether a respondent regrets having surgery because of complications (this is both double-barreled and causal), one could ask a series of questions about satisfaction with various aspects of the surgery and another series of questions about the occurrence of various problems that could have been associated with the surgery. Table 2 provides examples of poorly stated questions.

Normalizing statements can be an important tool for encouraging respondents to answer questions accurately rather than answering with perceived socially desirable responses. For example, in a physician survey about surgeon–patient communication, a series of questions about potential areas of conflict could be introduced with the following: "Involving your patients in treatment decisions can be difficult for many reasons. We want to know your opinions about the challenges to actively engaging your patients in treatment decision making." The appearance and language level of the survey are also important. Wording of the questions should be aimed at the lowest education level of the anticipated respondent population. The layout should be clean, directions should be easy to follow, and pages should not be too crowded, so that respondents are not overwhelmed and discouraged from participating.

A Likert scale is the psychometric scale most commonly used to measure response options in survey research. The scale measures a respondent's level of agreement with a statement.34 Scales can vary in the number of response options provided. A five-point scale is most commonly used, although a four-point scale could be administered if the researchers want to force the respondent into a level of agreement or disagreement, as the middle neutral response is eliminated.35 Table 3 displays a variety of commonly used responses for the Likert scale.
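In practice, Likert responses are coded numerically before analysis. A minimal sketch using a conventional five-point mapping; the response labels and answers below are illustrative, not taken from any particular instrument.

```python
# Coding Likert responses for analysis: map text labels to integers,
# then summarize. Labels and answers are invented for illustration.

LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

answers = ["Agree", "Strongly agree", "Disagree", "Agree"]
scores = [LIKERT_5[a] for a in answers]
print(sum(scores) / len(scores))  # mean level of agreement: 3.75
```

A four-point version of the scale would simply drop the middle ("Neither agree nor disagree") entry, forcing respondents to one side or the other.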


4. Ordering Questions in the Survey

The order of questions in the survey can greatly influence responses. An example would be if the first question in a survey asked about physicians' attitudes toward reimbursement for postmastectomy breast reconstruction. Any subsequent questions, such as "What is the most important health policy issue in plastic surgery?" would be influenced by the first, financial question related to breast reconstruction. The safest way to guard against such potential bias is to be sensitive to the issue. Random ordering of questions creates a chaotic survey that respondents will not want to complete. If the researchers have substantial concerns about question-order bias, multiple versions of the survey can be developed with different ordering of items and piloted on a sample to determine the effects of item order.3 It is also important to consider placing the most important items early in the questionnaire in case the respondent does not complete the entire survey.
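The multiple-version approach to question-order bias can be sketched as follows: generate several survey versions, each with a different item order, for piloting. The item texts below are placeholders.

```python
# Generate several survey versions with different item orders, so that
# order effects can be assessed during piloting. Items are placeholders.
import random

items = ["Item A ...", "Item B ...", "Item C ...", "Item D ..."]

def make_versions(items, n_versions, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    versions = []
    for _ in range(n_versions):
        order = items[:]                 # copy, so the master list is kept
        rng.shuffle(order)
        versions.append(order)
    return versions

for version in make_versions(items, 3):
    print(version)
```

Comparing response distributions across versions in the pilot indicates whether item order is materially influencing answers.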


5. Pilot Studies

Conducting pretests of the study administration, survey, and analysis is extremely important. Before survey administration, intensive individual interviews should be performed with the survey questions to evaluate the responder's understanding of the questions.10 For example, suppose a researcher is interested in understanding rheumatologists' management of rheumatoid hand disease, and previously validated measures are not available. The researcher designs questions aimed at understanding physicians' beliefs about medical versus surgical treatment. The researcher should then have individual rheumatologists take the survey and explain what they believe each question is asking. This process helps ensure the content validity of the survey items. A representative sample of the target population should be administered the final survey. Data collection, cleaning, coding, and analysis should be representative of the final research methods and analysis. The survey can then be revised based on information received about missing data, variances in responses, and internal validation, along with respondent feedback on question clarity and questionnaire flow and format.3 Pretesting does add time and cost to the research; however, valuable information will be gained through this process, and the final deliverables from the research will be greatly improved.


6. Survey Administration

Surveys can be administered in several ways, such as by mail, telephone interview, or electronic mail. Electronic methods have the advantages of lower costs and faster survey administration, and can be easier for the respondent when skip patterns are present because the questionnaire can be electronically formatted so that the respondent only sees the questions they are supposed to answer.36 Examples of email survey software include SurveyMonkey, SurveyTracker, QuestionPro, and SurveyShare. Many, however, are concerned about sample representativeness because of the exclusion of participants of lower socioeconomic status who might not have access to a computer.37 Response rates are often lower with electronic surveys compared with mailed surveys.38 This review article will focus on self-administered mailed surveys, which are one of the most common distribution methods. The basic method for self-administered mailed surveys is to send the respondent the questionnaire along with a letter of explanation and a return envelope with either business-reply postage or stamps.3 When surveying a group, it can be beneficial to privately ask members of the group whose opinion they respect most regarding the research topic; response rates may improve if that person explains in the cover letter why the responder's opinion is important. The outgoing survey should have an identification number that links the questionnaire to the respondent, which will allow for follow-up mailings to the nonresponders. Often an incentive, such as a small cash gift or gift card, is provided to encourage responses.39,40 Returns must be closely monitored, and follow-up mailings to nonresponders must proceed in a timely fashion. In general, three mailings (an original and two follow-ups) produce the best response rates, with 2 to 3 weeks between mailings.3,41 The follow-up mailings can be limited to a letter of encouragement for participation.
The most effective method, however, is to enclose a new copy of the survey with the incentive gift, along with follow-up letters and phone calls. Other approaches that have been shown to improve response rates include university sponsorship, short questionnaires, personalized letters, recorded delivery with stamped return envelopes, an increased number of contacts with the respondent, and an advance letter notifying the respondent that a survey will follow.3,39,42

Achieving a high response rate is an essential goal of survey research. The higher the response rate, the lower the chance of nonresponse bias and the greater the likelihood that the responses represent the target population. In general, a response rate of at least 50 percent is considered adequate for analysis and reporting, a rate of 60 percent is considered good, and 70 percent is considered very good.3 Response rates are generally calculated by dividing the number of completed surveys by the net sample size, where the net sample size is the initial sample size minus subjects who could not be administered a survey because of death, bad addresses, and so on.3
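The response-rate arithmetic described above can be expressed directly; the counts below are invented for illustration.

```python
# Response rate = completed surveys / net sample size,
# where net sample size = initial sample - ineligible subjects
# (deaths, bad addresses, and so on). Counts are invented.

def response_rate(completed, initial_sample, ineligible):
    net = initial_sample - ineligible
    return completed / net

# 320 completed surveys from an initial sample of 500, of whom 20
# could not be reached (net sample of 480):
print(round(response_rate(320, 500, 20), 2))  # 0.67
```

By the benchmarks above, a rate of 67 percent would be considered good, though still short of very good.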


7. Importance of a Statistician

Consider consulting with a statistician early in the process of questionnaire development and study design. Statisticians can help assess the reliability of ad hoc questions from pilot questionnaires. They can assist with sample size calculations to ensure that you have enough statistical power to analyze the data appropriately. It is also important to discuss how the measures will be analyzed and reported before survey administration so that adjustments can be made as appropriate in questionnaire response categories.
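As one example of the sample size calculations a statistician can help with, the standard normal-approximation formula for estimating a single proportion within a given margin of error is sketched below; the target values are illustrative, and real studies often also adjust for expected nonresponse and design effects.

```python
# Sample size to estimate a proportion p within a margin of error,
# using the normal approximation: n = z^2 * p * (1 - p) / margin^2.
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Required n, rounded up; z = 1.96 for 95% confidence."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Expecting roughly 50% agreement, to be estimated within +/-5
# percentage points at 95% confidence:
print(sample_size_for_proportion(0.5, 0.05))  # 385
```

Using p = 0.5 is the conservative choice, since it maximizes p(1 - p) and therefore the required sample size.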

Advantages and Disadvantages of Survey Research

Survey research has many advantages. A vast amount of desirable and useful information can be collected from large population-based samples. The information can be used to describe characteristics of a large population, which few other research methodologies can provide. For example, survey research can allow for the examination of surgical outcomes across multiple providers and multiple health systems. This population-based approach provides useful information on what treatment is effective in the general population under "real world" circumstances rather than what is efficacious under ideal treatment conditions, such as those in a clinical trial.43 In addition, a large sample size increases the statistical power of the study and allows for the analysis of multiple covariates. Surveys can also be administered from remote locations. Use of standardized or previously validated measures will ensure precise and reliable measurement and allow for comparisons with other studies.

Survey research has some disadvantages. This method can be costly, depending on the incentive gift and postage. Previously validated instruments may also be too general for the research questions of interest; for example, the Short Form Health Survey measures general health states that are not specific enough for most surgical outcomes. The accuracy of responses can also be an issue in survey research. Although self-reporting is consistently more accurate than proxy-reporting,10 the accuracy of self-reporting can be affected by the time to recall an event, inadequate knowledge regarding a topic, and the respondent's willingness to report on a given topic.10 For example, recall is more likely to be accurate the more recent the event and the greater the impact of the event.10,44 When survey instruments obtain information subject to recall bias, statistical methods should be employed to help limit this bias. These methods include controlling for the time between the event and the survey completion and controlling for factors that may affect an individual's ability to recall information, such as age. Medical diagnosis is a good example of inadequate knowledge, for which several studies have shown a mismatch between the medical conditions patients report and the conditions recorded in the medical record.10,45 Lastly, but most importantly, the validity of the study results is highly dependent on the response rate, as a poor response rate can result in a nonresponse bias.

Reporting Survey Results

It is beyond the scope of this article to address in detail the process for data analysis. How to Report Statistics in Medicine by Thomas Lang and Michelle Secic (American College of Physicians, Philadelphia, 1997) provides a comprehensive summary of how to report statistical information for clinical research. In general, the analytic portion of the manuscript should provide: (1) general descriptive data of the study sample, (2) bivariate comparisons between the covariates and the dependent variable of interest, and (3) regression analyses that show the independent associations of the covariates with the dependent variable while controlling for other important clinical and demographic characteristics that may confound the results.

The methods section of the manuscript should also include detailed information that is specific to survey research, such as: (1) the survey instrument; (2) sampling technique, inclusion and exclusion criteria, and power analysis when appropriate; (3) survey administration, including incentive gifts and number of contacts; (4) response rate along with descriptive analyses between responders and nonresponders; (5) analytic coding for variables (e.g., continuous, categorical); and (6) a well-described analytic process that relates the variables of interest to the study's hypotheses.

Interpreting and Deriving Evidence from Survey Research

One must critically appraise the study's sampling techniques, respondent population, and reporting methods when evaluating the validity of the results. How representative is the study sample of the population of interest? Would the nonresponders' answers to the survey vary significantly from the answers of those who completed the questionnaire? Did the study subjects have adequate knowledge to accurately answer the questions? Was there so much time between the event in question and completion of the survey that the study subjects' responses may be inaccurate? It is also important to consider whether the instruments used were previously validated or ad hoc questions. In the case of ad hoc questions, is there adequate information about how the questions were tested for validity and reliability in the study population of interest? Lastly, it is important that the results address the study's original hypotheses and that the analytic process adequately controlled for possible confounding factors through regression techniques.

Conclusions

Survey research is a unique methodology that can provide valuable insight into individuals' perspectives and experiences, and its data can be collected from large, population-based samples. The information from survey research in plastic surgery can provide patients and providers with accurate and reproducible information to assist with medical decision-making.

Acknowledgment

This work was supported by a career development award from the Robert Wood Johnson Foundation to Dr. Alderman.

References

1. Alderman AK, Ubel P, Kim H, Fox D, Chung K. Surgical management of the rheumatoid hand: Consensus and controversy among rheumatologists and hand surgeons. J Rheumatol. 2003;30:1464–1472.
2. Alderman AK, Chung K, Kim H, Fox D, Ubel P. Effectiveness of rheumatoid hand surgery: Contrasting perceptions of hand surgeons and rheumatologists. J Hand Surg (Am.) 2003;28:3–11.
3. Babbie E. Survey Research Methods. 2nd ed. Belmont, Calif.: Wadsworth; 1998.
4. Alderman AK, Hawley ST, Waljee J, et al. Correlates of referral practices of general surgeons to plastic surgeons for mastectomy reconstruction. Cancer 2007;109:1715–1720.
5. Riegelman R. Studying a Study and Testing a Test. Philadelphia: Lippincott Williams & Wilkins; 2000.
6. Glanz K, Lewis FM, Rimer BK. Health Behavior and Health Education. San Francisco: Jossey-Bass; 1997.
7. Holmes-Rovner M, Kroll J, Schmitt N, et al. Patient satisfaction with health care decisions: The satisfaction with decision scale. Med Decis Making 1996;16:58–64.
8. Brehaut JC, O'Connor AM, Wood TJ, et al. Validation of a decision regret scale. Med Decis Making 2003;23:281–292.
9. Sprangers MA, Groenvold M, Arraras JI, et al. The European Organization for Research and Treatment of Cancer breast cancer-specific quality-of-life questionnaire module: First results from a three-country field study. J Clin Oncol. 1996;14:2756–2768.
10. Fowler FJ. Improving Survey Questions: Design and Evaluation. London: Sage; 1995.
11. Bindra RR, Dias JJ, Heras-Palau C, et al. Assessing outcome after hand surgery: The current state. J Hand Surg (Br.) 2003;28:289–294.
12. Meadows KA. So you want to do research? 5: Questionnaire design. Br J Community Nurs. 2003;8:562–570.
13. Rattray J, Jones MC. Essential elements of questionnaire design and development. J Clin Nurs. 2007;16:234–243.
14. Dias JJ, Bhowal B, Wildin CJ, Thompson JR. Assessing the outcome of disorders of the hand: Is the patient evaluation measure reliable, valid, responsive and without bias? J Bone Joint Surg Br. 2001;83:235–240.
15. Angst F, Goldhahn J, Pap G, et al. Cross-cultural adaptation, reliability and validity of the German Shoulder Pain and Disability Index (SPADI). Rheumatology (Oxford) 2007;46:87–92.
16. Bot SD, Terwee CB, van der Windt DA, et al. Clinimetric evaluation of shoulder disability questionnaires: A systematic review of the literature. Ann Rheum Dis. 2004;63:335–341.
17. Cohen J. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, N.J.: Lawrence Erlbaum Associates; 1983.
18. Chung KC, Hamill JB, Walters MR, Hayward RA. The Michigan Hand Outcomes Questionnaire (MHQ): Assessment of responsiveness to clinical change. Ann Plast Surg. 1999;42:619–622.
19. Pusic AL, Reavey PL, Klassen AF, et al. Measuring patient outcomes in breast augmentation: Introducing the BREAST-Q Augmentation module. Clin Plast Surg. 2009;36:23–32, v.
20. Bellamy N, Campbell J, Haraoui B, et al. Clinimetric properties of the AUSCAN Osteoarthritis Hand Index: An evaluation of reliability, validity and responsiveness. Osteoarthritis Cartilage 2002;10:863–869.
21. Levine DW, Simmons BP, Koris MJ, et al. A self-administered questionnaire for the assessment of severity of symptoms and functional status in carpal tunnel syndrome. J Bone Joint Surg Am. 1993;75:1585–1592.
22. Changulani M, Okonkwo U, Keswani T, Kalairajah Y. Outcome evaluation measures for wrist and hand: Which one to choose? Int Orthop. 2008;32:1–6.
23. Chung KC, Hamill JB, Walters MR, Hayward RA. The Michigan Hand Outcomes Questionnaire (MHQ): Assessment of responsiveness to clinical change. Ann Plast Surg. 1999;42:619–622.
24. Brady MJ, Cella DF, Mo F, et al. Reliability and validity of the Functional Assessment of Cancer Therapy-Breast quality-of-life instrument. J Clin Oncol. 1997;15:974–986.
25. Niezgoda HE, Pater JL. A validation study of the domains of the core EORTC quality of life questionnaire. Qual Life Res. 1993;2:319–325.
26. Hjermstad MJ, Fossa SD, Bjordal K, Kaasa S. Test/retest study of the European Organization for Research and Treatment of Cancer Core Quality-of-Life Questionnaire. J Clin Oncol. 1995;13:1249–1254.
27. Hormes JM, Lytle LA, Gross CR, Ahmed RL, Troxel AB, Schmitz KH. The Body Image and Relationships Scale: Development and validation of a measure of body image in female breast cancer survivors. J Clin Oncol. 2008;26:1269–1274.
28. Lasry J. Effect of Cancer on Quality of Life. Boca Raton, Fla.: CRC Press; 1991.
29. Oster C, Willebrand M, Dyster-Aas J, et al. Validation of the EQ-5D questionnaire in burn injured adults. Burns 2009;35:723–732.
30. Scheier MF, Carver CS. Optimism, coping, and health: Assessment and implications of generalized outcome expectancies. Health Psychol. 1985;4:219–247.
31. Svelnikas K. Optimism and Pessimism as Regulators of Self Motivation [unpublished bachelor's thesis]. Tartu, Estonia: University of Tartu; 1998.
32. Carver CS. You want to measure coping but your protocol's too long: Consider the brief COPE. Int J Behav Med. 1997;4:92–100.
33. Beck AT, Weissman A, Lester D, Trexler L. The measurement of pessimism: The hopelessness scale. J Consult Clin Psychol. 1974;42:861–865.
34. Likert R. A technique for the measurement of attitudes. Arch Psychol. 1932;140:1–55.
35. Dawes J. Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point, and 10-point scales. Int J Market Res. 2008;50:61–77.
36. Bachmann D, Elfrink J. Tracking the progress of email versus snail-mail. Market Res. 1996;8:31–35.
37. Dillman D. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons; 2000.
38. Andreson S, Gansneder BM. Using electronic mail surveys and computer monitored data for studying computer mediated communication systems. Soc Sci Comput Rev. 1995;13:33–46.
39. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: Systematic review. BMJ. 2002;324:1183.
40. Halpern SD, Ubel PA, Berlin JA, Asch DA. Randomized trial of 5 dollars versus 10 dollars monetary incentives, envelope size, and candy to increase physician response rates to mailed questionnaires. Med Care 2002;40:834–839.
41. Hoddinott S, Bass MJ. The Dillman Total Design Survey Method: A Sure-Fire Way to Get High Survey Return Rates. Can Fam Physician 1986;32:2366–2368.
42. Fox R, Crask MR, Kim J. Mail survey response rates. Public Opin Q. 1988;52:467–491.
43. Wennberg DE, Lucas FL, Birkmeyer JD, et al. Variation in carotid endarterectomy mortality in the Medicare population: Trial hospitals, volume, and patient characteristics. JAMA. 1998;279:1278–1281.
44. Cannell C, Marquis K, Laurent A. A Summary of Studies. Vol. 69. Washington, D.C.: Government Printing Office; 1977.
45. Cannell C, Fisher G, Bakker T. Reporting of Hospitalization in the Health Interview Survey. Vol. 6. Washington, D.C.: Government Printing Office; 1965.
©2010 American Society of Plastic Surgeons