Everything you ever wanted to know about conducting surveys

Danhauer, Jeffrey L.

doi: 10.1097/01.HJ.0000387925.52357.70
Page 10

Surveys are a central fact of life in audiology and in hearing aid dispensing. Practitioners use them in many ways, including to evaluate their practice, measure fitting outcomes and patient satisfaction, and help them in counseling. Jeff Danhauer has been designing, conducting, and writing about surveys for 30 years, and this month he shares his expertise with HJ readers.

People have been conducting surveys for a long time. Historians seem to agree that the Babylonians were the first society to have taken a population census, around 3800 BC, which collected data on the number of citizens, as well as livestock and other goods. The ancient Roman census survey was considered the most accurate and, since it was conducted for tax purposes, this census contributed significantly to the wealth of the Empire.

Today, of course, there are surveys related to just about everything. Often the survey is used for marketing. Who hasn't heard of the Pepsi Challenge, which has been around since 1975? Surveys are so commonly used in advertising that most of you even remember the percentage of dentists who recommend Crest.

Surveys are valuable in the world of audiology and hearing aid dispensing, too. In some cases, they are used to evaluate or improve your practice. Other times they are important for direct patient care. Hearing aid fitting outcome surveys such as the COSI, APHAB, or IOI-HA can provide critical information for patient counseling and hearing aid adjustments. Moreover, I recall that a few years back, Sergei Kochkin informed us that the simple act of conducting a hearing aid satisfaction survey improves hearing aid satisfaction!

We also use surveys for research purposes. If you browse through published audiologic surveys over the past 30 or so years, one name that shows up more than any other is this month's guest author, Jeff Danhauer, PhD.

If you associate Dr. Danhauer's name with Santa Barbara, that's because he's been on the faculty at UCSB for the past 32 years, and is now professor and department chair. Jeff is also actively involved with his private practice, Hearing Consultants of California, where he works with his wife, Kim, and son, Tate.

You probably also know Jeff from his work in publishing. He was the editor for audiology for both College Hill Press and Singular Publishing Group, working with his friend, colleague, and mentor, the late Sadanand Singh. He, of course, also did a few books of his own.

So, if you are interested in surveys, or are thinking about doing one yourself, this excellent article is for you. Oh, and did I mention that a 2010 survey found that 72.6% of readers pick The Hearing Journal as their favorite audiology trade journal? I'm not sure if any Babylonians were included in the sample.

Gus Mueller

Page Ten Editor

1 You and your colleagues always seem to be associated with some type of audiology survey. What's the deal?

You know, 73.6% of audiologists ask me that very question! I just happen to be contributing to a world inundated with and almost controlled by surveys. Surveys help shape how we live our lives, whether they ask us our views on presidential candidates or our favorite brand of beer. From an early age, most of us can recall rating cars, ball players, and even members of the opposite sex on a scale of 1 to 10. You know you did; admit it! The use of surveys in audiology research and clinical practice is certainly not a novel concept either.

2 Could you give me some examples of how professional surveys are used?

Sure. I'm betting that if you dispense hearing aids you probably completed one in the last year for that hearing aid manufacturer who rewarded you with a gift card to your favorite coffee shop. Remember? All you had to do was rate the manufacturer's products, services, and rank with the competition.

Also, consider that surveys and questionnaires (either standardized or specifically designed for a particular practice setting) continue to be extremely useful tools for clinicians. For example, most clinicians should be quite familiar with—and, hopefully, routinely employ—various pre-hearing aid evaluations such as the Hearing Handicap Inventory for the Elderly (HHIE)1 and Client Oriented Scale of Improvement (COSI),2 among many others. These instruments can be administered to patients in the waiting room and used as intake measures to help determine their levels of hearing difficulties and their perceptions of and willingness to take ownership of their hearing losses.

Other versions of these surveys can be useful in determining how patients' families and significant others view the hearing loss and its impact on them. These measures can be extremely useful in determining how hearing loss affects health-related quality of life (HRQoL) of patients and their families as we determined from our recent systematic review on this topic.3 Comparing results for these measures is useful for determining patient satisfaction with audiometric services and hearing devices.4 These are all surveys—more or less!

3 That's true. Some of our outcome measures are simply surveys. What other applications are you referring to?

Practitioners can administer other surveys after they provide services to gain a better understanding of how their patients view their facilities, front-office staff, audiologists, and services provided as quality-assurance measures. We recently designed a survey that helped us determine patients' perceptions and use of a battery club in our practice.5 It takes very little time to design surveys for specific practice needs. Clinicians should always remember to get their patients' e-mail addresses for easy and efficient correspondence, especially now that techno-savvy baby boomers and even some elderly patients are using the Internet to access assistance for their communication needs.

4 That's something I might try. Can you give me some ideas regarding patient survey formats?

Surveys can be constructed and conducted in several ways depending on their proposed intent. For example, at one time or another I've conducted a survey administered via paper and pencil, face-to-face interview, electronically, by regular mail, or on the telephone. You'll have to decide which is best for you.

Regardless of which you choose, all survey methods have advantages and disadvantages that users must recognize and consider prior to administering them. For researchers, surveys can be an invaluable source of information and, if designed properly, they permit the sampling of wide varieties of participants and their knowledge about, experiences with, and attitudes toward given topics in a short amount of time.

5 So, did your interest in conducting surveys start with research projects?

Yes, over the past 30 years, my students, colleagues, and I have done several. One of the first involved people's perceptions of and attitudes toward individuals wearing hearing aids (i.e., “The Hearing Aid Effect”).6,7 These surveys employed questionnaires using Likert scales on which participants provided their judgments (i.e., ratings) on 1–5, 1–7, or 1–9 intervals where lower numbers indicated one extreme of a variable while higher numbers represented the opposite extreme.

Here's a helpful tip for designing this type of response sheet. It is usually advisable to reverse or invert the rating scales so that participants don't get into a pattern of responding (e.g., always circling number 1 or 2) and to ensure that they are actually reading each item carefully.
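
The reverse-coding step above is easy to automate before scoring. Here is a minimal sketch (the five-item questionnaire, the 1–5 scale, and the reversed item positions are all hypothetical, not from any of the instruments named in this article): reverse-worded items are re-coded so that every item points in the same direction before a total score is computed.

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Reverse-code a Likert rating; on a 1-5 scale, 1 <-> 5 and 2 <-> 4."""
    if not scale_min <= response <= scale_max:
        raise ValueError("response is outside the scale")
    return scale_min + scale_max - response

# Hypothetical five-item questionnaire in which items 2 and 4 were
# worded in the opposite direction to discourage patterned responding.
raw = [5, 1, 4, 2, 5]
reversed_items = {1, 3}  # zero-based indices of the reverse-worded items
scored = [reverse_score(r) if i in reversed_items else r
          for i, r in enumerate(raw)]
total = sum(scored)  # all five items now point in the same direction
```

The same formula works for 1–7 or 1–9 intervals by changing `scale_max`.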

6 What work have you done with surveys in clinical audiology?

We've used surveys to evaluate the knowledge and experience of physicians, parents, and providers with universal newborn hearing screening programs (UNHSPs),8–12 hearing and balance screening in the elderly,13,14 and, most recently, use of complementary and alternative medicines or CAMs (e.g., xylitol) as a prophylaxis for acute otitis media in children.15 In some cases, we've used surveys for outreach to determine and meet the informational needs of various groups of stakeholders.

7 What exactly do you mean by “outreach”?

Well, it is surprising to discover just how little other healthcare providers, even primary-care physicians, know about hearing loss and its treatment. That often ends up limiting their patients' access to needed care. We have used informational outreach to provide physicians with tools to use in screening their patients for hearing loss and balance disorders. Coincidentally, we tell them about survey results!

8 Which patient survey tools do you recommend?

Those that we use fairly routinely include the HHIE,1 Dizziness Handicap Inventory (DHI),16 and Tinnitus Handicap Questionnaire (THQ).17 Because of their short length and the quality of the information they provide, audiologists can use these materials as screening tools to help evaluate patients' hearing and balance and to market their services to physicians.

9 Let's say I want to start using surveys for outreach. Any advice?

If you're going to design your own, you should always consider and be respectful of the intended participants' limited time, especially physicians and other professionals in busy practices. Therefore, questionnaires should usually be designed to be completed in less than 10 minutes. That increases the chances of obtaining a response. Survey topics must be of interest to potential respondents; otherwise, they are less likely to take them. Designers of surveys must keep in mind the old adage of GIGO: “garbage in, garbage out.”

10 How long should surveys be?

Surveys can be as brief as one or two questions or as exhaustive as several pages. We've all received lengthy telephone, e-mail, and postal surveys from our professional organizations on numerous topics over the years. However, one of the best examples of a short one-item survey for use with patients was created by Helena Solodar and Kadyn Williams, who simply asked patients to rate their hearing on a 1–10 scale.18 I believe that this one question (and perhaps an additional one asking patients to rate, on a 1–10 scale, their willingness to do something about their hearing difficulties soon, if they indeed have hearing loss) can be invaluable and a tremendous time saver for clinicians. So, surveys don't have to be long.

11 I've often thought about designing a survey. Where do I start?

Survey construction is critical. Simply jotting down questions and asking people to respond without using structured, well thought-out and tested questions will rarely supply useful information. One of the best references for designing questionnaires and surveys that I have found (and rely on frequently) is a chapter by Cummings and Hulley19 in a text by Hulley and colleagues. These authors provided concise steps that should be taken in constructing questionnaires and surveys, whether by beginners or seasoned “surveyologists.”

12 Well, Mr. Surveyologist, can you review a couple things I need to know?

The first piece of advice I'll give you is to look for an existing survey that might meet your present needs. Rarely, though, have we found surveys that met our needs exactly. So, it is not unusual to have to design surveys from scratch. If that's the case, keep in mind that there are advantages and disadvantages to all decisions made in the process.

In designing a survey, first decide on a particular vehicle or vehicles for administration (e.g., via mail or Internet) and select the response formats (e.g., open- or closed-ended questions) that are best suited to the purpose of your study and that are most likely to elicit meaningful results. It is also important to consider the level of measurement being used, i.e., will the data generated be nominal, ordinal, interval, or ratio? That determines the types of statistical testing you can apply to the data.

Moreover, always develop hypotheses prior to constructing your survey. Any statistical tests should be designed to answer a priori experimental questions, rather than going on "fishing expeditions" by running multiple t-tests in search of "pay dirt" after the data have been collected.
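
To see why unplanned multiple comparisons are risky, consider the familywise error rate: at an alpha of .05, every extra test adds another chance of a false positive. The following back-of-the-envelope sketch (an illustration, not anything computed in the article) shows the inflation for independent tests, along with the common Bonferroni safeguard of dividing alpha by the number of planned comparisons:

```python
def familywise_error(alpha, n_tests):
    """Probability of at least one false positive across independent tests."""
    return 1 - (1 - alpha) ** n_tests

def bonferroni_alpha(alpha, n_tests):
    """Per-test alpha that keeps the familywise rate near the nominal alpha."""
    return alpha / n_tests

# Twenty unplanned t-tests at alpha = .05 give roughly a 64% chance of
# at least one spurious "significant" result.
inflated = familywise_error(0.05, 20)   # about 0.64
corrected = bonferroni_alpha(0.05, 20)  # 0.0025 per test
```

Designing the comparisons before the data arrive keeps `n_tests` small and the correction painless.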

Unambiguous, easy-to-read and easy-to-answer questions with mutually exclusive, non-overlapping categories will usually produce the best results. Even in some of our early work, we found pilot testing and factor analysis of responses invaluable in selecting the final items for surveys.6 Avoid convoluted questions with hidden assumptions or question and response options that don't make sense. They will produce ambiguous and unreliable responses if respondents have difficulty figuring out the intent of the items.

Format the survey with neat, visually attractive questions and leave plenty of space between items and responses. This will improve your chances of obtaining accurate answers from respondents. Also, spacing must be consistent between response options to establish equivalence among all the points along a scale or continuum. Use clear and simple wording, and avoid loaded phrases and sensitive issues that might make respondents uncomfortable. Pilot testing a survey is very important. And, once the instrument is finalized, give participants a time frame in which to respond.

13 That was more than a “couple things,” but thanks. Do you usually get a good response rate to your surveys?

Usually, but not always. Return or response rate is a key element in how much faith can be placed in the results of any survey, and it can be influenced by several factors. Some that we have encountered include delivery problems due to inaccurate postal or e-mail addresses, recipients' failure to open their regular mail or e-mail, protective receptionists who intercept surveys and fail to direct them to their busy physician employers, and potential participants' lack of time or interest in the topic.

14 Any tricks for enhancing response rates?

Some researchers have attempted to increase return rates by prompting potential participants. Remember the last time you received a survey with a nice, crisp $1 bill attached? Did you respond? Was it an effective prompt?

Researchers use other types of prompts, including: informing potential participants that they will be receiving a survey in the near future, reminding them at intervals throughout the data-collection phase, and sending follow-up questionnaires to those who fail to respond. These prompts can be in the form of postcards, e-mail, phone calls, or personal contact. There are no guarantees that prompts will work, and in some cases they don't.

In our experience, even face-to-face promises by physicians to complete a survey have failed to garner their responses. Further, in our surveys of iPod users, we used iPod shuffles, iTunes gift cards, and cash to entice high school and college students to respond.20,21 Unfortunately, these enticements had little effect on response rates and significantly increased the costs of doing the studies. Therefore, we are not inclined to use them in future surveys.

Having a specified pool or panel database of experienced survey takers can certainly enhance responses, as evidenced by the 84% return rate reported for the most recent MarkeTrak VIII survey.22 Audiologists should be aware that response rates in the audiology and otology literature have ranged from the low 20% range to the high 90% range and almost everywhere in between.

15 I have to admit I have kept a buck or two without filling out the survey. Is there any specific group with typically poor response rates?

We have learned that physicians do not typically respond well to non-medical surveys. It is important, therefore, not to get discouraged and to remember that low response rates do not necessarily mean that results are invalid or unimportant, especially if any potential subjects' response biases are accounted for in the data collection and analysis process.23 Thus, even surveys with low response rates but good cross-sectional representation of the populations sampled can provide meaningful and useful information.

On the other hand, if a survey produces responses from a disproportionately female group of participants or users of only one type of hearing aid, then response biases will likely skew the results. Certainly, possible response bias must be considered in evaluating the findings of any survey, whether it's the bias of the researchers, the clinical practitioners or others interested in consuming and using the data, or prospective patients and family members who might be affected by the results.

16 How about interpreting the results? Do I have to be a wiz at stats?

It helps, but if you aren't a wiz, try to find a friend who is. Unfortunately, power analyses have not been routinely conducted or reported in the audiologic literature. Yet this information is critical to knowing whether researchers had sufficient numbers of subjects to find any significant differences between groups. Recently, power analyses have become more prevalent in audiologic studies, but they may not always be appropriate or easy to calculate for audiologist-designed surveys.

In this case, you'll probably want to conduct pilot testing to obtain estimations of variability (e.g., standard deviations) and clinically relevant differences (e.g., effect sizes) for power calculations. If you're using traditional standardized outcome measures, then you can often obtain these values from normative data. Nevertheless, consumers of survey findings should consider sample sizes when evaluating the results of studies, especially as clinicians might apply them to their own patients.
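
As a rough illustration of how those pilot estimates feed into a power calculation, here is a normal-approximation sketch (not the exact t-test formula, and not drawn from any study in this article) of the per-group sample size for comparing two group means:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_power=0.8416):
    """Approximate per-group n for a two-sample comparison of means.

    effect_size: the clinically relevant difference divided by the
    standard deviation, both typically estimated from pilot data.
    Defaults correspond to two-sided alpha = .05 and 80% power.
    """
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A medium effect (d = 0.5) needs about 63 participants per group;
# a small effect (d = 0.2) needs closer to 400 per group.
n_medium = n_per_group(0.5)
n_small = n_per_group(0.2)
```

The sensitivity to `effect_size` is the practical lesson: halving the detectable effect roughly quadruples the required sample, which is why pilot estimates of variability matter so much.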

When interpreting the results of any survey, there is one thing you must always keep in mind. The results can speak only to the opinions of those people who elected to respond for whatever reason. We cannot infer how those who did not respond to a survey would have responded if they had chosen to do so. The degree to which you can generalize a survey's results to a wider population depends on how closely the participants sampled represent the greater population. It is important that interpretations of survey results not go beyond the data.

17 Any other factors or precautions I need to know about?

Because results obtained from surveys are collected from human beings, it is important that all surveys be approved by the institutional review board(s) (IRBs) of the person(s) conducting them. This is especially critical when minors are used in a survey, and is usually important if responses are to be obtained via the Internet.

For example, in our recent survey of high school students and iPods,21 we had to obtain IRB approval and written consent from parents and children alike before being able to solicit their participation via e-mail. Most IRBs will "exempt" surveys that are conducted solely on adults, but application for approval must still be sought prior to conducting the study. Typically, journals will require a statement that IRB approval has been received before they publish a study.

Other factors you need to consider before conducting a survey include how the data will be collected and analyzed. A survey of patient perceptions of a small audiology practice might lend itself to a paper-and-pencil questionnaire that can be analyzed easily by hand.

We have also conducted large surveys using paper-and-pencil and/or e-mail, such as in the national questionnaires that we administered to pediatricians and otolaryngologists and college students. Both paper-and-pencil and regular mail questionnaires used in those surveys proved to be expensive and cumbersome to administer and analyze.

18 Isn't there some easier way to administer a large survey?

I was just getting to that. When large data sets are involved, we've found that Survey Monkey can be invaluable. This is an online, real-time data-collection and analysis system that can be purchased for a minimal fee to expedite conducting surveys. The data collection is done in real time so that researchers (or you in your clinic) can literally sit in front of a computer screen and watch the data come in. We have also found it useful to convert survey data collected by paper and pencil to Survey Monkey for easy analysis and data storage. You can even combine data obtained via paper and pencil with those collected via the Internet.

19 What surveys have you been working on lately?

Some of our recent publications and lines of research have been part of the evidence-based practice movement in audiology and involved systematic reviews of evidence available in the literature supporting certain treatment options for hearing loss.3,24 Even systematic reviews are a kind of survey in that they prescribe precise and rigorous protocols for how we search and assess (i.e., survey) the evidence on given topics (e.g., cochlear implants, bone-anchored hearing aid systems [Bahas], CAMs like xylitol for acute otitis media in children, and effects of hearing aids on users and their significant others' HRQoL). So, you see, we cannot really get away from surveys.

We've looked at such things as “The Hearing Aid Effect;” parents, professionals, physicians, and other stakeholders and universal newborn hearing screening programs; physicians and hearing and balance screening in the elderly; physicians and cochlear implants; patients and hearing aid satisfaction; and college and high school students and iPods, to name a few. We have used results from some of these surveys in physician-outreach programs, such as two of our recent articles in medical journals.25,26 Some of the surveys have been useful in monitoring universal and local newborn screening programs and have been included in outreach to hospitals, pediatricians, otolaryngologists, and other stakeholders.25–27 Still others have been used in outreach programs to parents, physicians, K-3 teachers, and school systems regarding prevention of acute otitis media.

You might keep an eye out for one of our upcoming articles in a special issue of the Journal of the American Academy of Audiology that surveyed AuD-degree holders and program chairs about AuDs in academic tenure-track positions.28 The surveys provided some interesting insights into AuDs and academic tenure-track positions. You will have to wait for the article for the results, because I don't want to give away the punch line here.

20 Any parting words about surveys?

Just one thing. I have to give credit to Carole Johnson, my long-time collaborator and friend, for co-authoring most of these studies and for helping you write these questions. And that's a good stopping place, as that's enough on surveys for now. Besides, I need to go cast my votes for the latest episodes of American Idol and Dancing with the Stars. Oh, no, they are really surveys too, aren't they?


I would like to acknowledge my loving wife, Kim, who typed the first draft of this manuscript on a laptop and provided great comments about it as I dictated while driving from a recent ski trip at Mammoth Mountain. If this seems disjointed, then it is her fault. I would like to thank Michelle J. McLain, Ashley S. Page, Tasha A. Snelson, and Jennifer S. Stockwell for comments on this manuscript. I also want to acknowledge Dr. Carole E. Johnson's input on this manuscript and for her patience and endurance in working with me on many surveys and all our other research over the last few decades.


1. Ventry IM, Weinstein BE: The Hearing Handicap Inventory for the Elderly: A new tool. Ear Hear 1982;3(3):128–134.
2. Dillon H, James A, Ginis J: The Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. JAAA 1997;8:27–43.
3. Chisolm TH, Johnson CE, Danhauer JL, et al.: A systematic review of health-related quality of life and hearing aids: Final report of the American Academy of Audiology Task Force on the Health-Related Quality of Life Benefits of Amplification in Adults. JAAA 2007;18(2):151–183.
4. Williams VA, Johnson CE, Danhauer JL: Hearing aid outcomes: Effects of gender and experience on patients' use and satisfaction. JAAA 2009;20:422–432.
5. Williams VA, Danhauer JL, Johnson CE: Study explores benefits of a battery club to a private practice and its patients. Hear J 2008;61(8):24–28.
6. Blood GW, Blood I, Danhauer JL: The hearing aid effect. Hear Instr 1977;28(6):12.
7. Johnson CE, Danhauer JL, Gavin RB, et al.: The “Hearing Aid Effect” 2005: A rigorous test of the visibility of new hearing aid styles. AJA 2005;14:169–175.
8. Danhauer JL, Johnson CE, Finnegan D, et al.: A national survey of pediatric otolaryngologists and early hearing detection and intervention programs. JAAA 2006;17:708–721.
9. Danhauer JL, Johnson CE: Parents' perceptions of an emerging community-based newborn hearing screening program: A case study. JAAA 2006;17:202–220.
10. Danhauer JL, Pecile AF, Johnson CE, et al.: Parents' compliance with and impressions of a maturing community-based early hearing detection and intervention program: An update. JAAA 2008;19:612–629.
11. Danhauer JL, David KB, Johnson CE, Meyer DH: Survey of pediatricians and early hearing detection and intervention programs at a precise local level: An academic medical center. Sem Hear 2009;30:165–184.
12. Mathews MR, Johnson CE, Danhauer JL: Pediatricians' knowledge of, experience with, and comfort levels for cochlear implants in children. AJA 2009;18:129–143.
13. Johnson CE, Danhauer JL, Latiolais Koch L, et al.: Hearing and balance screening and referrals for Medicare patients: A national survey of primary care physicians. JAAA 2008;19:171–190.
14. Danhauer JL, Celani KE, Johnson CE: Use of hearing and balance screening with local primary care physicians. AJA 2008;17:3–13.
15. Danhauer JL, Johnson CE, Rotan S, et al.: National survey of pediatricians' opinions about and practices for acute otitis media and xylitol use. JAAA 2010;21:329–346.
16. Jacobson GP, Calder JH: A screening version of the Dizziness Handicap Inventory (DHI-S). Am J Otol 1998;19:804–808.
17. Kuk FK, Tyler RS, Russell D, Jordan H: The psychometric properties of a tinnitus handicap questionnaire. Ear Hear 1990;11:434–442.
18. Palmer CV, Solodar HS, Hurley WR, et al.: Self-perception of hearing ability as a strong predictor of hearing aid purchase. JAAA 2009;20:341–347.
19. Cummings SR, Hulley SB: Designing questionnaires and interviews. In Hulley SB, Cummings SR, Browner WS, et al., eds., Designing Clinical Research, third ed. Philadelphia: Lippincott Williams & Wilkins, 2007:241–255.
20. Danhauer JL, Johnson CE, Byrd A, et al.: Survey of college students on iPod use and hearing health. JAAA 2009;20:5–27.
21. Danhauer JL, Johnson CE, Dunne AF, et al.: Survey of high school students' iPod use and hearing health. JAAA, in review.
22. Kochkin S, Beck DL, Christensen LA, et al.: MarkeTrak VIII: The impact of the hearing healthcare professional on hearing aid user success. Hear Rev 2010;17(4):12–34.
23. Templeton L, Deehan A, Taylor C, et al.: Surveying general practitioners: Does a low response rate matter? Brit J Gen Pract 1997;47:91–94.
24. Danhauer JL, Johnson CE, Corbin NE, Bruccheri KG: Xylitol as a prophylaxis and/or treatment for acute otitis media: A systematic review. IJA, in press.
25. Johnson CE, Newman CW, Danhauer JL, Williams VA: Screening for hearing loss, risk of falls: A hassle-free approach. J Family Pract 2009;20:422–432.
26. Johnson CE, Newman CW, Danhauer JL, Williams VA: Physicians' roles and responsibilities in the medical home management of children with hearing loss and their families. J Family Pract, in press.
27. Johnson CE, Danhauer JL, Harrison J: Physician outreach: Overview. Sem Hear 2009;30:139–149.
28. Johnson CE, Danhauer JL, Page AS, et al.: AuD degree holders in tenure-track positions: Survey of program chairs and AuD degree holders. JAAA, in press.

© 2010 Lippincott Williams & Wilkins, Inc.