
Eriksholm Workshop: Ecological Validity

Ecological Momentary Assessment in Hearing Research: Current State, Challenges, and Future Directions

Holube, Inga; von Gablenz, Petra; Bitzer, Jörg

doi: 10.1097/AUD.0000000000000934



The most commonly used tools for evaluating hearing and hearing devices are speech tests and questionnaires. Although speech is seen as the most important sound for social interactions, speech tests in quiet or stationary background noise are said to lack “ecological validity”; that is, they do not necessarily reflect real-life hearing-related function, activity, or participation (Keidser et al. 2020). Therefore, speech tests are often complemented by questionnaires describing example naturalistic listening situations (Barker et al. 2015). Questionnaires are seen as a reliable estimate of the respondents’ experience in everyday life. However, individuals’ everyday environments can be quite diverse (Wagener et al. 2008) and might differ from those addressed in the questionnaire. In addition, the context sensitivity of the questions (i.e., how well the questions and answer options match the real experiences of a specific situation and its description) is limited, and questionnaires cannot adjust to the individual importance of situations or the particular demands of the individual listener. Moreover, the functionalities of today’s hearing instruments might remain undetected in a questionnaire study because they contain signal processing algorithms optimized for specific acoustical circumstances not encompassed by the situations described in the questionnaire. One example is car driving, where directional microphones could be optimized to enhance speech coming from the passenger or back seat.

Another challenge is that questionnaires are filled out retrospectively: on a specific occasion, they ask about experiences the respondent has had over some preceding period. As a consequence, the responses are potentially influenced by incomplete memory retrieval and by errors due to interference between several events (Bradburn et al. 1987). Responses might not represent averages over a longer time frame but could be influenced by recent events and/or by events that stand out and as such are easily recalled from memory (Kahneman 1999). In addition, retrospective self-reports of emotions are increasingly influenced by belief-consistent bias when fewer episodic memories are available for the questions posed (Robinson & Clore 2002). These beliefs are related to theories about situations, that is, the respondent’s expectations (e.g., listening in a loud restaurant is exhausting) and personality (e.g., I am easily exhausted). This effect might be especially critical when asking for memories of sounds because auditory memory is inferior to visual or tactile memory (Bigelow & Poremba 2014). For storing auditory information in long-term memory, certain regular sound patterns (such as voice characteristics) are supportive (Winkler & Cowan 2005).

One approach to overcome the shortcomings of retrospective questionnaires is to collect data close to the moment of experience, for example, with diaries filled out daily. The advantages and disadvantages of health diaries in comparison to retrospective questionnaires were reviewed by Verbrugge (1980). He summarized that daily records have resulted in higher reporting levels, lower recall errors, and higher validity than retrospective questionnaires. The use of diaries in hearing research and clinical practice was recommended by Tye-Murray et al. (1993) as a method to assess communication strategies, and paper-and-pencil diaries have been used in several studies to evaluate hearing aids in everyday life, mostly on a daily basis (May et al. 1990; Tesch-Römer 1997; Surr et al. 2002; Palmer et al. 2006; Bentler et al. 2008; Skagerstrand et al. 2014). Nevertheless, it was observed that paper-and-pencil diaries tend to be filled out en bloc before an appointment (Stone et al. 2002). To increase participant compliance, to increase data quality by avoiding reading and coding errors during manual data entry, to shorten the time for data transfer from paper to electronic records (Barrett & Barrett 2001; Hufford & Shields 2002), and to explore individual experiences in natural environments even closer in time to the event, an approach often called ecological momentary assessment (EMA) can be pursued. This article defines terms related to EMA, illustrates strengths and challenges of EMA procedures, summarizes current applications in hearing research, and discusses possible future directions.


The term EMA is often used interchangeably with experience sampling method and ambulatory assessment (Trull & Ebner-Priemer 2014). Experience sampling method was introduced by Csikszentmihalyi et al. (1977) for frequently repeated paper-and-pencil assessments. The term EMA is mostly associated with electronic assessments applied in behavioral medicine and goes back to Stone and Shiffman (1994). Stone et al. (2007) define EMA as “real-time collection of data about momentary states, collected in natural environments, with multiple repeated assessments over time.” The term ambulatory assessment, introduced by Fahrenberg (1996), includes a variety of methods in addition to self-report, for example, measures of physiological function, physical behavior, and ambient environmental parameters. Alternative descriptions are ambulatory, or in situ, monitoring (Suls & Martin 1993) or intensive repeated measurements in naturalistic settings (Moskowitz et al. 2009). Although other nomenclature might be historically more appropriate for field data collection in hearing research, the term EMA is already in use and is therefore retained for this article. Herein, we classify research approaches as EMA if they include electronically recorded momentary outcome data collected repeatedly in natural environments, corresponding to “high-tech” EMA as described by Wu et al. (2015).


Applications of EMA and related approaches in different research fields are numerous. Wilhelm and Perrez (2013) stated in their comprehensive historical review that Pawlik and Buse were the first to apply a digital system for field research in 1982, recording environmental and behavioral data simultaneously. Also in the 1980s, Perrez and Reicherts (1987), using a portable pocket computer, monitored coping behavior in stressful real-life situations. Further technologies and applications in use around the turn of the millennium were reviewed, for example, in Barrett and Barrett (2001), Hufford and Shields (2002), and Kubiak and Krog (2012).

Contemporary EMA studies involve computer- or smartphone-assisted methodologies for the collection of data in natural environments on an individual, momentary, repeated, and often multimodal basis. Researchers using EMA may collect both objective (behavioral, physiological, and environmental) and self-report data (Fahrenberg et al. 2007; Stone et al. 2007; Trull & Ebner-Priemer 2013). Connor et al. (2015) characterized the methodology in short as “real-world,” “real-time,” and “within-person.” EMA provides “snapshots” (Stone et al. 2007) of real life and EMA outcomes promise to have a higher degree of ecological validity than laboratory measurements (Reis 2012). EMA allows for context-sensitive (Reis 2012) and interactive assessment of individual experiences and focuses on individual problems. The use of signals to prompt participants is one option to stimulate momentary assessments without recall bias (Stone & Shiffman 1994; Stone et al. 1998). Although the veracity of both paper-and-pencil and electronic self-reports might be disputable, electronic administration has the advantage of providing date and time stamps. Hence, the assessments are at known points in time and therefore allow for compliance analysis (Stone et al. 2002; Fahrenberg et al. 2007). Complex surveys consisting of several questions and assessments can be implemented using an adaptive flow (Bolger et al. 2003) and including diverse response and signal options (Fahrenberg et al. 2007; Wilhelm & Perrez 2013). Studies with EMA provide a large amount of data per participant that opens up the possibility to analyze intrasubject and intersubject variability (Bolger et al. 2003; Hedeker et al. 2012), variations of dynamic processes over time (Bolger et al. 2003; Shiffman et al. 2008), and causal connections between events and experiences.
Using advanced technological solutions, real-time data analysis and interactive data collection including uni- or bidirectional communication with the researcher over mobile phone and web-based data transfer is possible (Fahrenberg et al. 2007).
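Because electronic self-reports carry date and time stamps, compliance can be computed directly from the device log, for example as the fraction of prompts answered within a fixed response window. A minimal sketch, assuming a hypothetical log format of plain timestamp lists (the function name, window length, and data layout are illustrative, not from any cited study):

```python
from datetime import timedelta

def compliance_rate(prompts, responses, window_min=30):
    """Fraction of prompts answered within `window_min` minutes.

    `prompts` and `responses` are lists of datetime stamps read from a
    device log (a hypothetical format, for illustration only); each
    response is matched to at most one prompt.
    """
    window = timedelta(minutes=window_min)
    answered = 0
    remaining = sorted(responses)
    for p in sorted(prompts):
        # find the first response falling inside this prompt's window
        match = next((r for r in remaining if p <= r <= p + window), None)
        if match is not None:
            answered += 1
            remaining.remove(match)
    return answered / len(prompts) if prompts else 0.0
```

With three prompts and two timely responses, the rate is 2/3; the matching of each response to at most one prompt avoids double-counting when prompts are close together.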

A crucial point in the design of any EMA study is the definition of the sampling strategy. The sampling strategy determines which data are collected in which way, in order to achieve the research objective, thus fundamentally influencing the study findings and the scope of their interpretation. Inspired by the classification of Suls and Martin (1993), the sampling strategies used in EMA may be categorized according to the data source and the sampling schedule of the survey (see Table 1). Note that multiple sources and schedules may be combined in one study.

TABLE 1. - Classification of sampling strategies
Data source
Self-report: report of
• Situations and activities
• Ratings on predefined scales
• Self-administered test results
Surveillance report: administered by
• Significant others
• Healthcare providers
• Trained observers
Automated report: data collection with technology of, e.g.,
• Environmental characteristics
• Device settings
• Physiological measures
• Global Positioning System
• Physical activity/motion patterns
Sampling schedule
Time- or interval-contingent: at prespecified times or intervals, e.g., morning, noon, evening, every hour
Signal-contingent: when prompted with a signal; randomly, fixed in time, or within predefined time frames
Event-contingent: in specific circumstances, e.g., physiological conditions, in certain situations or activities
Manually selected: decision by participant, self-initiated
Continuous: automated reports
Controlled/structured: restricted to a certain setting, e.g., the workplace; preselected or arranged
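As an illustration of the signal-contingent schedule in Table 1, prompt times can be drawn randomly within a predefined daily time frame while enforcing a minimum gap between signals. A sketch with illustrative parameter values (not taken from any cited study):

```python
import random
from datetime import timedelta

def signal_contingent_schedule(day, n_prompts=4, start_hour=8, end_hour=20,
                               min_gap_min=60, seed=None):
    """Draw `n_prompts` random prompt times between `start_hour` and
    `end_hour`, at least `min_gap_min` minutes apart (signal-contingent
    sampling via simple rejection of too-dense draws)."""
    rng = random.Random(seed)
    start = day.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    span_min = (end_hour - start_hour) * 60
    while True:
        minutes = sorted(rng.sample(range(span_min), n_prompts))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(g >= min_gap_min for g in gaps):
            return [start + timedelta(minutes=m) for m in minutes]
```

The rejection loop is a simple design choice; with 4 prompts in a 12-hr frame and a 60-min minimum gap, valid draws are frequent, so the loop terminates quickly.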


The application of EMA in hearing research contributes to the pursuit of ecological validity through all four purposes as described by Keidser et al. (2020).

Purpose A (Understanding): EMA supports the understanding of the role of hearing in everyday life. It can be used to describe environments, activities, and behavior, and allows for the subjective assessment of those environments. The results reflect real-life hearing-related function, activity, or participation. Subjective assessments can be combined with objective data on acoustic or other characteristics. The analysis of data on properties of natural environments and the corresponding human perceptions is a prerequisite for selecting scenarios for hearing-related laboratory testing to increase the ecological validity of research findings. For this purpose, however, it is particularly important to consider the EMA sampling strategies (see Table 1) and the participants’ instructions. Representative records of everyday life most probably contain many quiet and relaxed situations and might miss rare, but important, challenging situations.

Purpose B (Development): Several of the EMA studies conducted in hearing research have been driven by the need for a method to support the evaluation of hearing devices with more ecologically valid outcomes. Clinical protocols were applied to compare the performance using hearing instruments to the unaided condition or to compare different settings of the device. The outcomes complement lab results by evaluating whether advantages of new developments that were established in artificial test scenarios can be replicated in natural environments.

Purpose C (Assessment): EMA may be one of the methods to assess the ability of people and systems to accomplish specific, real-world, hearing-related tasks. Assistive devices should support communication abilities in natural, often noisy, environments. Such scenarios are not easily established in the lab, since real life requires speech comprehension and appropriate reactions, rather than the replication of speech tokens as in conventional speech audiometric tests. Therefore, feasible methods for assessments outside the lab are important to complement the standardized laboratory tests.

Purpose D (Integration and Individualization): EMA is per se a person-centered approach for collecting data on individual disability and needs, and therefore it has the potential to be used in more integrated and individualized hearing health care. The method lends itself to involve significant others as potential observers for people with hearing loss leading to a new perspective. Testing is located in the individual’s living environments, including public and often private spaces, and thus strongly relates to individual lifestyles. Application possibilities of EMA arise, for example, in the process of hearing-device fitting, with regard to interactive fine tuning in natural environments, or for documenting the benefit or shortcomings of the device in terms of the manufacturer-required postmarket surveillance.

EMA certainly has advantages and disadvantages compared to laboratory experiments (Keidser et al. 2020). Although mainly regarded as increasing ecological validity of outcomes, EMA cannot claim to reflect real-life, hearing-related function, activity, or participation to a full degree due to the challenges described later. EMA should not be regarded as a replacement of laboratory experiments (which allow much greater control of variables thereby supporting detailed study of specific effects), but as an addition to the inventory of methods.


In hearing research, compared to other scientific disciplines (e.g., chronic pain research, May et al. 2018; psychopathology research, Trull & Ebner-Priemer 2020; eating disorders and obesity research, Engel et al. 2016), EMA is still in its “infancy” (Jacobs & Kaye 2015). A number of earlier studies accessed real-life information using paper-and-pencil methods, partly in combination with different types of recording devices, but smartphone technology was the technological leap forward that now drives EMA in hearing research. Smartphones are flexible in their usage and have become everyday objects. They facilitate the implementation of surveys to be administered during or shortly after an experience. In addition, objective acoustical parameters extracted from head- or body-worn microphone signals, settings from the hearing aid’s signal processing unit, or other environmental information can be stored alongside the questionnaire data. The advantage is participant-specific, context-sensitive information on activities, experiences, and preferences. Thus, the results of EMA might contribute to knowledge about the “audiological exposome,” a term inspired by the concept of the exposome, which describes the environmental exposures over a lifetime supplementing the effect of the genome on health (Wild 2005).

In the audiological field, Galvez et al. (2012) and Henry et al. (2012) are typically acknowledged for being the first to show the applicability of EMA and Hasan et al. (2013) were the first to propose mobile phones and web technology. In terms of methodological considerations, Wu et al. (2015) and Timmer et al. (2017) investigated EMA’s construct validity. For this purpose, Wu et al. (2015) compared paper-and-pencil self-reports to recordings from a noise dosimeter, whereas Timmer et al. compared smartphone self-reports to data from a hearing aid’s environment classifier. The potentially biased selection of situations for self-reports was analyzed by Schinkel-Bielefeld et al. (Reference Note 1). Timmer et al. (2018b) gave an overview of the use of EMA in audiological research including guidance for best practice. The following paragraphs summarize EMA studies in hearing research. We will first focus on clinical objectives and auditory ecology, excluding hearing aid applications. Hearing aid evaluations themselves are covered in the second section, followed by an overview of the sampling strategies applied in the EMA studies. Details on the EMA studies, as far as available, are given in Tables 2 to 4, inspired by the Checklist for Reporting EMA Studies (CREMAS) recommended by Liao et al. (2016).

TABLE 2. - Participants and methodologies used for ecological momentary assessment studies in hearing research
Study N Age/Years Period Time of day Frequency No. of questions Adaptive questions No. of surveys
[Ga12] 24 42–78 2 wks 8 A.M.–8 P.M. 4 times/day Up to 24 Yes 991
[He12] 24 28–69 2 wks 8 A.M.–8 P.M. 4 times/day 19 Yes 1210
[Wi15] 20 38–65 2 wks 9 A.M.–8 P.M. 4 times/day 6 No 889
[Pr17] 350 avg 45 1–415 days 24 hr 3–18 times/day 3 No 17,209
[Bu20] 44 46–77 2 wks 8 A.M.–9:30 P.M. 6 times/day Up to 7 No 2963
[Ti17] 29 57–79 2 wks 8:30 A.M.–7 P.M. 4 hr Up to 17 Yes 1128
[Sc20] 20 24–82 3 wks Individual 8–12 times/day Up to 27 Yes 3752
[Wu18] 20 65–80 5–6 wks 10 hr 2 hr 6 Yes 894
[Sm20] 19 42–90 9 days Individual 2 hr Up to 11 Yes 1131
[Ho19] 47 56–82 4 days Individual 30 min Up to 10 Yes 2814
[Ti18] 10 57–81 4 wks 8:30 A.M.–7 P.M. 4 hr Up to 16 Yes 860
[Ga20] 16 48–76 4 days Individual 30 min Up to 15 Yes 1705
[Ha14/15/17] 19/34/58 64–88 4–5 wks Individual 1.5 hr Up to 26 Yes 3437/5671/?
[Wu19] 54 65–88 4 wks Individual 2 hr Up to 8 Yes 7579
[An19] 12 23–75 2 wks 8 A.M.–8 P.M. 8 times/day Up to 7 Yes 3140
[Al16] 15 22–79 6 wks Individual avg 5.5/day 3 No 3579
[Sm19] 10 53–87 2 wks Individual 1.5 hr 4 No 1044
[Je19] 16 avg 67 1 wk 8:30 A.M.–8:30 P.M. 2 hr 16 Yes 648

TABLE 3. - Devices for self-reports and response categories
Study Device for self-report Subjective data
[Ga12]/[He12] Software CERTAS on PDA 7-point Likert scale
[Wi15] Internet browser on smartphone Scale from 0 to 100
[Pr17] App TrackYourTinnitus on own smartphone (iOS or Android) Visual analog scale from 0 to 1
[Bu20] App Lifedata on Android smartphone Scale from 0 to 10
[Ti17/18] MobEval app on Motorola G smartphone Categories
[Sc20] Sivantos EMA app on Galaxy S7 smartphone Categories
[Wu18] Own app on Galaxy S3 smartphone Categories
[Sm20] Google Forms online on Motorola G4 Play mobile phone Categories
[Ho19] MobEval app on smartphone Categories
[Ga20] App olMEGA on Nexus 5 smartphone Categories
[Ha14/15/17] AudioSense on Android OS smartphone Scale from 1 to 100, categories
[Wu19] AudioSense on Galaxy S3 smartphone Visual analog scale from 0 to 10
[An19] Oticon A/S app on iPhone SE Scale from 0 to 10
[Al16] App HALIC on Nexus Galaxy smartphone A/B comparison
[Sm19] Google forms on Motorola G4 Play smartphone A/B comparison
[Je19] Widex app on iPhone 7 smartphone A/B comparison
For abbreviations of studies, see Table 2.

TABLE 4. - Devices for objective data collection and type of objective data
Study Device for objective data Objective data
[Ti17] Mini-BTE with classifier and streamer SPL, signal to noise ratio, 4 sound classes
[Sc20] Signia 7Nx M HA with Bluetooth to smartphone 6 sound classes
[Wu18] LENA Audio
[Ho19]/[Ga20] App olMEGA on smartphone, microphones in hearing aid housings [Ho19] or on glasses [Ga20] Features calculated from audio
[Ha17] Sound recorder around neck Features calculated from audio
[An19] Oticon EVOTION mini-RITE 4 sound classes
[Al16] Oticon Intiga 10 HA and Oticon streamer with microphone to smartphone Audio, 6 sound classes, SPL, Global Positioning System
[Je19] Widex Unique 440 FS RIC HA and WidexLink to smartphone SPL, sound classes
For abbreviations of studies, see Table 2.

Hearing Assessments in Natural Environments

Galvez et al. (2012) and Henry et al. (2012) demonstrated the feasibility of EMA for analyzing hearing difficulties and tinnitus, respectively. Wilson et al. (2015) confirmed the feasibility of EMA in tinnitus studies, and Probst et al. (2017) identified time-of-day changes of tinnitus loudness and tinnitus distress. The first use of an EMA approach to measure fatigue was conducted by Burke and Naylor (Reference Note 2). However, they were not able to demonstrate the expected differences in fatigue levels and patterns between participants with normal hearing and participants with hearing loss.

Analyzing acoustical environments and participants’ experiences in everyday life is a prerequisite for selecting scenarios for laboratory testing (Smeds et al. 2020). A systematic analysis of speech situations was conducted by Wu et al. (2018), who made all-day audio recordings. Time sections of these recordings corresponding to self-reports were edited off-line, monitored, and manually labeled to estimate speech levels, noise levels, and signal to noise ratios (SNRs). The Common Sound Scenarios framework (Wolters et al. 2016) was used by Smeds et al. (2020) in an EMA study. Such self-report data from hearing aid users can be analyzed to derive the scenarios possibly required for hearing-related laboratory testing.
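Since the estimated speech and noise levels are expressed in dB, the SNR follows as their simple difference, whereas combining several incoherent sources into an overall level requires summing in the power domain. A small sketch of this arithmetic (illustrative helper functions, not code from the cited studies):

```python
import math

def snr_db(speech_level_db, noise_level_db):
    """SNR in dB is the difference of the two dB levels."""
    return speech_level_db - noise_level_db

def combined_level_db(levels_db):
    """Overall level of several incoherent sources:
    convert each dB level to power, sum, convert back."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))
```

For example, speech at 65 dB SPL in 60 dB SPL noise yields an SNR of +5 dB, while two incoherent 60 dB sources combine to about 63 dB, not 120 dB.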

Kissner et al. (2015) and Bitzer et al. (2016) described a privacy-preserving, smartphone-based EMA system (olMEGA) developed in our laboratory. In the first version of this open-source system, two microphones were included in hearing aid housings but without hearing aid functionality. They were connected by cable to a soundcard attached to the smartphone. This system allowed for self-reports on the smartphone, along with the recording of objective acoustical features from the stereo output of the microphones. To ensure privacy (see later), the audio signals were converted on-the-fly to frame-based features. The system was used by elderly listeners to describe locations and activities, as well as perceptual assessments (Holube et al. 2019).
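The conversion of audio to irreversible frame-based features can be illustrated with a minimal sketch: keeping only one RMS level per short frame discards the waveform, so the speech content cannot be reconstructed. This is a simplified illustration of the principle, not the olMEGA feature set:

```python
import math

def frame_rms_levels_db(samples, sample_rate=16000, frame_ms=25):
    """Reduce an audio signal to one RMS level (dB re full scale)
    per non-overlapping frame. Storing only such coarse features
    instead of the waveform preserves the speakers' privacy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    levels = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        # floor avoids log10(0) for silent frames
        levels.append(20 * math.log10(max(rms, 1e-12)))
    return levels
```

A 1-sec signal at 16 kHz with 25-ms frames yields 40 level values, a reduction from 16,000 samples to 40 features.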

Hearing Aid Evaluation

A large proportion of contemporary EMA studies address topics related to hearing aid evaluations. Timmer et al. (2018a) conducted an EMA study over three data-collection periods (without hearing aids, with hearing aids, and again without hearing aids) including participants with mild hearing impairments. They concluded that EMA is able to detect changes in self-reported hearing performance with and without hearing aids, and that showing individual variations is one of the advantages of EMA. Additionally, Timmer et al. (2018a) demonstrated that for this group of participants, listening effort might be a better indicator of difficulties in challenging listening situations than speech intelligibility. A comparison between the unaided and the aided condition was also conducted by von Gablenz et al. (2020) using a new version of the olMEGA system, which is compatible with hearing aid use (Kowalk et al. 2017; Groenewold et al. 2018; Franz et al. 2018). They used EMA with regular patients seeking hearing aid uptake and showed the variability of listening targets and hearing aid benefit.

Different hearing aid technologies were used in EMA studies based on the AudioSense platform developed by Hasan et al. (2013). In Hasan et al. (2014), self-rated hearing aid outcomes were related to the locations and activities described by the participants, as well as to the hearing aid features. Hasan et al. (2015) expanded the approach to predict hearing aid outcomes and proposed applying EMA for identification of users that had a high probability for low hearing aid satisfaction. To reduce the participant’s burden with the description of environments, Hasan et al. (2017) predicted the auditory context from the audio data. They achieved an accuracy of 68% when comparing objectively estimated noise levels to perceived noise levels and predicted listening activity with an accuracy of 70%. Wu et al. (2019) compared hearing aid technologies using self-report on a smartphone in natural environments against several outcomes in the laboratory. They noted that the EMA results revealed more information on situation-specific effects than the retrospective questionnaires and that not all benefits of hearing aids measured in the laboratory were experienced in natural environments. Self-reports using EMA were compared to a retrospective questionnaire by Andersen et al. (2020) for two different hearing aid settings. They found a trend for higher (better) scores and larger interindividual differences in EMA than in the retrospective questionnaire SSQ12 (Noble et al. 2013).

Direct paired comparisons of hearing aid settings in natural environments were administered by Aldaz et al. (2016) using self-reports and objective environmental data. They concluded that smartphones can be used to train hearing instruments to the preferred settings of the individual. Paired comparisons were also used by Smeds et al. (2019) and Jensen et al. (2019) to compare two different hearing aid programs in natural environments. Additionally, Smeds et al. contrasted the EMA results to laboratory experiments, whereas Jensen et al. included objective data from the hearing aids in their analysis. The results revealed large interindividual variations in program preferences, supporting the individual approach of the procedure.

A public hearing health-policies-oriented approach for hearing aid provision with big data is pursued in the EVOTION project (Gutenberg et al. 2018). Environmental parameters collected by hearing aids were combined with time-stamped user control information (program and volume control settings) and stored via smartphones in the cloud. The information can be used to optimize hearing aid fittings and to support communication between the audiologist and the hearing aid user (Pontoppiddan et al. 2017). Data on hearing aid usage, together with clinical and demographic information, allow for analysis of factors related to the success of treatment with hearing aids (Christensen et al. 2019). By additionally including clinical outcomes and biosensors (Dritsakis et al. 2018), data for a more holistic approach are collected to support health-policy decisions. This project is special among the studies cited so far because it aims for hearing aid evaluation in natural environments but does not include self-reports directly. The hearing aid user does not explicitly rate the hearing aid performance; instead, the hearing aid settings and their changes over time are interpreted as an indirect self-report. This approach is of particular interest since automatic classification systems for autonomous environment and behavior logging will expand the methodologies of future studies (Caduff et al. 2020; Mehra et al. 2020).

Sampling Strategies in Hearing Research

As outlined earlier, EMA sampling strategies can be characterized by the source of the report and the type of sampling schedule (see Table 1). For researching hearing in natural environments in general, the type and frequency of the situations captured should be representative of natural environments. For purposes with a clinical orientation, representative selections, as well as situations with special needs, might be of interest.

With regard to the reporting source, all EMA studies in hearing research included electronic self-reports. Aldaz et al. (2016) added a sort of surveillance report by trained raters, who classified the natural environments while listening to the audio recordings of the participants. Automated reports were collected as audio signals or features calculated by a hearing aid, such as sound-pressure levels or sound environment classifier results (see Table 4). Aldaz et al., von Gablenz et al. (2020), and Holube et al. (2019) used apps for hearing aid independent extraction of signal features.

With regard to the sampling schedule, EMA studies have used diverse approaches. All studies except Aldaz et al. (2016) applied signal-contingent sampling of self-reports by prompting the participants with a signal (see Table 5). Signals are typically auditory or vibratory alarms from the smartphone. Hasan et al. (2014) included a visual prompt in their AudioSense platform to alert the participants. Many studies allowed for additional, manually selected, self-reports by the participants. Timmer et al. (2017) and Schinkel-Bielefeld et al. (Reference Note 1) included event-contingent prompts for loud environments, as classified by the hearing aids. Although it is manually selected, the instructions of Burke and Naylor (Reference Note 2) to initiate a self-report when fatigued could be classified as event-contingent. Also, manually selected self-reports in other hearing studies might be inspired by acoustically adverse situations. Burke and Naylor (Reference Note 2) additionally asked for self-reports at predefined times, thus being classified as time-contingent. Objective data was mainly collected with continuous automated reports. Jensen et al. (2019) restricted the automated reports to times when self-reports were administered.

TABLE 5. - Types of sources and sampling schedules in audiological ecological momentary assessment studies
Self-report Automated report
Study Time Signal Event Manual Event Continuous
[Ga12] X
[He12] X
[Wi15] X
[Pr17] X
[Bu20] X X X X
[Ti17] X X X X
[Sc20] X X X X
[Wu18] X X X
[Sm20] X X
[Ho19] X X X
[Ti18] X X
[Ga20] X X X
[Ha14/15/17] X X X
[Wu19] X X
[An19] X X X
[Al16] X X
[Sm19] X X
[Je19] X X X
For abbreviations of studies, see Table 2.

Although EMA is most often seen as a concept for momentary assessments of the current situation to minimize recall or reconstruction bias, Galvez et al. (2012) feared that too many experiences, especially from rare, difficult, and important situations between the assessments, would be missed. Therefore, they asked the participants to respond according to all their experiences since the last prompt. Most of the EMA studies in hearing research request a momentary (“right now”) description when signaled. Sometimes (Timmer et al. 2017; von Gablenz et al. 2020), time intervals between the experience and the self-report can be indicated. Jenstad et al. (2019) compared assessments during and after the event and did not find a difference between assessment times for focused and passive listening, but information about the time delay is necessary when self-reports are aligned to objective data collected by the smartphone in the respective situation.
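When the reported delay between experience and self-report is available, aligning the report with the objective data amounts to shifting the report's timestamp back by that delay and selecting the feature frames falling in the resulting window. A sketch under an assumed data layout of timestamped feature tuples (all names, the window length, and the layout are hypothetical):

```python
from datetime import timedelta

def experience_window(report_time, delay_min, duration_min=5):
    """Shift a self-report's timestamp back by the reported delay to
    obtain the (start, end) window of the actual experience."""
    end = report_time - timedelta(minutes=delay_min)
    return end - timedelta(minutes=duration_min), end

def features_in_window(features, window):
    """Select (timestamp, value) feature frames inside the window."""
    start, end = window
    return [(t, v) for t, v in features if start <= t <= end]
```

For a report at 14:30 with a reported delay of 10 min, the objective data would be read from the window ending at 14:20 rather than at the report time itself.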


Audiological research has now amassed some experience with designing and conducting EMA studies but still draws on the fundamental methodological framework developed in psychology. Particularly, Stone et al. (2007) already addressed the key points that challenge the use of EMA, such as acceptance and feasibility, participant burden, compliance, reactivity, data analysis, and privacy issues. Timmer et al. (2018b) adapted these issues and suggested guidelines for EMA studies in hearing research, which form the structure of the following paragraphs.

Acceptance, Feasibility, and Burden

Compared to laboratory experiments, EMA studies put more burden on the participants and might reduce the willingness to sign up or to perform with high compliance. The possible selection bias might impact the generalization of the results because they depend very much on the individual study participants and the selected group of participants (Dhami et al. 2004). To keep the burden within acceptable limits, various factors discussed in the following sections should be taken into account.

In general, EMA is time-consuming for the study participants. There is a trade-off between the data collection period, the frequency of self-reports, and the length of the questionnaires, influencing the motivation and annoyance of the participants on the one hand and the desire to collect as much data as possible on the other (Smeds et al. 2019). For a typical data collection period of several days to weeks, self-reports are administered between 3 and 16 times a day, with the latter being done every 30 min within 8 hr (see Table 2). The frequency of self-reports in most studies followed the recommendation of Morren et al. (2009) of 4 to 8 per day. In Schinkel-Bielefeld et al. (Reference Note 1), a data-collection period of 3 weeks was reported to be too long for the participants. In that study, the participants were, in each situation, free to decide whether they wanted to fill out a long survey of up to 27 questions or a short survey of up to 7 questions. They found that in 85% of all assessments, participants selected the long survey. This result, which might depend on participants’ age, other health issues, mental capability, or temporal stress, puts into perspective the proposed maximum of 20 questions (Timmer et al. 2018b). While Stone and Shiffman (2002) recommended limiting the duration of each survey to 1 to 3 min, Timmer et al. (2018b) suggested a rather high maximum duration of 5 min. EMA studies in hearing research have reported survey durations from 1 min to 1 min 40 sec (Galvez et al. 2012; Henry et al. 2012; Timmer et al. 2017; von Gablenz et al. 2020; Schinkel-Bielefeld et al., Reference Note 1), with the exception of Jensen et al. (2019), who reported 3.4 min on average.
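The trade-off can be made concrete by estimating the total reporting time a given design demands. A trivial sketch (the function name and example values are illustrative, not taken from any cited study):

```python
def total_burden_minutes(days, prompts_per_day, minutes_per_survey):
    """Expected total self-report time for an EMA study design,
    assuming full compliance (an upper-bound estimate)."""
    return days * prompts_per_day * minutes_per_survey
```

For example, a 2-week protocol with 4 prompts per day at 2 min per survey amounts to roughly 112 min of self-reporting, which doubling either the prompt frequency or the survey length would push toward 4 hr.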

Usability is key in human–device interaction. Therefore, EMA should be implemented as user-friendly as possible to maximize acceptance. Smartphones and, potentially, additional equipment (see Table 4) introduce previously unknown burdens, especially for technically challenged users (Ramsey et al. 2016; Burke et al. 2017). The smartphone and additional equipment have to be set up and typically need recharging every night. The questions and answers on the screens have to be visually perceptible, and the touch functionalities have to be suitable for users handling the equipment unsupervised. Requirements include easy operation, large font sizes, and easily distinguishable color schemes (Hasan et al. 2013, 2014). Shortcomings in usability, partly combined with technical instabilities, are a serious source of error. Both Timmer et al. (2017) and Holube et al. (2019) experienced the loss of objective data due to technical limitations of the system, especially connection loss between system components and the necessity to start one or several apps. Direct wireless transmission between hearing aids or head-worn devices and the smartphone, and kiosk modes on the smartphone (where only the required app runs and the phone has no further functionality), might reduce the risk of data loss or technical failure in general (Burke et al. 2017). Another technical drawback was reported by Aldaz et al. (2016): the Global Positioning System used in their study required 10 sec before delivering stable results. However, even if the technology and its handling do not cause problems, carrying a study smartphone in addition to one’s own can be perceived as a burden (Schinkel-Bielefeld et al., Reference Note 1).

The design of the survey is another key factor for both limiting the participants’ burden and maximizing the validity of responses. Questions should cover simple concepts, be easy to understand, be limited to the essentials in number and complexity, and include a restricted and suitable set of response options. For example, Wu et al. (2019) reported difficulties with a visual analog scale on smartphone touchscreens and recommended a 5- to 7-point scale with buttons as the response format. Everyday language should be preferred over technical expressions. In smartphone applications, all information should be presented on one screen, without the need to scroll the display for further response alternatives. Fortunately, apps facilitate adaptive item sequences (see Table 2), for example, by restricting ratings of speech understanding to environments where speech is present. However, it is to date unknown whether adaptive implementations hinder participants in establishing a routine when completing the self-report, and whether the sequence of items might affect specific ratings. Furthermore, as participants establish a routine, hearing dimensions such as speech understanding and listening effort might no longer be treated as separate concepts.
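Such an adaptive item sequence can be illustrated with a short sketch. The item texts, the dictionary-based answer format, and the branching rule below are invented for illustration; none of the cited EMA apps is known to be implemented this way.

```python
# Toy sketch of an adaptive item sequence: speech-related items are
# presented only if the participant reported that speech is present.
# Item texts and the answer format are illustrative assumptions.
def next_items(answers):
    """Return the survey items still to be shown, given answers so far."""
    items = ["Where are you?", "Is speech present?"]
    if answers.get("Is speech present?") == "yes":
        # Branch taken only in environments where speech occurs.
        items += [
            "How well do you understand the speech?",
            "How effortful is listening right now?",
        ]
    return [q for q in items if q not in answers]
```

With this kind of branching, ratings of speech understanding are never requested in speech-free environments, at the cost of a survey length that varies between situations.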

The instruction and training of study participants is important for improving feasibility (Piasecki et al. 2007). The obvious approach is to go through all questions of the self-report together with the participants, to ensure that all questions are clearly understood and that the device can be managed. Written instructions complemented by illustrations support recall. To obtain stable self-reports, Piasecki et al. (2007) recommended a training phase of at least 30 to 60 min and a run-in or trial period of several days. Wu et al. (2018) included a 3-day practice period at home. In some studies, participants were contacted during the field phase to ensure their compliance and error-free data collection (Timmer et al. 2017; Schinkel-Bielefeld et al., Reference Note 1; Burke & Naylor, Reference Note 2). When data are uploaded to the cloud during the EMA study, their quality can be checked while the study is still running (Timmer et al. 2018b; Schinkel-Bielefeld et al., Reference Note 1).


In general, studies aim to ensure that participants comply with the tasks, that is, follow the instructions as well as possible. However, the variety of sampling strategies, often combining different sampling schedules within one EMA study, complicates benchmarking the degree of compliance. As shown in Table 5, most EMA studies in hearing research use a combined signal-contingent and manual sampling schedule. For these and other approaches, the authors do not know of any agreed way to calculate a key metric for evaluating participants’ compliance, for example, for between-study comparisons. Even the distinction between signal-contingent and manually initiated responses becomes difficult when the prompts are set at comparatively short intervals that are identical or close to the time shift granted for assessing a situation. One can calculate the simple ratio of responses to alarms within a defined time interval, but this figure then needs to be considered relative to the number of out-of-time, or manually initiated, responses. Moreover, the frequency of alarms requires due consideration when evaluating compliance with the task: when an alarm is prompted every 30 min in full-day EMA, a 100% response rate can hardly be expected. For these reasons, compliance needs to be interpreted with regard to the sampling strategy and, in addition, to the participants’ instructions. Provided that the calculation rules are stated, the compliance of individual participants within an EMA study should be quantified. In hearing research EMA studies, compliance rates can vary dramatically between participants, as shown by Jensen et al. (2019) with 33 to 94%, Smeds et al. (2020) with 38 to 99%, and Burke and Naylor (Reference Note 2) with 21 to 100%.
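One possible form of such a calculation is sketched below: the prompted-response ratio within a fixed response window, reported together with the number of manually initiated (out-of-time) reports. The greedy matching rule and the 30-min window are illustrative assumptions, not an agreed metric from the literature.

```python
# Hedged sketch of one way to quantify compliance when prompted and
# manually initiated reports coexist. Matching rule and window length
# are illustrative assumptions.
from datetime import datetime, timedelta

def compliance_rate(prompt_times, response_times, window_min=30):
    """Fraction of prompts answered within `window_min` minutes, plus
    the count of responses not attributable to any prompt (manual)."""
    window = timedelta(minutes=window_min)
    unmatched = list(response_times)
    answered = 0
    for p in prompt_times:
        # First response falling inside the window after this prompt.
        hit = next((r for r in unmatched if timedelta(0) <= r - p <= window), None)
        if hit is not None:
            answered += 1
            unmatched.remove(hit)
    rate = answered / len(prompt_times) if prompt_times else 0.0
    return rate, len(unmatched)  # (prompted compliance, manual reports)
```

Reporting both numbers keeps the prompted-compliance figure interpretable, as argued above.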

To maximize compliance in future studies, some recommendations can be derived from previous work; they nevertheless need to be adapted to the research question. In general, participants should be encouraged to respond to every prompt (Stone & Shiffman 2002). Several studies provided incentives per survey (Stone & Shiffman 2002; Morren et al. 2009; Galvez et al. 2012; Henry et al. 2012; Schinkel-Bielefeld et al., Reference Note 1) or an overview of the individual EMA results (von Gablenz et al. 2020) to increase compliance. There is a trade-off between making alarms salient enough to be recognized and keeping them from annoying or irritating participants and bystanders. Missing self-reports after a prompt were assumed to be related to unheard alarms in noisier and social situations, presumably outside the home, or to the inappropriateness of the situation (Galvez et al. 2012; Burke & Naylor, Reference Note 2). This assumption was confirmed by Schinkel-Bielefeld et al. (Reference Note 1), who analyzed missing surveys for random prompts and concluded that social situations and some other situations outside the home are underrepresented, leading to a selection bias for those situations. The main reasons for not responding were politeness toward communication partners and safety (e.g., while driving a car). In addition, the willingness to participate in an EMA study might depend on the daily activities of the participants, leading to a selection bias that varies systematically with one of the variables of interest. Anecdotal comments of volunteers gave the impression that some participants in von Gablenz et al. (2020) signed up for days on which no other important events, such as travel or larger family gatherings, were on their agenda.
Finally, it can be assumed that the willingness and compliance will be greater and can be maintained longer if the participants understand the meaning, and support the purpose, of EMA either for themselves or for the gain in knowledge.


“Reactivity is defined as the potential for behavior or experience to be affected by the act of assessing it” (Shiffman et al. 2008, p. 20). Reactivity can be based on the content of the assessment or follow from the applied method (Hufford & Shiffman 2002). It might be due to awareness, adaptation, sensitization, or coping tendencies (Fahrenberg 1996). Self-monitoring can impact behavior and act as a form of intervention (for examples, see Wilhelm & Perrez 2013). In addition, the environment and the activities of the user can be affected by the alarm announcing an assessment. In spite of these reservations, studies show that reactivity does not necessarily impact the results of EMA (Barta et al. 2012). Although evidence is missing that reactivity in EMA results is replicated in retrospective questionnaires or scores, questionnaires have been administered before and after the EMA period to verify the absence of reactivity (Galvez et al. 2012; Henry et al. 2012; Timmer et al. 2017; Burke & Naylor, Reference Note 2). Nevertheless, Galvez et al. (2012) reported a greater sensitization toward hearing issues post-EMA, and Henry et al. (2012) documented that most participants became more (mostly positively) aware of their tinnitus.

Data Analysis and Reporting Results

EMA studies usually yield a large amount of data. The data set grows dramatically when features of the acoustic environment or hearing aid parameters are collected continuously. Handling these data differs substantially from handling data collected in laboratory studies. Participants engage differently in EMA studies, resulting in differing numbers of assessments per participant and differing data quality. Moreover, not all participants may have experienced all the natural environments the examiner might want to analyze. Although the participants contribute many self-reports, the mixture of natural environments evaluated in one phase of the study might not be repeated in another phase, for example, with and without hearing instruments. Furthermore, there is a trade-off between the precision (i.e., narrowness) of the situation descriptions and the number of times each listening situation will be noted. When situations are described narrowly, it might be necessary to collapse categories to achieve a sufficient number of occurrences for data analysis. Wolters et al. (2016) categorized natural sound environments using a context-driven approach and proposed the framework of Common Sound Scenarios that could be applied in hearing-device research.

For analyzing the data and estimating predictors, mixed or random effects models or multilevel analyses have typically been used (Galvez et al. 2012; Hedeker et al. 2012; Henry et al. 2012; Probst et al. 2017; Timmer et al. 2018a; Wu et al. 2019). Those models are well known from laboratory experiments and promise to disentangle the effects of different factors on the outcomes. However, this approach was criticized by Ram et al. (2017), who argued that pooling heterogeneous person-related data into a single model for all participants compromises ecological validity, in the sense that the results do not generalize across natural environments, and who recommended person-specific models instead. For paired comparisons conducted in EMA studies, the method proposed by Leijon et al. (2019) could be applied; it allows for varying amounts of data from different test participants.
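A minimal numerical illustration of the imbalance problem that motivates multilevel modeling: when participants contribute unequal numbers of self-reports, a naively pooled mean is dominated by the most compliant participants, whereas a mean of per-participant means weights each person equally. The data and participant labels below are invented.

```python
# Illustrative sketch (invented data): why pooling EMA data naively can
# mislead when participants contribute unequal numbers of self-reports.
def pooled_mean(data):
    """Grand mean over all assessments, ignoring who contributed them."""
    ratings = [r for person in data.values() for r in person]
    return sum(ratings) / len(ratings)

def participant_mean(data):
    """Mean of per-participant means, weighting each person equally."""
    means = [sum(r) / len(r) for r in data.values()]
    return sum(means) / len(means)

# One highly compliant participant dominates the pooled estimate:
ema = {
    "P1": [2, 2, 2, 2, 2, 2, 2, 2],  # 8 reports, low ratings
    "P2": [6, 6],                    # 2 reports, high ratings
}
```

Mixed-effects models address this imbalance in a principled way; the sketch only shows why ignoring the person level is problematic.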

Objective data collected from the acoustical environments pose another challenge for the analysis. Sound-pressure levels and SNRs, for example, are of interest. In contrast to laboratory experiments, however, the voice of the study participant is recorded, as well as voices of communication partners, bystanders, and other environmental sounds. The participant’s own voice typically has higher levels than other voices at microphones worn on or close to the body. For valid estimations, therefore, segments containing the participant’s own voice have to be removed (Bitzer et al. 2018). On the other hand, information about the duration of own-voice segments, or their characteristics, might add to the analysis of communication behaviors in natural environments.
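The level estimation after own-voice removal can be sketched as follows. Frame RMS values and own-voice flags are assumed to come from a framing stage and an own-voice detector (cf. Bitzer et al. 2018); the 20 µPa reference pressure and power-domain averaging are standard, but the function itself is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch: exclude frames flagged as the wearer's own voice
# before estimating the ambient sound-pressure level.
import math

def ambient_level_db(frame_rms, own_voice_flags, ref=2e-5):
    """Average level in dB SPL over frames not marked as own voice.

    frame_rms: per-frame RMS sound pressure in pascals.
    own_voice_flags: True where an own-voice detector fired.
    """
    kept = [rms for rms, own in zip(frame_rms, own_voice_flags) if not own]
    if not kept:
        return float("nan")  # nothing left after own-voice removal
    mean_power = sum(r * r for r in kept) / len(kept)  # average in power domain
    return 10.0 * math.log10(mean_power / ref**2)
```

The discarded own-voice frames need not be thrown away entirely: their count and duration can feed the analysis of communication behavior mentioned above.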

For reporting EMA results, Liao et al. (2016) adapted the checklist for Strengthening the Reporting of OBservational studies in Epidemiology (STROBE) and compiled CREMAS, a checklist for reporting EMA studies. Another aspect relevant for comparing results of different EMA studies is the applied surveys themselves. Surveys in audiological studies differ widely in items, wording, and response alternatives. To allow pooling of data from different studies, the development of at least a basic standardized set of questions should be considered.

Ethics, Privacy, and Data Safety

The debate on ethical issues in EMA studies focuses mostly on data protection (Fahrenberg 1996). However, EMA data are, by definition, collected in natural environments including private homes and public spaces, and thus often provide lifestyle information, for example, about activities, habits, media consumption, and the frequency of social interactions. Today’s hearing devices are already able to collect part of this information, including data about the acoustic environment, through their data-logging functionality, yet few clients are asked by their clinicians to give explicit consent to this. Gathering data of this type might be regarded as “harmless” since they do not affect the life or health of the participants. Even if the participants gave informed consent, which would usually be the case in research studies, EMA still raises the serious question of whether this invasion of privacy is justified. The Declaration of Helsinki aims to protect not only the life and health but also the privacy of research participants. Therefore, violating privacy in EMA studies has to be justified by the foreseeable benefits of the study outcomes for participants or for society. From the authors’ point of view, this issue should be taken seriously in its own right, without reducing it to the problem of data protection. At the very least, this means that data collected in the participant’s private space should be deleted whenever the participant prefers not to have them recorded (Fahrenberg et al. 2007), as required by the General Data Protection Regulation (GDPR). In general, the interplay between the GDPR, approvals of local ethics committees, and regulations in national laws has to be clarified for each research study.

Special care is needed when objective data about the acoustic environment are collected. Audio recordings were used by Hasan et al. (2017) and Wu et al. (2018). However, the law in many countries requires signed consent from all communication partners and bystanders to preserve privacy. This restriction limits the information about environmental acoustics to statistical data such as sound-pressure levels and sound classes. Bitzer et al. (2016) followed an on-the-fly feature-extraction approach on a smartphone without storing the audio data. The features were smoothed over time before storage to prevent reconstruction of the signal, which unfortunately makes estimates of the environmental acoustics, for example, SNRs or reverberation times, difficult.
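The temporal smoothing step can be illustrated with a simple causal moving average over per-frame feature values. The window length and the plain-list representation are assumptions made for this sketch; they do not reproduce the implementation of Bitzer et al. (2016).

```python
# Minimal sketch of the privacy-preserving idea: store only a
# temporally smoothed version of short-time features, so the waveform
# cannot be reconstructed. Smoother and window length are illustrative.
def smooth_features(frames, window=5):
    """Causal moving average over a sequence of per-frame feature values."""
    out = []
    for i in range(len(frames)):
        chunk = frames[max(0, i - window + 1): i + 1]  # last `window` frames
        out.append(sum(chunk) / len(chunk))
    return out
```

The longer the window, the harder it becomes to reconstruct the short-time structure of the audio, but the fast level fluctuations needed for, e.g., SNR or reverberation estimates are lost as well, which is exactly the trade-off noted above.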

Another data-safety issue is that self-report data and objective data about environments or hearing aid settings are stored either on the smartphone or in the cloud. For cloud services, data access has to be regulated; in Europe, implementations according to the GDPR are required. For research purposes and clinical applications, informed consent from the participants is essential. Given the trend to make research data freely available and to apply “Big Data” mining in analysis procedures, this issue is especially important (Connor et al. 2015).


EMA opens up new insights into natural environments and behavior, the impact of hearing impairment, and the improvement of hearing-related abilities with hearing devices or other treatments. However, the method is still in its infancy in hearing research and requires further investigation. Based mostly on the publications referred to earlier, we identified several areas of interest for future research.

Interoperability of Research Activities

At present, each research group conducting EMA studies implements its own surveys and tools for data handling. To achieve synergies and enable comparisons, an agreement on EMA-specific questions is necessary. In addition, a common data structure for collecting and exchanging time-stamped self-assessments and objective data (audio/features, hearing aid parameters, Global Positioning System, physiological parameters), while respecting privacy, should be agreed upon and developed (Laplante-Lévesque et al. 2016). To ensure a privacy-preserving data set, online extraction of relevant features should be expanded while still allowing the extraction of information for study-specific research questions (Arora & Chaspari 2018).

Exploration of Methodology

Although several EMA studies have been conducted in hearing research, the implemented surveys were often inspired by laboratory studies or by retrospective questionnaires. Little work has been done so far in hearing research to explore the methodologies themselves. One open research question is the identification of meaningful response formats for naive users. This issue will become even more important when EMA elements are applied in routine hearing device provision rather than in research studies. To explore this option, the willingness of clients or patients, rather than volunteers, to participate has to be investigated. Other important issues are test–retest reliability and the identification of effect sizes. For those investigations, as well as for comparisons between virtual-reality studies in the laboratory and EMA, it might be necessary to at least partly control the visited natural environments.

Application of EMA

EMA could be used for systematic analysis of the natural environments of different groups: across societies and cultures, ages, and health conditions, including different degrees of hearing loss. This analysis would provide insights into group differences and might result in recommendations for barrier reduction, societal inclusion, and possibly target-specific hearing devices. Individualized and patient-centered analysis of changes over longer time periods, with and without treatment, including the identification of specific needs, might individualize health care. In addition, EMA enables the analysis of hearing-related perceptions of changes in environmental acoustics over the course of a day.

Advancements of Methodology

Technological progress enables the measurement of an increasing number of body functions in natural environments using sensors (see, e.g., Caduff et al. 2020). These advancements enable the development of an integrative approach to measuring hearing abilities; social, cognitive, and physical health; and the effects of treatments. At least some of these developments could be incorporated into a concept for including EMA in counseling and therapy, for example, in the hearing aid fitting process for nonresearch clinicians or in tinnitus management. For this application, a concept for data “de-noising” and aggregation is necessary to enable clinical use and data interpretation by clinicians. As in other mobile applications, patients’ self-monitoring or monitoring by significant others could be offered for counseling and to boost motivation. A concept for such an approach might include elements from auditory perceptual training and might be part of postmarket surveillance according to the new European guidelines (Medical Device Regulation). Last, but not least, data analysis methodologies have to be advanced. One challenge is the handling of categorical and interval data with different numbers of assessments in diverse natural environments collected longitudinally. Another approach is to apply Big Data and machine-learning algorithms (Slaney et al. 2020) to extract individualized and condition-dependent deficits and benefits from hearing devices. This approach might result in increased user satisfaction with an adaptable, individualized, and contextualized hearing device.


This article focused on the methodology of EMA and its application in hearing research. Although several studies have used this approach to investigate hearing abilities in natural environments and the benefit of hearing devices, many future research directions remain. Progress in technology and methodology will likely increase the ecological validity of outcomes obtained with EMA, open up new insights for hearing research, and hopefully lead to improvements for people with disabilities.


This study was funded by the Hearing Industry Research Consortium (IRC), project IHAB-RL. English language services were provided by


    Aldaz G., Puria S., Leifer L. J. Smartphone-based system for learning and inferring hearing aid settings. J Am Acad Audiol, (2016). 27, 732–749
    Andersen L., Andersson K., Wu M., Pontoppidan N., Bramsløw L., Neher T. Assessing daily-life benefit from hearing aid noise management: SSQ12 vs. ecological momentary assessment. Proc ISAAR, (2020). 7, 273–280.
    Arora P., Chaspari T. Exploring Siamese neural network architecture for preserving speaker identity in speech emotion classification. (2018). Proceedings of the 4th International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, Boulder, CO, USA. New York: Association for Computing Machinery. pp. 15–18
    Barker F., MacKenzie E., Elliott L., de Lusignan S. Outcome measurement in adult auditory rehabilitation: A scoping review of measures used in randomized controlled trials. Ear Hear, (2015). 36, 567–573
    Barrett L. F., Barrett D. J. An introduction to computerized experience sampling in psychology. Soc Sci Comput Rev, (2001). 19, 175–185
    Barta W. D., Tennen H., Litt M. D. Mehl M. R., Connor T. S. Measurement reactivity in diary research. In Handbook of Research Methods for Studying Daily Life, (2012). Guilford. pp. 108–123
    Bentler R., Wu Y. H., Kettel J., Hurtig R. Digital noise reduction: Outcomes from laboratory and field studies. Int J Audiol, (2008). 47, 447–460
    Bigelow J., Poremba A. Achilles’ ear? Inferior human short-term and recognition memory in the auditory modality. PLoS One, (2014). 9, e89914
    Bitzer J., Kissner S., Holube I. Privacy-aware acoustic assessments of everyday life. J Audio Eng Soc, (2016). 64, 395–404
    Bitzer J., Bilert S., Holube I. Evaluation of binaural own voice detection (OVD) algorithms. (2018). Speech Communication; 13th ITG-Symposium, Oldenburg, Germany. VDE. pp. 161–165
    Bolger N., Davis A., Rafaeli E. Diary methods: Capturing life as it is lived. Annu Rev Psychol, (2003). 54, 579–616
    Bradburn N. M., Rips L. J., Shevell S. K. Answering autobiographical questions: The impact of memory and inference on surveys. Science, (1987). 236, 157–161
    Burke L. E., Shiffman S., Music E., Styn M. A., Kriska A., Smailagic A., Siewiorek D., Ewing L. J., Chasens E., French B., Mancino J., Mendez D., Strollo P., Rathbun S. L. Ecological momentary assessment in behavioral research: Addressing technological and human participant challenges. J Med Internet Res, (2017). 19, e77
    Caduff A., Feldman Y., Ishai P. B., Launer S. Physiological monitoring and hearing loss: Towards a more integrated and ecologically validated health mapping. Ear Hear, (2020). 41(Suppl 1), 120S–130S.
    Christensen J. H., Pontoppidan N. H., Anisetti M., Bellandi V., Cremonini M. Improving hearing healthcare with big data analytics of real-time hearing aid data. (2019). IEEE World Congress on Services (SERVICES), Milan, Italy. IEEE. pp. 307–313
    Connor T. S., Mehl M. R. Scott R., Kosslyn S., Pinkerton N. Ambulatory assessment – Methods for studying everyday life. Emerging Trends in the Social and Behavioral Science, (2015). Wiley. pp. 1–15
    Csikszentmihalyi M., Larson R., Prescott S. The ecology of adolescent activity and experience. J Youth Adolesc, (1977). 6, 281–294
    Dhami M. K., Hertwig R., Hoffrage U. The role of representative design in an ecological approach to cognition. Psychol Bull, (2004). 130, 959–988
    Dritsakis G., Kikidis D., Koloutsou N., Murdin L., Bibas A., Ploumidou K., Laplante-Lévesque A., Pontoppidan N. H., Bamiou D. E. Clinical validation of a public health policy-making platform for hearing loss (EVOTION): Protocol for a big data study. BMJ Open, (2018). 8, e020978
    Engel S. G., Crosby R. D., Thomas G., Bond D., Lavender J. M., Mason T., Steffen K. J., Green D. D., Wonderlich S. A. Ecological momentary assessment in eating disorder and obesity research: A review of the recent literature. Curr Psychiatry Rep, (2016). 18, 37
    Fahrenberg J. Fahrenberg J., Myrtek M. Ambulatory assessment: Issues and perspectives. Ambulatory Assessment: Computer-Assisted Psychological and Psychophysiological Methods in Monitoring and Field Studies, (1996). Hogrefe and Huber. pp. 3–20
    Fahrenberg J., Myrtek M., Pawlik K., Perrez M. Ambulatory assessment – Monitoring behavior in daily life settings. Eur J Psychol Assess, (2007). 23, 206–213
    Franz S., Groenewold H., Holube I., Bitzer J. Open hardware mobile wireless serial audio transmission unit for acoustical ecological momentary assessment using Bluetooth RFCOMM. (2018). 144th Convention of the Audio Engineering Society, Milan, Italy. AES. p. 42–313
    Galvez G., Turbin M. B., Thielman E. J., Istvan J. A., Andrews J. A., Henry J. A. Feasibility of ecological momentary assessment of hearing difficulties encountered by hearing aid users. Ear Hear, (2012). 33, 497–507
    Gatehouse S., Elberling C., Naylor G. Aspects of auditory ecology and psychoacoustic function as determinants of benefits from and candidature for non-linear processing in hearing aids. (1999). 18th Danavox Symposium, Copenhagen: Holmens Trykkeri. pp. 221–233
      Groenewold H., Franz S., Holube I., Bitzer J. Wearable mobile Bluetooth device for stereo audio transmission to a modified android smartphone. (2018). 144th Convention of the Audio Engineering Society, Milan, Italy. AES. 42–
      Gutenberg J., Katrakazas P., Trenkova L., Murdin L., Brdaric D., Koloutsou N., Ploumidou K., Pontoppidan N. H., Laplante-Lévesque A. Big data for sound policies: Toward evidence-informed hearing health policies. Am J Audiol, (2018). 27(3S):493–502
      Hasan S. S., Brummet R., Chipara O., Wu Y.-H. Assessing the performance of hearing aids using surveys and audio data collected in situ. 2017). IEEE Conference on Computer Communication Workshop (INFOCOM WKSHPS), Atlanta, GA, USA, IEEE. pp. 283–288
      Hasan S. S., Brummet R., Chipara O., Wu Y.-H., Yang T. In-situ measurement and prediction of hearing aid outcomes using mobile phones. (2015). International Conference on Healthcare Informatics, Dallas, TX, USA. IEEE. pp. 525–534
      Hasan S. S., Chipara O., Wu Y.-H., Aksan N. Evaluating auditory contexts and their impacts on hearing aid outcomes with mobile phones. (2014). Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare, Oldenburg, Germany. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering). pp. 126–133
      Hasan S. S., Lai F., Chipara O., Wu Y.-H. AudioSense: Enabling real-time evaluation of hearing aid technology in-situ. (2013). Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, CBMS, Porto, Portugal, IEEE. pp. 167–172
      Hedeker D., Mermelstein R. J., Demirtas H. Modeling between-subject and within-subject variances in ecological momentary assessment data using mixed-effects location scale models. Stat Med, (2012). 31, 3328–3336
      Henry J. A., Galvez G., Turbin M. B., Thielman E. J., Mcmillan G. P., Istvan J. A., et al. Pilot study to evaluate ecological momentary assessment of tinnitus. Ear Hear, (2012). 32, 179–290
      Holube I., von Gablenz P., Kowalk U., Bitzer J. Assessment of acoustical properties and subjective perception in everyday life. (2019). Proceedings of the 23rd International Congress on Acoustics, Aachen, Germany. Berlin: German Acoustical Society. 7639
      Hufford M. R., Shields A. L. Electronic diaries: Applications and what works in the field. Appl Clin Trials, (2002). 11, 38–43
      Hufford M. R., Shiffman S. Methodological issues affecting the value of patient-reported outcomes data. Expert Rev Pharmacoecon Outcomes Res, (2002). 2, 119–128
      Jacobs P. G., Kaye J. A. Ubiquitous real-world sensing and audiology-based health informatics. J Am Acad Audiol, (2015). 26, 777–783
      Jensen N. S., Hau O., Lelic D., Herrlin P., Wolters F., Smeds K. Evaluation of auditory reality and hearing aids using an ecological momentary assessment (EMA) approach. (2019). Proceedings of the 23rd International Congress on Acoustics, Aachen, Germany. Berlin: German Acoustical Society. pp. 6545–6552
      Jenstad L. M., Gillen L., Singh G., DeLongis A., Pang F. A laboratory evaluation of contextual factors affecting ratings of speech in noise: Implications for ecological momentary assessment. Ear Hear, (2019). 40, 823–832
      Kahneman D. Kahneman D., Diener E., Schwarz N. Objective happiness. Well-Being: The Foundations of Hedonic Psychology, (1999). Russell Sage Foundation. pp. 85–105
      Keidser G., Naylor G., Brungart D., Caduff A., Campos J., Carlile S., Carpenter M., Grimm G., Hohmann V., Holube I., Launer S., Lunner T., Mehra R., Rapport F., Slaney M., Smeds K. The quest for ecological validity in hearing science: What it is, why it matters, and how to advance it. Ear Hear, (2020). 41(Suppl 1), 5S–19S
      Kissner S., Holube I., Bitzer J. A smartphone-based, privacy-aware recording system for the assessment of everyday listening situations. Proc ISAAR, (2015). 5, 445–452.
      Kowalk U., Kissner S., von Gablenz P., Holube I., Bitzer J. An improved privacy-aware system for objective and subjective ecological momentary assessment. Proc ISAAR, (2017). 6, 25–30B.
      Kubiak T., Krog K. Mehl M. R., Connor T. S. Computerized sampling of experiences and behavior. Handbook of Research Methods for Studying Daily Life, (2012). Guilford. pp. 124–143
      Laplante-Lévesque A., Abrams H., Bülow M., Lunner T., Nelson J., Riis S. K., Vanpoucke F. Hearing device manufacturers call for interoperability and standardization of internet and audiology. Am J Audiol, (2016). 25(3S):260–263
      Leijon A., Dahlquist M., Smeds K. Bayesian analysis of paired-comparison sound quality ratings. J Acoust Soc Am, (2019). 146, 3174–3183
      Liao Y., Skelton K., Dunton G., Bruening M. A systematic review of methods and procedures used in ecological momentary assessments of diet and physical activity research in youth: An adapted STROBE Checklist for Reporting EMA Studies (CREMAS). J Med Internet Res, (2016). 18, e151
      May A. E., Upfold L. J., Battaglia J. A. The advantages and disadvantages of ITC, ITE and BTE hearing aids: Diary and interview reports from elderly users. Br J Audiol, (1990). 24, 301–309
      May M., Junghaenel D. U., Ono M., Stone A. A., Schneider S. Ecological momentary assessment methodology in chronic pain research: A systematic review. J Pain, (2018). 19, 699–716
      Mehra R., Brimijoin O., Robinson P., Lunner T. Potential of augmented reality platforms to improve individual hearing aids. Ear Hear, (2020). 41(Suppl 1), 140S–146S.
      Morren M., van Dulmen S., Ouwerkerk J., Bensing J. Compliance with momentary pain measurement using electronic diaries: A systematic review. Eur J Pain, (2009). 13, 354–365
      Moskowitz D. S., Russell J., Sadikaj J. J., Sutton R. Measuring people intensively. Can Psychol, (2009). 50, 131–140
      Noble W., Jensen N. S., Naylor G., Bhullar N., Akeroyd M. A. A short form of the Speech, Spatial and Qualities of Hearing scale suitable for clinical use: The SSQ12. Int J Audiol, (2013). 52, 409–412
      Palmer C., Bentler R., Mueller H. G. Evaluation of a second-order directional microphone hearing aid: II. Self-report outcomes. J Am Acad Audiol, (2006). 17, 190–201
      Perrez M., Reicherts M. Coping behavior in the natural setting: A method of computer-aided self-observation. In Dauwalder H. P., Perrez M., Hobi V. (Eds.), Controversial Issues in Behavior Modification. Annual Series of European Research in Behavior Therapy, (1987). 2, Swets and Zeitlinger. pp. 127–137
      Piasecki T. M., Hufford M. R., Solhan M., Trull T. J. Assessing clients in their natural environments with electronic diaries: Rationale, benefits, limitations, and barriers. Psychol Assess, (2007). 19, 25–43
      Pontoppidan N. H., Li X., Bramslow L., Johansen B., Nielsen C., Hafez A., Petersen M. Data-driven hearing care with time-stamped data-logging. Proc ISAAR, (2017). 6, 127–134
      Probst T., Pryss R. C., Langguth B., Rauschecker J. P., Schobel J., Reichert M., Spiliopoulou M., Schlee W., Zimmermann J. Does tinnitus depend on time-of-day? An ecological momentary assessment study with the “TrackYourTinnitus” application. Front Aging Neurosci, (2017). 9, 253
      Ram N., Brinberg M., Pincus A. L., Conroy D. E. The questionable ecological validity of ecological momentary assessment: Considerations for design and analysis. Res Hum Dev, (2017). 14, 253–270
      Ramsey A. T., Wetherell J. L., Depp C., Dixon D., Lenze E. Feasibility and acceptability of smartphone assessment in older adults with cognitive and emotional difficulties. J Technol Hum Serv, (2016). 34, 209–223
      Reis H. T. Why researchers should think “real-world”: A conceptual rationale. In Mehl M. R., Connor T. S. (Eds.), Handbook of Research Methods for Studying Daily Life, (2012). Guilford. pp. 3–21
      Robinson M. D., Clore G. L. Belief and feeling: Evidence for an accessibility model of emotional self-report. Psychol Bull, (2002). 128, 934–960
      Shiffman S., Stone A. A., Hufford M. R. Ecological momentary assessment. Annu Rev Clin Psychol, (2008). 4, 1–32
      Skagerstrand Å., Stenfelt S., Arlinger S., Wikström J. Sounds perceived as annoying by hearing-aid users in their daily soundscape. Int J Audiol, (2014). 53, 259–269
      Slaney M., Lyon R. F., Garcia R., Kemler B., Gnegy C., Wilson K., Kanevsky D., Savla S., Cerf V. Ecological auditory measures for the next billion users. Ear Hear, (2020). 41(Suppl 1), 131S–139S
      Smeds K., Dahlquist M., Larsson J., Herrlin P., Wolters F. LEAP, a new laboratory test for evaluating auditory preferences. (2019). Proceedings of the 23rd International Congress on Acoustics, Aachen, Germany. Berlin: German Acoustical Society. pp. 7608–7615
      Smeds K., Gotowiec S., Wolters F., Herrlin P., Larsson J., Dahlquist M. Selecting scenarios for hearing-related laboratory testing. Ear Hear, (2020). 41(Suppl 1), 20S–30S.
      Stone A. A., Shiffman S. Ecological momentary assessment (EMA) in behavioral medicine. Ann Behav Med, (1994). 16, 199–202
      Stone A. A., Shiffman S. Capturing momentary, self-report data: A proposal for reporting guidelines. Ann Behav Med, (2002). 24, 236–243
      Stone A. A., Schwartz J. E., Neale J. M., Shiffman S., Marco C. A., Hickcox M., Paty J., Porter L. S., Cruise L. J. A comparison of coping assessed by ecological momentary assessment and retrospective recall. J Pers Soc Psychol, (1998). 74, 1670–1680
      Stone A. A., Shiffman S., Atienza A. A. The Science of Real-Time Data Capture – Self-Reports in Health Research, (2007). Oxford University Press.
      Stone A. A., Shiffman S., Schwartz J. E., Broderick J. E., Hufford M. R. Patient non-compliance with paper diaries. BMJ, (2002). 324, 1193–1194
      Suls J., Martin R. E. Daily recording and ambulatory monitoring methodologies in behavioral medicine. Ann Behav Med, (1993). 15, 3–7
      Surr R. K., Walden B. E., Cord M. T., Olson L. Influence of environmental factors on hearing aid microphone preference. J Am Acad Audiol, (2002). 13, 308–322
      Tesch-Römer C. Psychological effects of hearing aid use in older adults. J Gerontol B Psychol Sci Soc Sci, (1997). 52, P127–P138
      Timmer B. H. B., Hickson L., Launer S. Ecological momentary assessment: Feasibility, construct validity, and future applications. Am J Audiol, (2017). 26, 436–442
      Timmer B. H. B., Hickson L., Launer S. Do hearing aids address real-world hearing difficulties for adults with mild hearing impairment? Results from a pilot study using ecological momentary assessment. Trends Hear, (2018a). 22, 2331216518783608
      Timmer B. H. B., Hickson L., Launer S. The use of ecological momentary assessment in hearing research and future clinical applications. Hear Res, (2018b). 369, 24–28
      Trull T. J., Ebner-Priemer U. Ambulatory assessment. Annu Rev Clin Psychol, (2013). 9, 151–176
      Trull T. J., Ebner-Priemer U. The role of ambulatory assessment in psychological science. Curr Dir Psychol Sci, (2014). 23, 466–470
      Trull T. J., Ebner-Priemer U. W. Ambulatory assessment in psychopathology research: A review of recommended reporting guidelines and current practices. J Abnorm Psychol, (2020). 129, 56–63
      Tye-Murray N., Knutson J. F., Lemke J. H. Assessment of communication strategies use: Questionnaires and daily diaries. Semin Hear, (1993). 14, 338–349
      Verbrugge L. M. Health diaries. Med Care, (1980). 18, 73–95
      von Gablenz P., Kowalk U., Bitzer J., Meis M., Holube I. Individual hearing aid benefit: Ecological momentary assessment of hearing abilities. Proc ISAAR, (2020). 7, 213–220.
      Wagener K. C., Hansen M., Ludvigsen C. Recording and classification of the acoustic environment of hearing aid users. J Am Acad Audiol, (2008). 19, 348–370
      Wild C. P. Complementing the genome with an “exposome”: The outstanding challenge of environmental exposure measurement in molecular epidemiology. Cancer Epidemiol Biomarkers Prev, (2005). 14, 1847–1850
      Wilhelm P., Perrez M. A history of research conducted in daily life. (2013). (Scientific Report Nr. 170). Department of Psychology, University of Fribourg, Switzerland.
      Wilson M. B., Kallogjeri D., Joplin C. N., Gorman M. D., Krings J. G., Lenze E. J., Nicklaus J. E., Spitznagel E. E. Jr, Piccirillo J. F. Ecological momentary assessment of tinnitus using smartphone technology: A pilot study. Otolaryngol Head Neck Surg, (2015). 152, 897–903
      Winkler I., Cowan N. From sensory to long-term memory: Evidence from auditory memory reactivation studies. Exp Psychol, (2005). 52, 3–20
      Wolters F., Smeds K., Schmidt E., Christensen E. K., Norup C. Common sound scenarios: A context-driven categorization of everyday sound environments for application in hearing-device research. J Am Acad Audiol, (2016). 27, 527–540
      Wu Y. H., Stangl E., Chipara O., Hasan S. S., DeVries S., Oleson J. Efficacy and effectiveness of advanced hearing aid directional and noise reduction technologies for older adults with mild to moderate hearing loss. Ear Hear, (2019). 40, 805–822
      Wu Y. H., Stangl E., Chipara O., Hasan S. S., Welhaven A., Oleson J. Characteristics of real-world signal to noise ratios and speech listening situations of older adults with mild to moderate hearing loss. Ear Hear, (2018). 39, 293–304
      Wu Y. H., Stangl E., Zhang X., Bentler R. A. Construct validity of the ecological momentary assessment in audiology research. J Am Acad Audiol, (2015). 26, 872–884


      1. Schinkel-Bielefeld N., Kunz P., Zutz A., Droste E., Buder B. Evaluation of hearing aids in everyday life using ecological momentary assessment – What situations are we missing? Am J Audiol, (in press).
      2. Burke L. A., Naylor G. Daily-life fatigue in mild-to-moderate hearing impairment: An ecological momentary assessment study. Ear Hear, (2020). doi: 10.1097/AUD.0000000000000888. Online ahead of print.

      Ecological momentary assessment; Evaluation of hearing devices; Hearing assessment; Hearing research; Natural environments

      Copyright © 2020 The Authors. Ear & Hearing is published on behalf of the American Auditory Society, by Wolters Kluwer Health, Inc.