Feasibility/Pilot Study Report

Feasibility and Acceptability of Wearable Sensor Placement for Measuring Screen Time of Children

Willis, Erik A.1,2; Hales, Derek1,2; Smith, Falon T.1; Burney, Regan1; El-Zaatari, Helal M.3; Rzepka, Michelle C.3; Amft, Oliver4,5,6; Barr, Rachel7; Evenson, Kelly R.1,8; Kosorok, Michael R.3; Ward, Dianne S.1,2

Translational Journal of the ACSM: Fall 2022 - Volume 7 - Issue 4 - e000214
doi: 10.1249/TJX.0000000000000214

INTRODUCTION

Electronic screens (e.g., TV, computers, tablets, smartphones) are now ubiquitous in the lives of children. Research suggests that electronic screen use may offer both benefits (e.g., early learning, social contact and support) and risks (e.g., low physical activity/fitness, poor sleep quality, obesity) to the overall well-being of children (1–10). However, the ability to assess the relationship of screen exposure to behavioral and health outcomes is plagued by multiple measurement issues (11,12). To date, studies of electronic screen use in children have relied largely on parental or caregiver self-report, which is susceptible to high levels of error and is limited in its ability to capture the short bouts of screen use that are characteristic of newer media (e.g., smartphones) (13–21). Some groups have developed smartphone apps or TV/computer allowance devices that measure screen usage more objectively (22–24), but these systems often require instrumenting every screen in a person’s environment, thus severely limiting real-world application and generalizability. A more precise, scalable, and cost-effective measure of screen use is needed to understand and evaluate screen time effects on child behaviors and health outcomes.

Wearable sensors have become the standard for measuring lifestyle behaviors (e.g., physical activity, sedentary behavior, sleep) because of their unobtrusive size and vast data collection capabilities. However, the development and calibration of wearable sensors for measuring electronic screen use have been slow to evolve. Recently, research in adults has shown >80% accuracy in measuring electronic screen exposure with wearable color light sensors under controlled conditions (25–28). Extending these findings to real-world application with children requires addressing several knowledge gaps. First and foremost, we need to understand what wear method or “housing” is feasible for extended wear while maintaining proper sensor placement. Second, to distinguish the mere presence of a screen from looking at a screen, light measurements need to be taken as close as possible to the subject’s eyes (28,29). Head-placed wearables have been proposed to robustly detect light received at eye level. However, the acceptability of head-placed wearables for children in free-living settings is unknown. For example, eyeglasses seem to be a practical, everyday accessory that could house a light sensor without changing their main function (enhancing vision) or substantially modifying their appearance. Other potential candidates are a headband, a wearable adhesive patch (3M Tegaderm™ adhesive), a badge pinned/clipped to the shirt collar, a mask, a necklace, or a vest. The aim of this pilot study was to iteratively examine the acceptability of different wear methods for a light sensor and the feasibility of a free-living 3- or 7-d wear protocol and logging routine with children.

METHODS

Study Sample

A convenience sample of parents, children, and/or childcare providers was recruited in three phases through social media (i.e., LinkedIn, Facebook, and Twitter) or email. Recruitment was stratified by child’s age (3–5 and 6–8 yr old). For phase 1, parents and childcare providers were recruited in January–February 2021 to complete an online survey examining perceptions of different wear methods for the wearable sensor. Participants were eligible if they had at least one child between the ages of 3 and 8 yr or if they were employed as a childcare provider serving at least one child in this age range. Participants completing the online survey were entered into a raffle for one of three $25 gift cards. For phases 2 and 3, parent/child dyads were recruited (phase 2: April–May 2021; phase 3: May–June 2021) to examine the feasibility of free-living 3- or 7-d wear protocols. Each phase of data collection lasted ~3 wk. Participants were eligible if they had at least one child between the ages of 3 and 8 yr and were living within a 30-mile radius of Chapel Hill, NC. Families received monetary compensation for each day they participated in the study. Participating parents received $5 for completing day 1 assessments (demographic questionnaire and wear log diary), $5 for each additional day the wear log diary was completed, and $15 for completing the follow-up survey. Total possible compensation was $30 for phase 2 and $50 for phase 3. Children received an age-appropriate gift (e.g., sidewalk chalk) for participating. Web-based informed consent was obtained from all participants before participation, and the Institutional Review Board (IRB) at the University of North Carolina at Chapel Hill approved all study activities (IRB number: 20-3207).

Phase 1

Based on previous light sensor placements in adults (i.e., eyeglasses, headband) and expert opinion (26,27), investigators identified seven possible wear methods to house a new screen detection sensor: 1) headband, 2) shirt clip/badge, 3) glasses with no lenses, 4) vest, 5) neckband, 6) mask, and 7) adhesive bandage. Parents of 3- to 8-yr-old children and childcare providers were presented with a Qualtrics (Qualtrics LLC, Provo, UT) link to an online survey. The survey consisted of pictures (Supplemental Content 1, figure, https://links.lww.com/TJACSM/A190; 1) headband, 2) vest, 3) necklace, 4) mask, 5) eyeglasses, 6) adhesive patch, 7) shirt clip/badge) and explanations of each wear method, and participants were asked to rate aspects of fitting and the likelihood of wear for each method (Supplemental Content 2, table, https://links.lww.com/TJACSM/A187). Parent- and childcare provider–perceived acceptability of each item was quantified using a net promoter score approach by child age group and child sex (30). Participant responses to each question were first categorized as promoters (high ranking), passives (mid ranking), or detractors (low ranking); Supplemental Content 2 (table, https://links.lww.com/TJACSM/A187) shows the cutoff scores for each category by question. Second, a net promoter score was calculated for each question by subtracting the percentage of detractors from the percentage of promoters, yielding a value from −100 (if every participant was a detractor) to 100 (if every participant was a promoter). To simplify comparison, all scores were converted to positive values by adding a constant of 100, for a range of 0 to 200. Third, an average net promoter score was calculated across all questions for each wear method. Wear methods were ranked according to average net promoter score by participant type (parent/provider) and child age × sex group (3–5 yr: boys/girls; 6–8 yr: boys/girls). The top two wear methods from each stratification were retained for phase 2 testing.
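
To make the scoring concrete, the sketch below illustrates this net promoter score calculation for one wear method. It is a minimal example, not the study’s analysis code: the column names, the 1–5 response scale, and the promoter/detractor cutoffs are assumptions (the actual, item-specific cutoffs are given in Supplemental Content 2).

```python
# Minimal sketch of the net promoter score (NPS) approach described above.
# Column names, cutoffs, and the example data are hypothetical.
import pandas as pd

def net_promoter_score(ratings, promoter_min, detractor_max):
    """Return an NPS shifted to the 0-200 range for one survey question.

    ratings: iterable of numeric responses to a single question.
    promoter_min: lowest rating counted as a promoter (high ranking).
    detractor_max: highest rating counted as a detractor (low ranking).
    """
    ratings = pd.Series(ratings).dropna()
    pct_promoters = (ratings >= promoter_min).mean() * 100
    pct_detractors = (ratings <= detractor_max).mean() * 100
    # Classic NPS (-100 to 100), then add 100 so all scores are positive (0-200).
    return (pct_promoters - pct_detractors) + 100

# Hypothetical 1-5 Likert responses to three questions about one wear method.
responses = pd.DataFrame({
    "q1_fit": [5, 4, 2, 5, 3],
    "q2_comfort": [4, 5, 1, 4, 4],
    "q3_likelihood_of_wear": [5, 5, 2, 3, 4],
})

# Score each question (here: 4-5 = promoter, 1-2 = detractor), then average
# across questions to get the wear method's average net promoter score.
question_scores = {q: net_promoter_score(responses[q], promoter_min=4, detractor_max=2)
                   for q in responses.columns}
average_nps = sum(question_scores.values()) / len(question_scores)
print(question_scores, round(average_nps, 1))
```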

Phases 2 and 3

In phase 2, parent/child dyads were stratified by child age group (3–5 and 6–8 yr old) and randomly assigned to one of the wear methods identified in phase 1. These wear methods were implemented using commercially available items with no active sensors attached (Supplemental Content 3, figure, https://links.lww.com/TJACSM/A191; 1) adhesive patch, 2) shirt clip/badge, 3) vest, 4) eyeglasses, 5) headband); therefore, no data were collected directly from these mock devices. In phase 2, parents were asked to fit their child with the mock device and have them wear the item during waking hours for 3 consecutive days. Written instructions on fitting the mock device were provided to the parent. Parents completed daily wear logs and a usability survey that asked about the feasibility and acceptability of the assigned wear method. Based on parent ratings and logs, three methods were selected for further testing. In phase 3, a new group of parent/child dyads was recruited and randomly assigned to test one of the top three wear methods for 7 consecutive days. As in phase 2, parents completed daily wear logs and a usability survey. Supplemental Content 4 (figure, https://links.lww.com/TJACSM/A189) shows the CONSORT diagram for phases 2 and 3.

Phase 2 and 3 Measures

Sociodemographic Questionnaires

Parents completed a self-report demographic questionnaire via Qualtrics. The questionnaire captured information about their personal demographics (e.g., sex, age, household income, employment status, education) and the demographic characteristics of the children (e.g., age, sex, race/ethnicity).

Wear Log Diary

Parents were asked to complete a wear log during the 3- or 7-d assessment period. The times of waking and going to bed, when the device was put on and taken off, and screen use were recorded daily on a standardized, preprinted recording sheet. Parents completed the log diary in real time. Wear time was calculated for each day by summing the duration of wear noted on the log diary.
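
As an illustration of the wear time calculation described above, the short sketch below sums daily wear hours from logged on/off times. The log format and the times shown are hypothetical; they are not taken from the study’s recording sheet.

```python
# Illustrative only: summing daily wear time from logged on/off times.
from datetime import datetime

def daily_wear_hours(on_off_times, fmt="%H:%M"):
    """Sum hours of wear for one day from a list of (put_on, taken_off) strings."""
    total = 0.0
    for put_on, taken_off in on_off_times:
        start = datetime.strptime(put_on, fmt)
        end = datetime.strptime(taken_off, fmt)
        total += (end - start).total_seconds() / 3600
    return total

# Hypothetical log for one child on one wear day (two wear bouts).
day_log = [("07:30", "12:00"), ("13:15", "19:45")]
print(round(daily_wear_hours(day_log), 1))  # 11.0 h of wear
```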

Usability Survey

After the assessment period, each parent completed a usability survey via Qualtrics that asked about the feasibility and acceptability of the wear method including ease and satisfaction with the mock device, the degree to which children were willing to wear the device, challenges encountered during implementation, and positive aspects of the wear protocol (Supplemental Content 2, table, https://links.lww.com/TJACSM/A187). The questionnaire was scored using the net promoter score approach as previously described.

Statistical Analyses

Sample demographics and all outcome measures were summarized with descriptive statistics: means and SD for continuous variables and frequencies and percentages for categorical variables. Participants were included in the analysis only if they completed all the assessment protocols. In total, eight participants did not respond to contact attempts to schedule assessments before wear-method assignment (phase 2, n = 2; phase 3, n = 6). Reliability of wear log data across wear days and summary metrics was assessed by computing within-subject variance values and respective 95% confidence intervals. The within-subject variance reflects how much individuals in the sample tended to change their reporting of summary metrics across wear days; smaller values indicate less day-to-day variation in measurements on the same subject. All analyses were performed using SAS software, version 9.4 (SAS Institute Inc., Cary, NC). Within-subject variance and respective confidence interval values were computed using a freely available macro (%icc9) that uses the Proc Mixed procedure in SAS (31).
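
For readers without SAS, the sketch below shows a rough Python analogue of the within-subject variance based on a simple one-way decomposition. It is illustrative only, uses hypothetical wear-log data, and does not reproduce the %icc9 macro’s mixed-model estimation or its confidence intervals.

```python
# Pooled within-subject variance from daily wear hours (hypothetical data).
import pandas as pd

# One row per child per wear day.
logs = pd.DataFrame({
    "child_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wear_hours": [9.5, 10.0, 9.0, 11.2, 10.8, 11.5, 6.0, 8.5, 7.0],
})

# Squared deviations from each child's own mean, divided by
# (total observations - number of children).
grand_n = len(logs)
n_children = logs["child_id"].nunique()
deviations = logs["wear_hours"] - logs.groupby("child_id")["wear_hours"].transform("mean")
within_subject_variance = (deviations ** 2).sum() / (grand_n - n_children)
print(round(within_subject_variance, 3))  # smaller = more consistent day-to-day reporting
```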

RESULTS

Participant characteristics are presented in Table 1. In phase 1, 280 adult participants (175 parents, 105 childcare providers) consented and initiated the survey. Final analyses included only responses with <10% missing data (115 parents, 62 childcare providers). A majority of parents completing the survey were between 30 and 40 yr old (71.3%), and a majority of providers were 50+ yr old (56.5%). Adult participants were predominantly female (parents, 85.2%; providers, 100%) and non-Hispanic White (parents, 71.3%; providers, 61.3%). In phases 2 and 3, 62 parent/child dyads consented and 54 dyads (87%) completed assessments.

TABLE 1 - Characteristics of Participating Children, Parents, and Childcare Providers.

| Characteristic | Phase 1 (n = 177), n | % | Phase 2 (n = 31), n | % | Phase 3 (n = 23), n | % |
|---|---|---|---|---|---|---|
| Parents |  |  |  |  |  |  |
| Age |  |  |  |  |  |  |
| ≤30 yr | 14 | 12.2 | 1 | 3.2 | 20 | 87.0 |
| 30–40 yr | 82 | 71.3 | 24 | 77.4 | 3 | 13.0 |
| 40–50 yr | 17 | 14.8 | 5 | 16.1 | 0 | 0.0 |
| 50+ yr | 2 | 1.7 | 0 | 0.0 | 0 | 0.0 |
| Missing | 0 | 0.0 | 1 | 3.2 | 0 | 0.0 |
| Female | 98 | 85.2 | 30 | 96.8 | 23 | 100.0 |
| Race/ethnicity |  |  |  |  |  |  |
| Non-Hispanic White | 82 | 71.3 | 22 | 71.0 | 20 | 87.0 |
| Non-Hispanic Black | 11 | 9.6 | 6 | 19.4 | 1 | 4.4 |
| Hispanic/Latinx | 9 | 7.8 | 1 | 3.2 | 0 | 0.0 |
| Other | 8 | 7.0 | 1 | 3.2 | 2 | 8.7 |
| Missing | 5 | 4.4 | 1 | 3.2 | 0 | 0.0 |
| Income level |  |  |  |  |  |  |
| <$50,000 | 18 | 15.7 | 0 | 0.0 | 0 | 0.0 |
| $50,000–$100,000 | 32 | 27.8 | 12 | 38.7 | 4 | 17.4 |
| $100,000+ | 65 | 56.5 | 18 | 58.1 | 19 | 82.6 |
| Missing | 0 | 0.0 | 1 | 3.2 | 0 | 0.0 |
| Education level |  |  |  |  |  |  |
| HS/GED/Some college | 26 | 22.6 | 2 | 6.5 | 2 | 8.7 |
| College degree | 36 | 31.3 | 6 | 19.4 | 3 | 13.0 |
| Graduate degree | 49 | 42.6 | 22 | 71.0 | 18 | 78.3 |
| Missing | 4 | 3.5 | 1 | 3.2 | 0 | 0.0 |
| Age of their child |  |  |  |  |  |  |
| 3–5 yr | 63 | 54.8 | 14 | 45.2 | 9 | 39.1 |
| 6–8 yr | 52 | 45.2 | 17 | 54.8 | 14 | 60.9 |
| Sex of their child |  |  |  |  |  |  |
| Male | 59 | 51.3 | 18 | 58.1 | 14 | 60.9 |
| Female | 56 | 48.7 | 13 | 41.9 | 9 | 39.1 |
| Childcare providers |  |  |  |  |  |  |
| Age |  |  |  |  |  |  |
| ≤30 yr | 1 | 1.6 |  |  |  |  |
| 30–40 yr | 4 | 6.5 |  |  |  |  |
| 40–50 yr | 22 | 35.8 |  |  |  |  |
| 50+ yr | 35 | 56.5 |  |  |  |  |
| Missing | 0 | 0.0 |  |  |  |  |
| Female | 62 | 100.0 |  |  |  |  |
| Race/ethnicity |  |  |  |  |  |  |
| Non-Hispanic White | 38 | 61.3 |  |  |  |  |
| Non-Hispanic Black | 19 | 30.7 |  |  |  |  |
| Hispanic/Latinx | 2 | 3.2 |  |  |  |  |
| Other | 2 | 3.2 |  |  |  |  |
| Missing | 1 | 1.6 |  |  |  |  |
| Education level |  |  |  |  |  |  |
| HS/GED/Some college | 25 | 40.3 |  |  |  |  |
| College degree | 30 | 48.4 |  |  |  |  |
| Graduate degree | 6 | 9.7 |  |  |  |  |
| Missing | 1 | 1.6 |  |  |  |  |
| Quality rating of their program |  |  |  |  |  |  |
| 1 star | 7 | 11.3 |  |  |  |  |
| 2 star | 5 | 8.1 |  |  |  |  |
| 3 star | 11 | 17.7 |  |  |  |  |
| 4 star | 12 | 19.4 |  |  |  |  |
| 5 star | 14 | 22.6 |  |  |  |  |
| Missing | 13 | 21.0 |  |  |  |  |
| Total 3- to 5-yr-old enrolled, mean (SD) | 60 | 4.1 (1.8) |  |  |  |  |

Quality rating: providers earn higher ratings as they meet more quality standards.
GED, tests of General Education Development; HS, high school.

Phase 1

Figure 1 shows parent-reported average net promoter scores by child age and sex. The shirt clip/badge had the highest average score across all age/sex categories (all scores >124 points). The second-highest-ranked wear method varied by stratification. Among parents with a 3- to 5-yr-old, the bandage ranked second for both boys (76.8 points) and girls (83.8 points). Among parents with a 6- to 8-yr-old, the glasses ranked second for boys (89.3 points) and the headband ranked second for girls (78.8 points).

Figure 1: Phase 1 parent (n = 115) mean (SE) net promoter scores by child sex (column A, boys; column B, girls) and age (row 1, 3- to 5-yr-olds; row 2, 6- to 8-yr-olds).

For providers, wear method scores were ranked as follows: shirt clip/badge (124.3 points), vest (76.8 points), mask (75.5 points), bandage (73.3 points), headband (71.8 points), glasses (70.8 points), and necklace (55.5 points). The top two wear methods from each stratification (shirt clip/badge, bandage, glasses, headband, and vest) were selected for phase 2 testing.

Phase 2

During phase 2, 31 participants were given one of five mock devices identified from phase 1 to wear for 3 d (n = ~6 children per item). On average, wear logs were completed over 8.5 (vest group) to 12.4 (bandage group) h·d⁻¹. The proportion of observation time with wear was highest for the shirt clip/badge (90.0%), glasses (84.0%), and vest (84.4%), with the badge and glasses averaging 9.8 and 10.4 h of wear per day, respectively. In addition, average net promoter scores were higher for the glasses (155.4 points), shirt clip/badge (145.8 points), and vest (141.7 points) compared with the headband (112.5 points) and bandage (93.7 points). Wear time and net promoter score ranks were similar across age group and self-reported screen time stratification (data not shown). Supplemental Content 5 (table, https://links.lww.com/TJACSM/A188) shows detailed wear log observations and wear time summaries by wear method.

Figure 2 shows the percent of participants during phase 2 who wore each mock device for a given day and hour (10-, 8-, or 6-h) wear criterion. Using a 3-d/10-h criterion, the glasses had the highest compliance, with 42.9% of participants meeting this criterion. Only 33.0% of participants met this criterion for the shirt clip/badge, headband, and vest, and no participant in the bandage group met it. When the minimum required wear was shortened to 2 d with at least 8 h of wear, the percent of children meeting the criterion increased substantially for the glasses (85.7%), shirt clip/badge (66.7%), and headband (50.0%) but did not change for the bandage (33.3%) or vest (33.3%). Wear time was consistent across the 3 d for the headband, shirt clip/badge, glasses, and vest (within-subject variance, <0.24) compared with the bandage (within-subject variance, 0.51). Together, data from the wear logs and survey suggest that the glasses, shirt clip/badge, and vest had the highest potential; these were further tested in phase 3.
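
As a concrete illustration of this day/hour compliance criterion, the sketch below computes the percent of children in a wear-method group who meet a given criterion from their daily wear hours. The data frame, column names, and values are hypothetical, not study data.

```python
# A participant "meets" a criterion if they have at least `min_days` days
# with at least `min_hours` hours of wear.
import pandas as pd

def pct_meeting_criterion(daily_wear, min_days, min_hours):
    """daily_wear: DataFrame with columns child_id and wear_hours (one row per day)."""
    valid_days = (daily_wear[daily_wear["wear_hours"] >= min_hours]
                  .groupby("child_id").size())
    n_meeting = (valid_days >= min_days).sum()
    return 100 * n_meeting / daily_wear["child_id"].nunique()

# Hypothetical 3-day wear logs for four children in one wear-method group.
logs = pd.DataFrame({
    "child_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "wear_hours": [10.5, 11, 9, 12, 10, 10.5, 7, 6, 8, 10, 4, 11],
})

print(pct_meeting_criterion(logs, min_days=3, min_hours=10))  # 3-d/10-h criterion
print(pct_meeting_criterion(logs, min_days=2, min_hours=8))   # 2-d/8-h criterion
```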

Figure 2: Percent of children meeting each wear criterion (days with a given number of hours of wear). Column 1: phase 2 wear logs (n = 31). Column 2: phase 3 wear logs (n = 23). Row A: 10-h criterion. Row B: 8-h criterion. Row C: 6-h criterion.

Phase 3

During phase 3, 23 participants were given one of three mock devices (n = ~8 per device) identified from phase 2 to wear for 7 d. On average, wear logs were completed over 7.9 (vest group) to 12.4 (shirt clip/badge group) h·d⁻¹ (Supplemental Content 5, table, https://links.lww.com/TJACSM/A188). The proportion of observation time with wear was highest for the shirt clip/badge (75.3%) compared with the vest (57.6%) and glasses (56.3%). Within-subject variation showed wear time varied the least for the shirt clip/badge and the most for the vest placement across the 7 d (Supplemental Content 5, table, https://links.lww.com/TJACSM/A188). In addition, average net promoter scores were higher for the shirt clip/badge (169.6 points) and glasses (145.3 points) compared with the vest (112.5 points). Wear time and net promoter score ranks were similar across age group and self-reported screen time stratification (data not shown).

Figure 2 shows the percent of participants during phase 3 who wore each mock device for a given day and hour wear criterion (10, 8, or 6 h). Using a 4-d/10-h criterion, shirt clip/badge showed the highest compliance, with 57.1% of participants meeting this criterion. Only 37.5% of participants met this criterion for the glasses. No participant met the 4-d/10-h criterion in the vest group. By shortening the minimum required wear time to 3 d with at least 8 h of wear, 85.7%, 50.0%, and 25.0% met the criterion for the shirt clip/badge, glasses, and vest, respectively.

DISCUSSION

Not knowing precisely how much screen time children accumulate limits our ability to link exposure to behaviors and health outcomes. As technology to assess screen exposure becomes available (i.e., sensors and processing), researchers need information on housing and monitoring methods that are acceptable for young children and feasible for parents. In three investigative phases, we systematically examined the acceptability and feasibility of several wear methods for a potential sensor designed to detect a child’s screen exposure. Using a minimum wear time criterion and parent ratings, we found that a shirt clip/badge or glasses were superior to a vest, bandage, necklace, headband, or mask for housing the sensor technology.

The move from subjective questionnaires to wearable-device-based methods of screen time detection will greatly improve the accuracy of measurement and our ability to investigate exposure in a more nuanced way. Previous work in this area has been tested primarily in adults in highly controlled settings with small samples (25–28). Although one group has developed a method for estimating screen time in children using a wearable wrist band (28), that approach is limited largely by its wrist placement, which restricts the viewing angle and makes computer screens, tablets, and phones particularly difficult to detect. It is critical to identify sensor placements that are optimal for measurement and also acceptable to young children for long-term wear. Our results narrowed the options to two wear methods that are acceptable, are unobtrusive, and allow measurement at or near eye level. Ideally, the two methods would be interchangeable, but further investigation is needed to determine whether data collected from glasses and a shirt clip/badge are of similar quality under controlled and free-living conditions. For example, the quality of data collected from the sensors will need to be evaluated in various situations, such as when blue light-blocking settings are enabled on electronic devices, when the sensor is at varying distances and postures relative to the electronic device (e.g., lying on the floor watching TV vs sitting at a desk using a laptop), or in environments with bright ambient light (e.g., outside, near windows, riding in a car). Additional considerations may be necessary for the shirt clip/badge placement in situations where clothing (e.g., coats, scarves) may block the light signal. Future research identifying measurement error and other limitations will clarify the settings and situations in which each method is applicable. If both placement methods produce comparable data under a variety of conditions, using participant preference to determine the method of wear could increase compliance and overall data quality in both children and adults.

In the field of physical activity and sedentary behavior measurement, adequate stability (i.e., intraclass correlation coefficients ≥0.80) of device-based measurement is generally believed to be achieved when an assessment includes at least 3–4 d with at least 10 h of wear time over a 7-consecutive-day wear protocol (32–36). However, research has also shown adequate estimates of device-measured physical activity with as little as 1 d with at least 10 h of wear (37). Although the feasibility and acceptability of the glasses and badge were good with at least 1–3 d of at least 10 h of wear, the percent of children with 4 or more such days was lower than ideal. The wear criterion required for light sensors to produce valid data in young children has yet to be established. Our results indicate that wear compliance drops substantially after 3 d of wear, with very few participants achieving 6–7 full days of wear. Further research will be needed to determine how much device-based data are required for a “good” estimate of screen exposure in various groups of adults and children. If more than 3 d is needed, additional strategies to ensure adequate wear may be necessary. For example, some families said that they presented wearing the device to their children as a “big kid research job.” The few families who took this approach reported that their children were more willing to wear the device each day. Another strategy, used in behavioral change research but not assessed in the current study (38–40), is testing different incentive structures or allowing children to choose their own incentives. Understanding what type of incentive most motivates an individual child, or when the incentive is introduced (e.g., before or after the wear period), may increase the child’s motivation to complete study protocols.

Limitations

This study benefits from a multiphase process to identify, test, and refine wear protocols for a wearable light sensor for young children. However, it is limited because all participants were drawn from a convenience sample of educated, predominantly non-Hispanic White families; thus, the results are not generalizable to other populations. Furthermore, this study assessed only the feasibility of wearing the device and did not include a sensor to verify plausible data capture. However, to date, no suitable off-the-shelf wearable light sensor exists, and identifying and refining wear protocols is a necessary first step to reduce the cost and time burden of future validation studies. Although the wear methods were tested without active sensors, the size (diameter, 2 cm; height, 0.8 cm) and weight (19 g) of the sensor are negligible and should not have altered the results. Another limitation is that no data were collected on the number and types of reminders participants needed to wear the devices; additional email or text-message reminders to parents throughout the week may help ensure adequate wear beyond the ~4 d observed in this pilot study. Lastly, the net promoter score is traditionally based on a single survey question. Although we believe that the average net promoter score calculation is a stronger method for determining acceptability than a simple average Likert score, validity evidence for this method in this context is unavailable.

Conclusions

To effectively link children’s screen time to health outcomes, we must be able to accurately measure their exposure. As screen exposure measurement devices become available, it is important that researchers have information on acceptable and feasible monitoring methods. This study sets the stage for testing the utility of potential wear placements for use with young children and establishes minimum expectations for adherence to wear protocols. Future studies need to determine the validity of wearable light sensors in young children and the wear time criterion needed to produce valid data.

The authors thank the families for their participation in the study. Results of the present study do not constitute endorsement by the American College of Sports Medicine. Results are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation.

The authors have no financial conflicts of interest regarding the results of this research. This research was funded by a University of North Carolina–Chapel Hill Center for Health Promotion and Disease Prevention Internal Planning Grant.

REFERENCES

1. Reid Chassiakos YL, Radesky J, Christakis D, et al. Children and adolescents and digital media. Pediatrics. 2016;138(5):e20162593.
2. Costigan SA, Barnett L, Plotnikoff RC, Lubans DR. The health indicators associated with screen-based sedentary behavior among adolescent girls: a systematic review. J Adolesc Health. 2013;52(4):382–92.
3. Stiglic N, Viner RM. Effects of screentime on the health and well-being of children and adolescents: a systematic review of reviews. BMJ Open. 2019;9(1):e023191.
4. Poitras VJ, Gray CE, Janssen X, et al. Systematic review of the relationships between sedentary behaviour and health indicators in the early years (0–4 years). BMC Public Health. 2017;17(Suppl 5):868.
5. Ramsey Buchanan L, Rooks-Peck CR, Finnie RKC, et al. Reducing recreational sedentary screen time: a community guide systematic review. Am J Prev Med. 2016;50(3):402–15.
6. Janssen X, Martin A, Hughes AR, et al. Associations of screen time, sedentary time and physical activity with sleep in under 5s: a systematic review and meta-analysis. Sleep Med Rev. 2020;49:101226.
7. Brand S, Lemola S, Mikoteit T, et al. [Sleep and psychological functioning of children and adolescents—a narrative review]. Prax Kinderpsychol Kinderpsychiatr. 2019;68(2):128–45.
8. Fang K, Mu M, Liu K, He Y. Screen time and childhood overweight/obesity: a systematic review and meta-analysis. Child Care Health Dev. 2019;45(5):744–53.
9. Mihrshahi S, Gow ML, Baur LA. Contemporary approaches to the prevention and management of paediatric obesity: an Australian focus. Med J Aust. 2018;209(6):267–74.
10. Zhang G, Wu L, Zhou L, et al. Television watching and risk of childhood obesity: a meta-analysis. Eur J Public Health. 2016;26(1):13–8.
11. Vandewater EA, Lee S-J. Measuring children's media use in the digital age: issues and challenges. Am Behav Sci. 2009;52(8):1152–76.
12. Barr R, Linebarger DN, editors. Media Exposure during Infancy and Early Childhood: The Effects of Content and Context on Learning and Development. Cham (Switzerland): Springer; 2016. 303 p.
13. Clark BK, Sugiyama T, Healy GN, et al. Validity and reliability of measures of television viewing time and other non-occupational sedentary behaviour of adults: a review. Obes Rev. 2009;10(1):7–16.
14. Jago R, Edwards MJ, Urbanski CR, Sebire SJ. General and specific approaches to media parenting: a systematic review of current measures, associations with screen-viewing, and measurement implications. Child Obes. 2013;9(Suppl 1):S51–72.
15. Junco R. Comparing actual and self-reported measures of Facebook use. Comput Hum Behav. 2013;29(3):626–31.
16. Otten JJ, Littenberg B, Harvey-Berino JR. Relationship between self-report and an objective measure of television-viewing time in adults. Obesity. 2010;18(6):1273–5.
17. Lin Y-H, Lin Y-C, Lee Y-H, et al. Time distortion associated with smartphone addiction: identifying smartphone addiction via a mobile application (App). J Psychiatr Res. 2015;65:139–45.
18. Felisoni DD, Godoi AS. Cell phone usage and academic performance: an experiment. Comput Educ. 2018;117:175–87.
19. Boase J, Ling R. Measuring mobile phone use: self-report versus log data. J Comput Mediat Comm. 2013;18(4):508–19.
20. Shum M, Kelsh MA, Sheppard AR, Zhao K. An evaluation of self-reported mobile phone use compared to billing records among a group of engineers and scientists. Bioelectromagnetics. 2011;32(1):37–48.
21. Domoff SE, Radesky JS, Harrison K, et al. A naturalistic study of child and family screen media and mobile device use. J Child Fam Stud. 2019;28(2):401–10.
22. Barr R, Kirkorian H, Radesky J, et al. Beyond screen time: a synergistic approach to a more comprehensive assessment of family media exposure during early childhood. Front Psychol. 2020;11:1283.
23. Elhai JD, Tiamiyu MF, Weeks JW, et al. Depression and emotion regulation predict objective smartphone use measured over one week. Pers Individ Differ. 2018;133:21–8.
24. Epstein LH, Roemmich JN, Robinson JL, et al. A randomized trial of the effects of reducing television viewing and computer use on body mass index in young children. Arch Pediatr Adolesc Med. 2008;162(3):239–45.
25. Wahl F, Kasbauer J, Amft O. Computer screen use detection using smart eyeglasses. Front ICT. 2017;4:8.
26. Martire T, Nazemzadeh P, Sanna A, Trojaniello D. Digital screen detection enabled by wearable sensors: application in ADL settings. In: Jardim-Gonçalves R, Mendonça JP, Jotsov V, et al, editors. Proceedings of the 2018 International Conference on Intelligent Systems (IS). Funchal, Portugal: Institute of Electrical and Electronics Engineers; 2018. pp. 584–8.
27. Li Z, Rathore AS, Chen B, et al. SpecEye: towards pervasive and privacy-preserving screen exposure detection in daily life. In: Song J, Kim M, Lane ND, et al, editors. Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services. Seoul, Korea: Association for Computing Machinery; 2019. pp. 103–16.
28. Fletcher R, Chamberlain D, Richman D, et al. Wearable sensor and algorithm for automated measurement of screen time. In: Proceedings of the 2016 IEEE Wireless Health (WH). Bethesda, MD: Institute of Electrical and Electronics Engineers; 2016. pp. 1–8.
29. Garcia PAJ, Gonzalez LIL, Amft O. Using implicit user feedback to balance energy consumption and user comfort of proximity-controlled computer screens. J Ambient Intell Humaniz Comput. 2015;6(2):207–21.
30. Reichheld FF. The one number you need to grow. Harv Bus Rev. 2003;81(12):46–54.
31. Hertzmark E, Spiegelman D. The SAS ICC9 Macro [Internet]. Boston (MA): Harvard University; 2010 [cited 2022 Jul 5]. Available from: https://cdn1.sph.harvard.edu/wp-content/uploads/sites/271/2012/09/icc9.pdf.
32. Sasaki JE, Júnior JH, Meneguci J, et al. Number of days required for reliably estimating physical activity and sedentary behaviour from accelerometer data in older adults. J Sports Sci. 2018;36(14):1572–7.
33. Aadland E, Ylvisåker E. Reliability of objectively measured sedentary time and physical activity in adults. PLoS One. 2015;10(7):e0133296.
34. Hart TL, Swartz AM, Cashin SE, Strath SJ. How many days of monitoring predict physical activity and sedentary behaviour in older adults? Int J Behav Nutr Phys Act. 2011;8(1):1–7.
35. Migueles JH, Cadenas-Sanchez C, Ekelund U, et al. Accelerometer data collection and processing criteria to assess physical activity and other outcomes: a systematic review and practical considerations. Sports Med. 2017;47(9):1821–45.
36. Donaldson SC, Montoye AHK, Imboden MT, Kaminsky LA. Variability of objectively measured sedentary behavior. Med Sci Sports Exerc. 2016;48(4):755.
37. Wolff-Hughes DL, McClain JJ, Dodd KW, et al. Number of accelerometer monitoring days needed for stable group-level estimates of activity. Physiol Meas. 2016;37(9):1447–55.
38. Strohacker K, Galarraga O, Williams DM. The impact of incentives on exercise behavior: a systematic review of randomized controlled trials. Ann Behav Med. 2014;48(1):92–9.
39. Just DR, Price J. Using incentives to encourage healthy eating in children. J Hum Resour. 2013;48(4):855–72.
40. Loewenstein G, Price J, Volpp K. Habit formation in children: evidence from incentives for healthy eating. J Health Econ. 2016;45:47–54.

Copyright © 2022 by the American College of Sports Medicine