Self-management is a key component of managing chronic conditions.1–3 An important element of successful self-management is the ability to problem solve: to identify and implement solutions that overcome barriers to optimal decision making.4 Problem solving is defined as the “cognitive-behavioral process by which a person attempts to identify effective and adaptive solutions for specific problems encountered in everyday living.”5 Problem solving takes on different roles in different conditions and has been studied in individuals who have had a stroke,6 spinal cord injury,7 or traumatic brain injury.8 One area in which problem solving and self-management receive a great deal of attention is diabetes care. Monitoring glucose levels, administering the appropriate insulin dose, and monitoring dietary intake are all crucial components of successfully managing this chronic disease.9 Failure to problem solve and self-manage in this population can result in preventable secondary complications, including vision loss or amputation of a limb.10
Limb loss places individuals at risk of developing secondary complications; therefore, skilled self-management is a critical component of a comprehensive treatment program for persons with a new amputation.11,12 Traditionally, self-management strategies and problem-solving skills were taught to the individual with limb loss through demonstrations, support groups, knowledge assessment, and education materials.11–14 More recently, these strategies and skills have been delivered through mobile health technology. Mobile apps, accessed through a smartphone interface, have been used as self-management tools for individuals with chronic pain,15 diabetes,16 and other chronic conditions.17 Although there have been other technological innovations aimed at empowering the individual with limb loss,18–20 none to date has examined the use of a mobile app as an empowerment tool focused on a major component of succeeding with a prosthetic leg: problem solving the fit between the residual limb and the prosthetic socket.
Because comfort is the most common reason a patient seeks out the assistance of a prosthetist,21 and fit is a key indicator of successful ambulation,22 empowering the individual with limb loss to problem solve issues with his or her socket fit is of paramount importance. Recently, a set of patient-centered decision trees for the issue of a prosthesis fitting “loosely” was developed and acceptability tested among current prosthesis wearers.23 The decision trees were determined to have face validity by both content experts (prosthetists and physical therapists) and the target users. The decision trees were created with the older adult prosthesis wearer in mind, because this population is projected to account for an increasing proportion of nontraumatic amputations each year.24 Using the National Institute on Aging's guidelines for creating education materials,25 14 unique paper-based decision trees were created to assist prosthesis wearers in self-managing and problem solving the congruency and comfort of their socket fit. Feedback from the study partially supported the implementation of an alternative delivery format. Although the decision trees were found to be helpful, participants reported challenges in navigation due to the amount of information presented at one time and the need to follow arrows to determine an action. Based on this feedback, the paper-based decision trees were incorporated into a mobile health application (app) to improve ease of navigation. Assessing the technical effectiveness, efficiency, and acceptability of an app before pursuing efficacy testing is common practice in mobile health.26 In this study, technical effectiveness refers to errors committed, and efficiency is the amount of time taken to navigate the instruments. Acceptability refers to the perceived worth of the instruments.
Thus, the aim of this study was two-fold: 1) to assess the technical effectiveness, efficiency, and acceptability of the problem-solving app, and 2) to compare the findings with the paper-based version of the decision trees.
METHODS
PROBLEM-SOLVING TOOLS
The framework for establishing the face and content validity of the decision trees is presented in a prior study.23 Fourteen decision trees based on that study, addressing the issue of a loose-fitting prosthesis, were modified to work within the mobile app's user interface. An example of one of the paper-based decision trees can be viewed in Figure 1. To navigate the trees, a prosthesis wearer or caregiver reads a text box presenting a prompt regarding a prosthetic socket fit issue. The wearer is then presented with the binary options of arrows labeled “yes” or “no” and follows the arrow that corresponds to his or her answer to the prompt. The answer to the prompt leads either to a solution, to another prompt, or to a terminal endpoint that states “problem solved” or “contact your clinician.” Thus, the decision tree guides the wearer through multiple issue-solution sets and empowers him or her to solve prosthetic socket fit problems independently, but instructs the wearer to contact the prosthetist in the case that the problem exceeds his or her self-management abilities and a professional's assistance is required.
Figure 1: Sample paper-based decision tree for a transtibial sleeve-type suspension.
Although the paper-based decision trees present all the information to the wearer at once on a single sheet of paper, the mobile app adapted the format so that only one question is presented on the screen at a time. For each question presented, two touchscreen buttons associated with the responses “yes” and “no” are displayed. If the user makes a mistake by pressing the wrong button, a back button and a restart button are also available. The app automatically navigates the wearer through the decision trees without the need to follow any arrows or remember his or her placement on the page. The content, structure, and function of the decision trees adapted to the mobile app are identical to those of the paper-based decision trees. The app was run on an Apple iPad 2 (Apple, Cupertino, CA, USA) for the testing.
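The navigation scheme described above (one prompt at a time, binary yes/no responses, and back and restart controls) can be sketched as a simple walk through a binary tree. The following Python fragment is purely illustrative; the study app ran on an iPad, and the node texts below are hypothetical, but the fragment captures the structure the decision trees share:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One prompt or terminal endpoint in a decision tree."""
    text: str
    yes: Optional["Node"] = None   # branch followed on a "yes" response
    no: Optional["Node"] = None    # branch followed on a "no" response

    @property
    def is_terminal(self) -> bool:
        # Endpoints such as "Problem solved." have no further branches
        return self.yes is None and self.no is None

class TreeNavigator:
    """Presents one prompt at a time, with back and restart controls."""
    def __init__(self, root: Node):
        self.root = root
        self.history = [root]      # stack of visited nodes enables "back"

    @property
    def current(self) -> Node:
        return self.history[-1]

    def answer(self, yes: bool) -> Node:
        nxt = self.current.yes if yes else self.current.no
        if nxt is not None:        # terminal nodes have nowhere to go
            self.history.append(nxt)
        return self.current

    def back(self) -> Node:
        if len(self.history) > 1:
            self.history.pop()
        return self.current

    def restart(self) -> Node:
        self.history = [self.root]
        return self.current

# Hypothetical fragment of a "loose-fitting prosthesis" tree
tree = Node(
    "Does the prosthesis still feel loose after adding one sock ply?",
    yes=Node("Contact your clinician."),
    no=Node("Problem solved."),
)
nav = TreeNavigator(tree)
nav.answer(yes=False)
print(nav.current.text)   # -> Problem solved.
```

A stack of visited nodes makes the back button trivial to support, which is one reason the one-question-at-a-time format removes the need to retrace arrows by eye.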
PARTICIPANTS
Thirty participants were tested. To be included in the study, participants had to 1) be 18 years of age or older, 2) speak and write fluent English, and 3) currently be using either a transtibial or transfemoral lower-limb prosthesis. Participants were excluded if they were unable to successfully complete the practice trials due to visual, verbal, or cognitive deficits. Recruitment occurred from local prosthetic services clinics, as well as from a regional rehabilitation hospital. All participants completed an informed written consent process before beginning the study. The study was approved by The University of Hartford Institutional Review Board.
DESIGN
A prospective randomized cohort design was used, with counterbalancing to control for order effects. Two problem-solving tools, or conditions, were used in the study: the paper-based decision trees and the mobile app (app) decision trees. Participants performed both conditions during the study, but in a randomized order. For the paper-based condition, participants navigated through the appropriate paper-based decision tree, which was placed in a binder in front of the participant. For the app condition, the participant navigated through each scenario using an interactive application on an iPad. The suspension types represented on the decision trees were independent of the participant's suspension type and were established before the onset of the study.
TECHNICAL EFFECTIVENESS, EFFICIENCY, AND ACCEPTABILITY
Technical effectiveness is a measure of the number of errors committed during each trial, and efficiency is the amount of time taken to complete each trial. Data were collected on a standardized form created for the study, which tracked the outcomes as well as the etiology of the errors. Acceptability was measured through Likert-like questions and a semistructured interview, consistent with the technology acceptance model (TAM).27 The schedule of interview questions can be found in Supplemental Digital Content 1 (https://links.lww.com/JPO/A18).
PROCEDURE
Each testing condition consisted of a total of four trials: one practice trial and three measured trials. A mixture of transtibial and transfemoral decision trees was used, representing five different suspension types. To decrease the influence of learning or of randomly guessing the response, each trial had a unique navigation pathway and number of steps required to complete the scenario. All participants received the same step-matched trials for each condition, and each trial required a different navigation pathway from the previous trial. Step-matching ensured that the trials in each condition required an equivalent number of steps to complete. To complete a trial, the lead researcher read from a script, verbally instructing the participant to choose a particular response at each bifurcation on the decision tree. The participant then had to follow the lead researcher's command and navigate the tool appropriately. A second researcher measured the time the participant took to complete the trial and documented navigation errors. When an error occurred, the second researcher corrected the participant and reset his or her position, and the lead researcher repeated the command. This procedure continued until the participant had navigated all the steps in the trial. Acceptability was measured after each condition was completed.
For the practice trial, instructions on how to navigate the tools were provided. For the paper-based condition, a researcher provided a verbal and visual walkthrough of the decision trees, whereas a video on the iPad displayed instructions for the app condition. The practice trials were of identical design to the measured trials; however, the participants were provided feedback if they encountered any difficulties navigating the tools, without the recording of time or errors. Participants were then asked to complete three unique trials of simulated problem solving that were measured by the researcher. Following the three trials under the first condition (i.e., paper-based condition or app condition trials), the participant was asked to fill out a questionnaire. Afterward, the participant crossed over and performed three trials under the remaining condition. The questionnaire was repeated, and the participant was interviewed. The total time to complete the study was approximately 1 hour.
DATA ANALYSIS AND STATISTICS
Quantitative data were analyzed using SAS JMP (SAS Institute, Cary, NC, USA). Technical effectiveness and efficiency outcomes were analyzed using analysis of covariance (ANCOVA). Covariates included age, education (years of formal education), sex, confidence in limb management (self-assessed scale from 1–5), and experience with using a prosthesis (years). Acceptability was analyzed using the Wilcoxon signed rank test for ordinal scale data. Unadjusted bivariate correlations were run to compare demographics with scores and to determine whether scores were correlated between the two test conditions. Pearson correlation coefficients and P values were calculated. Qualitative data from the interview and questionnaire comments were analyzed using an explanatory framework and a deductive approach based on the research questions.28 Data were initially coded by analyzing the interviews and comments using the constant comparison method, then indexed before developing the common themes that were prominent in the responses.29 A consensus method was used to determine the prominent themes based on the assigned categories: two researchers compared the common themes developed from the initial categorization and distilled redundant categories to prevent overlap. Data for acceptability are presented in both quantitative and qualitative formats, including a summary of responses provided during the interviews.
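As a rough illustration of the statistical choices described above, the paired comparison and bivariate correlations could be reproduced with open-source tools. The authors used SAS JMP; the following Python sketch uses SciPy on synthetic data (the ANCOVA step is omitted for brevity) and is not the study analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired data: completion times (s) for 30 participants
paper_time = rng.normal(60, 15, 30)           # paper-based condition
app_time = paper_time - rng.normal(8, 4, 30)  # app condition, assumed faster

# Paired comparison of conditions: Wilcoxon signed rank test
w_stat, p_wilcoxon = stats.wilcoxon(paper_time, app_time)

# Unadjusted bivariate correlation, e.g., age vs. paper-based time
age = rng.integers(30, 85, 30).astype(float)
r, p_corr = stats.pearsonr(age, paper_time)
print(f"Wilcoxon P = {p_wilcoxon:.3f}; Pearson r = {r:.2f}")
```

The Wilcoxon signed rank test is appropriate here because the two conditions yield paired (within-participant) scores and ordinal acceptability data, where a paired t-test's normality assumption may not hold.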
RESULTS
Demographic variables can be viewed in Table 1.
Table 1: Demographics
TECHNICAL EFFECTIVENESS AND EFFICIENCY
Results demonstrated a statistically significant difference between the app and paper-based conditions for both efficiency and technical effectiveness. The app resulted in faster test times and fewer errors (Table 2). Average time and errors can be visualized in the box plot in Figure 2.
Table 2: ANCOVA Results
Figure 2: A box plot demonstrating the technical effectiveness and efficiency of the two conditions.
Covariates were also analyzed. Age had a significant effect on scores, with younger subjects completing trials faster and with fewer errors. There was a significant interaction effect of age by test type (paper-based vs. app), such that age had a significantly larger effect on the paper-based condition than on the mobile app. These interactions are consistent with the correlation results (described below), which demonstrated a correlation between age and the paper-based condition but not between age and the app. Education also had a significant effect on scores: subjects with higher levels of education were faster and committed fewer errors. The significant effect of sex was present only in the time measurement, as males completed the tests on average 20% faster than females; there was no difference between sexes in number of errors. Finally, higher self-confidence was significantly associated with longer time to complete the tests. Experience did not have a significant effect on scores.
CORRELATIONS
A summary of correlations is presented in Table 3. Participants' performance on one condition was correlated with their performance on the other. The efficiency of the paper-based condition showed a medium positive correlation with the efficiency of the app condition (r = 0.654) and a weak positive correlation with the technical effectiveness of the paper-based condition (r = 0.482). In addition, efficiency in the app condition was weakly positively correlated with the technical effectiveness scores in both conditions (paper-based, r = 0.454; app, r = 0.395). One of the strongest positive correlations was found between the technical effectiveness scores of the two conditions (r = 0.732).
Table 3: Correlations
Education level demonstrated a weak negative correlation with the efficiency of the app condition (r = −0.42) and a medium negative correlation with the technical effectiveness of the paper-based condition (r = −0.51). That is, the higher the education level of the participant, the less time it took to complete the app condition and the fewer errors were made navigating the paper-based condition. Age demonstrated a medium positive correlation for efficiency (r = 0.65) and a weak positive correlation for technical effectiveness (r = 0.37) in the paper-based condition (Figure 3). These findings suggest that, as age increased, the paper-based condition required more time to complete and produced more errors when compared with the mobile app.
Figure 3: Correlations between age and the paper-based condition for average time/average errors.
OUTLIERS
Of note was the prominence of a few outliers in the data set. It was decided to include all outliers in the analysis, given the small sample size (n = 30) and the fact that the measured outcomes were not due to tester error. To ensure that the outliers were not responsible for skewing the results toward a significant finding, the ANCOVA, interaction, and correlation analyses were run with both the original data set and a data set that excluded any scores greater than three standard deviations from the mean. Although the presence of outliers did not influence the technical effectiveness, efficiency, or acceptability outcomes, it did influence a number of the correlations. Specifically, removing a single participant's (participant #30) data points from the technical effectiveness scores resulted in the loss of significance among five correlations. Although the results presented include the outlier for uniformity of the data analysis, the correlations that were significantly influenced are denoted with a superscript “c” in Table 3.
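The three-standard-deviation screening rule described above is straightforward to express in code. The sketch below uses fabricated completion times (not the study data) to show how a single extreme score is flagged:

```python
import numpy as np

def exclude_outliers(scores, k=3.0):
    """Return the scores within k sample standard deviations of the mean,
    along with a boolean mask marking which values were kept."""
    scores = np.asarray(scores, dtype=float)
    mask = np.abs(scores - scores.mean()) <= k * scores.std(ddof=1)
    return scores[mask], mask

# Fabricated data: 29 typical completion times plus one extreme score
rng = np.random.default_rng(1)
times = np.append(rng.normal(60, 10, 29), 200.0)
kept, mask = exclude_outliers(times)
print(f"kept {mask.sum()} of {times.size} scores")  # the 200 s score is dropped
```

Note that with very small samples this rule can fail to flag anything: a single extreme value inflates the sample standard deviation enough that no point can exceed three standard deviations, which is one more reason sensitivity analyses with and without outliers are worthwhile.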
ACCEPTABILITY
A total of six questions were asked using Likert scales. These questions were modified from other acceptability measures for the purpose of this study.30,31 For questions 1, 3, 5, and 6, a higher score represents a more favorable outcome, whereas a lower score is more favorable for questions 2 and 4. The average score for question 1, “Were the decision trees easy to navigate?,” was 4/5 (±0.78). For question 2, “Were the decision trees confusing?,” the average score was 2.4/5 (±1.1). Question 3 asked, “Was the app easy to navigate?,” and was scored 4.5/5 (±1.0). Question 4 was, “Was the app confusing?,” and the average score reported was 1.5/5 (±0.94). Questions 5 and 6 asked the participant to rate his or her “overall experience” with the app and the decision trees, respectively. The app had a rating of 4.5/5 (±0.86), whereas the paper-based experience was rated 3.5/5 (±0.81). Wilcoxon signed rank tests were significant in all cases. Ease of use (z = −2.33; P = 0.02), confusion (z = −3.23; P < 0.001), and overall experience (z = −3.63; P < 0.001) all favored the mobile app condition over the paper-based condition (Figure 4). Qualitative analysis revealed three prominent themes from the data: 1) the paper-based decision trees were organized; 2) the paper-based decision trees were difficult to navigate; and 3) the mobile app was simple to use and navigate.
Figure 4: Average acceptability scores. Note: Higher scores are desirable for “ease of use” and “overall”; lower scores are desirable for “confusing.”
DISCUSSION
The purpose of this study was to compare the technical effectiveness, efficiency, and acceptability of two novel problem-solving tools for lower-limb prosthesis wearers. Both quantitative and qualitative data were analyzed to evaluate the primary outcomes.
TECHNICAL EFFECTIVENESS AND EFFICIENCY
The app was found to be significantly faster to use and was navigated with fewer errors when compared with the paper-based condition. In the full data set, the total number of errors in the paper-based condition was more than three times that of the app condition: 60 errors (summed across all participants and trials) were committed on the paper-based condition versus 18 on the app. More trials in the app condition recorded zero errors than in the paper-based condition. Given that the trials were step-matched for difficulty (i.e., the same number of steps was required for completion), this finding indicates improved navigation of the decision trees when using a mobile app interface that simplifies the presentation and response procedure. The result is likely due to the minimalist approach undertaken in the design of the app, an option that was not possible when creating the paper-based versions. Because there were no arrows to follow and the app automatically navigated the prompts without the wearer having to retrace steps, the interface lends itself to less error-prone and more efficient navigation, similar to what has been reported in other studies using mobile apps.32,33 Although average times did not vary greatly, there was a significant difference between the conditions, attributable to the larger variability of efficiency times in the paper-based condition compared with the app condition. This efficiency finding demonstrates that the mobile app condition was consistently faster to navigate, with less variation.
CORRELATIONS
Results demonstrated a consistent relationship between the two conditions regarding technical effectiveness and efficiency, supporting the use of the step-matched trials between conditions. Education correlated with the primary outcomes, consistent with other studies examining the influence of education level on health literacy.34,35 Thus, designing materials with the participant's health literacy in mind is supported.36 Age demonstrated a medium positive correlation for efficiency and a weak positive correlation for technical effectiveness in the paper-based condition. However, age did not significantly correlate with either efficiency or technical effectiveness in the app condition. These findings suggest that, as age increased, time to complete the paper-based condition increased, along with the commission of more errors, when compared with the mobile app. Thus, the app likely simplified the navigation task, especially for older adults, challenging the notion that older adults find newer technology more difficult to learn, in contrast to other studies.37,38 However, because a single outlier had a notable impact on the correlation between age and time to complete the trials, further exploration of digital-based interventions for self-management, specifically in the older adult population, is warranted. Finally, sex and years of experience were not correlated with any of the dependent variables, suggesting the tools were not biased by sex or experience.
ACCEPTABILITY
QUANTITATIVE
The results indicate that the app condition was more acceptable, both in terms of ease of navigation and organization. In addition, the overall experience with the app condition was rated higher than that of the paper-based condition. Although acceptability was significantly greater for the app condition, the acceptability scores for the paper-based condition still demonstrated acceptance. The paper-based decision trees were found to be easy to use by approximately 80% of participants, and the overall experience was given an average score of 3.5/5. Regarding whether the paper-based trees were confusing to navigate, the average score represented a “neutral” outlook, which is consistent with the prior study evaluating the navigation and readability of paper-based decision trees for individuals with limb loss.23
QUALITATIVE
The first theme that developed for the paper-based condition was that participants liked the layout of the decision trees. Categories that defined the layout included the incremental presentation of the information, the color-coded solutions, and the direct nature of the decision trees. Support for this theme was found in the majority of participants' responses. The second theme was that the paper-based decision trees can be confusing to navigate, based primarily on the difficulty of following the arrows and the overwhelming presentation of all the information at once. When the comments regarding the errors made during the trials were reviewed, it was found that nearly 50% of all the errors committed during the paper-based trials were due to failing to properly follow the arrows.
The one theme that developed for the mobile app was its ease of use and navigation. This is consistent with other studies evaluating the use of mobile apps.39,40 Many participants found the one-touch approach to navigating the decision trees to be a desirable trait. Because the app eliminated the use of arrows and required only a “yes” or “no” response with a touch of the finger, the navigation issues prominent in the paper-based decision trees were not described for the app condition.
LIMITATIONS
The study had several limitations. Because this article examined only technical effectiveness, efficiency, and acceptability, it was not necessary that participants be experiencing issues with their prosthesis fit at the time of the study. Therefore, it is beyond the scope of this article to generalize the benefit of using these tools to an individual who is currently attempting to problem solve his or her own unique fit issues. Because of the interaction effect of age on efficiency in the paper-based condition, it would be beneficial for further studies to either include a larger sample size or define a more stringent inclusion age range. Finally, neither literacy nor reading speed was assessed as part of this study; these factors could have influenced the efficiency and technical effectiveness results.
CONCLUSIONS
The mobile app-based decision trees demonstrated a statistically significant improvement in technical effectiveness, efficiency, and acceptability in comparison with the paper-based decision trees. Both conditions had relatively high acceptability; however, the mobile app was found to be significantly easier to use and less confusing than the paper-based version of the decision trees. In the paper-based condition, older adults tended to demonstrate lower efficiency and technical effectiveness than younger adults, whereas age was not significantly correlated with performance outcomes in the mobile app condition. This finding challenges the belief that older adults fare better with paper-based education materials and suggests that mobile app-based education materials are a viable alternative. Further study is required to assess the efficacy of the decision trees for managing a prosthetic fit issue in an individual who is currently having difficulty managing his or her prosthetic socket fit.
ACKNOWLEDGMENTS
We would like to acknowledge Hanger Clinics, New England Orthotics and Prosthetics Solutions, and the University of Hartford for their support in this research.
REFERENCES
1. Kolbe J, Vamos M, Fergusson W, et al. Differential influences on asthma self-management knowledge and self-management behavior in acute severe asthma. Chest 1996;110(6):1463–1468.
2. Coates VE, Boore JR. Knowledge and diabetes self-management. Patient Educ Couns 1996;29(1):99–108.
3. Curtin RB, Sitter DC, Schatell D, et al. Self-management, knowledge, and functioning and well-being of patients on hemodialysis. Nephrol Nurs J 2004;31(4):378–386, 396; quiz 387.
4. Hill-Briggs F. Problem solving in diabetes self-management: a model of chronic illness self-management behavior. Ann Behav Med 2003;25(3):182–193.
5. D'Zurilla TJ, Nezu AM. Problem-Solving Therapy: A Positive Approach to Clinical Intervention. 3rd Ed. New York: Springer Publishing Company; 2007.
6. Grant JS, Elliott TR, Weaver M, et al. Social support, social problem-solving abilities, and adjustment of family caregivers of stroke survivors. Arch Phys Med Rehabil 2006;87(3):343–350.
7. Elliott TR, Berry JW. Brief problem-solving training for family caregivers of persons with recent-onset spinal cord injuries: a randomized controlled trial. J Clin Psychol 2009;65(4):406–422.
8. Rivera PA, Elliott TR, Berry JW, Grant JS. Problem-solving training for family caregivers of persons with traumatic brain injuries: a randomized controlled trial. Arch Phys Med Rehabil 2008;89(5):931–941.
9. Fitzpatrick SL, Schumann KP, Hill-Briggs F. Problem solving interventions for diabetes self-management and control: a systematic review of the literature. Diabetes Res Clin Pract 2013;100(2):145–161.
10. Fan L, Sidani S. Effectiveness of diabetes self-management education intervention elements: a meta-analysis. Can J Diabetes 2009;33(1):18–26.
11. Wegener ST, Mackenzie EJ, Ephraim P, et al. Self-management improves outcomes in persons with limb loss. Arch Phys Med Rehabil 2009;90(3):373–380.
12. Goodworth AD, Veneri DA, Burger J, Lee DJ. Development and pilot testing of an international knowledge assessment of prosthetic management for patients using lower-limb prostheses. J Prosthet Orthot 2017;29(1):28–34.
13. Pantera E, Pourtier-Piotte C, Bensoussan L, Coudeyre E. Patient education after amputation: systematic review and experts' opinions. Ann Phys Rehabil Med 2014;57(3):143–158.
14. Blum C, Ehrler S, Isner ME. Assessment of therapeutic education in 135 lower limb amputees. Ann Phys Rehabil Med 2016;59S:e161.
15. Stinson J, Lalloo C, Harris L, et al. iCanCope with Pain: user-centered design of an integrated smartphone and web-based pain self-management program for youth and young adults with chronic pain. J Pain 2016;17(4):S109.
16. Holmen H, Torbjornsen A, Wahl AK, et al. A mobile health intervention for self-management and lifestyle change for persons with type 2 diabetes, part 2: one-year results from the Norwegian randomized controlled trial RENEWING HEALTH. JMIR mHealth uHealth 2014;2(4):e57.
17. Hartzler AL, Venkatakrishnan A, Mohan S, et al. Acceptability of a team-based mobile health (mHealth) application for lifestyle self-management in individuals with chronic illnesses. Conf Proc IEEE Eng Med Biol Soc 2016;2016:3277–3281.
18. Winkler SL, Cooper R, Kraiger K, et al. Self-management intervention for amputees in a virtual world environment. In: Merrick PMSJ, ed. Recent Advances on Using Virtual Reality Technologies for Rehabilitation. Hauppauge, NY: Nova Science Publishers; 2016:25–31.
19. Ceniceros X. Testing the Effects of the E-Health Amputee Patient Empowerment Program. 2016.
20. Mathur N, Glesk I, Davidson A, et al. Wearable mobile sensor and communication platform for the in-situ monitoring of lower limb health in amputees. Paper presented at: 2016 IEEE International Symposium on Circuits and Systems (ISCAS); 22–25 May 2016.
21. Legro MW, Reiber GD, Smith DG, et al. Prosthesis evaluation questionnaire for persons with lower limb amputations: assessing prosthesis-related quality of life. Arch Phys Med Rehabil 1998;79(8):931–938.
22. Klute GK, Berge JS, Biggs W, et al. Vacuum-assisted socket suspension compared with pin suspension for lower extremity amputees: effect on fit, activity, and limb volume. Arch Phys Med Rehabil 2011;92(10):1570–1575.
23. Lee DJ, Veneri DA. Development and acceptability testing of decision trees for self-management of prosthetic socket fit in adults with lower limb amputation. Disabil Rehabil 2017;1–6.
24. Ziegler-Graham K, MacKenzie EJ, Ephraim PL, et al. Estimating the prevalence of limb loss in the United States: 2005 to 2050. Arch Phys Med Rehabil 2008;89(3):422–429.
25. National Institute on Aging. Making Your Printed Health Materials Senior Friendly: Tips from the National Institute on Aging.
26. Hong Y, Goldberg D, Dahlke DV, et al. Testing usability and acceptability of a web application to promote physical activity (iCanFit) among older adults. JMIR Hum Factors 2014;1(1):e2.
27. Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform 2010;43(1):159.
28. Gale NK, Heath G, Cameron E, et al. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol 2013;13:117.
29. Smulowitz S. Constant comparison. In: The International Encyclopedia of Communication Research Methods. John Wiley & Sons, Inc; 2017. doi: 10.1002/9781118901731.iecrm0041.
30. Scheibe M, Reichelt J, Bellmann M, Kirch W. Acceptance factors of mobile apps for diabetes by patients aged 50 or older: a qualitative study. Med 2 0 2015;4(1):e1.
31. Bravo C, O'Donoghue C, Kaplan CP, et al. Can mHealth improve risk assessment in underserved populations? Acceptability of a breast health questionnaire app in ethnically diverse, older, low-income women. J Health Dispar Res Pract 2014;7(4).
32. Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. Pharmacy and Therapeutics 2014;39(5):356–364.
33. Anderson K, Burford O, Emmerton L. Mobile health apps to facilitate self-care: a qualitative study of user experiences. PLoS One 2016;11(5):e0156164.
34. Schillinger D, Grumbach K, Piette J, et al. Association of health literacy with diabetes outcomes. JAMA 2002;288(4):475–482.
35. Kalichman SC, Rompa D. Functional health literacy is associated with health status and health-related knowledge in people living with HIV-AIDS. J Acquir Immune Defic Syndr 2000;25(4):337–344.
36. Berkman ND, DeWalt DA, Pignone MP, et al. Literacy and health outcomes. Evid Rep Technol Assess (Summ) 2004;87:1–8.
37. Mitzner TL, Boron JB, Fausset CB, et al. Older adults talk technology: technology usage and attitudes. Comput Human Behav 2010;26(6):1710–1721.
38. Demiris G, Rantz M, Aud M, et al. Older adults' attitudes towards and perceptions of "smart home" technologies: a pilot study. Med Inform Internet Med 2004;29(2):87–94.
39. Parmanto B, Pramana G, Yu DX, et al. iMHere: a novel mHealth system for supporting self-care in management of complex and chronic conditions. JMIR mHealth uHealth 2013;1(2):e10.
40. Al Ayubi SU, Parmanto B, Branch R, Ding D. A persuasive and social mHealth application for physical activity: a usability and feasibility study. JMIR mHealth uHealth 2014;2(2):e25.