There is increasing recognition of the importance of measuring outcomes for persons with upper limb amputation.1–4 Outcome measurement, when done systematically, can be used to assess treatment effectiveness, monitor function and prosthetic satisfaction, and justify the costs of rehabilitation services and prosthetic devices for persons with upper limb amputation. Outcome measurement for persons with amputation is especially important given the need for lifelong prosthetic care, the need to justify the high costs of prosthetic devices,5 and common insurance restrictions on prosthetic coverage.6
Multiple types of outcomes are important for upper limb amputees.4,7 Measures are needed that address each domain of the International Classification of Functioning, Disability and Health, that is, body structure and function, activities, and participation.3 In addition, measures of prosthetic satisfaction are important for assessing quality and consumer satisfaction. However, researchers and clinicians are limited in their choice of outcome measures because so few have been developed and/or specifically validated for persons with upper limb amputation, and most of those in use were developed for pediatric amputees.4,7–10
Outcome measures can be categorized as generic, population specific, or patient specific. They can be performance based, scored by an external observer, or they may be self-reported, scored by the patient. Generic measures can be used in a wide variety of populations, and thus, their scores can be compared across populations. Generic dexterity measures assess impairments of body function. These types of measures have been infrequently used in upper limb prosthetic research, and no data are available on their reliability or validity in this population.8 Condition-specific instruments are designed for use in specific patient populations, and thus, they target areas that are most relevant to the disease or condition and may be more responsive to change than are generic instruments.11 Patient-specific measures assess activities valued by individual patients12 and are thought to be particularly suited for measurement of change for the individual patient.13–15
Outcome measures must meet basic measurement criteria to be suitable for clinical and research purposes. For use in small studies and clinical practice, they must be validated for the clinical population and be highly reliable, and the scores and changes in scores must be easily interpretable. There has been little research conducted on generic, condition-specific, or patient-specific measures for adults with upper limb amputation.1,8
Thus, the purpose of this study was to examine the measurement properties of the tests and measures used in the Department of Veterans Affairs (VA) Study to Optimize the DEKA Arm. In this article, we report on two generic dexterity tests: the Box and Block Test of Manual Dexterity (BB) and the Jebsen-Taylor Test of Hand Function (JTHF); two condition-specific measures: the Upper Extremity Functional Scale (UEFS) from the Orthotic and Prosthetics Users Survey (OPUS) and the satisfaction scale of the Trinity Amputation and Prosthetics Experience Scale (TAPES); and one patient-specific measure: the Patient-Specific Function Scale (PSFS). Specifically, we aimed to 1) estimate the test-retest reliability, 2) calculate the minimum detectable change (MDC), and 3) examine known group validity of the above measures.
This was a multisite study with repeated measurements of subjects. Data were collected at four sites: three VA Medical Centers and one Department of Defense (DoD) site. The study received institutional review board approval at all study sites.
Subjects were a convenience sample of upper limb amputees who were screened for either the pilot study of the VA Study to Optimize the DEKA Arm or the full study.16 Subjects were eligible to participate if they were at least 18 years old and had unilateral or bilateral upper limb amputation at the transradial, transhumeral, shoulder disarticulation, or forequarter level. Both current users of any type of prosthetic device and nonusers of devices were included. Subjects were excluded if they had significant uncorrectable visual deficits, major communication or neurocognitive deficits, skin conditions prohibiting prosthetic wear, or an electrically controlled medical device (e.g., a pacemaker or drug pump).
Subjects were recruited through several methods: clinicians from the study sites invited their patients to participate, e-mails and press releases regarding the study were sent to national listservs and consumer groups, and approved flyers and brochures were distributed at VA/DoD medical centers, private prosthetic treatment centers, and other locations that upper limb amputees might frequent.
Subjects in the full VA study participated in two data collection sessions within 1 week. All pertinent measures were administered at these two sessions, except the PSFS, which was administered only at the first visit. Nonprosthetic users did not complete the dexterity tests or the TAPES at either session but did complete all other measures. The pilot study protocol used a single screening visit with one baseline testing session for prosthetic users. The full study protocol used baseline testing visits with repeat testing within 1 week. Because subjects in the pilot study participated in a single data collection session, their data were used only in the validation portion of the study.
The dexterity tests were administered by occupational therapists (OTs) and were timed by the study research assistants. Study OTs were trained in the test administration by the first author. The same therapist administered the dexterity tests at the first and second assessments. During the training, therapists received instructions, practiced implementing the test, and were supervised in their implementation throughout the study period through video monitoring. Written self-report measures were completed by the study subjects.
We used two easy-to-administer dexterity measures that had been used in a wide variety of previous research: the BB17–19 and the JTHF. The BB test has been used as an outcome measure of hand function in upper limb prosthetics20,21 for conditions including stroke22–25 and multiple sclerosis.26 The JTHF has been used in studies of prosthetics,21 orthotics,27 wrist arthrodesis,28 stroke,29–31 brain injury,32 arthritis,33,34 and nerve injury.35
The BB17–19 consists of a box with a center partition. Small wooden blocks are placed in one side of the box, and the subject is asked to use the prosthetic terminal device to grasp one block at a time, transport it over the partition, and release it. The score is the number of blocks transported to the other side in 60 seconds.
The JTHF36 is a seven-part dexterity test that records the time needed to perform seven hand-related tasks: 1) printing a 24-letter sentence of third-grade reading difficulty; 2) turning over 7.6 × 12.7 cm (3 × 5 in) cards in simulated page turning; 3) picking up small common objects (e.g., pennies, paper clips, bottle caps) and placing them in a container; 4) stacking checkers; 5) simulated feeding; 6) moving large empty cans; and 7) moving large 1-lb cans. Each subtest is scored separately, traditionally by recording the number of seconds required to complete the task, with no cap on the time allowed. In our study, we modified the test administration and scoring method because we expected that some subtasks might be too difficult for upper limb amputees to complete and that it would not be feasible to provide unlimited testing time in a clinical environment. We capped the maximal allowable time for each subtask at 2 minutes. Rather than calculating the time to complete all items in the subtask, we calculated the number of items completed per second. We also counted the number of subactivities completed within the allotted 2 minutes.
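The modified scoring just described (a 2-minute cap, with performance expressed as items completed per second) might be sketched as follows; the function and parameter names are illustrative and not part of the study protocol:

```python
def jthf_subtask_score(items_completed, seconds_elapsed, cap_seconds=120):
    """Modified JTHF subtask score: items completed per second,
    with elapsed time capped at 2 minutes (120 seconds)."""
    t = min(seconds_elapsed, cap_seconds)
    if t == 0:
        return 0.0
    return items_completed / t
```

A subject who turns 6 cards in 60 seconds would score 0.1 items per second; a subject who runs past the cap is scored as if testing stopped at 120 seconds.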
We used two condition-specific self-report measures: the UEFS from the OPUS,9,37 which assesses functional activities, and the satisfaction scale of the TAPES, which assesses prosthetic satisfaction.38 Both condition-specific measures were considered promising for adults with upper limb amputation; however, there have been no published data on the test-retest reliability or validity of either measure.8
The UEFS was developed for use with adult upper limb amputees. Items in the UEFS ask clients to rate the ease of performing 23 activities, including self-care and instrumental daily living tasks, on a 5-point scale from 1 = very easy to 5 = cannot perform. Activities range from washing, buttoning a shirt, tying shoelaces, using a fork or spoon, and writing one's name to donning and doffing the prosthesis. In our study, we used 22 of the 23 UEFS items, omitting the one item related to washing. Because of this, we used item response theory (IRT) methods in WINSTEPS39 to recalibrate the measure scores and calculate person-level summary scores (UEFS summary). The UEFS questionnaire also asks respondents to indicate whether they usually perform each activity using or not using their prosthesis (or orthosis). We scored the UEFS use scale by calculating the proportion of activities that subjects indicated they performed using the prosthesis.
The TAPES satisfaction scale contains 10 items related to satisfaction with functional characteristics of the artificial limb: reliability, comfort, fit, overall satisfaction, and contentment with the cosmetic characteristics of the device.40 Each item is rated on a 5-point scale from 1 = very dissatisfied to 5 = very satisfied. The TAPES was originally developed for people with lower limb amputations but has recently been used for persons with acquired upper limb amputations.40 Cronbach α (internal consistency) for the prosthetic satisfaction scale for upper limb amputees has been reported as 0.94.40
We also tested one patient-specific measure assessing function, the PSFS.14 The PSFS asks subjects to identify up to five activities that they have difficulty performing because of their condition and then rate the amount of limitation they have in performing these activities on a scale of 0 to 10, with 0 being unable to perform the activity and 10 being able to perform the activity with no problem. Individual items are scored separately. The PSFS has been shown to be valid and responsive to change for patients with arm impairments,41 neck pain, cervical radiculopathy, knee pain, and low back pain13,42 but has not previously been examined in upper limb amputees.
EXAMINATION OF TEST-RETEST RELIABILITY
Test-retest reliability comparing the scores for visits 1 and 2 was examined using repeated-measures analysis of variance and the Shrout and Fleiss intraclass correlation coefficient (ICC) (type 3,1). The ICC (3,1) is a two-way mixed-model, single measure of reliability. Test-retest reliability of the scores of each rater was evaluated separately.
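The Shrout and Fleiss ICC(3,1) used here can be computed from the mean squares of a two-way ANOVA without replication. A minimal pure-Python sketch, with an illustrative function name and data layout (one row of repeated scores per subject):

```python
def icc_3_1(scores):
    """Shrout & Fleiss ICC(3,1): two-way mixed model, single measure.

    scores: list of rows, one per subject; each row holds that
    subject's scores across the k repeated sessions (or raters).
    """
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of repeated sessions
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Sums of squares for a two-way ANOVA without replication
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because ICC(3,1) is a consistency coefficient, a uniform shift between visits (e.g., every subject scoring exactly one point higher at visit 2) still yields a coefficient of 1.0.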
MINIMUM DETECTABLE CHANGE
Minimum detectable change is a statistical measure of change, defined as the minimum amount of change that exceeds measurement error.43 To calculate MDC at the 90% and 95% confidence level, we used the coefficients from the test-retest reliability analyses. The formula for MDC 90 is shown below:
SEM = [SD at first assessment] × (1 − ICC)^0.5
MDC90 = [z score for 90% confidence] × SEM × 2^0.5
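The SEM and MDC calculations above can be sketched in code; this assumes the conventional √2 multiplier for change between two measurements and z values of 1.645 (90% confidence) and 1.96 (95% confidence), with illustrative names:

```python
import math

def sem(sd_first_visit, icc):
    """Standard error of measurement: SD × sqrt(1 − ICC)."""
    return sd_first_visit * math.sqrt(1.0 - icc)

def mdc(sd_first_visit, icc, z=1.645):
    """Minimum detectable change: z × SEM × sqrt(2).

    z = 1.645 gives MDC90; pass z = 1.96 for MDC95.
    """
    return z * sem(sd_first_visit, icc) * math.sqrt(2.0)
```

For example, with a baseline SD of 10 points and an ICC of 0.90, the SEM is about 3.16 points and the MDC90 about 7.36 points; a perfectly reliable measure (ICC = 1.0) has an MDC of zero.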
EVALUATION OF FLOOR AND CEILING EFFECTS
We also assessed the extent of floor and ceiling effects of each measure by examining the distribution of scores for each scale at the first assessment and observing the shape and presence of score clustering. Scores clustered at the low end of the scale suggest the presence of floor effect, whereas scores clustered at the high end of the scale would suggest the presence of a ceiling effect.
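One simple way to quantify the score clustering described above is to compute the percentage of subjects at each scale extreme; this sketch (names illustrative) flags candidate floor and ceiling effects:

```python
def floor_ceiling_pct(scores, scale_min, scale_max):
    """Percentage of subjects scoring at the scale minimum (floor)
    and at the scale maximum (ceiling).

    Large clusters at either extreme suggest a floor or ceiling
    effect, respectively.
    """
    n = len(scores)
    floor = 100.0 * sum(1 for s in scores if s == scale_min) / n
    ceiling = 100.0 * sum(1 for s in scores if s == scale_max) / n
    return floor, ceiling
```

A common rule of thumb (not the study's criterion, which was based on visual inspection of the distributions) treats 15% or more of the sample at an extreme as a potential floor or ceiling effect.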
KNOWN GROUP VALIDITY
We used analyses of variance to compare scores at the first testing session (visit 1) by level of amputation, expecting that subjects with more proximal levels of amputation would have lower (worse) scores than would those with more distal (i.e., transradial) amputations.
The flow of subjects involved in the analysis of dexterity measures and the TAPES is shown in Figure 1. Sixty-two prosthetic users were screened for either the parent pilot study (n = 7) or the full parent study (n = 55). Three subjects who were prosthetic users did not complete all tests with their current device at visit 1 because their device was either broken (two subjects) or because it was a cosmetic (passive) device only (one subject). Thus, data from 59 subjects were used in the analysis of floor and ceiling effects and validity for dexterity and prosthetic satisfaction (TAPES).
Fifty-two prosthetic users who were screened in the full study protocol were eligible for visit 2 repeat testing with their prostheses. Complete data were collected from 49 of these persons, and these data were used in the analysis of reliability and MDC. One subject was lost to follow-up at visit 2, one subject was deemed ineligible for the parent study because of the length of his residuum and was thus terminated from the study, and one subject completed only part of visit 2 data collection because of time constraints. Fifty-three prosthetic users (including the cosmetic user) completed the TAPES at baseline and repeat testing; however, for consistency of our reliability analysis, we used data from the same 49 persons who completed dexterity tests at two time points.
The flow of subjects involved in the analysis of the UEFS is shown in Figure 2. Eleven subjects who were not prosthetic users were screened for either the parent pilot study (n = 1) or the full parent study (n = 10) in addition to the 62 prosthetic users mentioned above. Both nonusers and users of prostheses completed the UEFS, as this measure is appropriate for upper limb amputees regardless of device use. Seventy-three subjects completed the UEFS at visit 1 and were included in the analysis of the floor and ceiling effects and validity of the UEFS. Sixty subjects who completed the UEFS at both visits 1 and 2 were included in the analysis of the reliability and MDC of the UEFS.
The descriptive statistics of the four samples used (reliability of dexterity tests, validity of dexterity tests, reliability of the UEFS, and validity of the UEFS) are provided in Tables 1 and 2. Subjects in all groupings had a mean age of about 45 years and were mostly male white veterans with transradial or transhumeral amputations.
TEST-RETEST RELIABILITY AND MDC
The results of the test-retest reliability analyses are shown in Table 3. Reliability coefficients (ICC 3,1) were greater than 0.70 for all measures except for JTHF number of items for writing, small items, feeding, checkers, light cans, heavy cans, and checkers items per second. The MDC 90 and MDC 95 values for measures with reliability coefficients greater than 0.6 are shown in Table 4. As expected, items with lower reliability coefficients had larger MDCs, of approximately half of the scale interval (results not shown).
FLOOR AND CEILING EFFECTS
The distribution of scores for most measures is shown in Figures 3 and 4. A potential floor effect was observed for the JTHF page turning, small items, and feeding subtests, with more than 15% of the sample completing zero or one item per second. In addition, more than 60% of the subjects scored at the highest level (five items) on the item-count scoring of the JTHF page turning subtest, indicating a potential ceiling effect (results not shown). No other tests exhibited floor or ceiling effects.
KNOWN GROUP VALIDITY
Analyses of variance (Table 5) revealed significant difference in dexterity, PSFS, and TAPES scores by level of amputation. Subjects with more distal amputation (i.e., transradial amputees) had better dexterity for all tests (p < 0.001), better self-reported function on the PSFS (p = 0.01), and greater prosthetic satisfaction (p < 0.05) as compared with persons with higher levels of amputation. Scores on the UEFS did not vary by amputation level.
Our study is the first to report on the test-retest reliability of the BB, the JTHF, the UEFS, and the TAPES in a sample of adults with upper limb amputation. We found the reliability of these measures to be acceptable or better. The reliability of the BB test was excellent, the reliability of the subtask scores of the JTHF using the items per second scoring method was acceptable to excellent, and the reliability of the TAPES satisfaction scale was good.
Our study used a modified method of administering and scoring the JTHF, capping the maximum allowable time at 2 minutes per subtask. We found, using this method, that the reliability of scoring the number of items per second was acceptable to good for all tests, with the possible exception of the checkers test, which had an ICC of 0.68 (confidence interval [CI], 0.49–0.80). On the basis of our findings, we cannot definitively recommend the use of the checkers items per second test. Further research is needed to confirm our findings. We found that counting the number of JTHF items completed in the 2-minute interval was a reliable scoring method only for the JTHF page turning subtask; however, a substantial ceiling effect was noted for this scoring method. Counting items was not a reliable method of subtask scoring for the six other JTHF subtasks. Thus, we do not recommend scoring these subtasks by counting the number of items completed within 2 minutes.
Ours is the first study to examine the test-retest reliability of the UEFS. Our study used a modified version of the UEFS, which used 22 of the original 23 items to calculate a summary score. We omitted the item related to washing because in another part of our study, we required subjects to actually perform all items on the UEFS, and we were concerned that myoelectric device users would not be able to get their devices wet. We also calculated a summary score for the UEFS use score. We found that the modified UEFS summary score had good reliability; however, the reliability of the UEFS use scale was questionable at 0.65 (CI, 0.47–0.77). Further research is needed to confirm our findings.
Unfortunately, we did not collect data on the PSFS at both of the study time points, and thus, we were unable to examine the test-retest reliability of this measure. Further research is needed to provide this information.
We examined the score distributions of each of our tests to evaluate the presence of potential floor or ceiling effects. We discovered potential floor effects for several of the JTHF subtests, suggesting that these tests were too difficult for some of the upper limb amputees in our study. Therefore, the JTHF may be unable to detect a decline in dexterity for users who score at the bottom of the scale. Floor effects are less likely for transradial-level amputees who had better dexterity. That said, few of our subjects were new amputees, and most were experienced device users. We might expect to see a greater floor effect in a population of newer prosthetic users.
Our study reported on the MDC of each of the study measures. The MDC is a statistical measure of detectable change and is related to the instrument's reliability. Statistically detectable change may not indicate that the change is clinically meaningful. Our study did not assess whether changes in outcome scores were clinically important. Future studies are needed to determine the magnitude of score changes associated with clinically relevant change for each scale.
We examined the known group validity of the outcome measures by comparing scores across subjects with different levels of amputation. As we expected, subjects with more distal amputations had better dexterity, as measured by the BB and JTHF, and better function, as reported by the PSFS. Subjects with more distal amputation also had higher prosthetic satisfaction. These findings are consistent with the findings from surveys on prosthetic satisfaction and abandonment that report much greater satisfaction and less frequent device abandonment for persons with transradial amputation as compared with persons with higher levels of amputation.44,45
We did not find differences in UEFS scores by level of amputation. This may be because the UEFS questionnaire does not require that the respondent rate the difficulty of performing the task when using the prosthesis, and most of the activities in the UEFS can be completed by respondents using only one upper limb. Because of this, clinicians and researchers should exercise caution when using the UEFS as an outcome measure for prosthetic rehabilitation, and difficulty scores should not be interpreted without information on prosthetic use for the activity. Further research should be conducted to examine the validity of the UEFS summary and use scores as well as alternate scoring and instruction methods. We recommend using a modified set of instructions that ask respondents to rate the difficulty of performing the item using the prosthesis and then separately to rate the difficulty of performing the item without using the prosthesis.
We conclude that the BB, the JTHF, and the TAPES are reliable and valid measures for use with adults with upper limb amputation. Our study used a modified method of administering and scoring the JTHF, which capped the allotted time to complete each subtask at 2 minutes. A floor effect was observed for several of the JTHF tests, suggesting that it would be difficult to detect deterioration in dexterity using this measure. Although we did not examine the test-retest reliability of the PSFS, our data analyses did support the validity of this measure.
We conclude that the UEFS summary score is a reliable measure for use with adults with upper limb amputation. A potential issue in the use of the UEFS measure is that the respondent rates the difficulty of performing each item and then separately indicates whether the prosthesis was used. Difficulty scores are calculated without taking prosthetic usage into account. Our findings can be used to assist clinicians and researchers in choosing appropriate measures and in interpreting changes in scores with repeat administration.
1. Wright V. Measurement of functional outcome with individuals who use upper extremity prosthetic devices: current and future directions. J Prosthet Orthot 2006;18(2):46–56.
2. Miller LA, Swanson S. Introduction to the Academy's State of the Science Conference on Upper Limb Prosthetic Outcome Measures. J Prosthet Orthot 2009;21(suppl):P1–P2.
3. Hill W, Stavdahl O, Hermansson L, et al. Functional outcomes in the WHO-ICF model: establishment of the Upper Limb Prosthetic Outcome Measures Group. J Prosthet Orthot 2009;21(2):115–119.
4. Hill WK, Hermansson P, Hubbard LN, et al. Upper Limb Prosthetic Outcome Measures (UPLOM): a working group and their findings. J Prosthet Orthot 2009;21(4S):69.
5. Blough DK, Hubbard S, McFarland LV, et al. Prosthetic cost projections for service members with major limb loss from Vietnam and OIF/OEF. J Rehabil Res Dev 2010;47(4):387–402.
6. Gram D. Amputees fight caps in coverage for prosthetics. USA Today. June 9, 2008.
7. Miller L, Swanson S. Summary and recommendations of the Academy's State of the Science Conference on Upper Limb Prosthetic Outcome Measures. J Prosthet Orthot 2009;21(4S):83–89.
8. Wright V. Prosthetic outcome measures for use with upper limb amputees: a systematic review of the peer-reviewed literature, 1970 to 2009. J Prosthet Orthot 2009;21(4S):3–63.
9. Heinemann AW, Bode RK, O'Reilly C. Development and measurement properties of the Orthotics and Prosthetics Users' Survey (OPUS): a comprehensive set of clinical outcome instruments. Prosthet Orthot Int 2003;27(3):191–206.
10. Lindner HYN, Natterlund BS, Hermansson LMN. Upper limb prosthetic outcome measures: review and content comparison based on International Classification of Functioning, Disability and Health. Prosthet Orthot Int 2010;34(2):109–128.
11. Patrick DL, Deyo RA. Generic and disease-specific measures in assessing health status and quality of life. Med Care 1989;27(suppl 3):S217–S232.
12. Beurskens AJ, de Vet HC, Koke AJ, et al. A patient-specific approach for measuring functional status in low back pain. J Manipulative Physiol Ther 1999;22(3):144–148.
13. Chatman AB, Hyams SP, Neel JM, et al. The Patient-Specific Functional Scale: measurement properties in patients with knee dysfunction. Phys Ther 1997;77(8):820–829.
14. Stratford P, Gill C, Westaway M, Binkley J. Assessing disability and change on individual patients: a report of a patient specific measure. Physiother Canada 1995;47(4):258–263.
15. Westaway MD, Stratford PW, Binkley JM. The Patient-Specific Functional Scale: validation of its use in persons with neck dysfunction. J Orthop Sports Phys Ther 1998;27(5):331–338.
16. Resnik L. Research update: VA study to optimize DEKA arm. J Rehabil Res Dev 2010;47(3):ix–x.
17. Desrosiers J, Bravo G, Hebert R, et al. Validation of the Box and Block Test as a measure of dexterity of elderly people: reliability, validity, and norms studies. Arch Phys Med Rehabil 1994;75(7):751–755.
18. Mathiowetz V, Volland G, Kashman N, Weber K. Adult norms for the Box and Block Test of manual dexterity. Am J Occup Ther 1985;39(6):386–391.
19. Platz T, Pinkowski C, van Wijck F, et al. Reliability and validity of arm function assessment with standardized guidelines for the Fugl-Meyer Test, Action Research Arm Test and Box and Block Test: a multicentre study. Clin Rehabil 2005;19(4):404–411.
20. Farrell TR, Weir RF. The optimal controller delay for myoelectric prostheses. IEEE Trans Neural Syst Rehabil Eng 2007;15(1):111–118.
21. Dromerick AW, Schabowsky CN, Holley RJ, et al. Effect of training on upper-extremity prosthetic performance and motor learning: a single-case study. Arch Phys Med Rehabil 2008;89(6):1199–1204.
22. Sung IY, Ryu JS, Pyun SB, et al. Efficacy of forced-use therapy in hemiplegic cerebral palsy. Arch Phys Med Rehabil 2005;86(11):2195–2198.
23. Stewart KC, Cauraugh JH, Summers JJ. Bilateral movement training and stroke rehabilitation: a systematic review and meta-analysis. J Neurol Sci 2006;244(1–2):89–95.
24. Higgins J, Salbach NM, Wood-Dauphinee S, et al. The effect of a task-oriented intervention on arm function in people with stroke: a randomized controlled trial. Clin Rehabil 2006;20(4):296–310.
25. Mercier C, Bourbonnais D. Relative shoulder flexor and handgrip strength is related to upper limb function after stroke. Clin Rehabil 2004;18(2):215–221.
26. Goodkin DE, Priore RL, Wende KE, et al. Comparing the ability of various composite outcomes to discriminate treatment effects in MS clinical trials. The Multiple Sclerosis Collaborative Research Group (MSCRG). Mult Scler 1998;4(6):480–486.
27. Stern EB, Sines B, Teague TR. Commercial wrist extensor orthoses: hand function, comfort, and interference across five styles. J Hand Ther 1994;7(4):237–244.
28. Rayan GM, Brentlinger A, Purnell D, Garcia-Moral CA. Functional assessment of bilateral wrist arthrodeses. J Hand Surg [Am] 1987;12(6):1020–1024.
29. Kraft GH, Fitts SS, Hammond MC. Techniques to improve function of the arm and hand in chronic hemiplegia. Arch Phys Med Rehabil 1992;73(3):220–227.
30. Blennerhassett J, Dite W. Additional task-related practice improves mobility and upper limb function early after stroke: a randomised controlled trial. Aust J Physiother 2004;50(4):219–224.
31. Wu CW, Seo HJ, Cohen LG. Influence of electric somatosensory stimulation on paretic-hand function in chronic stroke. Arch Phys Med Rehabil 2006;87(3):351–357.
32. Neistadt ME. The effects of different treatment activities on functional fine motor coordination in adults with brain injury. Am J Occup Ther 1994;48(10):877–882.
33. Stern EB, Ytterberg SR, Krug HE, Mahowald ML. Finger dexterity and hand function: effect of three commercial wrist extensor orthoses on patients with rheumatoid arthritis. Arthritis Care Res 1996;9(3):197–205.
34. Stamm T, Mathis M, Aletaha D, et al. Mapping hand functioning in hand osteoarthritis: comparing self-report instruments with a comprehensive hand function test. Arthritis Rheum 2007;57(7):1230–1237.
35. Amirjani N, Thompson S, Satkunam L, et al. The impact of ulnar nerve compression at the elbow on the hand function of heavy manual workers. Neurorehabil Neural Repair 2003;17(2):118–123.
36. Rider B, Linden C. Comparison of standardized and non-standardized administration of the Jebsen Hand Function Test. J Hand Ther 1988;2:121–123.
37. Burger H, Franchignoni F, Heinemann AW, et al. Validation of the Orthotics and Prosthetics User Survey Upper Extremity Functional Status module in people with unilateral upper limb amputation. J Rehabil Med 2008;40(5):393–399.
38. Gallagher P, MacLachlan M. Development and psychometric evaluation of the Trinity Amputation and Prosthesis Experience Scales (TAPES). Rehabil Psychol 2000;45(2):130–154.
39. Linacre JM. Winsteps® Rasch Measurement Computer Program User's Guide. Beaverton, OR: Winsteps.com; 2012.
40. Desmond DM, MacLachlan M. Factor structure of the Trinity Amputation and Prosthesis Experience Scales (TAPES) with individuals with acquired upper limb amputations. Am J Phys Med Rehabil 2005;84(7):506–513.
41. Hefford C, Abbott JH, Arnold R, Baxter GD. The patient-specific functional scale: validity, reliability, and responsiveness in patients with upper extremity musculoskeletal problems. J Orthop Sports Phys Ther 2012;42(2):56–65.
42. Westaway MD, Stratford PW, Binkley JM. The patient-specific functional scale: validation of its use in persons with neck dysfunction. J Orthop Sports Phys Ther 1998;27(5):331–338.
43. Wyrwich KW, Tierney WM, Wolinsky FD. Further evidence supporting an SEM-based criterion for identifying meaningful intra-individual changes in health-related quality of life. J Clin Epidemiol 1999;52(9):861–873.
44. McFarland LV, Hubbard Winkler SL, Heinemann AW, et al. Unilateral upper-limb loss: satisfaction and prosthetic-device use in veterans and service members from Vietnam and OIF/OEF conflicts. J Rehabil Res Dev 2010;47(4):299–316.
45. Biddiss EA, Chau TT. Upper limb prosthesis use and abandonment: a survey of the last 25 years. Prosthet Orthot Int 2007;31(3):236–257.
Keywords: prosthetics; upper limb; disability assessment; psychometric evaluation

© 2012 American Academy of Orthotists & Prosthetists