Original Article

Endpoint Control for a Powered Shoulder Prosthesis

Phillips, Sam L. PhD, CP; Resnik, Linda PhD, PT; Fantini, Christopher MSPT, CP; Latlief, Gail DO

JPO Journal of Prosthetics and Orthotics: October 2013 - Volume 25 - Issue 4 - p 193-200
doi: 10.1097/JPO.0000000000000006

Abstract

The purpose of this article was to describe the use of endpoint control in high-level upper-limb prostheses, to identify its potential benefits and current challenges of application, and to discuss future directions for its use. Methods of prosthetic control, including direct joint control (with sequential, simultaneous, and linked movements) and endpoint control, are discussed. This article is based on case experience with a prototype version of endpoint control tested at the Tampa site of the Department of Veterans Affairs (VA) Study to Optimize the DEKA Arm, the James A. Haley Veterans' Hospital. The article also discusses the relationship of endpoint control to motor control theory and robotics. Finally, the authors discuss challenges with the use of endpoint control and suggest future directions for enhancing this prosthetic control option.

BACKGROUND

Individuals with upper-limb deficiencies, whether acquired through amputation or congenital in origin, represent a small and underserved population. A subset of this already small group includes those individuals with higher-level upper-limb deficiencies, including those at or proximal to the shoulder joint. Shoulder disarticulations (SDs) and scapulothoracic (ST) amputations made up less than 0.1% of all amputations1 from 1988 to 1996. According to VA records, approximately 50 SD and ST prostheses are provided to veterans per year.2 However, this number is expected to grow because of the influx of recent combat veterans with traumatic amputation. In 2010, the VA had within its system 12 amputees with SD or ST amputations as a result of Operation Enduring Freedom and Operation Iraqi Freedom (OEF/OIF).3 More individuals, still on active duty, have yet to enter the VA healthcare system.

In addition to those with SD or ST amputations, special consideration must be given to those who have undergone transhumeral (TH) amputation or have a congenital deficiency resulting in very short residual limbs, such as those through or close to the humeral neck. This population is as functionally disadvantaged as those previously mentioned. These individuals do not have the muscle mass or leverage to functionally use their anatomical shoulder joint and are often treated as if they were amputated at the SD level.

Those with an SD, ST, or very short TH residuum have limited functional prosthetic options. Most of those with higher-level upper-limb deficiencies are dissatisfied with available technology, as evidenced by the approximately 60% rejection rate of prostheses by individuals within this group.4 Weight, speed, and durability have frequently been cited as primary reasons for abandonment.5 However, the level of amputation is the most significant factor in upper-limb prosthetic rejection.6–8 There are additional factors, such as the lack of any commercially available powered shoulder joints, that may explain why persons with high-level amputations reject prostheses at a high rate.

The task of replacing the complexity of the human arm is a daunting one. The shoulder complex, the elbow, the wrist, and the hand together create a field of movement, often referred to as the functional envelope. This functional envelope is made up of several integrated spheres of movement determined by the degrees of freedom (DOFs) and the active range of motion (AROM) at each joint.9

Degrees of freedom, for our purposes, refers to the number of rotational joint axes present in an arm. One DOF is equivalent to a single specific axis. Limitations in the AROM or DOF in any anatomical or prosthetic joint will negatively affect the functional capacity of the upper limb by reducing the effective size of the functional envelope. Any task that requires reaching a point outside the functional envelope demands compensatory strategies in the trunk and/or the lower limbs to reach the target. This results in less efficient and more difficult task performance.

One of the most serious challenges to the development of a satisfactory upper-limb prosthesis has been hardware design. Obtaining batteries, actuators, and motors that are small, light, and powerful enough is a serious challenge to device development.10 Each additional joint that has to be replaced by a prosthetic component not only adds weight to the prosthesis but also increases its complexity and necessitates more control inputs/strategies. With commercially available devices, none of which have powered shoulder capabilities, performance of complex motions is slow, cumbersome, and unnatural. Hardware improvements in new prostheses under development, such as the DEKA Arm, include a powered shoulder among several other capabilities not currently available in commercial products. These new features are being designed to improve performance and acceptance of prostheses for higher-level upper-limb amputees by increasing the functional envelope of the system and creating new control strategies with which to operate them.

The type of control system used for a prosthesis has a direct effect on the way the device is used. Nonpowered prosthetic shoulder joints require the user to exert more energy, both physically and cognitively, to functionally operate the prosthesis. These are limited in practical applications because the user cannot actively control the joint. Some designs include the use of a fixed shoulder joint offering no movement of the prosthetic limb above the elbow, whereas others allow the shoulder to be manually prepositioned and locked in a static pose. Manually locking shoulder joints allow the arm and the terminal device to be placed in a more desirable position for certain tasks, for example, putting on a jacket or a shirt and bringing the arm closer to the body in a crowded train. However, it is almost impossible for the user to actively use shoulder joint movement during the act of performing various tasks, for example, grabbing objects from overhead and bringing them down to a desk. This limits the ability to perform many bilateral tasks as well because the manual shoulder joint would always have to be prepositioned and locked in place. Any tasks requiring the manipulation/transport of objects through varying planes of height (e.g., eye level to waist level) would require the prosthetic user to incorporate significant amounts of compensatory body movements to complete the task. The functional envelope with such devices is limited because of the inability to actively operate the shoulder joint. Work has been done in the past to address this issue, with limited success mostly due to hardware constraints.11,12 However, new developments in prosthetic componentry and control systems as a result of the Defense Advanced Research Projects Agency (DARPA) Revolutionizing Prosthetics program are on the horizon and have promise to significantly improve the prosthetic options for those with high-level upper-limb deficiencies.

CONTROL STRATEGIES

All control schemes of an externally powered prosthesis can be broken down into two phases: 1) capturing an intention from the user and 2) using that signal to actuate the prosthesis. For the purposes of this article, arm control refers to the latter: use of a signal to control a movement of the prosthesis.

EXISTING CONTROL STRATEGIES: DIRECT JOINT CONTROL

Direct joint control is the current standard for controlling externally powered prostheses. The user activates a specific control input to operate a single specific joint motion (DOF). The spatial position of the terminal device is determined by the combination of signals that control the joints proximal to it. Direct joint control can be used in various configurations: 1) sequential control, 2) simultaneous control, 3) a combination of both sequential and simultaneous control, and/or 4) linked movements.

SEQUENTIAL JOINT CONTROL

In sequential joint control, the same control input(s) is used to control all motorized joints, sequentially. An example of sequential control is a setup using dual electromyographic (EMG) inputs in which each input controls one direction of movement and switching control between joints (also known as switching modes) is accomplished by using an alternate signal, frequently a myoelectric co-contraction or an alternate input. To illustrate further, a myoelectric prosthesis for a person with a TH amputation might use individual EMG signals from the biceps and the triceps for actuation of joint movements, and co-contraction of these same signals for switching between elbow mode, wrist mode, and hand mode, where mode indicates the joint being controlled by the inputs. Theoretically, an unlimited number of joints could be controlled with sequential joint control using only three separate input signals: one for each direction of joint motion and one as a switching mechanism to cycle through modes. However, the practicality of switching from joint to joint limits the effectiveness of this method because the cognitive planning required to operate controls set up in this fashion increases the time necessary to complete a task and is not necessarily intuitive to users.
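
To make the sequencing logic concrete, the following minimal Python sketch shows one way such a controller could be organized; the class, signal names, and thresholds are hypothetical and do not describe any commercial controller.

```python
# Minimal sketch of sequential joint control: two inputs drive one joint at a
# time, and a co-contraction event cycles through the available modes.
# All names and thresholds are hypothetical and for illustration only.

MODES = ["elbow", "wrist", "hand"]   # joints controlled in sequence
THRESHOLD = 0.3                      # activation level treated as "on"

class SequentialController:
    def __init__(self):
        self.mode_index = 0          # start in elbow mode

    def update(self, biceps, triceps):
        """Map two EMG-like inputs (0..1) to a command for the active joint."""
        if biceps > THRESHOLD and triceps > THRESHOLD:
            # Co-contraction: switch to the next joint mode, no motion output.
            self.mode_index = (self.mode_index + 1) % len(MODES)
            return MODES[self.mode_index], 0.0
        # Otherwise the difference of the two signals drives the active joint.
        command = biceps - triceps   # positive: flex/close, negative: extend/open
        return MODES[self.mode_index], command

controller = SequentialController()
print(controller.update(0.75, 0.25))  # ('elbow', 0.5)  -> elbow motion
print(controller.update(0.50, 0.50))  # ('wrist', 0.0)  -> mode switch
print(controller.update(0.25, 0.75))  # ('wrist', -0.5) -> wrist motion
```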

In addition, one input can be split into multiple outputs, which reduces the number of inputs required, often at the expense of narrowing the functional breadth of the signal for a given motion. In effect, one signal becomes two or more separate signals, but each signal is more limited than the original.13,14 Common examples of this are multistate electrodes, linear transducers, and single-site control schemes. Generally, the effectiveness of this approach has been limited, with an increasing error rate as more states are introduced.
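
The following sketch illustrates how a single input could be split into multiple output states by amplitude thresholds; the band edges are hypothetical and chosen only to show why narrower bands leave less margin for signal variability and error.

```python
# Sketch of splitting one input into multiple output states by amplitude.
# The band edges are hypothetical; narrower bands leave less margin for
# signal variability, which is one reason error rates rise with more states.

def classify_state(signal, edges=(0.2, 0.5, 0.8)):
    """Return 'rest', 'state_1', 'state_2', or 'state_3' for a 0..1 signal."""
    if signal < edges[0]:
        return "rest"
    if signal < edges[1]:
        return "state_1"
    if signal < edges[2]:
        return "state_2"
    return "state_3"

print(classify_state(0.1))   # 'rest'
print(classify_state(0.35))  # 'state_1'
print(classify_state(0.9))   # 'state_3'
```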

Similarly, an input can be separated temporally so that it cycles through outputs. For example, the first activation could flex the joint and the second could extend it. There are also additional, more esoteric ways to make a single signal perform multiple functions.15
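
A minimal sketch of this temporal separation is shown below, with a hypothetical controller that simply alternates between two outputs on successive activation pulses.

```python
# Sketch of temporal separation: successive activations of the same input
# alternate between two outputs (flex, then extend). Purely illustrative.

class AlternatingOutput:
    def __init__(self):
        self.actions = ["flex", "extend"]
        self.next_index = 0

    def on_pulse(self):
        """Each detected activation pulse returns the next action in the cycle."""
        action = self.actions[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.actions)
        return action

toggle = AlternatingOutput()
print(toggle.on_pulse())  # 'flex'
print(toggle.on_pulse())  # 'extend'
print(toggle.on_pulse())  # 'flex'
```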

SIMULTANEOUS CONTROL

In simultaneous control, two or more joints can be controlled simultaneously by using an independent control input for each joint. As the system becomes more complex and more powered movements are introduced, a greater number of control signals is required to keep control of these movements independent of each other. Significant cognitive demand and skill are required on the part of the user to achieve coordinated multijoint movements with each added input. A simple example of a simultaneous control configuration would be an externally powered TH setup without powered wrist rotation, in which myoelectric signals are used to control hand function and a linear transducer is used to control the elbow. Because both inputs are separate and independent, the elbow can be actuated at the same time as the hand by generating an EMG signal while applying tension to the linear transducer. This is not possible in sequential control schemes; however, it is not hard to imagine the difficulty of coordinating more than two joints simultaneously in an efficient and precise manner using this control scheme.
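
The independence of the inputs can be sketched as a direct one-to-one mapping from inputs to joints, as below; the signal names and scaling are hypothetical.

```python
# Sketch of simultaneous control: each independent input drives its own joint,
# so both commands can be nonzero in the same update. Names are hypothetical.

def simultaneous_update(emg_open_close, linear_transducer):
    """Map two independent inputs (each -1..1) to hand and elbow commands."""
    return {
        "hand": emg_open_close,      # EMG difference signal opens/closes the hand
        "elbow": linear_transducer,  # cable excursion flexes/extends the elbow
    }

# Both joints move in the same control cycle, which sequential control cannot do.
print(simultaneous_update(0.4, -0.7))  # {'hand': 0.4, 'elbow': -0.7}
```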

COMBINATION SYSTEMS

Depending on the number of control signals available from the user and the number of DOFs required to operate the prosthesis, a combination of sequential and simultaneous joint control could also be implemented. To illustrate, consider the addition of an electric wrist rotator to the previous example: an externally powered TH setup, now with powered wrist rotation, in which myoelectric signals are used to control and switch between hand and wrist function, and a linear transducer is used to control the elbow. In this configuration, the powered elbow can always be controlled simultaneously with either the hand or the wrist, depending on which mode the system is in. Control of the hand or the wrist would be accessed sequentially, using co-contraction, for example, and then carried out using the individual EMG signals.

LINKED MOVEMENTS

One method of dealing with multiple DOFs, such as in a multiarticulated hand, is to use fewer inputs than there are DOFs, a process called underactuation.9 This is accomplished by physically linking two or more joint motions to a single control input. Robotic dexterous hands, those with articulated fingers, are commonly controlled this way. In this configuration, two (or more) mechanically independent joints are controlled by the same signal/motor and operate together with a synchronous motion because of rigid bar linkages or tendon transmission.9 Because the linked joints move as one, they can be treated as a single joint for control purposes, which reduces the number of DOFs that must be independently controlled. One example is the Touch Bionics i-limb hand, in which multiple interphalangeal joints are linked and driven by a single input.
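
The coupling idea can be sketched as a fixed set of ratios that ties several joint angles to one drive input; the joint names and ratios below are hypothetical stand-ins for a rigid bar linkage or tendon transmission.

```python
# Sketch of underactuation: one input drives several mechanically linked joints
# through fixed coupling ratios, so they always move together. The ratios are
# hypothetical and stand in for a rigid bar linkage or tendon transmission.

COUPLING = {"mcp": 1.0, "pip": 0.75, "dip": 0.5}   # joint angle per unit of drive input

def linked_finger_angles(drive_angle_deg):
    """Return the angle of each linked joint for a given drive input."""
    return {joint: ratio * drive_angle_deg for joint, ratio in COUPLING.items()}

print(linked_finger_angles(30.0))
# {'mcp': 30.0, 'pip': 22.5, 'dip': 15.0} -- three joints, one effective DOF
```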

ENDPOINT CONTROL—A NEW CONTROL STRATEGY FOR PROSTHETIC DEVICES

Endpoint control derives its name from its objective: multiple powered joints are actuated, using inverse kinematics, in a simultaneous coordinated movement that brings the terminal device (the endpoint) to a desired spatial position. Once calculated, the inverse kinematic equations use relatively little computing power and allow the user to input desired movements in an intuitive way.16 The chief advantages of endpoint control are thought to be the relatively low processing power and the reduced number of control signals required to operate multiple joints simultaneously in a coordinated fashion.16
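
As a simplified illustration of the inverse kinematic calculation, the following Python sketch evaluates the closed-form equations for a planar two-link arm, a stand-in for shoulder and elbow flexion/extension; the link lengths are hypothetical, and the sketch is not a description of the DEKA implementation.

```python
# Sketch of closed-form inverse kinematics for a planar two-link arm (a stand-in
# for shoulder flexion/extension plus elbow flexion/extension). Deriving the
# equations is the hard part; evaluating them at run time is inexpensive.
# Link lengths are hypothetical.

import math

L1, L2 = 0.30, 0.25   # upper-arm and forearm lengths in meters (illustrative)

def inverse_kinematics(x, y):
    """Return (shoulder, elbow) angles in radians that place the hand at (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target outside the reachable workspace")
    elbow = math.acos(cos_elbow)                       # one of the two solutions
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

print(inverse_kinematics(0.40, 0.20))  # joint angles that reach the target
```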

Because endpoint control reduces the number of input signals required in a system, operation becomes simpler for the user. In addition, endpoint control can enable coordinated movement of the shoulder, the elbow, the wrist, and the hand, which could potentially lead to more anthropomorphic movements. This strategy provides a new frame of reference for prosthetic control. As previously discussed, in the traditional control configuration, direct joint control, the frame of reference is based on the isolated control and positioning of each joint, often in a sequential manner. In endpoint control, the user controls the speed and the direction of the terminal device (endpoint) through a coordinate system, producing coordinated multijoint movements without having to control each joint individually. In simpler terms, a user can simply issue directional commands (e.g., up/down, forward/back), with the terminal device as the reference, without having to think about controlling each joint. The endpoint control system coordinates all of the integrated joint movements for the user to get the terminal device to the desired point in space. Endpoint control can be set up to incorporate the use of various numbers of motors simultaneously.
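
One standard way to realize such directional commands, sketched below, is resolved-rate control: a commanded hand velocity is converted into coordinated joint velocities through the Jacobian pseudoinverse. This is the generic robotics formulation, not the DEKA algorithm, and the planar arm and link lengths are illustrative.

```python
# Sketch of resolved-rate endpoint control for a planar two-link arm: a
# directional command for the hand is converted into coordinated joint
# velocities through the Jacobian pseudoinverse. This follows the standard
# robotics formulation and is not a description of the DEKA implementation.

import numpy as np

L1, L2 = 0.30, 0.25   # illustrative link lengths in meters

def jacobian(shoulder, elbow):
    """2x2 Jacobian relating joint velocities to hand velocity (planar arm)."""
    s1, c1 = np.sin(shoulder), np.cos(shoulder)
    s12, c12 = np.sin(shoulder + elbow), np.cos(shoulder + elbow)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def joint_velocities(shoulder, elbow, hand_velocity):
    """Joint rates that move the hand with the commanded planar velocity."""
    return np.linalg.pinv(jacobian(shoulder, elbow)) @ np.asarray(hand_velocity)

# A single "hand forward" command (x direction) produces motion at both joints.
print(joint_velocities(0.5, 1.0, [0.05, 0.0]))
```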

ENDPOINT THEORY IN MOTOR CONTROL

The human arm, not including the hand, has seven DOFs: three at the shoulder, one at the elbow, and three at the wrist. The seven DOFs give redundancy to the arm, so a given endpoint position can be reached by more than one arm orientation. Seven DOFs facilitate reaching around objects and enable smooth movements through the functional envelope. Although complex multijoint coordination gives human movement a fluid appearance, it also greatly adds to the complexity of motor control. In contrast, commercial robots generally have six DOFs because of the reduced complexity. However, this also reduces the options for potential movement patterns and combinations of joint orientation.
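
The simplest form of this redundancy can be demonstrated with the planar two-link arm used in the earlier sketches: the "elbow-up" and "elbow-down" joint configurations place the hand at exactly the same point. The link lengths and target below are illustrative.

```python
# Worked example of kinematic redundancy using the planar two-link arm from the
# earlier sketches: two different joint configurations ("elbow up" and
# "elbow down") place the hand at exactly the same point in space.

import math

L1, L2 = 0.30, 0.25   # illustrative link lengths in meters

def forward_kinematics(shoulder, elbow):
    """Hand position (x, y) for the given joint angles in radians."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return round(x, 4), round(y, 4)

target = (0.40, 0.20)
# Elbow-down and elbow-up solutions for the same target (from the IK equations).
elbow = math.acos((target[0]**2 + target[1]**2 - L1**2 - L2**2) / (2 * L1 * L2))
for signed_elbow in (elbow, -elbow):
    shoulder = math.atan2(target[1], target[0]) - math.atan2(
        L2 * math.sin(signed_elbow), L1 + L2 * math.cos(signed_elbow))
    print(forward_kinematics(shoulder, signed_elbow))  # both print (0.4, 0.2)
```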

Some variant of endpoint control is likely what the human body uses to program reaching and grasping movements.17 A frame of reference describes the center and orientation of the coordinate system. Generally, there is one frame of reference centered on the arm and the hand and another centered around the head.18 Joint-centered reference frames are used for storage of physiological joint angles and objects.19 Planning of reaching movements is described as spatial information, which is converted to motor patterns to move the hand through space.17,20,21 Further, it has been shown that after an injury resulting in neuromuscular deficits, whole-limb kinematics are less affected than are individual-joint kinematics, which suggests that some feedback is occurring in the head frame of reference.22 Thus, clearly, humans use multiple frames of reference to execute reaching tasks, including those that are hand centered, joint centered, and body centered.

ENDPOINT THEORY IN ROBOTICS

Many robotic devices are operated with endpoint control.16 A beginning point and an endpoint can be identified, and the most efficient path can be calculated and taken. A better analog for prosthetic use is probably a robot controlled by a joystick or remotely by an instrumented glove. Endpoint control is popular in robotics because of its relatively low computational burden. Calculating the inverse kinematic equations, although complex, is done in the initial programming. When the arm is running, it has to execute only those equations, which may, in some cases, save computational power and time.16

Endpoint control is used for controlling DEKA Arm prostheses that include a powered shoulder joint. The prosthetic shoulder joint is aligned on the socket, relative to the user’s body. Using endpoint control, the base position, in this case, the shoulder joint, is mathematically related to the position of the terminal device. Thus, positioning and alignment of the prosthetic shoulder joint are critical. Any misalignment causes a rotation of the entire arm coordinate system. This includes any shifting of the socket, so an intimate socket fit is crucial.
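
A simple planar sketch illustrates why alignment matters: rotating the socket, and hence the arm coordinate system, by even a few degrees rotates every commanded direction by the same angle. The angles and vectors below are illustrative.

```python
# Sketch of why shoulder-mount alignment matters under endpoint control: a
# rotation of the socket rotates the whole arm coordinate system, so a
# commanded "forward" direction drifts sideways by the same angle.
# Angles and vectors are illustrative.

import math

def rotate(vector, angle_deg):
    """Rotate a planar direction vector by the given misalignment angle."""
    a = math.radians(angle_deg)
    x, y = vector
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

commanded_forward = (1.0, 0.0)
for misalignment in (0, 5, 15):
    x, y = rotate(commanded_forward, misalignment)
    print(f"{misalignment:>2} deg socket rotation -> actual direction "
          f"({x:.2f}, {y:.2f})")
```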

DESCRIPTION OF THE DEKA ARM

The Generation 2 DEKA Arm (Figure 1) is a modular prosthesis that can be provided at the transradial, TH, and SD/ST amputation (shoulder configuration) level. It possesses powered movement capabilities that are not available in current prosthetic componentry. At the shoulder configuration level, the Generation 2 DEKA Arm has 10 powered DOFs—each of which needs input controls to activate function. The DOFs in the shoulder configuration version of the Generation 2 DEKA Arm break down as follows:

Figure 1: The DEKA Arm.
  • Flexion/extension and abduction/adduction of the shoulder joint
  • Humeral internal/external rotation
  • Flexion/extension of the elbow joint
  • Flexion/extension and pronation/supination of the wrist joint
  • Six grasping patterns of the hand: open-fingered pinch, closed-fingered pinch, lateral pinch (key grip), power grip, three-jaw chuck, and tool grip

With the addition of a powered shoulder and more DOFs, the ability to efficiently and effectively control all the movements of the prosthesis is paramount, both for safety and, as discussed earlier, for increasing the functional envelope of the prosthesis. The need to safely and efficiently control a prosthesis with so many DOFs led DEKA to explore and develop various versions of endpoint control patterns, each with subtle differences in its movement trajectory.

In endpoint control, the user indicates a single command, for example, "move hand up," and the endpoint control software identifies the joints that must be activated to make the prosthetic hand move up in space. DEKA has experimented with several types of endpoint control during the VA Studies to Optimize the DEKA Arm. The first version of endpoint control, which was the one used in the case study described below, used a cylindrical coordinate system with six directional commands: up, down, forward, backward, and left and right movements along a cylindrical surface oriented on the shoulder mounting plane. No matter how many joints were used, only six endpoint control signals were needed to position the terminal device. The use of endpoint control eliminates the need to control specific movements of the shoulder and elbow joints because the endpoint software automatically moves those joints to achieve the endpoint position of the terminal device. Table 1 provides a comparison of the signals per joint movement, additional control signals, total control signals, and operational states needed to operate direct joint control, simultaneous joint control, and endpoint control. The use of endpoint control minimizes the total number of control signals and operational states required to operate four DOFs.
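
One plausible reading of this cylindrical command scheme is sketched below; the step sizes, default position, and exact mapping of commands to coordinates are assumptions made for illustration and are not drawn from the DEKA software.

```python
# Sketch of the cylindrical command mapping described above: up/down changes
# height, forward/back changes the radius from the shoulder's vertical axis,
# and left/right sweeps the hand along a cylindrical surface at constant
# radius. The step sizes and mapping details are illustrative assumptions.

import math

class CylindricalEndpoint:
    def __init__(self, radius=0.35, azimuth_deg=0.0, height=-0.20):
        self.radius = radius                        # distance from the shoulder's vertical axis (m)
        self.azimuth = math.radians(azimuth_deg)    # angle around that axis
        self.height = height                        # height relative to the shoulder mount (m)

    def command(self, name, step=0.02, angle_step_deg=5.0):
        """Apply one of the six directional commands to the endpoint target."""
        if name == "up":
            self.height += step
        elif name == "down":
            self.height -= step
        elif name == "forward":
            self.radius += step
        elif name == "back":
            self.radius = max(0.0, self.radius - step)
        elif name == "left":
            self.azimuth += math.radians(angle_step_deg)
        elif name == "right":
            self.azimuth -= math.radians(angle_step_deg)
        return self.cartesian()

    def cartesian(self):
        """Cartesian target that would be handed to the inverse kinematics solver."""
        return (self.radius * math.cos(self.azimuth),
                self.radius * math.sin(self.azimuth),
                self.height)

endpoint = CylindricalEndpoint()
print(endpoint.command("up"))     # raises the target
print(endpoint.command("left"))   # sweeps it sideways at constant radius
```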

Table 1: Possible control strategies for four movements of the shoulder and the elbow.

Endpoint control enables the hand to make smooth movements in three-dimensional space. As previously mentioned, since the beginning of the VA Study to Optimize the DEKA Arm (VA Study) in 2008, DEKA has explored several versions of endpoint control, each with subtle differences in its movement trajectory. In the prototype version of endpoint control tested at the Tampa site of the VA, most shoulder and elbow movements were controlled through endpoint. However, shoulder abduction/adduction and wrist/hand movements were controlled separately. This control scheme made reaching activities simpler than with the control methods currently used in commercially available devices; the user moved the hand in the direction of the desired object.

One challenge with this version of endpoint control was the way in which abduction and adduction were implemented. The shoulder abduction/adduction axis, which was under direct joint control, was proximal to the shoulder flexion/extension axis. The shoulder mounting plate was the proximal reference for the endpoint control pattern. The position of these axes in relation to one another created problems for the user when endpoint commands were issued while the arm was abducted or adducted from neutral. Such an instance would change the trajectory of the arm/hand in a way that created a cognitive burden for the user, and, as a result, the endpoint pattern was changed in later versions.

Another challenge to endpoint control comes at the edges of the workspace, that is, when all joints are fully extended. The arm has limited choices on how it can move when all the joints are at the limit of their movement. As a result, sometimes the movements are less intuitive than in the center of the functional envelope.

CASE EXAMPLE

The single case study described herein was part of a larger four-site clinical trial. The subject was a 59-year-old white man who had a left humeral neck amputation 42 years before. He was an experienced prosthetic wearer and used myoelectric control in his current prosthesis. The study team decided to fit this subject with a powered shoulder prosthesis to explore the benefits that a powered shoulder joint might provide. He was set up with a combination of user controls including dual-site myoelectric, pneumatic, and foot controls. The subject used the DEKA Arm in the laboratory, completing 5 testing and 10 training visits each lasting approximately 2 hrs, for a total of more than 30 hrs of arm-use time. His sessions included reaching, grasping, and simulated activities of daily living tasks. In addition, the subject was given the opportunity to attempt tasks of his choosing that he found difficult or impossible to complete with his existing prosthesis, such as reaching overhead (Figure 2).

Figure 2: The subject using the DEKA Arm with powered shoulder.

The prototype version used by this subject at the Tampa VA hospital involved an endpoint control scheme that combined shoulder flexion/extension, elbow flexion/extension, and humeral rotation to move the hand through space. With the arm at a resting position (e.g., in the neutral plane), a "hand forward" signal was designed to coordinate shoulder flexion with a combination of elbow flexion (then extension) to reach the hand forward. Other combinations of the same joint motions were used to move the hand "back," "up," and "down." The use of a cylindrical coordinate system meant that (again using a neutral abduction plane example) movement of the hand to the left or the right would proceed along a cylindrical surface vertically centered on the shoulder mounting location, so that the hand would remain at a constant radius from this vertical axis. The other motions of the system (shoulder abduction/adduction, wrist rotation, wrist flexion/extension, and control of the hand) were controlled separately using a combination of sequential and simultaneous control configurations. This control scheme was designed to make reaching activities simpler: the user moved the hand in the direction of the desired object.

The prosthetic controls were set up to use EMG inputs for hand open/close and inertial sensors placed on the feet to control arm movements and grip selection. The subject was able to learn and master control of the arm and was able to do several activities that he was unable to do with his current prosthesis. For example, he was able to reach above shoulder height and bring and hold a trumpet to his face (Figure 3).

Figure 3: The subject holding a trumpet with the DEKA Arm.

One challenge with this particular prototype version of endpoint control involved the relationship between the axes of movement in the shoulder joint and the movements incorporated in the prototype endpoint pattern. The axis of the shoulder abduction/adduction motor was medial to the axis of the shoulder flexion/extension motor. The location of the shoulder abduction/adduction axis, combined with the fact that this motion was not incorporated into the initial version of endpoint control, led to some confusion for the user and reduced operating efficiency. The user commands for "up/down" and "forward/back" worked as expected in the sagittal plane when the shoulder was in a neutral position in the coronal plane (neutral abduction/adduction). However, when the shoulder was not in neutral abduction/adduction, the resulting movement was different. For example, in an abducted position, an up/down endpoint command would move the hand across the body rather than upward. This concept was difficult for subjects to master. As a result of this experience and the experience of subjects in other VA trials, DEKA changed this in later versions by incorporating the abduction/adduction motor into the endpoint pattern.
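
A simplified coronal-plane calculation illustrates the effect: if the endpoint pattern is computed distal to a directly controlled abduction axis, the commanded "up" direction is carried through the abduction rotation. The assumption of an uncompensated proximal rotation and the angles used below are illustrative.

```python
# Simplified coronal-plane picture of the behavior described above, under the
# assumption that the endpoint pattern was computed distal to the directly
# controlled abduction/adduction axis: the commanded "up" direction is carried
# through the abduction rotation, so with the arm abducted the hand drifts
# across the body instead of moving straight up. Angles are illustrative.

import math

def world_motion(commanded_direction, abduction_deg):
    """Rotate the commanded direction by the (uncompensated) abduction angle."""
    a = math.radians(abduction_deg)
    y, z = commanded_direction            # y: across the body, z: vertical
    return (y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

up_command = (0.0, 1.0)
for abduction in (0, 45, 90):
    y, z = world_motion(up_command, abduction)
    print(f"abduction {abduction:>2} deg: hand moves (across={y:+.2f}, up={z:+.2f})")
```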

FUTURE CONSIDERATIONS

ARM TRAJECTORY AND OBSTACLE AVOIDANCE

The arm trajectory is the path the arm follows through space. Whereas the hand path is determined by the user, the actual movements that occur at the joints of the arm are determined by the endpoint control programming in response to the user command to move the terminal device to a specific location. Part of the goal of endpoint control is to relieve the user of part of the control responsibility while providing a more natural control interface. Because endpoint control for prostheses is a new application, it is not clear what factors should be used to determine the movement trajectory.

One key difference between endpoint control in prosthetics and endpoint control in robotics is that the final target is not defined. For example, in robotics, the final destination coordinates are programmed in, and the arm travels to those coordinates by the most expeditious means. In prosthetic applications, there is currently no way to predefine true endpoint coordinates (i.e., the final target). The destination for the terminal device (endpoint) varies on the basis of any one of numerous potential tasks performed by the user. Prosthetic endpoint control is more akin to using a joystick, with which direction is controlled but trajectories cannot be planned. This creates challenges when one or more joints are near the end of their mechanical range or when two joint axes become parallel. These regions, where the arm motion is overconstrained, may require the user to change trajectories to reach the desired endpoint. However, avoiding these regions can lead to movement with an unnatural appearance.
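
Two standard robotics safeguards for these overconstrained regions, damped least-squares inversion of the Jacobian and clamping at joint limits, are sketched below; the joint limits, damping factor, and planar arm are illustrative and do not describe any commercial controller.

```python
# Sketch of two standard robotics safeguards for joystick-style endpoint
# control near the edge of the workspace: damped least-squares inversion of
# the Jacobian (to avoid erratic joint rates near singular, "parallel-joint"
# poses) and clamping at joint limits. Illustrative only.

import numpy as np

L1, L2 = 0.30, 0.25                       # illustrative planar link lengths (m)
JOINT_LIMITS = [(-1.5, 2.5), (0.0, 2.6)]  # hypothetical shoulder/elbow limits (rad)

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def step(q, hand_velocity, dt=0.02, damping=0.05):
    """One control cycle: damped least-squares rates, then joint-limit clamp."""
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2),
                               np.asarray(hand_velocity))
    q_next = q + dq * dt
    return np.clip(q_next, [lo for lo, _ in JOINT_LIMITS],
                           [hi for _, hi in JOINT_LIMITS])

q = np.array([0.1, 0.05])                 # nearly extended arm (close to singular)
print(step(q, [0.05, 0.0]))               # joint motion stays bounded in this pose
```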

In addition, in this version of endpoint control, only the position of the terminal device was defined. In contrast, future versions of endpoint control may be able to define both the position and the orientation of the terminal device. For example, in the thumbs-up position and the thumbs-down position, the hand is in the same position in space but the orientation is different. Eventually pronation, supination, wrist flexion/extension, and wrist abduction/adduction could be added to assist in orienting the terminal device in addition to controlling its position. Currently, wrist functions are still controlled by direct joint control.

The coordination of prosthetic joint movement has typically been measured as a yes or no outcome: whether or not the joints are able to move simultaneously. Because coordinated joint movement has been very limited, no methods have been used to evaluate coordination in prosthetic devices. With endpoint control, along with other advances, measures assessing the coordination of movement between joints will need to be adopted.

Concerns related to potential injury to the user and/or bystanders as a result of unintentional movement of the prosthesis depend partially on the level of amputation of the user. At the TH or distal level, the user can physically move the arm away from the body by moving his/her own shoulder musculature in the case of an inadvertent movement. With a powered shoulder, the user is more constrained in his/her response to an inadvertent motion, given the attachment of the prosthesis to the torso through the socket. Thus, it is possible that an errant signal (either inadvertent or erroneous) could cause the prosthesis to contact the user.

The movement trajectory of proximal joints during endpoint control is not necessarily controlled directly by the user but can instead be a result of the user-selected trajectory for the endpoint of the prosthesis. It is important for the user to understand and be aware of the movement of these joints to avoid impacting obstacles during movement. In robotics, this is usually controlled through careful programming of the environmental obstacles and creating arm trajectories that avoid obstacles. In human motor control, presumably, the brain is doing the same thing, although anyone who has accidentally bumped an elbow can painfully attest to the imperfection of the system. Examples of these obstacles might include a table, a glass, a car door, or the user’s body. This subject did not generally present with this problem; one exception was bumping the arm on a table. Nevertheless, as with any prosthesis, this potential highlights the need for trajectory awareness and training protocols to ensure user understanding of these trajectories to maximize safety.

Ultimately, prosthetic control designs may need to incorporate a combination of endpoint control and direct joint control. In endpoint control, it is not possible to control an individual joint. Humans have the ability to use both endpoint and direct joint control strategies to reach through coordinated movements and to bend an individual joint, respectively, with their anatomical upper limbs. There may be situations while using a prosthesis in which individual joint movement is preferable. Advances in control schemes are under way to address this.

ENDPOINT CONTROL INPUT SIGNALS

Currently, the input signals of prosthetic devices operate physiologically as direct joint control, that is, muscular activation, monitored either by electrical activation or by physical excursion around a joint. It is possible that future control signals, such as a brain control interface, will physiologically function through endpoint patterns and will thus make the operation of a prosthetic device in endpoint control much more intuitive. As progress is made in neural control, it is possible that endpoint control may become more natural. There are indications that the brain makes coordinate transfers to and from joint space. In fact, Losier et al.23 have recently shown that residual shoulder motion may provide a better control signal for endpoint control than myoelectric signals do.

CONCLUSIONS

Powered shoulder joints increase dexterity but also present unique challenges for upper-limb prosthesis users. Endpoint control is one method to bridge the gap between increased dexterity and limited (input) control signals and could be used in conjunction with other advances such as pattern recognition for EMG signals or brain-computer interfaces. Endpoint control offers a control method that minimizes the combination of input controls and the number of operational states of an upper-limb prosthesis. As demonstrated by this case study, endpoint control provides a promising method for controlling complex multi-DOF arm systems for clinical use.

REFERENCES

1. Dillingham TR, Pezzin LE, MacKenzie EJ. Limb amputation and limb deficiency: epidemiology and recent trends in the United States. South Med J 2002; 95 (8): 875–883.
2. VHA. National Prosthetic Patient Database. Available at: http://vaww.infoshare.va.gov/sites/prosthetics/Proclarity/Forms/AllItems.aspx?RootFolder=%2fsites%2fprosthetics%2fProclarity%2fProclarity%20Cubes%20and%20Information&FolderCTID=&View=%7bFB219FE1=%2d317C%2d4B57%2dAFAB%2d5646BF17ABD5%7d. Accessed April 27, 2011.
3. Center VSS. OEF/OIF combined utilization cube. Available at: http://vssc.med.va.gov/cube.asp. Accessed April 27, 2011.
4. Wright TW, Hagen AD, Wood MB. Prosthetic usage in major upper extremity amputations. J Hand Surg Am 1995; 20 (4): 619–622.
5. Silcox DH, Rooks MD, Vogel RR, Fleming LL. Myoelectric prostheses. A long-term follow-up and a study of the use of alternate prostheses. J Bone Joint Surg Am 1993; 75: 1781–1789.
6. Biddiss EA, Chau TT. Multivariate prediction of upper limb prosthesis acceptance or rejection. Disabil Rehabil Assist Technol 2008; 3 (4): 181–192.
7. Biddiss E, Chau T. The roles of predisposing characteristics, established need, and enabling resources on upper extremity prosthesis use and abandonment. Disabil Rehabil Assist Technol 2007; 2 (2): 71–84.
8. Biddiss EA, Chau TT. Upper limb prosthesis use and abandonment: a survey of the last 25 years. Prosthet Orthot Int 2007; 31 (3): 236–257.
9. Sarrafian S. Kinesiology and functional characteristics of the upper limb. In: Smith D, Michael JW, Bowker J, eds. Atlas of Amputations and Limb Deficiencies. 3rd Ed. Rosemont, IL: American Academy of Orthopaedic Surgeons; 2004.
10. Pons JL, Ceres R, Pfeifer F. Multifingered dextrous robotics hand design and control. Robotica 1999; 17: 661–674.
11. Gow D, Douglas WB, Geggie C. The development of the Edinburgh Modular Arm System. Proc Inst Mech Eng 2001; 215 (3): 291–298.
12. Simpson D. The control and supply of a multimovement externally powered upper-limb prosthesis. Paper presented at: Advances in External Control of Human Extremities, Proceedings IV; 1973; Dubrovnik.
13. Popat R, Krebs D, Mansfield J. Quantitative assessment of four men using above-elbow prosthetic control. Arch Phys Med Rehabil 1993; 74: 720–729.
14. Kyberd P. The influence of control format and hand design in single axis myoelectric hands: assessment of functionality of prosthetic hands using the Southampton Hand Assessment Procedure. Prosthet Orthot Int 2011; 35 (3): 285–293.
15. Zecca M, Micera S, Carrozza MC, Dario P. Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit Rev Biomed Eng 2002; 30 (4-6): 459–485.
16. Niku SB. Introduction to Robotics: Analysis, Systems, Applications. Upper Saddle River, NJ: Prentice Hall; 2001.
17. Keulen RF, Adam JJ, Fischer MH, et al. Selective reaching: evidence for multiple frames of reference. J Exp Psychol Hum Percept Perform 2002; 28 (3): 515–526.
18. Voisin J, Michaud G, Chapman CE. Haptic shape discrimination in humans: insight into haptic frames of reference. Exp Brain Res 2005; 164 (3): 347–356.
19. Pouget A, Ducom J-C, Torri J, Bavelier D. Multisensory spatial representations in eye-centered coordinates for reaching. Cognition 2002; 83 (1): B1–B11.
20. McCrea PH, Eng JJ, Hodgson AJ. Biomechanics of reaching: clinical implications for individuals with acquired brain injury. Disabil Rehabil 2002; 24 (10): 534–541.
21. Krakauer JW, Pine ZM, Ghilardi M-F, Ghez C. Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci 2000; 20 (23): 8916–8924.
22. Chang Y-H, Auyang AG, Scholz JP, Nichols TR. Whole limb kinematics are preferentially conserved over individual joint kinematics after peripheral nerve injury. J Exp Biol 2009; 212 (Pt 21): 3511–3521.
23. Losier Y, Englehart K, Hudgins B. Evaluation of shoulder complex motion-based input strategies for endpoint prosthetic-limb control using dual-task paradigm. J Rehabil Res Dev 2011; 48 (6): 669–678.
Keywords:

artificial limb; amputation; arm; microprocessor control; myoelectric prosthesis; prosthetic design; rehabilitation; robotics; upper-limb prostheses; upper limb

© 2013 by the American Academy of Orthotists and Prosthetists.