Neurosurgery: January 2013 - Volume 72
doi: 10.1227/NEU.0b013e318273a1a3
Haptics

Introduction to Haptics for Neurosurgeons

L’Orsa, Rachael BASc*; Macnab, Chris J.B. PhD*; Tavakoli, Mahdi PhD‡


Author Information

*Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada

‡Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada

Correspondence: Chris Macnab, PhD, PEng, Department of Electrical and Computer Engineering, University of Calgary, 2500 University Dr NW, Calgary AB, Canada T2N 1N4. E-mail: cmacnab@ucalgary.ca

Received June 08, 2012

Accepted August 23, 2012


Abstract

Robots are becoming increasingly relevant to neurosurgeons, extending a neurosurgeon’s physical capabilities, improving navigation within the surgical landscape when combined with advanced imaging, and propelling the movement toward minimally invasive surgery. Most surgical robots, however, isolate surgeons from the full range of human senses during a procedure. This forces surgeons to rely on vision alone for guidance through the surgical corridor, which limits the capabilities of the system, requires significant operator training, and increases the surgeon’s workload. Incorporating haptics into these systems, ie, enabling the surgeon to “feel” forces experienced by the tool tip of the robot, could render these limitations obsolete by making the robot feel more like an extension of the surgeon’s own body. Although the use of haptics in neurosurgical robots is still mostly the domain of research, neurosurgeons who keep abreast of this emerging field will be more prepared to take advantage of it as it becomes more prevalent in operating theaters. Thus, this article serves as an introduction to the field of haptics for neurosurgeons. We not only outline the current and future benefits of haptics but also introduce concepts in the fields of robotic technology and computer control. This knowledge will allow readers to be better aware of limitations in the technology that can affect performance and surgical outcomes, and “knowing the right questions to ask” will be invaluable for surgeons who have purchasing power within their departments.

ABBREVIATIONS: DC, direct current; DOF, degree of freedom

The term haptics relates to the sense of touch, which gives us information on the material properties of an object, including stiffness (elasticity), texture, and weight, as well as shape properties such as size, orientation, and curvature. In the active exploration of objects, humans identify texture through lateral motion, hardness by applying pressure, temperature through static contact, weight by unsupported holding, global shape and volume through enclosure by fingers, and exact shape by following the object contours.1 In robotics engineering, haptics refers to a field of study that seeks to produce realistic interactions between a human and a remote or virtual environment. These interactions encompass both tactile (cutaneous) feedback relying on skin stimulation and kinesthetic (force) feedback revolving around muscle stimulation. Passive pressure sensing provides information to the tactile sense, and active touch stimulates the kinesthetic sense. To produce realistic tactile and kinesthetic feedback between a human and an environment felt only indirectly, engineers use haptic interfaces that relay forces and tactile properties from a virtual or robotic proxy back to the human who operates it.2 Such a setup, in which a human interacts with some environment via a proxy, is called teleoperation.3

Medical applications of teleoperated systems include surgical training with a virtual proxy in a consequence-free virtual environment,4 performing surgical procedures with a robot proxy to improve performance, and performing surgical procedures at a (potentially long) distance with a robot proxy.5 Such applications are already being exploited in neurosurgery: NeuroTouch6 recently became the world’s first neurosurgical simulator, and a teleoperated neurosurgical system called neuroArm7 is currently performing surgeries as part of human clinical studies. The third application, surgery over long distances, was introduced successfully for cholecystectomies in 2001,8 yet neurosurgical applications remain, for the most part, a research topic rather than a clinical reality. Many have proposed long-distance telesurgery as a means of providing expert surgical care to rural populations, training and supporting surgeons in their acquisition of advanced skills (as an extension of telementoring), and enabling access to complex surgery in extreme environments such as battlefields or space.9,10 Surgeons have operated routinely over long distances using teleoperation for rural populations in Canada.11 Long-distance telesurgical mentoring has been conducted for neurosurgery,12 although it is not yet common practice. Frameless stereotaxy has also been performed over long distances via teleoperation in China.13

As a result of technical limitations and the legal and ethical issues associated with long-distance patient care, the vast majority of contemporary telesurgical systems operate in settings where the surgeon and his/her team are proximally located and provide direct supervision of the system themselves. Thus, this article focuses on applications for which the proxy is nearby, ie, in the same room or building. In terms of teleoperated systems dedicated to neurosurgery and capable of complex procedures beyond stereotaxy, there are currently 2 that lack haptics14,15 and 3 that include haptics (neuroArm, ROBOCAST,16 and NeuRobot17,18). Of these, none are commercially available. Only NeuRobot and neuroArm have undergone human clinical trials, and only neuroArm continues to do so.

Just as flight-safety regulations require pilots to have precise knowledge and training in the behavior and limitations of their automated flight computer systems, it is imperative that surgeons be aware of the behavior, including drawbacks and limitations, of any robotic technology they use. In addition, a conceptual understanding of this technology becomes invaluable for surgeons asked to provide input on what products to purchase or develop for their hospital. Toward these ends, this article explains the basic workings of sensors, actuators (motors), haptic interfaces, robotic manipulators, software, and low-level control systems—all of which have their own properties and limitations that affect the feel, performance, and stability of teleoperated surgical robots. First, we introduce the underlying ideas of haptic technology, including specific advantages for neurosurgeons using haptic technology in surgical robots. This information is contextualized via an introduction to teleoperated robots, which includes a description of the technological components—both hardware and software—found in teleoperated systems. Finally, we describe the concept of computer automated feedback control and how limitations in current technology affect the performance and stability of teleoperation.


HAPTIC INTERFACES

A haptic interface is similar to a computer mouse in that both serve as human-machine interfaces, allowing a person to explore an environment indirectly through a virtual (or robotic) proxy. Haptic interfaces are more advanced, however, in that they are capable of transmitting forces (and, to a lesser extent at present, tactile information) and can display both position and force in 3 dimensions.19 In a virtual environment, the proxy appears as a computer screen icon, often represented in a 2-dimensional projection of 3-dimensional space (Figure 1).

[Figure 1]

In a physical environment, the (remote) proxy consists of a mechanical device capable of effecting changes in its surroundings, often in the form of an electrically powered robotic manipulator with a tool gripper (Figure 2). For systems containing a remote physical proxy, force and/or position sensors on the proxy enable the system’s software to recognize when interaction occurs between the proxy and the remote environment. This information allows haptic rendering algorithms to reproduce the environment for a user via force and/or position instructions imposed on the haptic interface. A virtual proxy complicates haptic rendering owing to the lack of a real environment; both the proxy and the environment exist as mathematical constructs in the software using either surfaces (the visible outer surface of an object) or volumes (the space occupied by the object). Then the software must continuously track the virtual position and/or orientation of the proxy with respect to the virtual locations of geometric (ie, organ, vasculature) boundaries in the environment to determine when collisions (interactions) occur. After detecting a collision, the software decides how far the proxy should penetrate beyond the boundary, then computes the resulting reaction force caused by the interaction between the proxy and the environment, and finally determines what effect this should have on both.20 The software reflects this result to the haptic interface as force and/or position instructions, reinforcing the illusion of interaction with the virtual environment. Salisbury et al21 have provided an introduction to haptic rendering.
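
As a concrete illustration of the rendering step described above, the reaction force for a simple flat boundary is often approximated with a virtual spring (Hooke’s law): the deeper the proxy penetrates, the harder it is pushed back out. The following minimal Python sketch, with an assumed stiffness value and wall geometry, is illustrative only; real haptic renderers use far more sophisticated collision detection and deformation models.

```python
import numpy as np

def render_wall_force(proxy_pos, wall_z=0.0, stiffness=800.0):
    """One cycle of a minimal haptic-rendering loop for a flat wall.

    The wall occupies the half-space z < wall_z. If the proxy has
    penetrated the boundary, a spring-like reaction force pushes it
    back out along the surface normal (Hooke's law: F = k * depth).
    """
    penetration = wall_z - proxy_pos[2]       # collision detection: depth below surface
    if penetration <= 0.0:
        return np.zeros(3)                    # no contact: free space, zero force
    normal = np.array([0.0, 0.0, 1.0])        # outward surface normal of the wall
    return stiffness * penetration * normal   # reaction force sent to the haptic interface

# Example: a proxy 2 mm below the wall surface feels a 1.6-N restoring force.
force = render_wall_force(np.array([0.01, 0.0, -0.002]))
print(force)   # [0.  0.  1.6]
```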

[Figure 2]

Combining virtual and remote proxies allows a real-time animated display, the results of which can be “played back” later within a virtual environment. This could, for example, allow residents to study the actions of their mentors by feel using the haptic interface to replay stored force data. Another example of mixed virtual and remote physical proxy use is virtual fixtures, which are discussed in the next section.

When a user holds a local haptic interface, it allows him/her to feel forces reflected from the proxy, allows user-supplied forces to be transmitted to the proxy, or both. In more advanced teleoperation, movements (ie, the position and/or velocity) of the local interface and remote proxy may also be transmitted and/or reflected. Engineers traditionally describe the user’s motions or forces on the local interface as commands and the local interface itself as the master. The remote physical proxy is then called the slave. In unilateral teleoperation, the local interface only transmits commands to the remote slave, whereas bilateral teleoperation allows information from the remote proxy to be reflected back to the local interface and user (Figure 2). Most haptic bilateral configurations used in surgery to date transmit commanded movements to the remote proxy and reflect forces back to the local interface.2 Time delays in this communication between local and remote machines are a challenge to engineers and surgeons alike.3,22,23 Similarly, packet loss can be an issue for teleoperated systems that communicate over the Internet: transmission protocols break data into small packets, and some packets are inevitably lost in transit. Although engineers attempt to reduce the effects of these phenomena as much as possible (and researchers continue to investigate more advanced methods), surgeons must be aware that communication delays and losses may introduce problems into their systems.

Advantages of Haptic Interfaces

Surgical telerobotics offer several key potential advantages over traditional surgery. Performing surgery with a telerobotic system, even without haptics, enhances the paradigm of minimally invasive surgery. In addition, filtering of unnecessary or unwanted surgeon movements, especially hand tremor, improves accuracy. Giving the surgeon the option of scaling provides another advantage: Scaling down of local macroscopic hand motions can result in remote microscale (or even nanoscale) movements; scaling up of remote forces (using haptics) could theoretically make brain tissue feel macroscopic, eg, feel like moving rocks, to the surgeon if desired. Note also that scaling down remote forces could make hard objects such as bones feel softer. Not limited to macroscopic robotic manipulators, teleoperation offers the option of using articulated wristlike millirobotic attachments at the end of instruments for conducting microscale or nanoscale surgeries.24 Using robotics also gives one the ability to interface with virtual environments and/or other preexisting software and computer techniques such as intraoperative imaging, which facilitates planning, training, and navigation tasks.25 Last but not least, when using a robotic system, the surgeon may become less fatigued during long surgeries when sitting at a comfortable and ergonomic console rather than having to stand by the bedside.26
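
To make the scaling idea concrete, the sketch below shows one cycle of a hypothetical bilateral loop in which hand motion is scaled down on its way to the manipulator and measured tool-tip force is scaled up on its way back to the interface; the scale factors are invented for illustration.

```python
MOTION_SCALE = 0.1   # 10 mm of hand travel -> 1 mm of tool travel
FORCE_SCALE = 5.0    # 0.1 N at the tool tip feels like 0.5 N at the hand

def bilateral_cycle(master_position, slave_force):
    """One cycle of a (hypothetical) bilateral teleoperation loop:
    scaled positions flow master -> slave, scaled forces flow back."""
    slave_position_command = MOTION_SCALE * master_position   # scale motion down
    master_force_command = FORCE_SCALE * slave_force          # scale feel up
    return slave_position_command, master_force_command
```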

Consider the example of writing on a blackboard with chalk, a simplified model of a surgical incision, to illustrate the advantages of adding haptic interfaces to a teleoperation system. Humans feel the contact force from the blackboard through both the tactile sense in their fingers and the kinesthetic sense that motion is constrained in 1 direction, and they adjust the applied pressure to an appropriate level. If an untrained human performs this task through teleoperation without reflected feedback from the robot that holds the chalk, the chalk will likely either crack (excessive pressure) or be illegible on the board (insufficient pressure). Even a well-trained operator, however, may require a long time to complete the task, especially if visual feedback is available only through a video interface. In contrast, given haptic information in the form of force feedback, anyone who had previously written on a chalkboard without a robot would know how much pressure to exert on the haptic interface for successful completion of the task. Provided with additional position feedback, the operator would know how the position of the remote robot was being constrained in 1 direction by the presence of the chalkboard and therefore would be much less reliant on the clarity and accuracy of visual information. In the context of robot-assisted surgery, various studies confirm that haptic feedback generally improves performance in terms of task success rate, completion time, economy of exerted force, and reduced tissue trauma.27-29 Moreover, the absence of haptic feedback to the surgeon in a robot-assisted surgery may lead to errors and thus becomes a legitimate safety concern.30 For instance, inappropriate and excessive forces from the robot caused a perforation of the gall bladder in a minimally invasive cholecystectomy.31 Now let us expand on and quantify some of these benefits.

Scaling Enhances Precision

Endoscopic, stereotactic, and microneurosurgical procedures are limited by the precision of the surgeon, which is determined by his/her inherent visual acuity and fine motor control. Not only do these qualities differ from person to person, but their maximum bounds limit the level of precision with which a surgeon can interact with delicate, minute cranial structures. Neurosurgeons operating with telesurgical systems, however, routinely achieve submillimeter precision when their movements are scaled down for the robotic manipulator compared with the millimeter precision generally achieved by hand.5,14

Filtering Reduces the Effects of Hand Shake

Even though neurosurgeons are highly trained, finely tuned professionals, it is difficult to perform every procedure without any unnecessary or unwanted hand movements within the surgical corridor. Physiological tremor, in particular, naturally occupies the 8- to 12-Hz band and presents an impediment to microsurgery.32 Telesurgical systems can recognize and remove such undesirable movements, improving surgeon performance and increasing patient safety.33
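
One plausible software approach is a notch (band-stop) filter centered in the tremor band. The sketch below uses SciPy and assumes a 1-kHz hand-motion signal; it is illustrative only, as commercial systems use their own (often adaptive) tremor-canceling algorithms. Note that any causal filter of this kind introduces a small phase lag, one of the delay sources discussed later in this article.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 1000.0   # assumed sampling rate of the hand-motion signal, Hz

# Notch filter centered at 10 Hz, wide enough (Q = 2.5, ~4-Hz bandwidth)
# to cover the 8- to 12-Hz physiological tremor band.
b, a = iirnotch(w0=10.0, Q=2.5, fs=FS)

t = np.arange(0, 2, 1 / FS)
intended = 0.02 * np.sin(2 * np.pi * 0.5 * t)   # slow, deliberate 0.5-Hz motion
tremor = 0.002 * np.sin(2 * np.pi * 10.0 * t)   # superimposed 10-Hz tremor
filtered = lfilter(b, a, intended + tremor)     # tremor component suppressed
```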

Virtual Fixtures Reduce Surgeon Workload and Increase Accuracy

Some procedures or portions of procedures require a surgeon’s motion to be restricted to a single direction, eg, in biopsies when the needle must be inserted in a straight line from the cranial opening to the abnormal tissue to best preserve surrounding structures. Telesurgical systems semiautomate such procedures by ignoring all movements of the surgeon’s hands that do not follow the predetermined straight-line trajectory. This so-called z-lock function, also referred to as the incorporation of virtual fixtures, allows the surgeon to focus on 1 direction at a time without concern for extraneous movement, resulting in increased accuracy and decreased workload.34 Furthermore, the surgeon can create virtual haptic impediments around structures that must be avoided (a “forbidden region”). The system then scales down the velocity of the haptic interface proportionally as the tool tip approaches the forbidden region, effectively slowing the surgeon’s movements and drawing attention to the proximity of the important structures. If the tool tip comes within some predefined distance of the forbidden region, the haptic interface stops moving altogether, and the surgeon feels a wall (or other containment geometry specified by the virtual fixture) that protects the forbidden region from contact with the surgical tool tip of the robot, ultimately resulting in increased patient safety.35
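
The forbidden-region behavior just described can be approximated by scaling the commanded velocity with distance to the protected structure. The sketch below is a simplified, hypothetical implementation with invented distance thresholds.

```python
def fixture_velocity_scale(distance_to_region, slow_radius=0.010, stop_radius=0.002):
    """Velocity scale factor for a forbidden-region virtual fixture.

    Outside slow_radius (10 mm here), motion is unrestricted. Between
    slow_radius and stop_radius the commanded velocity is scaled down
    linearly, and inside stop_radius (2 mm) further approach is blocked
    entirely -- the surgeon feels a virtual wall.
    """
    if distance_to_region >= slow_radius:
        return 1.0                     # full speed, far from the structure
    if distance_to_region <= stop_radius:
        return 0.0                     # virtual wall: no further approach
    return (distance_to_region - stop_radius) / (slow_radius - stop_radius)

commanded_velocity = 0.005             # m/s, from the haptic interface
# 4 mm from the region, motion is slowed to 25% of the commanded speed.
safe_velocity = fixture_velocity_scale(0.004) * commanded_velocity
```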

Haptic Interfaces Integrate With Navigation Systems and Surgical Planning Software

Telesurgical systems generally use extremely high-resolution digital encoders to register joint angle displacements of the remote manipulator, allowing precise mappings of the end-effector position with respect to some fixed frame of reference.3 This information can be overlaid on preoperative or intraoperative imaging data to produce a graphic display that shows the location of the tool tip of the robot within the patient’s anatomy. Thus, when integrated with existing neurosurgical navigation systems, the precise position measurements allow an even more accurate real-time representation of tool-tip location during surgery on familiar software displays that constitute the current industry standards.36 Likewise, the local interface can interact with preoperative surgical planning software to improve the surgical team’s performance during procedures.37 Note that the process of correlating the frame of reference of the robot to an anatomic frame of reference specific to each patient, known as registration, is not trivial (particularly if patient immobilization is insufficient) and constitutes one of the most significant concerns for safety standards in teleoperation.
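
Registration is commonly posed as a least-squares problem: find the rigid rotation and translation that best align fiducial points measured in the robot frame with the same points in the image frame. One standard solution uses the singular value decomposition (the Kabsch algorithm), sketched below; clinical registration pipelines add outlier rejection, error estimates, and verification steps on top of this core computation.

```python
import numpy as np

def register_rigid(robot_pts, image_pts):
    """Least-squares rigid registration (Kabsch algorithm).

    Given N corresponding fiducial points (N x 3 arrays) in the robot
    and image frames, returns rotation R and translation t such that
    R @ robot_point + t maps into the image frame.
    """
    c_r = robot_pts.mean(axis=0)                  # centroid of each point set
    c_i = image_pts.mean(axis=0)
    H = (robot_pts - c_r).T @ (image_pts - c_i)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_i - R @ c_r
    return R, t
```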

Haptic Interfaces Integrate With Virtual Reality for Surgical Rehearsal

In much the same way that end-effector position data can appear in existing cranial navigation system displays, the local interface of a telesurgical system can interact with a computer-simulated virtual reality representation of patient-specific brain structures taken from imaging files. Thus, a surgeon could potentially rehearse as many times as desired before actually performing a surgery, and the training data could become available for further training or even for providing a reference during the procedure itself.38 Unfortunately, contemporary technology does not allow lifelike simulations in neurosurgery. Such simulators must predict soft-tissue deformations (in parallel with their reaction forces to tool-tip interactions when haptics are included), model events such as intraoperative bleeding, and synchronize detailed visual, auditory, and tactile output with user motion and said events. Typically, simulations operate at a particular digital frequency that, for real-time operation, must be fast enough to seem continuous to the human operator. For example, updating at 100 cycles per second keeps the discrete steps below human perception but requires all necessary calculations to be completed within each 0.01-second interval. If the calculations cannot be performed within the 0.01 seconds, one has the option of slowing the frequency, but at some point (usually near an update interval of 0.1 seconds) the discreteness becomes noticeable to the human operator, decreasing fluidity and realism. For calculations that take much longer than 0.1 seconds, the simulation would normally be considered non-real time, and humans would not attempt to interact with it. Furthermore, the use of patient-specific imaging data in such simulators often requires time-consuming manual segmentation (separation) of some anatomic structures within the imaging data, which is not necessarily a simple task in the context of specific simulators.4 Currently, the computational cost is simply too high, adding an unacceptable delay to the simulation. Thus, it is possible that even state-of-the-art simulators such as NeuroTouch will require further advancement before their integration with telesurgical systems becomes a well-recognized benefit.
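
The real-time budget described above can be made concrete: at 100 cycles per second, all of the work in a simulation cycle must fit inside a 10-millisecond window. The toy loop below, with a placeholder physics step, shows the bookkeeping; actual surgical simulators schedule their haptic, physics, and graphics loops at different rates.

```python
import time

RATE_HZ = 100
BUDGET = 1.0 / RATE_HZ                 # 0.01 s available per simulation cycle

def step_simulation():
    """Placeholder for collision detection, tissue deformation, etc."""
    pass

overruns = 0
for _ in range(1000):
    start = time.perf_counter()
    step_simulation()
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET:
        overruns += 1                  # this cycle broke the real-time guarantee
    else:
        time.sleep(BUDGET - elapsed)   # idle out the rest of the 10-ms window
print(f"{overruns} of 1000 cycles exceeded the {BUDGET * 1000:.0f}-ms budget")
```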

Disadvantages of Haptic Interfaces

There are 3 main drawbacks associated with the use of haptic interfaces in telesurgical systems: They remain expensive; they require advanced prototype miniature sensors and tactile actuators; and they complicate control software for the overall system.39 Although inexpensive, commercially available haptic interfaces exist, these lower-resolution versions cannot accommodate surgery-specific movements such as grasping/clamping/cutting, do not provide force feedback in > 3 degrees of freedom (DOF; a metric that is described in the System Composition section), and fall short of the precision required for delicate procedures such as microneurosurgery. Although the cost of advanced haptic interfaces will likely decrease as their use becomes more prevalent, no consensus exists as to whether their benefits justify their expense at present.40 Some surgeons fatigue easily when using force feedback, and the addition of haptics may increase task completion time under certain conditions.41 Placement of force and/or tactile sensors at the tool tip provides the most accurate way to quantify forces and sensations. However, this requirement for the sensors to enter a patient’s body demands a level of advancement not yet found in sensor technology. The design of these sensors must address biocompatibility and sterilizability issues and, in the context of minimally invasive surgery (and particularly microneurosurgery), must conform to rigorous constraints on size, weight, shape, and sensitivity.42 Furthermore, currently, no physical tactile displays accurately reproduce the feeling of interacting with live tissues. In addition, whereas visual displays require update rates of only 20 to 40 Hz (or 120 Hz for a 2-channel stereovision setup), haptic interfaces require update rates of ≥ 1 kHz to properly present haptic information to human users.43 This makes them particularly sensitive to time delays; mismatches in these far larger volumes of data become more apparent to the human user and can introduce control instabilities. Note that the addition of haptic feedback itself can introduce control instabilities for simple control algorithms and may require more advanced control systems that are able to guarantee system stability without sacrificing the fidelity of force and tactile feedback to the user (the Introduction to Feedback Control section provides a discussion of these issues). Furthermore, the level of computation required to concurrently display force information in 3 dimensions and tactile information may be excessive. Because the incorporation of haptics into telesurgical systems remains an ongoing topic of research, however, conclusive data quantifying benefit-risk tradeoffs remain unavailable.


INTRODUCTION TO TELEOPERATED ROBOTS

To appreciate the behavior and limitations of a telerobotic system, surgeons must understand the basic ideas behind feedback and computer control systems. Consider first that the surgeon forms part of a feedback loop during normal surgeries, one that incorporates the interactions between a surgeon, his/her implement, and the patient during a procedure. The surgeon’s brain both makes high-level decisions and provides low-level control/coordination of actions. For instance, when performing a cut, the surgeon must first make the decision to cut, and then the coordination centers of the brain (including the cerebellum) choose and transmit signals via efferent nerves to the limb that holds the scalpel. During cutting, sensory structures within the skin, muscles, tendons, and joints of the limb in question register sensations from the scalpel that, in conjunction with visual cues sent from the eyes, indicate how similar the actual cutting action is to the intended action planned by the surgeon. The afferent nerves carry this information back to the brain, where it is used to coordinate the cutting action as it progresses and to form the basis of decision making regarding the next movement in the procedure. Much of this feedback loop interaction between brain, eyes, and limb (shown in Figure 3) becomes automatic with training and experience.

[Figure 3]

In robotics, the control system performs the coordination (relying on human input or decisions made by more advanced software algorithms for desired motions), deciding on the correct electric signals that will achieve the desired robot motion—similar to the function of the human cerebellum. Engineers normally model and analyze the interaction of control system, robot, and environment mathematically and visualize the relationship between individual system components using block diagrams. If Figure 3 were thus recast into a block diagram, it would be as seen in Figure 4, in which blocks denote system components and directional arrows describe the flow of information between components.

[Figure 4]

A control system that transmits commands (like electric current) to the robot motors without checking sensor information to see if the robot is accurately following the desired motion or achieving the desired force is called open-loop control. In closed-loop control, sensors mounted on the robot reflect information (like force measurement) back to the control system, normally many times per second, and the control system constantly adjusts its signals on the basis of the error between desired results and actual measurements. All teleoperative systems contain 2 nested control loops: The operator and the system make up an outer closed loop, and the system’s software and hardware define an inner (automatic) loop that could be either open or closed. The operator always acts as a control system in the outer loop, using visual and/or haptic feedback to coordinate his/her muscles with resulting movements and/or forces (Figure 5). In attempting to achieve the desired movements of the surgeon and/or accurately reflect the environment, the computer must send its own commands to the remote robot and/or local interface. Thus, the total control system consists of a human-machine interface of brain and software, which defies a complete mathematical analysis.

[Figure 5]

Although it may differ between individuals, each surgeon’s reaction time for a given procedure follows from the amount of time it takes for visual and sensory information to travel the appropriate afferent nerve pathways to the brain and be assimilated, for the brain to decide on the best course of action, and for the new control signals to travel back down through the efferent nerves from the brain to the limbs until the desired action has been carried out. Thus, there are 2 different types of delays: a delay during information transport through the nerves and a delay during the brain’s information processing. Any number of factors, including fatigue, stress, and distraction, can increase these delays.

In a telesurgical system, equivalent delays exist for information transport (transmission and reflection) and processing. Information from continuous-output sensors must be digitized before computer processing, and even information in digital form must be re-encoded for efficient transmission. Signal-processing techniques for accomplishing this with minimal impact on signal quality are well established, and the effects (like aliasing, in which a continuous signal cannot be properly reconstructed from its digital representation) should be unnoticeable in commercial systems. The practical impact of filtering and computer encoding/decoding of signals is the introduction of significant time delays, even in nearby teleoperation systems. In long-distance teleoperation, even larger transport delays occur as a result of the limited speed of the electric signals and limited capacity of wires or wireless systems for encoding information. Thus, a telesurgical system compounds delays inherent to the “reaction time” of both the electromechanical system itself and the surgeon who operates it. As the total delay in the system increases, it becomes increasingly difficult to control the slave. If the delay becomes too large, then the surgeon must adopt a “move and wait” strategy that increases workload.44 Hokayem and Spong3 provide further information on time-delay control issues and remedies beyond what is supplied in the Introduction to Feedback Control section of this article.

System Composition

Although a variety of different telesurgical systems are currently being developed,45 most share the same basic types of components (Figure 6). As described previously, the surgeon operates a haptic interface, and sensors measure the position, velocity, and/or force that he/she applies. Control software interprets these data and in turn transmits commands to the actuators (motors) in the robotic manipulator. The robotic manipulator interacts with the physical environment of the surgery (ie, the patient), and its sensors provide the measured position, velocity, and/or force to the control software, which may reflect it back to the haptic interface. Furthermore, a graphical display shows the surgeon the interaction between the robotic manipulator and its environment. Thus, the telesurgical system consists of the haptic interface, the robotic manipulator, the sensors and actuators in the interface and manipulator, the control software, and the display.

[Figure 6]

Haptic Interfaces

There are a number of commercially available haptic interfaces that vary mostly in their structural design, the size of their work space (ie, how large a region the interface can reach), the amount of force they can reproduce, the number of DOFs in which they record position, and the number of DOFs in which they can reproduce forces. The cartesian coordinate system uses 3 DOFs, characterizing linear positions, velocities, accelerations, and forces in 3 perpendicular directions: x (forward-backward), y (left-right), and z (up-down). A straight-line linear motion or force is thus decomposed into 3 separate perpendicular components. If we extend this from the linear cartesian system to one that accounts for rotational motion as well, we can add pitch (rotation tilting forward and backward), yaw (rotation turning left and right), and roll (rotation about the forward axis, as in the banking of an airplane) to describe the movements. The rotational equivalent of force is torque, and torque acting about a central pivot is also described in (or decomposed into) roll, pitch, and yaw directions.

With 3 possible directions for linear movement and 3 possible directions for rotational movement, we have a maximum of 6 possible DOFs that any given object can describe in 3-dimensional space (Figure 7, left). The set of 3-DOF positions plus 3-DOF rotational poses that the robot end effector can reach defines the task space. However, the DOFs of an articulated robot somewhat confusingly also measure the number of possible joint rotations and extensions, called the joint space. Most haptic interfaces and robotic manipulators use multiple anthropomorphic links connected via rotational joints. Thus, just as each phalange in a human finger attaches to the next via a rotational joint that is capable of describing some fraction of 360° of movement, each link in the interface or manipulator does the same. Even though both joints between 3 phalanges rotate in the same plane, so that the fingertip can move only within that plane, the 3 phalanges are described as having 2 DOFs (one for each joint). The work space of the finger describes the area covered by the fingertip given the maximum amount of movement each joint can perform (Figure 7, right). Thus, when a manufacturer says its haptic interface provides a particular number of DOFs of positional sensing or force reproduction, this refers to the number of directions of movement the interface can record/produce (maximum of 6). Some advertise a seventh DOF, which refers to a grasping DOF (an extra cutter or clamper) attached to the end of the robotic manipulator, which itself may be capable of moving in all 6 DOFs.
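
The finger analogy can be written down directly: a planar 2-joint linkage has a 2-DOF joint space (2 angles), and forward kinematics maps those angles to the tip position in task space. The sketch below assumes hypothetical phalange lengths.

```python
import numpy as np

L1, L2 = 0.045, 0.030   # assumed phalange lengths, meters

def fingertip_position(theta1, theta2):
    """Forward kinematics of a planar 2-joint linkage.

    Joint space: the pair of angles (theta1, theta2), in radians.
    Task space: the (x, y) position of the tip within the plane.
    """
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x, y])

# Sweeping both joints over their allowable ranges traces out the work space.
```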

[Figure 7]

Structural designs for haptic interfaces range from stylus-type interfaces in which the surgeon grips a scalpel-like protrusion to wearable glove-type interfaces. Hayward et al43 describe a more thorough classification of structural designs. Again, the work space of a haptic interface refers to the range of motion that is mechanically allowable by its structural design. Most attempt to reproduce the natural work space of an average human hand, although work spaces differ enough between individual brands and models to necessitate their comparison. As of yet, no consensus exists among surgeons as to which type of structural design is preferable for which type of surgery; one must try a variety of makes and models before purchasing these costly tools.

Robotic Manipulators

For robotic manipulators, manufacturers refer to the number of DOFs as the number of single DOF joints in the robot, not the number of directions in which the end of the robotic manipulator (end effector) can move. A simplified model of a human arm allows 3 rotations at the shoulder, 1 rotation at the elbow, and 3 wrist rotations for a total of 7 DOFs (mathematical models usually ignore the small translational extension allowed by the shoulder). A human finger has a major knuckle at the base of the finger with 2 DOFs and 2 minor 1-DOF knuckles for a total of 4 DOFs. Hence, one may find references to 16- or 18-DOF surgical robots,46 which consist of 2-armed systems comprising 8 or 9 joints each or some combination of joints, cutters, clampers, and/or linear actuators. Although a wide variety of commercially available industrial robots exist for assembly-based tasks, relatively few commercial robots can perform any surgical applications. The majority of researchers currently use either custom-built robots or scaled versions of industrial counterparts. Of the commercially available robots, most provide at least 6 DOFs with an option for cutter/clamper attachments. As with the haptic interfaces, differences are mostly restricted to their work space, size, encoder resolution (smallest position measurement that can be recorded), and the amount of force they can produce.

Actuators

For most robotic manipulators, 1 electric motor actuates 1 joint in 1 rotational DOF. For systems that use pneumatic or hydraulic actuators, the arrangement is similar.47 A joint can also have 2 electric motors colocated at it, an arrangement that typically appears at the base of a robot manipulator in the “shoulder” joint. Most robots use direct-current (DC) electric motors. Because DC motors produce small amounts of torque but are capable of spinning very rapidly, gearing is necessary: a smaller-radius gear driving a larger-radius gear wheel increases the torque yet slows the motion, the same effect found when gearing down a bicycle to go up a hill. A single pair of gear wheels, however, cannot provide enough reduction on its own, so practical gearboxes combine several stages. Older robot manipulators had large, heavy gearboxes (along with the motors) placed under the robot, with chains or cables running through the structure of the robot to transmit the power. This arrangement limits the possible configurations of the robot, and contemporary robot technology often uses harmonic drive gearing instead. Harmonic drive gearing results in a small, light gearbox that can be placed directly beside the electric motor at the joint itself, allowing much greater range of movement of the manipulator. However, harmonic drives use elastic elements, resulting in the generation of high-frequency vibrations that must be accounted for in control system design and may be noticeable to the user under certain conditions. In addition, any electric motor will saturate at a certain value of current, implying that there is a maximum magnitude of torque that can be applied. Saturation adversely affects the control because the response will not be as expected (to both the surgeon and the computer control), and both performance and stability issues arise.
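
The gearing tradeoff is simple arithmetic, as the sketch below shows with invented numbers: each reduction stage multiplies torque and divides speed by the same ratio (friction losses ignored).

```python
MOTOR_TORQUE = 0.05              # Nm, a small DC motor
MOTOR_SPEED = 6000.0             # rpm
STAGE_RATIOS = [5.0, 5.0, 4.0]   # three reduction stages, 100:1 overall

torque, speed = MOTOR_TORQUE, MOTOR_SPEED
for ratio in STAGE_RATIOS:
    torque *= ratio              # each stage multiplies torque...
    speed /= ratio               # ...and divides speed by the same factor
print(torque, speed)             # 5.0 Nm at 60 rpm
```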

Haptic interfaces also use electric motors to actively reproduce tool-tip forces and/or positions for the user. The torque (force) that a DC motor produces is proportional to the electric current supplied when external resistance is encountered that limits speed (whereas its speed is proportional to supplied voltage when it is free to turn). Thus, to produce a force felt by the surgeon, the control system must supply the appropriate current to the motors. Because the reflected force should be 3-dimensional (expressed within 3 cartesian DOFs), a kinematic transformation (ie, a mathematical formula) calculates how much torque each motor must produce. Note that certain teleoperation configurations may also use the electric motors in the interface to reflect the movements of the remote robot. Reflecting position, for example, causes the interface to stop its motion at exactly the same position where the remote robot contacts a solid, immovable surface such as a wall, although only to a point. Because haptic interfaces are also subject to saturation, any further pressure of the surgeon against the haptic interface (beyond the maximum force it can render when displaying a solid object) will result in erratic movement that normally constitutes a destabilization of the system. Using the wall analogy, the haptic interface will be forced beyond the position at which it is supposed to render the wall and will not be able to rectify the disparity between where it is and where it is supposed to be. To the user, this makes it seem as though he/she has pushed through the wall or as though the wall was not as firm as it should have been.
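
In the standard robotics formulation, the kinematic transformation mentioned above is the transpose of the device Jacobian: joint torques are computed as torque = Jacobian-transpose × force for a desired 3-dimensional force. The sketch below uses an invented Jacobian and saturation limit, and also illustrates why a rendered “wall” can only be so stiff: beyond the torque limit, the requested force simply cannot be produced.

```python
import numpy as np

def motor_torques(jacobian, desired_force, max_torque=0.3):
    """Map a desired cartesian force at the grip to joint motor torques.

    Uses the standard relation tau = J^T F, then clips each torque to
    the motors' saturation limit (actuator saturation).
    """
    tau = jacobian.T @ desired_force
    return np.clip(tau, -max_torque, max_torque)

# Hypothetical 3x3 Jacobian of a 3-joint interface at its current pose.
J = np.array([[0.10, 0.05, 0.00],
              [0.00, 0.08, 0.04],
              [0.12, 0.00, 0.06]])
tau = motor_torques(J, desired_force=np.array([0.0, 0.0, 2.0]))  # render 2 N upward
```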

Besides DC motors, some robot manipulators use stepper motors or piezoelectric actuation.48 Both types of motors follow position commands only and cannot normally produce a desired force, but they have the advantage that they can be run accurately in open-loop configurations. (Although the control system does not require measurements to see if the motors have reached desired positions, the higher-level supervisory software will still require measurements to ensure safe operation.) This is in contrast to normal DC motors, which require such measurements combined with closed-loop computer control to remain accurate (in position, velocity, and/or force tracking).

Sensors

Optical encoders normally measure the positions of revolute joints in robotic manipulator arms or haptic interfaces. The encoders use wheels marked with regularly spaced black lines, like painted spokes; a light source and electronic circuit precisely count the number of lines that pass by during any rotation and detect the direction of spin. Counting the number of lines gives the rotational position (eg, in degrees). The number of lines can be quite large (placed only a fraction of a degree apart), and manufacturers can achieve essentially arbitrary precision in practice. The computer calculates velocity (eg, degrees per second) by differentiating the position signal, and because the signal is digital, the calculation produces a noise-free velocity estimate. A kinematic transformation then gives the position and velocity of the end effector of the robot (or of the hand gripper in the haptic interface). This transformation provides accuracy as long as it is given accurate lengths of the links in the robot arm. Thus, because of this advanced digital technology, one normally assumes “perfect” position and “near-perfect” velocity measurements of the mechanical components. Note that the software does not know the position or velocity of the end of the tool (gripped by the robot) unless the kinematic calculations include the precise length of the tool. In practice, one normally achieves this by premeasuring and barcoding all tools and then scanning them automatically as they are placed in the robot gripper so that the software knows the properties of whatever tool the robot is holding, as is the case with neuroArm.
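
The arithmetic performed on encoder counts is straightforward, as in this illustrative snippet with an assumed line count and sampling period.

```python
COUNTS_PER_REV = 40000    # assumed encoder resolution: 0.009 degrees per count
SAMPLE_PERIOD = 0.001     # s, position read every millisecond

def joint_angle(count):
    """Convert an accumulated line count to a joint angle in degrees."""
    return 360.0 * count / COUNTS_PER_REV

def joint_velocity(count_now, count_prev):
    """Differentiate successive digital positions for a velocity estimate."""
    return (joint_angle(count_now) - joint_angle(count_prev)) / SAMPLE_PERIOD

print(joint_angle(10000))            # 90.0 degrees
print(joint_velocity(10010, 10000))  # 90.0 degrees per second
```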

When estimating forces in 3-DOF cartesian coordinates, force sensors precisely measure the deflection of a small elastic element inside the device and rely on a previous calibration to output the correct corresponding force. A scale inside a grocery store, in which a spring attaches the weighing platform to the display, provides an example of a 1-DOF force sensor. A precalibration indicates the amount the spring will stretch given a reasonable range of applied weight and is used to show the deflection of the spring as pounds or kilograms (rather than centimeters). A grocery store scale shows a large initial reading followed by a decaying vibration when produce is first dropped on it, and one has to wait for the proper reading. The same effect happens on a smaller scale with the force sensors used in robotics. The decaying vibration adds noise to the output of the force sensor, and filters implemented in electronics or software attempt to negate this effect. Filtering “smooths” the output of the sensor but ends up introducing a time delay into the system. One could allow the surgeon to feel the output from the force sensor directly to prevent the delay, but the noise may be too distracting. The haptic interface itself may have force sensors on the grip, allowing the surgeon to directly command the force he/she wishes to produce. One may also purchase sensors that measure both force and torque, ie, 6-DOF force/torque sensors, and researchers continue to investigate how to provide surgeons with more natural-feeling interactions. Althoefer et al42 provide a good overview of force-sensing techniques commonly used in medical robotic applications and their limitations.
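
A common smoothing choice is a first-order (exponential) low-pass filter; the sketch below, with illustrative parameter values, shows how heavier smoothing directly buys more lag, which is the tradeoff described above.

```python
def lowpass(samples, alpha=0.05):
    """First-order exponential low-pass filter.

    Smaller alpha -> smoother output but more lag behind the true force;
    larger alpha -> faster response but more sensor noise passed through
    to the haptic interface.
    """
    filtered, out = samples[0], []
    for s in samples:
        filtered += alpha * (s - filtered)   # move a fraction toward each new sample
        out.append(filtered)
    return out

# A step from 0 N to 1 N: with alpha = 0.05 at a 1-kHz rate, the filtered
# output takes tens of milliseconds to settle -- the delay the surgeon feels.
step = [0.0] * 10 + [1.0] * 90
smoothed = lowpass(step)
```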

Displays

The majority of telesurgical systems in use or development today use custom displays that best suit the needs of the surgeons. This can include any number of 2-dimensional or 3-dimensional screens or other types of displays in any number of configurations that allow surgeons to be positioned as comfortably as possible with as much readily available pertinent information as possible. Riener and Harders49 outline the display types.

Software

As with most aspects of these emerging telesurgical systems, engineers custom design control software to fit the needs of the surgical team. Depending on whether one wants to track position, velocity, force, or some combination thereof, the control software must be able to provide the appropriate commands and/or switch between operating modes either automatically or at the behest of the surgeon. Software can often make up for budgetary shortcomings in terms of equipment; eg, if a 6-DOF haptic interface is too expensive for a given system, it is possible for the software to apply haptic illusions to the interface to make it seem as though forces are indeed rendered in 6 DOFs. Lederman and Jones50 give a more thorough treatment of haptic illusions in a virtual environment context.

Robotic software systems are complicated and expensive. They consist of thousands or even millions of lines of code, constitute the large majority of the cost of a robot, and provide the largest source of possible design errors that could affect the safety of the robot. The supervisory code and the control systems code represent 2 very different categories of software. The high-level supervisory code deals with the keyboard/mouse inputs for configuration of the system and visual interface outputs (including graphic user interfaces and graphic animations) and provides high-level command-and-control structure to the low-level control system. The supervisory code generally contains an enormous amount of code developed by computer software experts—either computer scientists or software engineers—using best practices and rules of thumb standard in the industry. The control system code, on the other hand, tends to be rather succinct, implementing mathematical formulas that describe the kinematic transformations and the stable (open or closed loop) controls that provide electric signals to the motors. In general, 2 separate computers implement 2 separate control systems for a teleoperation system: 1 controls the haptic interface and 1 controls the remote robot (and the human cerebellum implements a third that controls the hand). Electrical or mechanical engineers develop control systems code using precise mathematical formulations. The remainder of this article serves as a nonmathematical introduction to the field of control systems because this low-level control has the greatest effect on the particular “feel” of a system.


INTRODUCTION TO FEEDBACK CONTROL

Most robot arms move with rotational joints, but one typically commands the end effector of the robot to move in 3-DOF cartesian space. Thus, each robot controller must contain a mapping that tells the software what positions each joint must have, in degrees, so that the end effector (tool gripper) is in the correct cartesian position (and similar mappings exist for velocities and force/torque). These mappings are referred to as the kinematics; the reverse mappings, from cartesian space to joint positions, as inverse kinematics. Given the precise lengths of the robot links and their rotation directions, engineers find calculating these mappings to be straightforward. However, the algorithms must avoid impossible movements referred to as kinematic singularities (eg, trying to move directly “up” on the z axis or trying to produce a force directly “up” when an arm is straight and can move only in an arc) and must decide on how to position joints when > 1 joint configuration is possible to achieve the same cartesian position of the end effector (eg, elbow up or elbow down). Well-designed software will either keep systems away from their kinematic singularities or at least ignore impossible commands in these regions.
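
For a planar 2-link arm, both the elbow-up/elbow-down ambiguity and the straight-arm singularity appear directly in the inverse-kinematics formulas. The sketch below assumes hypothetical link lengths.

```python
import numpy as np

L1, L2 = 0.3, 0.25   # assumed link lengths, meters

def inverse_kinematics(x, y, elbow_up=True):
    """Joint angles (radians) placing a planar 2-link arm's tip at (x, y).

    Raises ValueError for unreachable targets; near c2 = +/-1 the arm is
    straight (or fully folded), a kinematic singularity where well-designed
    software must refuse or modify commands.
    """
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the work space")
    s2 = np.sqrt(1.0 - c2**2)             # two solutions: +s2 and -s2
    if not elbow_up:
        s2 = -s2                          # elbow-down configuration
    theta2 = np.arctan2(s2, c2)
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * s2, L1 + L2 * c2)
    return theta1, theta2
```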

Because these problems are well understood and precise mathematical formulas exist that are based on geometric constraints for kinematics, most research into improvements in performance focuses on the dynamics. In its simplest form, mechanical dynamics describe the relationship between forces and accelerations based on Newton’s Second Law of Motion, which we normally describe as follows: total net force = mass × acceleration. A rotational equivalent for this law also exists: total net torque = moment of inertia × angular acceleration. Typically, an electric motor provides torques in a robot arm, but the robot arm also experiences forces and torques from joint friction, friction or damping in the environment, elasticity of the environment, weight of a payload, and mass and moment of inertia of a payload. The internal dynamics of the electric motors themselves relate the voltage, current, velocity, and torque of the motor. However, the transients in the electric dynamics normally disappear very quickly compared with the motion of the actual physical machine. Thus, most control system designs simply decide on a torque that the motor should produce to supply a desired motion, assuming that very-low-level software or electronics will supply the appropriate voltage to the motor without difficulty.

The term automatic feedback control systems describes a field of engineering that addresses how to provide closed-loop control signals to a system to achieve desired responses and stability. Typically, the controller uses the errors (like the differences between desired/commanded forces and measured forces) to decide on appropriate control signal outputs (like currents to electric motors) that affect the plant (like haptic interface or remote robot). In a closed-loop control, the plant must have sensors (like optical encoders and/or force sensors) that can measure the resulting signals and provide feedback for the control.

Open-loop control provides simplicity in design and hardware/software implementation, but performance suffers considerably compared with closed-loop control; calibrations become inaccurate over time, and disturbances adversely affect the system. For example, an open-loop cruise control system in a car would be accurate only if the weight of the driver were exactly known and calibrated for, if there were no payload or passengers, if there were no wind, and if there were no incline. Even so, open-loop control would be precise for only a short time while the car was nearly brand new. It takes a closed-loop control to adjust for unknown and unmeasured factors affecting the system. A human makes a fine closed-loop control, keeping an eye on the speedometer while adjusting the throttle. However, an automatic cruise control reduces driver effort. In haptic teleoperation, certain mechanical designs enable the use of open-loop controls. Analogous to driving a car, the surgeon must look at resulting motions and feel resulting forces to close the loop himself/herself. NeuroArm,7 a neurosurgical robot at the Foothills Hospital in Calgary, uses such an open-loop configuration with piezoelectric motors that follow commanded positions precisely and force sensors that allow a surgeon to feel appropriate forces via 2 haptic interfaces. The design uses piezoelectric motors because, unlike conventional electric motors, they can operate within a magnetic resonance imaging field, allowing the use of nearly real-time magnetic resonance images during robotic surgery. Piezoelectric devices vibrate at an ultrasonic frequency when supplied with a voltage, and a particular microscale geometric design results in precise motion; it is known with high accuracy how far the actuator will move for a given duration over which voltage is supplied. Piezoelectric motors can attain accuracy on the nanometer scale and velocities of nearly 1 m/s. Thus, open-loop controls in neuroArm command the piezoelectric motors to move a precise distance, as commanded by the surgeon through the haptic interfaces. The surgeon can judge through visual feedback how much the end effector or tool has moved in relation to the tissue and is given an indication of the measured force through the interface. Note that the haptic interfaces use normal electric motors that can be supplied with controlled currents to approximately reproduce the measured force for the surgeon. In general, time delays (resulting from filtering and communication bottlenecks) and other undesired dynamics in a telesurgical system may cause too large a collision force when contacting a solid object; the surgeon will feel the solid object and pull back a fraction of a second later. This may result in damage to delicate and expensive force sensors in the gripper. In addition, without the ability to command force, excessive vibrations can occur when touching bone, as the commanded position oscillates between just above the bone surface and just below the bone surface.

If one introduces closed-loop force control in an attempt to solve these types of problem, one must ensure that other problems are not introduced. For instance, making the system more sensitive to a commanded force and less sensitive to a commanded position could increase the amount of overshoot when puncturing through stiff tissue into free space or softer tissue. Control strategies also exist that attempt to optimize both position and force simultaneously.

Other surgical robots in use today (eg, da Vinci; Intuitive Surgical, Inc, Sunnyvale, California) use position and/or velocity control in which a closed-loop control system causes the slave to track commanded positions and/or velocities from the master. The surgeon then uses visual feedback to determine required movements in the procedure. The current paradigm of surgical robotics requires the robotic manipulator to follow the position of the surgeon’s hand, in which case haptic interfaces need to be added to the existing motion control (an approach that constitutes the large majority of research and applications to date). However, it may be desired to provide an alternative pure force-tracking mode available to the surgeon that can be switched on, especially if this can be done in select DOFs. It might be desirable to switch to pure haptic control when touching soft tissue to evaluate its stiffness, when making an incision where accurate force is important in 1 DOF, or when interacting with solid bone and preventing motion in 1 DOF. Given the present ability to scale up force feedback arbitrarily in teleoperation, it is also possible that a neurosurgeon practiced in the art of telesurgery could learn to prefer pure haptics-based controls in some situations; eg, interacting with brain tissue as if it were heavy gravel might be helpful. The level of utility of haptic feedback has a nontrivial relationship to the characteristics of the task at hand, eg, whether the task is single DOF or multiple DOF, whether it is low force or high force, and for sensing or manipulation.

Haptic feedback helps task performance in different ways, depending on the levels of forces.51 At high levels of measured force, the user feels environment mechanical properties as passive physical constraints that serve as both safety barriers and intuitive guides for tools. At low levels of force, however, haptic feedback provides less benefit as a physical constraint and more benefit as a supplemental information source, requiring an increased level of awareness and cognitive processing by the user. Measuring the force on the remote robot and reflecting it back to the haptic interface is an open-loop control configuration, as is measuring a surgeon’s force on the hand controller and transmitting it to the robot. If one were instead to measure the force on the hand controller and compare it with the actual force experienced by the robot, resulting in a force error, a control system that attempted to drive that error to zero would be a closed-loop control. Applications in haptics to date have, by and large, used closed-loop controls for the position and/or velocity measurements only, leaving the force control in open loop. Current research, on the other hand, often proposes and tests adding closed-loop control to the force tracking as well. Typically, researchers borrow mathematical methods from electric circuit/network theory because the behavior of electric networks with an input voltage and output current in connection with other such networks is well understood. The ratio of voltage to current is called the impedance of the network. It turns out that mechanical systems have an analogous mathematical form in which force takes the place of voltage (both are thought of as efforts) and velocity takes the place of current (both are thought of as flows). Thus, the ratio of (applied) force to (resulting) velocity defines a mechanical impedance. In everyday language, people would normally describe this as the resistance they feel when pushing on an object. Free space has zero impedance; soft tissue has a small impedance; and a solid wall has infinite impedance.

Closed-loop controls have the potential to allow one to interact with the haptic interface in a way that feels exactly like one is touching the actual environment (ie, a natural feel), which is again referred to as transparency. Transparency can be viewed as a scale on which the performance of different systems may be compared, with full transparency being the ideal condition in which the feel of an environment is exactly reproduced by the haptic interface. With closed-loop controls, stability becomes an issue of concern; instability is evidenced as inappropriate behavior—for both the haptic interface and the robotic end effector—that ranges from buzzing vibrations to erratic movement to dangerous and destructive speeds.2 Stability can also be viewed somewhat as a scale in which systems can be unstable, marginally stable, or (ideally) robustly stable. In this context, a robustly stable system is one that is stable regardless of conditions, eg, for any possible range of input rather than for some limited range of input. Note that to eliminate stability concerns altogether, one would rely only on open-loop controls. Normally, there is a tradeoff between stability and transparency in closed-loop configurations.3 Transparency is sacrificed to guarantee stability and vice versa. The most common type of stability problem in a closed-loop control interacting with an unknown environment consists of a vibration that does not disappear, referred to as a limit cycle. The amplitude of vibration occurring in a limit cycle may actually begin to increase, in which case it is an unstable limit cycle (overshooting repetitively, in a vibration, will cause instability if the overshoot grows larger each time). Even small time delays can introduce these vibrations, as can certain mechanical properties like elasticity in the robot structure, gearbox dynamics (dead zone, backlash, and elasticity), elasticity in surgical tools held by the robot, and elasticity in the surgical environment (especially bone). One must also be aware that external vibrations, deliberate “malicious” effort on the part of the operator, or improper use can also cause instability. In teleoperation, the surgeon would normally have time to react, easing off the force or pulling back the robot from the environment. Control systems engineers understand the mathematical foundations of feedback and design controls that would not go unstable under any ordinary circumstances. In any case, automatic controls in commercial systems always come with fail-safes that are designed to immediately shut off power, or trip, when a severe instability occurs, ideally before any damage is caused. Unfortunately, the mathematical techniques used by control systems engineers to prevent instability typically reduce the level of transparency in the system. However, more advanced techniques such as those described in the Advanced Methods section can limit the degradation of transparency compared with simpler techniques.

In surgical teleoperation systems, stability concerns most often revolve around the loss of passivity. Passivity means that a system does not produce energy (it only absorbs energy or is energy neutral). Consider haptic teleoperation on a high-impedance object in which a force sensor measures the environment interaction, and a digital controller samples this measurement and feeds it back to the user at regular intervals (eg, every 1 millisecond).52 As the slave robot penetrates the environment, the sampled forces lag behind the true forces during each sampling interval, so the forces reflected to the user are too low; as the slave robot moves back out of the environment, the reflected forces are too high. Thus, the user’s legitimate expectation that a passive environment will not generate energy is violated. As the user probes the environment by pushing on and releasing the haptic interface, the energy injected by the digital controller makes the environment appear to emit energy, causing vibrations, an effect never observed when the same environment is touched directly by hand.
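
This energy leak can be demonstrated numerically. The sketch below, with assumed (nonclinical) numbers, integrates the work done on the user by a sampled virtual wall over 1 probe-and-release cycle; an ideal continuous spring would return exactly zero, but the sampled wall returns a positive value, ie, it behaves as an energy source.

    import math

    # Minimal sketch with assumed parameters: the wall force K*x is held
    # constant between samples (zero-order hold) while the probing motion
    # is continuous, so the force is too low on the way in and too high
    # on the way out.
    K, T, dt = 3000.0, 0.001, 1e-5  # stiffness (N/m), sample period, sim step (s)
    A, P = 0.01, 0.5                # probe amplitude (m) and cycle period (s)

    t, f_held, next_update, e_user = 0.0, 0.0, 0.0, 0.0
    while t < P:
        x = A * math.sin(2 * math.pi * t / P)                   # penetration
        v = (2 * math.pi * A / P) * math.cos(2 * math.pi * t / P)
        if t >= next_update:                 # sampled controller update
            f_held = K * max(x, 0.0)         # push only while inside wall
            next_update += T
        e_user += -f_held * v * dt           # work done on the user
        t += dt

    print(f"net energy delivered to user: {e_user * 1000:.2f} mJ")  # positive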

Link and joint elasticity in the robot, or elasticity in thin and/or cable-driven surgical instruments, can also cause vibrations. An example is the Zeus Surgical Robot System (Computer Motion Inc, Goleta, California), in which a 1-N force applied to the tip of one of its cantilevered instruments (straight endoscissors) causes a 15-mm tip deflection53 (an effective tip stiffness of only about 67 N/m). As surgical instruments become thinner, the effect of elasticity becomes more severe. In the presence of link or joint elasticity, control laws based on the assumption of a rigid robot may no longer be effective or accurate: gravity causes the robot to sag, and elasticity causes vibration. Control system designs must take elasticity into account to ameliorate the resulting steady-state errors, transient errors, and vibrations.
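
As a simple illustration (a deliberately simplified, hypothetical version of the model-based correction explored in reference 53), the measured tip force can be used to offset the commanded position by the deflection it predicts:

    # Minimal sketch; sign conventions are hypothetical, and a real scheme
    # would use a full beam model rather than a single lumped stiffness.
    k_tip = 1.0 / 0.015  # ~67 N/m from the Zeus example (1 N -> 15 mm)

    def corrected_command(x_desired, f_tip_measured):
        # The flexible instrument deflects by f/k under load, so offset
        # the rigid-robot command by the predicted deflection.
        deflection = f_tip_measured / k_tip
        return x_desired + deflection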

Passivity-Based Control

The simplest closed-loop controls calculate a control signal that is simply proportional to error. After the inverse kinematics are used to translate Cartesian errors (eg, millimeters for position error or newtons for force error) into joint errors (degrees for position error or newton-meters [Nm] for torque error), a so-called proportional control is as follows: motor torque command = constant × joint position error, where the constant is called a control gain and is selected by the control system designer to achieve stable operation with acceptable performance (if possible). A proportional-derivative control also includes the derivative of the error and contains 2 control gains: motor torque command = constant1 × joint position error + constant2 × joint velocity error. The advantages of adding the derivative (velocity) term include closely tracking a desired speed, reducing position overshoot, and improving stability by damping vibrations. Even when no desired velocity is available, the proportional-derivative control can use a value of zero for desired velocity and still achieve the improved stability. A term containing a control gain multiplying force error can also be added, but such “hybrid” controls (controlling force and velocity at the same time) must be designed very carefully to avoid instabilities. Moreover, 2 different controllers that both guarantee stability will not necessarily deliver the same haptic performance and may in fact exhibit very different degrees of smoothness when rendering haptic contact forces.54
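
In code, such a joint-level proportional-derivative law is only a few lines; the gains and signal values below are hypothetical, and real controllers add gravity compensation, saturation, and safety checks.

    # Minimal sketch of a proportional-derivative (PD) joint control law.
    def pd_torque(q_des, q_meas, qd_des, qd_meas, kp=50.0, kd=2.0):
        position_error = q_des - q_meas    # joint position error (rad)
        velocity_error = qd_des - qd_meas  # joint velocity error (rad/s)
        return kp * position_error + kd * velocity_error  # motor torque (Nm)

    # Even with no velocity reference, qd_des = 0 still adds damping:
    tau = pd_torque(q_des=0.5, q_meas=0.45, qd_des=0.0, qd_meas=0.2)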

The important contribution of passivity theory has been to show the stability of a properly designed proportional-derivative control (and some hybrid controls) when interacting with an unknown surgeon and an unknown environment, provided that both the surgeon and the environment are passive. In humans, muscles constitute active components, but all other tissue is passive; in practice, a cooperative operator grasping an interface behaves passively, so the operator can be treated as passive under the assumption that he or she does not deliberately act to destabilize the system. A passive system always remains stable, and a system composed of interconnected passive systems is also stable.
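
Formally, a system with a force/velocity port is passive if the energy that can be extracted from it never exceeds the energy initially stored. In standard notation (not specific to any cited controller),

    \int_0^t F(\tau)\, v(\tau)\, d\tau \;\geq\; -E_0 \quad \text{for all } t \geq 0,

where F and v are the force and velocity at the port (with power flowing into the system counted as positive) and E_0 is the initial stored energy.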

Passivity approaches can also include haptics (ie, a force-feedback error term) using analogies to electric network theory. Specifically, 2-port network theory from electric circuit analysis establishes the passivity properties. This approach has been popular because the time delays inherent in the system can also be dealt with using the theory of electric transmission lines; it turns out that the control can account for a fixed, constant time delay in a guaranteed-stable (passive) manner. However, this approach reduces transparency significantly, and much current research revolves around reducing this tradeoff. Another drawback of the passivity approach stems from the environment remaining unknown to the control, so control gains must be chosen in advance to suit a certain type of environment, eg, either soft tissue or bone. The mathematical theory also results in closed-loop behavior that the surgeon must learn and that may be counterintuitive in some instances; ie, it is not a controller that could be used for the first time by a surgeon with no experience in teleoperation. In addition, the passivity constraints can be violated in certain situations, because of the surgeon, the environment, or the communications network, and the software must be able to detect such situations and change the control in response.
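
A widely used construction for achieving delay-insensitive passivity (a standard textbook technique, not necessarily the scheme inside any particular commercial system) is the wave-variable, or scattering, transformation. Instead of transmitting force F and velocity \dot{x} directly, each side transmits the combinations

    u = \frac{b\dot{x} + F}{\sqrt{2b}}, \qquad v = \frac{b\dot{x} - F}{\sqrt{2b}},

where b > 0 is a tuning parameter with units of damping. Because the power flowing through the channel can be written as F\dot{x} = (u^2 - v^2)/2, a communication line that merely delays u and v by a constant time stores nonnegative energy and therefore remains passive for any fixed delay; the price is a sluggish, damped feel that reduces transparency.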

After stability is established, concern shifts to achieving the highest possible level of transparency. In evaluating transparency, a distinction needs to be made on the basis of the purpose of teleoperation. Although hard-contact telerobotic applications (eg, bone milling) involve static regulation of force, soft-tissue applications (eg, probing tissue to determine its compliance) require dynamic position/force tracking and impedance matching. The reason becomes clear when one notes that the operator must detect tissue compliance during the probing process, not after the tissue has been completely deformed. Control methods are therefore sought that both stabilize the teleoperation system and ensure dynamic local/remote position and force tracking. In the following, we discuss 2 architectures that attempt to achieve this with varying levels of success.

2-Channel Architecture

The 2-channel architecture allows transmission of 1 signal and reflection of 1 signal in teleoperation, predominantly either position-position or position-force. In a position-position architecture, there are no force sensor measurements; the controller tries to minimize the difference between the positions of the haptic interface and the robot manipulator (end effector), thus reflecting a force proportional to this difference to the user once the robot makes contact with an object. Position-position control achieves relatively good position tracking between the haptic interface and the manipulator, but its force tracking performance is poor. In a position-force architecture, a force sensor measures the interaction between the remote robot and the environment for reflection to the user while the robot tracks the position of the haptic interface. Position-force control achieves relatively good position tracking, and its force tracking is ideally exact because the measured environment force is reflected directly; nevertheless, neither scheme achieves full transparency.
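
The sketch below contrasts the 2 schemes; the gains are hypothetical, and the slave-side position controller is shown as a bare proportional term for brevity.

    # Minimal sketch of the 2 dominant 2-channel schemes (assumed gains).
    def position_position(x_master, x_slave, kp=800.0):
        f_reflected = kp * (x_master - x_slave)  # synthesized; no force sensor
        f_slave_cmd = kp * (x_master - x_slave)  # slave tracks the master
        return f_reflected, f_slave_cmd

    def position_force(x_master, x_slave, f_env_measured, kp=800.0):
        f_reflected = f_env_measured             # measured force, reflected directly
        f_slave_cmd = kp * (x_master - x_slave)  # slave tracks the master
        return f_reflected, f_slave_cmd

In free space, position-position reflects any tracking lag as a spurious drag force, whereas position-force reflects nothing until the sensor registers contact, which is one reason the 2 schemes feel so different.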

4-Channel Architecture

A 4-channel architecture for teleoperation control both transmits and reflects the force and velocity signals (or a weighted summation of force and velocity). Whereas 2-channel controllers stop short of achieving full transparency, the 4-channel architecture can theoretically reach ideal velocity and force tracking between the local interface and remote proxy, giving the user an accurate perception of the impedance of the environment. Note that the system tracks velocity rather than position directly; although position control remains the ideal, position is tracked indirectly by integrating the velocity into a desired position. Although the 4-channel system achieves full transparency as an idealized mathematical model, it can experience stability problems in real implementations, and unfortunately, the modifications needed to stabilize the control design end up reducing transparency. The inevitable presence of (even small) time delays is the largest problem for stability, but physical realities such as elasticity in the robot joints, gearbox dynamics like backlash and dead zone, actuator saturation, and bounds on sampling rates can also adversely affect stability. Conversely, some physical damping effects such as viscous friction help stability and may be enough to keep the system stable when it is not theoretically stable. Although the components within any commercial system should be well engineered, the surgeon must still be aware of external factors that cannot explicitly be accounted for in the control design: flexibility of tools, the passivity of the operator, and the passivity of the environment. One of the main drawbacks of passivity-based stability analyses is their overconservatism; they assume that the impedances of the operator and environment range from zero to infinity, which is not the case for human operators. Although this conservatism provides a larger margin of safety in case 1 or more of the above factors hamper system stability, it reduces system transparency.
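
For reference, in the classic linear analysis of the 4-channel architecture (a standard textbook result, stated here in generic notation rather than as the design of any cited system), the local and remote robots have impedances Z_m and Z_s, local controllers C_m and C_s, and 4 communication channels C_1 through C_4. Full transparency is achieved when

    C_1 = Z_s + C_s, \qquad C_2 = 1, \qquad C_3 = 1, \qquad C_4 = -(Z_m + C_m),

conditions that demand exact knowledge of the robot dynamics and are degraded by any channel delay, which is why real implementations fall short of the ideal.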

Advanced Methods

The popularity of passivity techniques stems from the fact that more traditional control techniques would require a full model of everything in the loop, including the human and the environment, to design for stability and performance, yet having a mathematical model of the human and environment is unrealistic. New advances in adaptive control techniques, however, may allow the system to adapt to, or learn, the characteristics of unmodeled systems as it operates. An adaptive control trains an artificial neural network (ie, updates certain parameters in the control law) online, in real time, on the basis of errors and information from the sensors. For instance, neural networks have been used to predict environment forces55 and remote robot forces and velocities.56 Recently, adaptive controllers for general nonlinear teleoperators have been developed that adapt to unknown robot dynamics while satisfying the passivity criteria.57,58 A similar approach using neural networks to estimate both unknown local interface and remote robot dynamics can preserve the passivity of the system.59 To adapt to unknown human and/or environment dynamics (in addition to unknown robot dynamics), one can assume that these dynamics are linear and design separate adaptive laws for the local and remote sides.60,61 Even the uncertain lengths of the tools can be included for adaptation to kinematics.62 However, environments are unlikely to be linear; surgical applications require interacting with nonlinear viscoelastic tissue and experiencing discontinuous collisions and puncture scenarios. One idea is to use pure force control with an adaptive neural-network structure that adapts nearly instantly to new nonlinear environments.63 Another approach is to lump external effects together as disturbances and ensure robustness to those disturbances rather than try to learn them.64
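
To give a flavor of the structure of such schemes (a generic illustration, not the algorithm of any cited paper), the sketch below implements an online radial-basis-function network that learns to predict environment force from tool penetration, updating its weights at every control step from the prediction error.

    import numpy as np

    # Minimal sketch with assumed dimensions, centers, and learning rate.
    centers = np.linspace(-0.02, 0.02, 9)  # RBF centers over penetration (m)
    width = 0.01                           # RBF width (m)
    weights = np.zeros(len(centers))
    learning_rate = 0.5

    def features(x):
        # Gaussian basis functions centered along the penetration axis.
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

    def predict_force(x):
        return float(weights @ features(x))

    def adapt(x, f_measured):
        # Gradient step on the squared prediction error, run every cycle.
        global weights
        error = f_measured - predict_force(x)
        weights += learning_rate * error * features(x)
        return error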


CONCLUSION

We have outlined the basic ideas of teleoperated robots that incorporate haptics, a technology that allows surgeons operating haptic interfaces to feel the forces experienced by the robot as it touches tissue or bone. Haptic interfaces have been found to improve the surgical experience, and current research promises systems that will give surgeons an increasingly realistic feel when controlling surgical robots. However, machines work differently from biological systems, and these differences have been highlighted here so that surgeons can understand some of the counterintuitive behaviors of remote haptic interaction. The basic principles of robot geometry, robot movements, sensors, and actuators have been outlined so that their mechanical properties and limitations, and their similarities to and differences from the equivalent human systems, can be understood. In addition, the basic operation of low-level open-loop and feedback control systems has been outlined, giving surgeons a conceptual understanding of the performance limitations found in such systems. Moreover, surgeons will learn that, unlike with biological coordination, instabilities may occur that pose real safety concerns with such automation systems; knowledge of how these instabilities can be generated will allow surgeons to control their robots more safely. A better understanding of all the components and principles behind such robotic systems will also enable surgeons to ask manufacturers and salespeople more relevant questions, assisting the decision-making process during the purchase of surgical robotic equipment.

Disclosures

R. L’Orsa is supported by the Natural Science and Engineering Research Council of Canada (NSERC) and Alberta Innovates-Technology Futures in the form of graduate student scholarships. Dr Macnab holds a University of Calgary Seed Grant for “Testing of Novel Neural-Adaptive Controls” and has industrial funding from the City of Calgary Water Resources for “Automation of a Wastewater Treatment Plant.” Dr Tavakoli holds an NSERC Discovery Grant for “Robotic Assistance for Improving Surgeries and Therapies” and holds an NSERC Collaborative Research and Development Grant for “Network-Based Haptic Telepresence Technology for In-Home Rehabilitation” in (industrial) collaboration with Quanser Inc, the University of Western Ontario, and Glenrose Rehabilitation Hospital. The authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.


REFERENCES

1. Lederman SJ, Klatzky RL. Hand movements: a window into haptic object recognition. Cogn Psychol. 1987;19(3):342–368.

2. Hannaford B, Okamura A. Haptics. In: Siciliano B, Khatib O, eds. Handbook of Robotics. New York, NY: Springer; 2008.

3. Hokayem P, Spong M. Bilateral teleoperation: an historical survey. Automatica. 2006;42(12):2035–2057.

4. Malone HR, Syed ON, Downes MS, D’Ambrosio AL, Quest DO, Kaiser MG. Simulation in neurosurgery: a review of computer-based simulation environments and their surgical applications. Neurosurgery. 2010;67(4):1105–1116.

5. Zamorano L, Li Q, Jain S, Kaur G. Robotics in neurosurgery: state of the art and future technological challenges. Int J Med Robot. 2004;1(1):7–22.

6. Delorme S, Laroche D, DiRaddo R, Del Maestro RF. NeuroTouch: a physics-based virtual simulator for cranial microneurosurgery training. Neurosurgery. 2012;71(1 suppl operative):ons32–ons42.

7. Lang MJ, Greer AD, Sutherland GR. Intra-operative robotics: NeuroArm. Acta Neurochir Suppl. 2011;109:231–236.

8. Marescaux J, Leroy J, Rubino F, et al. Transcontinental robot-assisted remote telesurgery: feasibility and potential applications. Ann Surg. 2002;235(4):487–492.

9. Challacombe B, Wheatstone S. Telementoring and telerobotics in urological surgery. Curr Urol Rep. 2010;11(1):22–28.

10. Haidegger T, Benyó Z. Extreme telesurgery. In: Baik SH, ed. Robot Surgery. Vienna, Austria: InTech; 2010.

11. Anvari M, McKinley C, Stein H. Establishment of the world’s first telerobotic remote surgical service: for provision of advanced laparoscopic surgery in a rural community. Ann Surg. 2005;241(3):460–464.

12. Mendez I, Hill R, Clarke D, Kolyvas G, Walling S. Robotic long-distance telementoring in neurosurgery. Neurosurgery. 2005;56(3):434–440.

13. Tian ZM, Lu WS, Wang TM, et al. Clinical application of robotic tele-manipulation system in stereotactic surgery [in Chinese]. Zhonghua Wai Ke Za Zhi. 2007;45(24):1679–1681.

14. Mitsuishi M, Morita A, Sugita N, et al. Master-slave robotic platform and its feasibility study for micro-neurosurgery [published online ahead of print May 16, 2012]. Int J Med Robot. doi:10.1002/rcs.1434.

15. Arata J, Tada Y, Kozuka H, et al. Neurosurgical robotic system for brain tumor removal. Int J Comput Assist Radiol Surg. 2011;6(3):375–385.

16. Comparetti MD, Vaccarella A, De Lorenzo D, Ferrigno G, De Momi E. Multi-robotic approach for keyhole neurosurgery: the ROBOCAST project. Paper presented at: Joint Workshop on New Technologies for Computer/Robot Assisted Surgery; July 11-13, 2011; Graz, Austria.

17. Hongo K, Goto T, Miyahara T, Kakizawa Y, Koyama J, Tanaka Y. Telecontrolled micromanipulator system (NeuRobot) for minimally invasive neurosurgery. Acta Neurochir Suppl. 2006;98:63–66.

18. Goto T, Miyahara T, Toyoda K, et al.. Telesurgery of microscopic micromanipulator system “NeuRobot” in neurosurgery: interhospital preliminary study. J Cent Nerv Syst Dis. 2009;2009(1):45–53.

19. Biggs SJ, Srinivasan MA. Haptic interfaces. In: Stanney K, ed. Handbook of Virtual Environments. London, UK: Lawrence Erlbaum, Inc.; 2002.

20. Laycock SD, Day AM. A survey of haptic rendering techniques. Comput Graph Forum. 2007;26(1):50–65.

21. Salisbury K, Conti F, Barbagli F. Haptic rendering: introductory concepts. IEEE Comput Graph Appl. 2004;24(2):24–32.

22. Lum MJ, Rosen J, King H, et al. Teleoperation in surgical robotics: network latency effects on surgical performance. Paper presented at: International Conference of the IEEE Engineering in Medicine and Biology Society; September 3-6, 2009; Minneapolis, MN.

23. Anvari M, Broderick T, Stein H, et al. The impact of latency on surgical precision and task completion during robotic-assisted remote telepresence surgery. Comput Aided Surg. 2005;10(2):93–99.

24. Tendick F, Sastry SS, Fearing RS, Cohn M. Applications of micromechatronics in minimally invasive surgery. IEEE/ASME Trans Mechatronics. 1998;3(1):34–42.

25. Muller-Wittig W. Virtual reality in medicine. In: Kramme R, Hoffmann KP, Pozos RS, eds. Springer Handbook of Medical Technology. Berlin, Germany: Springer; 2011.

26. Tavakoli M, Patel RV, Moallem M. Haptic interaction in robot-assisted endoscopic surgery: a sensorized end-effector. Int J Med Robot. 2005;1(2):53–63.

27. Wagner C, Stylopoulos N, Howe R. The role of force feedback in surgery: analysis of blunt dissection. Paper presented at: 10th Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems; March 24-25, 2002; Orlando, FL.

28. Tholey G, Desai JP, Castellanos AE. Force feedback plays a significant role in minimally invasive surgery: results and analysis. Ann Surg. 2005;241(1):102–109.

29. Tavakoli M, Aziminejad A, Patel RV, Moallem M. High-fidelity bilateral teleoperation systems and the effect of multimodal haptics. IEEE Trans Syst Man Cybern B Cybern. 2007;37(6):1512–1528.

30. Ruurda JP, Broeders IA, Pulles B, Kappelhof FM, van der Werken C. Manual robot assisted endoscopic suturing: time-action analysis in an experimental model. Surg Endosc. 2004;18(8):1249–1252.

31. Joice P, Hanna GB, Cuschieri A. Errors enacted during endoscopic surgery: a human reliability analysis. Appl Ergon. 1998;29(6):409–414.

32. Harwell RC, Ferguson RL. Physiologic tremor and microsurgery. Microsurgery. 1983;4(3):187–192.

33. Ghorbanian A, Zareinejad M, Rezaei SM, Sheikhzadeh H, Baghestan K. A novel control architecture for physiological tremor compensation in teleoperated systems [published online ahead of print May 16, 2012]. Int J Med Robot. doi:10.1002/rcs.1436.

34. Rossi A, Trevisani A, Zanotto V. A telerobotic haptic system for minimally invasive stereotactic neurosurgery. Int J Med Robot. 2005;1(2):64–75.

35. Abbott JJ, Marayong P, Okamura AM. Haptic virtual fixtures for robot-assisted manipulation. In: Thrun S, Brooks R, Durrant-Whyte H, eds. Springer Tracts in Advanced Robotics. Vol 28. Berlin, Germany: Springer-Verlag; 2007.

36. Hagn U, Nickl M, Jorg S, et al. The DLR MIRO: a versatile lightweight robot for surgical applications. Ind Robot. 2008;35(4):324–336.

37. Kim S, Chung J, Yi BJ, Kim YS. An assistive image-guided surgical robot system using O-arm fluoroscopy for pedicle screw insertion: preliminary and cadaveric study. Neurosurgery. 2010;67(6):1757–1767.

38. Liu D, Wang T. A virtual reality training system for robot assisted neurosurgery. Paper presented at: 16th International Conference on Artificial Reality and Telexistence; November 29-December 1, 2006; Hangzhou, China.

39. Westebring-van der Putten EP, Goossens RH, Jakimowics JJ, Dankelman J. Haptics in minimally invasive surgery: a review. Minim Invasive Ther Allied Technol. 2008;17(1):3–16.

40. van der Meijden OA, Schijven MP. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: a current review. Surg Endosc. 2009;23(6):1180–1190.

41. Yip MC, Tavakoli M, Howe RD. Performance analysis of a haptic telemanipulation task under time delay. Adv Robot. 2011;25(5):651–673.

42. Althoefer K, Liu H, Puangmali P, Zbyszewski D, Noonan D, Seneviratne L. Force sensing in medical robotics. In: Bradley D, Russell D, eds. Mechatronics in Action. London, UK: Springer-Verlag; 2010.

43. Hayward V, Astley OR, Cruz-Hernandez M, Grant D, Robles-De-La-Torre G. Haptic interfaces and devices. Sensor Rev. 2004;24(1):16–29.

44. Sheridan TB, Ferrell WR. Remote manipulative control with transmission delay. IEEE Trans Hum Factors Electronics. 1963;4(1):25–29.

45. Dwivedi J, Mahgoub I. Robotic surgery: a review on recent advances in surgical robotic systems. Paper presented at: Florida Conference on Recent Advances in Robotics; 2012; Boca Raton, FL.

46. Motooka W, Nozaki T, Mizoguchi T, et al. Development of a 16-DOF telesurgical forceps master/slave robot with haptics. Paper presented at: 36th Annual Conference of the IEEE Industrial Electronics Society; 2010; Phoenix, AZ.

47. Raoufi C, Goldenberg AA, Kucharcyzk W. A new hydraulically/pneumatically actuated MR-compatible robot for MRI-guided neurosurgery. Paper presented at: 2nd International Conference on Bioinformatics and Biomedical Engineering; May 16-18, 2008; Shanghai, China.

48. De Lorenzo D, De Momi E, Dyagilev I, et al. Force feedback in a piezoelectric linear actuator for neurosurgery. Int J Med Robot. 2011;7(3):268–275.

49. Riener R, Harders M. VR for planning and intraoperative support. In: Virtual Reality in Medicine. Vol XII. London, UK: Springer-Verlag; 2012.

50. Lederman SJ, Jones LA. Tactile and haptic illusions. IEEE Trans Haptics. 2011;4(4):273–294.

51. Wagner C, Howe R. Force feedback benefit depends on experience in multiple degree of freedom robotic surgery task. IEEE Trans Robot. 2007;23(6):1235–1240.

52. Jazayeri A, Tavakoli M. A passivity criterion for sampled-data bilateral teleoperation systems. Paper presented at: World Haptics Conference; 2011; Istanbul, Turkey.

53. Beasley RA, Howe RD. Model-based error correction for flexible robotic surgical instruments. Paper presented at: Robotics: Science and Systems Conference; June 8-11, 2005; Cambridge, MA.

54. Semmoloni J, Manganelli R, Formaglio A, Prattichizzo D. Control design issues for microinvasive neurosurgery teleoperator system. Paper presented at: International Conference on Advanced Robotics; June 22-26, 2009; Munich, Germany.

55. Smith AC, Mobasser F, Hashtrudi-Zaad K. Neural-network-based contact force observers for haptic applications. IEEE Trans Robot. 2006;22(6):1163–1175.

56. Minh VT, Hashim FB. Time forward observer based adaptive controller for a teleoperation system. Int J Control Autom Syst. 2011;9(3):470–477.

57. Chopra N, Spong MW, Lozano R. Synchronization of bilateral teleoperators with time delay. Automatica. 2008;44(8):2142–2148.

58. Nuno E, Ortega R, Basanez L. An adaptive controller for nonlinear teleoperators. Automatica. 2010;46(1):155–159.

59. Forouzantabar A, Talebi HA, Sedigh AK. Adaptive neural network control of bilateral teleoperation with constant time delay. Nonlinear Dyn. 2012;67(2):1123–1134.

60. Zhu WH, Salcudean SE. Stability guaranteed teleoperation: an adaptive motion/force control approach. IEEE Trans Automat Contr. 2000;45(11):1951–1969.

61. Malysz P, Sirouspour S. Nonlinear and filtered force/position mappings in bilateral teleoperation with application to enhanced stiffness discrimination. IEEE Trans Robot. 2009;25(5):1134–1149.

62. Liu X, Tavakoli M, Huang Q. Nonlinear adaptive bilateral control of teleoperation systems with uncertain dynamics and kinematics. Paper presented at: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems; October 18-22, 2010; Taipei, Taiwan.

63. Richert D, Macnab CJ, Pieper JK. Adaptive haptic control for telerobotics transitioning between free, soft, and hard environments. IEEE Trans Syst Man Cybern A Syst Hum. 2012;42(3):558–570.

64. Mohammadi A, Tavakoli M, Marquez HJ. Disturbance observer-based control of non-linear haptic teleoperation systems. IET Control Theory Appl. 2011;5(18):2063–2074.

Keywords:

Haptic interfaces; Haptics; Neurosurgical robots; Robot control; Teleoperation; Telerobotics; Telesurgery

Copyright © by the Congress of Neurological Surgeons
