A Randomized Trial of Simulation-Based Versus Conventional Training of Dental Student Skill at Interpreting Spatial Information in Radiographs

Nilsson, Tore A. DDS; Hedman, Leif R. PhD; Ahlqvist, Jan B. PhD

Simulation In Healthcare: The Journal of the Society for Simulation in Healthcare: October 2007 - Volume 2 - Issue 3 - p 164-169
doi: 10.1097/SIH.0b013e31811ec254
Empirical Investigations

Introduction: A radiology simulator has been developed. We tested the simulator with students in an oral radiology program for training interpretation of spatial relations in radiographs utilizing parallax. The aim of the study was to compare the learning outcome in interpretative skill after training in the simulator versus after conventional training.

Methods: Fifty-seven dental students voluntarily participated in a randomized experimental study. The participants' proficiency in interpreting spatial information in radiographs and their visual-spatial ability were assessed. Proficiency was assessed with a test instrument designed by the authors, and visual-spatial ability with the Mental Rotations Test, version A (MRT-A). Randomization to training group was based on pretraining proficiency test results. The experimental group trained in the simulator and the control group received conventional training; training lasted 90 minutes for both groups. Immediately after training, a second proficiency test was administered.

Results: The proficiency test results were significantly higher after training for the experimental group (P ≤ 0.01), but not for the control group. Univariate variance analysis of difference in proficiency test score revealed a significant interaction effect (P = 0.03) between training group and MRT-A category; in the experimental group there was a stronger training effect among students with low level of MRT-A.

Conclusions: Training in the simulator improved skill in interpreting spatial information in radiographs when evaluated immediately after training. For individuals with low visual-spatial ability, simulator-based training seems to be more beneficial than conventional training.

From the Oral and Maxillofacial Radiology, Department of Odontology (T.A.N., J.B.A.), and the Skill Acquisition Lab, Department of Psychology (L.R.H.), Umeå University, Umeå, Sweden.

Reprints: Tore Nilsson, Oral and Maxillofacial Radiology, Department of Odontology, Umeå University, SE-901 87 Umeå, Sweden (e-mail:

Authors Ahlqvist and Nilsson disclose a financial relationship as stockholders of Qbion AB, Sweden.

A radiology simulator based on virtual reality (VR) technology has been designed.1 In the simulator, the user can perform and analyze plain-film radiographic examinations. An obvious benefit of simulation-based training in radiology is that training can be performed in a radiation-free environment. VR technology also offers possibilities for visualization, feedback, and unlimited training that are not available with conventional training methods.2 In addition, simulation-based training is thought to have the potential to improve educational results.

An oral radiology program for training interpretative skills utilizing the tube shift technique was developed. The technique is based on parallax and is used in oral radiology to determine the spatial relationships of objects when radiographs cannot be obtained from perpendicular projection angles.3

Interpretation of spatial relations in radiographs utilizing parallax is a rather complicated skill to acquire. It not only demands knowledge of topographic anatomy and radiographic projection geometry, but also an understanding of the nature of the radiographic representation and the parallax phenomenon. Conventional educational methods on this topic are normally based on analysis of authentic radiographic examinations and training is limited to theoretical reasoning. Our educational experience shows, however, that a number of students have difficulty efficiently acquiring the required skills. Therefore, educational methods need to be improved. Research has shown that learning to interpret spatial information in radiographs is more demanding for individuals with low visual-spatial abilities.4,5 Therefore, when evaluating new educational methods, it is of interest to consider what implications such methods will have for individuals with various degrees of visual-spatial ability.

The aim of this study was to evaluate the effect of simulator-based training on the skill of interpreting spatial information in radiographs. We compared improvement after training in the VR radiology simulator with improvement after conventional training. In addition, the results were analyzed at the subgroup level for groups with low, medium, and high visual-spatial ability.

Materials and Methods


The study was a randomized experimental study with volunteer dental students. It was designed in accordance with the ethical principles of the Helsinki declaration and approved by the University Ethical Board. Informed consent was obtained after the participants received information regarding study design and protocol. To be included in the study, the participants were required to have passed the final anatomy examination. They were also required to have passed the oral radiology course where the principle of tube shift technique is taught, but were not required to have attended the advanced oral radiology course. The study was performed at the Oral and Maxillofacial Radiology Department at the University Clinic.


The Simulator

The simulator was a prototype based on software for simulation of radiographic projections. The virtual environment (VE) presented three objects: a patient, a dental x-ray machine, and an intraoral film. The patient model was made up of a transparent torso where the tooth arches with complete teeth (including the roots) were visualized (Fig. 1). The patient model consisted of two parts, one invisible and one visible. The invisible one was a high-resolution computed tomography (CT) radiology examination data matrix of a dry skull, and the visible one was a polygon model of the tooth arches rendered from the actual CT data. The software rendered geometrically correct radiographs from the individual positions of the patient, the x-ray film, and the x-ray machine.1
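
The rendering principle can be illustrated with a deliberately simplified sketch. The simulator computes geometrically correct perspective projections from the CT data;1 the fragment below is our own illustration, not the simulator's code (the function name `drr_parallel` is invented, and a parallel beam is assumed instead of perspective projection). It shows only the core idea of a digitally reconstructed radiograph: attenuation values are summed along the beam direction and mapped through the exponential attenuation law.

```python
import numpy as np

def drr_parallel(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Digitally reconstructed radiograph, parallel-beam simplification:
    sum the attenuation coefficients along one axis of the CT volume and
    apply the exponential attenuation law I = I0 * exp(-sum(mu * dx))."""
    return np.exp(-volume.sum(axis=axis))

# A toy 3-slice "volume" with a dense object in the middle slice:
volume = np.zeros((3, 4, 4))
volume[1, 1:3, 1:3] = 2.0            # higher attenuation -> darker film region
radiograph = drr_parallel(volume)    # shape (4, 4), values in (0, 1]
```

A real renderer, like the one validated in reference 1, must instead cast diverging rays from the focal spot through the volume toward each film pixel, but the attenuation integral along each ray is the same idea.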

Figure 1.

The interface hardware consisted of two monitors, a three-dimensional mouse, and a tracker system. One monitor presented the VE and the other monitor the rendered radiographs. The VE was presented stereoscopically, and the users wore shutter glasses (CrystalEyes, StereoGraphics Corporation, San Rafael, CA). Navigation in the scene was performed with a three-dimensional mouse (Spaceball 2003, 3Dconnexion Inc., San Jose, CA), and interaction with the individual objects in the scene was performed by means of a tracker system (Fastrak, Polhemus, Colchester, VT).


Training Program

A training program for interpretation of spatial relations in radiographs utilizing parallax was developed. The rationale for this procedure derives from the manner in which the relative positions of radiographic images of two separate objects change when the projection angle at which the images are made is changed.3 Figure 2 shows two intraoral radiographs, exposed at different angles, of an embedded tooth. The position of the tooth, relative to the adjacent tooth roots, can be deduced if the difference in projection between the radiographs is known. If the different positions of the x-ray machine are not known, the change in projection can be deduced from the relative change in position of anatomic details seen in the images.
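
The geometry behind this rule can be sketched numerically. The following 2-D fragment is our own illustration under assumed dimensions (film plane at height 0, focal spot 200 mm above it; the function name `image_x` is invented): when the tube is shifted, the image of an object lying farther from the film is displaced more, and in the direction opposite the tube shift, than the image of an object near the film, so the relative displacement reveals which object is deeper.

```python
def image_x(source_x: float, obj_x: float, obj_height: float, source_height: float) -> float:
    """Project an object point onto the film (y = 0) along the ray from the
    focal spot at (source_x, source_height). Similar triangles give the image x."""
    return source_x + (obj_x - source_x) * source_height / (source_height - obj_height)

H = 200.0                        # assumed focal-spot height above the film, mm
near_film = (0.0, 5.0)           # (x, height): object close to the film
far_film = (0.0, 40.0)           # object farther from the film, same x

for tube_x in (0.0, 20.0):       # expose once, then shift the tube 20 mm
    dx = image_x(tube_x, *far_film, H) - image_x(tube_x, *near_film, H)
    print(f"tube at {tube_x:+5.1f} mm: far object displaced {dx:+.2f} mm relative to near object")
```

With the tube at its original position the two images coincide; after the shift, the deeper object's image has moved against the tube shift relative to the film-near object, which is exactly the cue the exercises train students to read.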

Figure 2.

The training program had four structured exercises, all with a stepwise design. Each exercise started with the selection of the area to be examined. The simulator responded by automatically displaying a radiograph of the chosen area. The positions of the x-ray machine and film were simultaneously displayed in the VE. The continuation of the exercise was dependent on which exercise was chosen. The available exercises are described below.


Analyze Beam Direction

The goal was to acquire the skill to deduce the change in projection between two radiographs by analyzing the change in the relative position of anatomic details between the two images. After the initial radiograph was displayed, a second radiograph of the same area was displayed. The second image was rendered with minor changes in projection angle and film position. The positions of the x-ray machine and film were randomly chosen by the simulator program but were not visualized. The user was asked to move the x-ray machine from its original position to the position where it was thought to have been when the second radiograph was exposed. After the user moved the x-ray machine to the new position, a third radiograph reflecting the actual position of the x-ray machine was displayed. Feedback was given as the angulation error and as a visual comparison among the three simulated radiographs.


Ordinary Radiography

The goal was to acquire the skill to deduce the relative depth position of an object displayed in pairs of radiographs when the difference in projection angle was decided by the user. The initial radiograph displayed an artificial spherical radiopaque object situated in a random position in the jaw. The sphere was not visualized in the VE. The user exposed a second radiograph from any desired projection angle. From the information in the two images, the user was asked to deduce the three-dimensional position of the sphere in the jaw. The user then fetched a blue marking sphere and placed it in the correct position in relation to the roots of the teeth. Immediate feedback was given by the simulator, which revealed the correct position of the radiopaque sphere in the jaw and the distance error.
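
The localization task this exercise trains can be expressed geometrically: each radiograph constrains the sphere to lie on the ray from the focal spot to its image, and two exposures from different angles fix its position at the intersection of the two rays. The 2-D fragment below is our sketch of that principle, not the simulator's code (the name `localize_2d` and the dimensions are assumptions; the film is the line y = 0, the focal spot sits at a known height above it, and the two rays are assumed not to be parallel).

```python
def localize_2d(source1: float, image1: float,
                source2: float, image2: float,
                source_height: float) -> tuple[float, float]:
    """Recover an object's (x, height) from two exposures by intersecting
    the two focal-spot -> image rays. Each ray is parametrized from the
    film point toward its source; equal heights force equal parameters t."""
    t = (image2 - image1) / ((source1 - image1) - (source2 - image2))
    x = image1 + t * (source1 - image1)
    return x, t * source_height

# Example: a sphere at (0, 40) under a 200 mm focal-spot height images at
# x = 0 with the tube at x = 0, and at x = -5 with the tube shifted to x = 20.
x, z = localize_2d(0.0, 0.0, 20.0, -5.0, 200.0)   # recovers (0.0, 40.0)
```

The distance between the recovered point and the user's placed marking sphere is then the "distance error" kind of feedback described above.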

Back to Top | Article Outline


The goal and the basic design were the same as for “ordinary radiography,” with one important difference. In this exercise, dynamic radiographs were rendered when the x-ray machine was moved. When the exercise started, two identical radiographs with the radiopaque sphere were presented. When the x-ray machine was moved, one of the radiographs was continuously updated. It was therefore possible to follow the change in relative position between object details in real time and to compare it with the initial image. The exercise was finished when the blue marking sphere was positioned in the jaw.


Object Localization

In this exercise, the first and the second exercise were fused together into one unit. The fluoroscopy function was available in part of the exercise.

Participants


Fifty-seven individuals participated, 34 women and 23 men. The median age was 25 years, ranging from 23 to 46 years. Twenty-nine participating students were in the seventh semester and 28 were in the ninth.


Study Design

The study was organized into six phases:

  • Proficiency testing before training
  • Assessment of visual-spatial ability
  • Randomization
  • Interaction training
  • Intervention (training object depth localization)
  • Proficiency testing after training

Proficiency Test Instrument

A proficiency test instrument for assessment of skill in interpreting spatial information in radiographs utilizing parallax was designed by the authors. The instrument consisted of three subtests named principle subtest, projection subtest, and radiography subtest. The subtests dealt with different aspects important in the object localization procedure.

The principle subtest was a paper and pencil version of training equipment familiar to all participants. The participants were asked to evaluate the relative depth of two to four stylized objects against a neutral background in schematic image pairs. In this way, all radiographic signs that erroneously could be perceived as depth cues were eliminated. The task was therefore considered to be relevant for testing understanding of the principles for determining spatial relations utilizing the parallax phenomenon.

The aim of the projection subtest was to test the ability to identify differences in x-ray beam direction in pairs of hand-drawn sketches of radiographs by analyzing changes in the relative position of relevant anatomic landmarks. The task was considered relevant for testing the ability to identify differences in beam direction based on a combination of anatomic knowledge and understanding of the principles for determining spatial relations utilizing parallax.

The radiography subtest evaluated the participants' ability to interpret three-dimensional information in radiographs utilizing parallax. They were asked to report the relative position of specified object details or pathologic processes by choosing one of two alternatives. For each task, two or three intraoral radiographs were presented (Fig. 2). The test was identical to the ordinary clinical procedure and must in that respect be regarded as a highly valid test.

The sum of the subtest scores, called proficiency test result, was used as a measure of the skill to interpret spatial information in radiographs utilizing parallax.

Proficiency testing was performed before and after intervention. The test instruments had the same design at both test occasions, but the individual tasks were altered to avoid the effect of memorization of tasks from the first test occasion.


Assessment of Visual-Spatial Ability

All participants were tested using the redrawn Vandenberg and Kuse mental rotations test, version A (MRT-A).6 Internal consistency of the test has been reported to be 0.88 and test-retest reliability to be 0.83.7 This is the most frequently used version of the MRT and consists of 24 items organized in two subsets with 12 items each. For every item, a target figure and four stimulus figures are presented. In all problem sets, two stimulus figures are rotated versions of the target figure, and the participant has to mentally rotate the figures and find the two correct versions. Instructions, procedures, and scoring were identical to those of Peters and collaborators.6


Randomization to Training Groups

Randomization to experimental and control group was performed separately for the seventh and the ninth semester classes. For each class, the participants were ranked in a list according to their proficiency test results before training. For each consecutive pair in the list, a computerized random number generator allocated one individual to the experimental group and the other to the control group. In total, 28 participants were assigned to the experimental group and 29 to the control group. The procedure created two groups that were equivalent in pretraining proficiency test results.


Interaction Training

Two interaction training exercises were designed to give the participants an opportunity to learn how to interact with the simulator before the actual training. There was no radiology training included in the interaction training. The training lasted for 20 minutes.

Intervention


All participants trained in object localization with the tube shift technique for 90 minutes under the supervision of an experiment leader. The participants in the experimental group trained individually using the simulator. The leader introduced the exercises and thereafter only answered questions. During training, the students were free to choose among the exercises. Training was carried out in two sessions of 45 minutes each. The reason for dividing the training into two sessions was concern that a single 90-minute session would be too strenuous for the participants and might cause simulator sickness, with symptoms such as nausea, eyestrain, and drowsiness.8 The time interval between the two training sessions varied from 1 day to 2 weeks.

The participants in the control group used the ordinary educational material for the advanced course. The training was organized in small groups with a maximum of five members. It was performed individually or in pairs, and the participants were supported by a tutor. The educational material consisted of computerized training material with 10 cases. For each case, two or three intraoral radiographs were presented (Fig. 2). The cases were accompanied by questions concerning changes in projection and object depth localization. The tasks were similar to the tasks in the radiography subtest. After completing each task, the program provided correct answers and commentaries. The tutor encouraged discussion by asking questions and explaining problems. Thus, the tutor had a more active role in the control group. The training was completed in one 90-minute session.


Proficiency Test After Training

The test instrument was distributed to the participants immediately after completion of training.


Statistical Analysis

The outcome measures were the proficiency test and radiography subtest results before and after training. A new variable, improvement after training, was calculated as the difference between test scores after and before training. The participants were categorized into three visual-spatial ability subgroups, designated the low, medium, and high MRT-A subgroups. Categorization was based on the 33.3rd and 66.7th percentiles of the MRT-A test results for the whole investigated population.
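
The tertile split can be sketched as follows. This is our illustration using a simple nearest-rank percentile; the authors used SPSS, whose percentile algorithm may place boundary scores slightly differently, and the function name is invented.

```python
def tertile_labels(scores: list[float]) -> list[str]:
    """Label each MRT-A score low/medium/high using the 33.3rd and 66.7th
    percentiles of the whole sample (nearest-rank percentiles)."""
    ranked = sorted(scores)
    p33 = ranked[max(0, round(0.333 * len(ranked)) - 1)]
    p67 = ranked[max(0, round(0.667 * len(ranked)) - 1)]
    return ["low" if s <= p33 else "high" if s > p67 else "medium" for s in scores]
```

Note that the cutoffs are computed over the whole investigated population, not per training group, so the same score always lands in the same subgroup regardless of allocation.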

Descriptive statistics (mean and standard deviation) were calculated to describe test results before and after training for the two training groups. The difference in test results before and after training was assessed with a paired-samples t test; the Wilcoxon signed-ranks test was applied when data did not meet the normality assumption as tested by the Kolmogorov-Smirnov test. An independent-samples t test, applied to the variable improvement after training, was used to test for differences in training effect between the control and experimental groups at the subgroup level. A possible effect modification by visual-spatial ability was assessed by analysis of variance (ANOVA), with training method (experimental and control) and MRT-A category (low, medium, high) included as fixed-effect factors together with an interaction term between them. Effect size (ES) estimates9,10 were calculated for comparison of the mean difference in improvement after training between the experimental and control groups.
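
Under the common convention of references 9 and 10, such an effect size is Cohen's d: the difference in mean improvement divided by the pooled standard deviation. A minimal sketch (our code; we assume this standard pooled-SD formula, which the paper does not spell out):

```python
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d for two independent groups: difference in means over the
    pooled (bias-corrected) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

By Cohen's benchmarks,9 values around 0.8 or larger are conventionally read as large effects, which puts the ES values reported below in the large range.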

All tests were two-sided. P values less than 0.05 were regarded as statistically significant. All analyses were performed with SPSS 13.0 for Windows (SPSS Inc, Chicago, IL).

Results


Characteristics of the study population are presented in Table 1. There were no significant differences in age or MRT-A score between the control and experimental groups based on an independent-samples t test. All participants (N = 57) completed their training as planned, with no dropouts. One participant in the experimental group reported feeling ill during training, probably due to simulator sickness,8 but she was able to complete the training.

Table 1

The proficiency test and radiography subtest results are displayed in Table 2. The experimental group showed significant differences in test results before and after training on both the proficiency test and the radiography subtest; the corresponding differences were not significant for the control group. A subgroup analysis of the variable improvement after training is presented in Table 3. Among students with low visual-spatial ability, a significantly higher improvement on the proficiency test was observed for the experimental group compared with the control group (P = 0.02). Among participants in the medium and high categories of visual-spatial ability, no significant differences between training groups were observed. This indication of visual-spatial ability being an effect modifier was supported by an ANOVA revealing a significant interaction effect (P = 0.03) between training group and MRT-A score category. The corresponding assessment of the difference in improvement on the radiography subtest showed the same pattern as for the proficiency test; the interaction effect was, however, not statistically significant (P = 0.31). The ES comparing the experimental and control groups within the low MRT-A category was 1.15 for improvement on the proficiency test and 0.99 for improvement on the radiography subtest.

Table 2

Table 3

Discussion


The results showed that training in the radiology simulator for 90 minutes significantly improved test results, whereas conventional training did not improve test results to a significant degree (Table 2). Improvement after training was distributed differently between the training groups across MRT-A categories (Table 3). Among students with low visual-spatial ability, those receiving simulator-based training showed significantly (P = 0.02) higher improvement in proficiency test results and borderline-significantly (P = 0.06) higher improvement in radiography subtest results compared with students receiving conventional training. No significant differences in improvement between training groups, for either test, were observed among students with medium or high levels of visual-spatial ability. This indication of a modifying effect of visual-spatial ability on training method was confirmed by an ANOVA revealing a significant (P = 0.03) interaction between MRT-A score category and training method when assessing the proficiency test results. The effect size of simulator training compared with conventional training was estimated to be of crucial practical importance for the low MRT-A category.

In searching for possible mechanisms and explanations for these results, the differences between the two training modalities must be analyzed. First, in the simulator the trainees radiographed a patient model. The simulation exercises demanded interaction and allowed experimentation utilizing a fluoroscopic function. The internal structures of the radiographed jaw were visualized, which facilitated interpretation of the radiographs. Visual feedback was given continuously, and text-based feedback was added at the end of each task. This kind of feedback is not available during conventional training; we think the improved feedback is the single most important feature making simulator training more effective. Second, tasks could be repeated without any risk that correct solutions were memorized, because the position of the object to be localized was always randomly changed. In addition to these factors, anecdotal reports from the participants indicate that the simulator training was more challenging and motivating than conventional training. These explanations are in concordance with the results of a systematic literature review reporting that educational feedback, repetitive practice, and curriculum integration are the most important features for effective learning in high-fidelity medical simulations.11


Possible Impact of Simulator and Study Design

There are indications that distributed learning (spacing between training sessions instead of one long session) will improve learning outcome. A common explanation is that the skills being learned have more time to be cognitively consolidated between practice sessions.12,13 This effect might have been beneficial for the experimental group. It was, however, observed that some individuals at the second training session had difficulty recalling how to interact with the simulator. The effect due to distributed learning might have been modified for that category because they needed to spend time recalling how to use the simulator. In the control group, the training was organized both individually and in small groups. According to a recent review, collaboration in small groups may have positive effects on learning outcome.14 The effects of these factors are impossible to estimate, and they add some uncertainty to the conclusions.

In the simulator, the resolution of the simulated radiographs and of the visual tooth models was considerably lower than in the real environment. As a result, some anatomic details normally visible in radiographs were difficult to distinguish in the simulated images, which increased the difficulty of interpreting them. It was also observed that training in the simulator was a rather slow process, which can be attributed to the students being unaccustomed to the simulation environment. These factors might have reduced the efficiency of the simulator training.


Possible Impact of Test Instrument Design

One can argue that it would have been sufficient to use only the radiography subtest, since that test in all respects reflected the skill to be learned. However, the low number of tasks (five), in combination with the dichotomous response alternatives, made it sensitive to random effects owing to the high probability of guessing correctly. With more tasks, these effects would have been smaller. It was, however, not desirable to increase their number, because extensive testing might have had undesired learning effects.

The radiography subtest results were close to the maximum score after training, indicating a possible ceiling effect. This effect made the test less sensitive to skill improvement among high performers. Although the radiography subtest results in the subgroup analysis were not identical to the proficiency test results, they do not contradict the conclusions drawn from the proficiency test.

This study dealt with the immediate training effects within a very restricted field of radiology, and therefore the results are only valid for that topic. However, the students indicated that the training was challenging and motivating. Therefore, simulator-supported training in radiology may be an attractive alternative to conventional teaching methods in related fields as well. Increased motivation, in turn, may increase training efforts and give positive long-term effects. The results also indicate that the simulator concept has the potential for successful development into other areas of radiology training. Features supporting expansion to other fields are the high validity of the simulated radiographs, in combination with the extraordinary feedback that can be arranged in virtual radiation-free environments. However, new applications need to be properly evaluated.

In conclusion, our study demonstrated that training in the radiology simulator improved skill at interpreting spatial information in radiographs utilizing parallax when evaluated immediately after training. For individuals with low visual-spatial ability, simulator-based training seems to be more beneficial than conventional training.

Acknowledgments


This project was supported by EU Structural Funds, Objective 1, Northern Norrland, Sweden. The simulator was developed in co-operation with VRlab, Umeå University, Sweden. The authors thank Fredrik Wiklund, PhD, for excellent statistical advice.

References


1. Nilsson T, Ahlqvist J, Johansson M, Isberg A: Virtual reality for simulation of radiographic projections: validation of projection geometry. Dento-maxillo-facial Radiol 2004;33:44–50.
2. Vince J. Essential Virtual Reality Fast: How to Understand the Techniques and Potential of Virtual Reality. London: Springer-Verlag, 1998.
3. White SC, Pharoah MJ. Oral Radiology: Principles and Interpretation, 5th ed. St. Louis: Mosby, 2004: 91–93.
4. Berbaum KS, Smoker WRK, Smith WL: Measurement and prediction of diagnostic performance during radiology training. Am J Roentgenol 1985;145:1305–1311.
5. Nilsson T, Hedman L, Ahlqvist J: Visual-spatial ability and interpretation of three-dimensional information in radiographs. Dento-maxillo-facial Radiol 2007;36:86–91.
6. Peters M, Laeng B, Latham K et al.: A redrawn Vandenberg and Kuse mental rotations test: different versions and factors that affect performance. Brain Cogn 1995;28:39–58.
7. Vandenberg SG, Kuse AR: Mental rotations, a group test of three-dimensional spatial visualisation. Percept Mot Skills 1978;47:599–604.
8. Kennedy RS, Kennedy KE, Bartlett KM. Virtual environments and product liability, Handbook of virtual environments: design, implementation, and applications. Edited by Stanney KM. Mahwah, NJ: Lawrence Erlbaum, 2002: 543–553.
9. Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ: Lawrence Erlbaum, 1988: 8–27.
10. Hojat M, Xu G: A visitor's guide to effect sizes: statistical significance versus practical (clinical) importance of research findings. Adv Health Sci Educ Theory Pract 2004;9:241–249.
11. Issenberg SB, McGaghie WC, Petrusa ER et al.: Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10–28.
12. Metalis SA: Effects of massed versus distributed practice on acquisition of video game skill. Percept Mot Skills 1985;61:457–458.
13. Donovan JJ, Radosevich DJ: A meta-analytic review of the distribution of practice effect: now you see it, now you don't. J Appl Psychol 1999;84:795–805.
14. Dolmans DH, Schmidt HG: What do we know about cognitive and motivational effects of small group tutorials in problem-based learning? Adv Health Sci Educ Theory Pract 2006;11:321–336.
© 2007 Society for Simulation in Healthcare