Because therapists in many geographic areas are interested in learning to use the Test of Infant Motor Performance (TIMP), a CD-ROM self-study program for learning to score the test was developed and tested for effectiveness in attaining acceptable scoring reliability. The TIMP is a newly developed motor assessment scale for use by physical therapists and occupational therapists. 1–5 The TIMP is designed to examine the functional motor skills of infants from 32 weeks postconceptional age to four months corrected age. Training is required before a therapist can use the TIMP independently. The initial training process involved a four-hour workshop provided by an author of the test, scoring 14 unedited videotapes of infants being tested, and testing three infants under the supervision of the test developers. This process was time-consuming and required intensive work with the test developers, and access to the training videotapes was limited for therapists living outside the Chicago metropolitan area. Therefore, it was desirable to develop a training procedure that was less time-consuming, more accessible, and, if possible, less supervised.
Learning, as suggested by Maslow, 6 is self-development or self-actualization. The discrepancy between what an individual wants to be and what he or she is represents the area where the need for education lies. 7 Knowles 8,9 provided three assumptions about adult learning: 1) learning takes place when the subject is related to the learner's needs or interests, 2) the past experiences of an adult learner may serve as resources for his or her own learning, and 3) adult learners need to be independent and self-directed. Adult learning thus is determined by the learner's needs, goals, and motivation.
Adult learners have several characteristics that should be taken into consideration when developing educational programs. 10 First, adults have individualized learning styles; they are comfortable and skilled with the styles they have developed and are not easily persuaded to change. One approach to matching learning needs with the characteristics of adult learners is self-directed learning.
Self-directed learning is a learning process in which the learners obtain knowledge from predetermined materials, at their own pace, and without the aid of an instructor. 11 Self-directed learning programs are designed for self-paced and self-managed learning. In contrast, the traditional lecture-based learning approach is controlled mostly by the teacher. Lecture-based teaching can allow many students to learn at the same time, but it does not allow for much individual variability in preferred learning style or pace.
Self-directed learning should be introduced in the development of competent practice in health professionals because individuals vary in their needs for information and in the time frames in which that information must be obtained. 12 A major barrier to continuing education is the physical constraint imposed by the times and locations at which workshops are provided. 13 One solution is to provide flexibility using various educational tools, including videotapes and computer networks. 14
Choosing appropriate media is important. 15 Books can provide detailed, organized information, but they cannot illustrate the motion needed for teaching the TIMP. Videotapes and computerized instructional programs can present information that is learned by viewing the materials.
The use of videotapes can aid understanding of clinical assessment tools. Russell and colleagues 16 held a one-day workshop to train users in scoring the Gross Motor Function Measure; the subjects demonstrated significant improvement in scoring agreement after viewing a training videotape that included three children of different ages with different types of cerebral palsy. Another example of using videotapes for training is the work of Doble, 17 who examined the effectiveness of a training protocol that involved reading the administration manual, viewing a training videotape, and completing five practice observations and ratings, either on patients or on people without disabilities, using the Process Skills Assessment, 18 an observational scale designed to assess the ability to learn and manage the processes of daily living. Moderate levels of interrater reliability, demonstrated by Spearman correlation coefficients ranging from 0.58 to 0.74 across subcategories, were reached using this training protocol.
The newest technology for self-directed learning is the multimedia format. 19 Multimedia learning materials are computer-based applications that allow users to access different types of information through computerized instructional programs. The types of information that can be provided include text, graphics, photos or video clips, and sound. 20
Computerized instructional media have been used as tools in teaching rheumatology, kinesiology, applied anatomy, patient assessment, and sliding board transfer skills in entry-level professional education. 21–26 Computerized instructional media in these studies included images, case examples, and related questions to facilitate student interaction with the learning materials. Measures of effective learning included equivalent or improved exam scores, positive attitudes toward the learning materials, and positive feedback from educators because the programs saved time by eliminating repeated teaching of the same materials. The limitations of computer-assisted instruction are the cost and time of design and development. 24 Computerized instructional media, as a result, are best used for materials that do not require regular updating.
CD-ROM is a tool that can display information in various formats in a computerized environment. Using CD-ROM as a learning tool can offer advantages, including accessibility, speed, comprehensiveness, and economy. 27 CD-ROMs have been used to teach clinical biochemistry, dermatology, pathology, and radiology. 28–31 The benefits of CD-ROM instructional programs are similar to the advantages of computer-assisted instructional programs, including student control over pace and progress, equal or better performances on exams when compared with other learning formats, and positive feedback from both students and faculty.
A CD-ROM self-study program was designed to increase accessibility to training in scoring the items on the TIMP. The CD-ROM program can provide text descriptions of the assessment, pictures, and video-like clips to illustrate the movements that are essential in learning to score the TIMP. The purpose of this study was to examine the effectiveness of the CD-ROM self-study program as a method for achieving rater reliability in scoring of the TIMP items. Training in administration of the TIMP items (ie, the ability to perform the assessment on infants) was not examined.
The criteria for subject selection were as follows: 1) physical therapists or occupational therapists, and 2) working in neonatal intensive care units, early intervention programs, or pediatric high-risk follow-up clinics serving infants less than four months corrected age. Ten therapists who had not previously used the test were recruited from hospitals affiliated with ongoing TIMP research. Thirty letters were also sent to therapists who had previously expressed interest in learning about the TIMP to request their participation. The final subjects were those who replied confirming that they were willing to commit to the study.
Twelve therapists were recruited for the videotape group and 14 therapists were recruited for the CD-ROM group as a sample of convenience. The 12 subjects who lived within the Chicago metropolitan area were assigned to the videotape group, and those who lived outside the Chicago metropolitan area were assigned to the CD-ROM group because the need to travel would be a barrier to completing the current training procedures. Subjects in the CD-ROM group were from the United States (10), Iceland (2), Australia (1), and Canada (1). One subject in the CD-ROM group was an occupational therapist; all other subjects were physical therapists. The therapists consented to participate according to procedures approved by the University of Illinois at Chicago Institutional Review Board for the Protection of Human Subjects. Parents gave their approval for videotaping the TIMP tests on their infants.
One subject in each group dropped out of the study for personal reasons, and the scoring sheets from one subject in the CD-ROM group were lost in the mail. The final numbers of subjects were 11 in the videotape group and 12 in the CD-ROM group.
Three subjects in the videotape group and six in the CD-ROM group reported an MS degree as their highest level of education. One subject in the CD-ROM group had a PhD, and the remainder of the subjects held bachelor's degrees. Four subjects in the CD-ROM group rated themselves as very competent in the use of a computer, seven reported having some experience, and one was a bit apprehensive about using a computer.
Table 1 summarizes the clinical experience of subjects in both groups. The mean number of years of clinical experience was 6.3 in the videotape group (SD = 5.1, range = 1–18) and 13 in the CD-ROM group (SD = 6.0, range = 2–21). The mean number of years of clinical experience in pediatric therapy was 4.9 in the videotape group (SD = 5.3, range = 0.25–17) and 10.3 in the CD-ROM group (SD = 5.7, range = 2–20). The subjects in the CD-ROM group had significantly more clinical experience than those in the videotape group (t = −2.9, p < 0.009).
Construction of the CD-ROM Self-Study Program
The CD-ROM program includes text description of the TIMP and details of each item, scanned photos to demonstrate testing positions, digitized video clips to illustrate item administration and scoring, and quiz sections. The CD-ROM self-study program was constructed in the Instructional Technology Laboratory at the University of Illinois at Chicago. Several kinds of software were used, including Claris Homepage (Claris Corp, Santa Clara, Calif) for basic page design, Adobe Photoshop (Adobe Systems Inc, San Jose, Calif) for scanning and adjusting pictures, Real Magic Producer (Real Networks Inc, Seattle, Wash) for digitizing video clips from videotaped infant examinations, and Adobe Premiere (Adobe Systems Inc) for transferring the digitized video clips into movie-like clips. Approximately 200 hours were spent constructing the CD-ROM program.
The digitized video clips are from videotapes of actual TIMP testing. Each clip shows the infant being placed in the test position, item administration, and how the infant responds to the test item. Three individuals, including test developers and a therapist using the TIMP in research, reviewed the scores of infants’ performances for these video clips. Final decisions on scores were based on the feedback from reviewers.
The minimum equipment required for running the CD-ROM self-study program includes a 486 computer or Pentium Plus PC with color monitor, 16 MB of memory, quad-speed (4X) CD-ROM drive, and a Web browser such as Netscape 3.0 (or above) or Internet Explorer. The CD-ROM self-study program is viewed in a Web browser with use of a mouse to navigate through the program.
The subjects in the videotape group were trained according to the existing procedures for training therapists to use the TIMP. 2 First they attended a four-hour workshop taught by the second author. The instructor lectured about the theoretical background of the TIMP and scored one or two videotapes along with the subjects in the workshop. Reading materials related to the theoretical background of the TIMP were provided for subjects to read on their own time. The subjects then watched and scored fourteen 30- to 90-minute training videotapes illustrating performances of infants across the age range of the test. Subjects chose their own place and time for scoring videotapes. Specific instructions on how to score the videotapes were given to the subjects. They were allowed to rewind the videotapes only one time for repeated inspection of the performances. No feedback on their scoring was given before the subjects finished scoring all videotapes.
The subjects in the CD-ROM group were given the same background reading materials as the videotape group. After reading these materials, they independently viewed the CD-ROM program to learn how to score the TIMP. Unlike the subjects in the videotape group, the subjects were allowed to play the CD-ROM program as many times as they wanted to, which is one of the advantages of using computerized instructional material.
It was recommended that all subjects in both groups practice the TIMP on about 10 infants of different ages or ability levels to become familiar with administering the TIMP. Both groups were asked to record the time they spent reading, scoring the training videotapes or playing the CD-ROM, and practicing the TIMP on infants.
For assessment of rater reliability, subjects in both groups scored four videotapes after completing the training procedures. The babies in these four videotapes were different from those in the 14 training videotapes and the video clips used in the CD-ROM self-study program except for one infant, who appeared in both the training videotapes and the reliability videotapes but at different ages. The scores for items on the four videotapes, which were rated by the authors in advance, were analyzed by Rasch analysis 1,32 to verify that the infants in the videotapes had different ability levels and that their performances were typical of previously tested infants (ie, they lacked exceptionally unusual item performances).
A questionnaire sent after the rater reliability assessment was finished was used to obtain feedback about satisfaction with using the CD-ROM program. Questions were in yes/no or ordinal rank format in the areas of ease of use, content organization, educational effectiveness, and graphic design. Space was also provided for subjects to provide written comments to further improve the CD-ROM program. No questionnaire was sent to the videotape group because the purpose of the questionnaire was to improve the CD-ROM program.
Raters’ scoring data were analyzed according to a Rasch psychometric model that posits that item scores are determined in a probabilistic manner by the difficulty of the item and the ability of the performer. 32 As mentioned previously, infant performance on the four videotapes fit the Rasch model (ie, more able infants tended to pass more difficult items and vice versa). As a result, rater performance could be analyzed as a source of unexpected or inconsistent item scores (unreliability) and be evaluated for systematic bias (severity/leniency relative to other raters). In Rasch analysis, fit statistics are used to compare expected to observed values in rater performance as indices of unexpected scoring patterns.
Rasch analysis with the Facets computer program version 3.10 (MESA Press, Chicago, Ill) was used to evaluate the following: 1) unexpected item scores: subjects had to have fewer than five percent misfitting ratings across the total number of TIMP items scored on the four reliability videotapes to be considered reliable raters; 2) overall rater consistency or reliability: mean square infit values <1.3 represented acceptable consistency in using the scoring scales; and 3) rater severity (systematic bias), assessed to identify raters who were consistently more severe or more lenient than other raters. These criteria for scoring reliability were selected following a previous study of rater reliability on the TIMP. 33 A misfit rate below five percent corresponds to 95% agreement, and an infit mean square below 1.3 corresponds to a t statistic significant at p < 0.05.
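The two pass/fail criteria above can be expressed as a simple check. A minimal sketch in Python follows; the thresholds come from the text, but the function name and the example rater numbers are hypothetical, not data from the study:

```python
# Hedged sketch: evaluating the study's two scoring-reliability criteria
# against one rater's Rasch output. Thresholds are from the text; the
# example numbers below are invented for illustration.

def passes_reliability_criteria(n_misfit_ratings, n_total_ratings, infit_mean_square):
    """Return (passes_misfit, passes_consistency):
    fewer than 5% misfitting ratings, and infit mean square below 1.3."""
    pct_misfit = 100.0 * n_misfit_ratings / n_total_ratings
    return pct_misfit < 5.0, infit_mean_square < 1.3

# Hypothetical rater: 6 misfitting ratings out of 168 items scored, infit = 1.05
print(passes_reliability_criteria(6, 168, 1.05))  # -> (True, True)
```

A rater failing either check would be flagged for further training under the protocol described above.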
Group averages for these three reliability variables were compared by the Student t test (p < 0.05, two-tailed) to examine whether the CD-ROM group reached, on average, higher or lower rater reliability and overall consistency than the videotape group and to assess group differences in rater severity.
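The group comparison above uses the two-sample Student t test. As a minimal sketch, the statistic can be computed from the standard pooled-variance formula; this is a generic implementation for illustration, not the study's actual software, and the example severity values are invented:

```python
import math

def student_t(sample_a, sample_b):
    """Two-sample Student t statistic with pooled variance
    (assumes equal variances, as in the classic Student t test)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a, mean_b = sum(sample_a) / na, sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical severity measures for two small groups of raters
print(round(student_t([45, 46, 44, 47], [48, 49, 47, 50]), 2))
```

The sign of the statistic indicates which group's mean is higher; the p value would then be read from a t distribution with na + nb − 2 degrees of freedom.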
The average time spent on learning was compared between the videotape group and the CD-ROM group using the Student t test (p < 0.05, two-tailed). The relationship between reliability and training time was explored by calculating Pearson product-moment correlation coefficients between training time and each reliability measure (the percentage of misfitting ratings and the infit values for rater consistency).
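The correlation analysis above uses the Pearson product-moment coefficient. A self-contained sketch follows; the time and misfit values are invented for illustration, not data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical pairs: training time (hours) vs percentage of misfitting ratings
hours = [8, 10, 12, 15, 20]
pct_misfit = [6.0, 5.1, 4.8, 4.2, 4.5]
print(round(pearson_r(hours, pct_misfit), 2))
```

A coefficient near zero, as the study reports for time versus misfit percentage, would indicate no linear relationship between the two variables.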
The percentages of favorable vs unfavorable responses to the CD-ROM program were calculated from the yes/no questions on the questionnaire. Written comments made by subjects were used to evaluate the need for further improvement of the CD-ROM learning program.
Table 2 summarizes the scoring reliability of each group. The mean percentage of misfit ratings for items scored from the four videotapes was 4.1 (SD = 2.1, range = 1.7–7.7) in the videotape group and 4.9 (SD = 4.9, range = 1.3–9.4) in the CD-ROM group. No statistically significant difference was found between the two groups in the percentage of misfit item ratings (t = −0.88, p = 0.39), indicating that the percentage of unreliable ratings did not differ by training method. The mean of the infit mean square for overall rater consistency was 0.96 (SD = 0.25) in the videotape group and 1.0 (SD = 0.27) in the CD-ROM group. No statistical significance was found for the difference between the group means in the infit mean square (t = −0.42, p = 0.68), indicating that average rater consistency did not differ by training method. In terms of individual performance, five subjects in each group failed to pass the preestablished criterion of fewer than five percent misfit ratings. Two subjects in the videotape group and one in the CD-ROM group failed the criterion of overall rater consistency.
In the Rasch analysis the rater severity/leniency measure was set to center on 50 (ie, an average rater would have a score of 50). In this study rater severity between 46 and 49 was considered similar enough to describe the raters as interchangeable. The mean of the rater severity measure was 45.7 (SD = 2.53, range = 40–49) in the videotape group and 48.1 (SD = 2.77, range = 44–52) in the CD-ROM group. The subjects in the videotape group were significantly more lenient raters than subjects in the CD-ROM group (t = −2.14, p = 0.044). In other words, subjects in the CD-ROM group tended to give systematically lower scores than subjects in the videotape group when observing the same performances on the TIMP items.
The mean amount of time spent on learning materials (including reading the articles related to the theoretical background of the TIMP, scoring the training videotapes or playing the CD-ROM program, and scoring the four videotapes for reliability analysis) was 20.04 hours (SD = 5.29, range = 13.22–28.25) in the videotape group and 10.80 hours (SD = 4.20, range = 6.00–28.25) in the CD-ROM group. The CD-ROM group spent significantly less time than the videotape group on learning the materials (t = 4.66, p < 0.001). The average time spent practicing the TIMP on infants was 1.11 hours (SD = 1.62, range = 0–5) in the videotape group and 3.48 hours (SD = 3.42, range = 0–10.5) in the CD-ROM group. The CD-ROM group spent significantly more time than the videotape group practicing the TIMP on infants (p = 0.048). The ratio of average time spent on learning materials for the CD-ROM group to that of the videotape group was 0.54. Table 3 provides the amount of time spent on different learning activities in each group.
The amount of time spent learning materials had no correlation with the percentage of misfit ratings or with the rater consistency variable (Table 4). The minimum time spent learning materials for those who passed both scoring reliability criteria was 13.2 hours in the videotape group and eight hours in the CD-ROM group. Rater severity, however, was significantly correlated with the amount of time spent learning to score the TIMP (Pearson correlation = −0.578, p = 0.004). The more time spent learning, the more lenient was the rater. Years of clinical experience in pediatric therapy had no significant correlation with any of the three outcome variables (percentage of misfitting ratings, r = 0.20; rater consistency, r = 0.16; rater severity, r = 0.18).
The feedback obtained from the questionnaire about the CD-ROM program was consistently favorable. Six subjects (50%) rated themselves as highly satisfied with the CD-ROM self-study program, and the other six (50%) rated themselves as satisfied. Valuable suggestions provided by the subjects included increasing the length of the video clips, using better angles of observation for photography, and adding more video clips.
In summary, the subjects in the CD-ROM group spent significantly less time learning materials but more time than the videotape group practicing the test with infants. Overall learning time was less in the CD-ROM group. The average achievement on percentage of misfitting ratings and the overall rater consistency measure did not differ significantly between the videotape group and the CD-ROM group, which indicated similar learning outcomes in both groups. The subjects in the CD-ROM group were significantly more severe raters than the subjects in the videotape group. Neither time spent learning materials nor clinical experience in pediatric therapy had a significant correlation with the percentage of misfit ratings and overall consistency measure. The time spent learning materials, however, had a significant negative correlation with rater severity.
Adult learners will learn when they are motivated. 34 The subjects in this study were therapists who had previously expressed interest in applying the TIMP in their clinical practice or who worked in hospitals affiliated with ongoing TIMP research. As a result, the subjects were either motivated or had a need to learn how to score the TIMP, consistent with the principles of adult learning theory. Adult learners are comfortable and skilled with the learning styles they have, 10 and they are afraid of making mistakes in front of their peers or those who are less experienced. Subjects in both groups could learn how to score the TIMP by watching the training videotapes or playing the CD-ROM program alone at their own pace, a fit with these characteristics of adult learners. The two training methods, however, support slightly different learning styles. For those who want immediate feedback or who want to learn the test item by item, the CD-ROM program is more suitable: it provides organized information on individual items and comments on the video clips, so subjects could adjust their understanding of an item as they played the program. In the videotape training, subjects did not receive feedback until they finished scoring all 14 training videotapes, a procedure that is less desirable for those who are accustomed to getting answers right away.
Brevity and conciseness are important in editing training videotapes. 15 Unrehearsed classroom videotaping and taped panel discussions are thought to be of little use because they increase the difficulty of concentrating on and picking up important issues from the content. The videotapes used for the videotape training group were unedited recordings of testing sessions.

The CD-ROM program was constructed to improve learning of the TIMP. Several guides exist for trainers developing self-directed learning programs. 35–37 These guides indicate that, first of all, the trainer needs to define specific learning objectives, which should be specific to the content and to the behavior desired in the clinical setting. The goal of the TIMP CD-ROM program was focused on how to score items on the TIMP; the goal for the learner was to be able to score the TIMP reliably from videotapes of the test conducted on infants. The instructor should be accessible to the learner or provide feedback frequently so that the learner does not become frustrated or stuck by obstacles in the learning process. Breaking the information into single concepts can assist the learner by preventing distraction by unimportant details; this was accomplished in our CD-ROM program by creating an individual page for each test item. Repetition of key concepts is essential, and examples or case studies can help learners integrate the information they have learned. A quiz section is inserted after every 10 items in the TIMP CD-ROM program for learners to test themselves on what they have learned.
Previous studies have shown that students can attain equally good or better scores on exams by using computerized instructional programs as compared with traditional teaching methods. 24–26 In this study, both groups achieved a similar level of aberrant ratings and rater consistency but different levels of rater severity. The subjects in the videotape group scored unedited videotapes, which could be viewed twice, but no feedback was provided before they finished scoring all 14 training videotapes. The CD-ROM program, on the other hand, provided comments on each video clip about why a specific score was given. The subjects could play the video clips again if they missed some details. Subjects in the CD-ROM group were trained to observe details in the video clips; as a result, they may have become more critical raters than the subjects in the videotape group.
A meta-analysis of 24 studies revealed that the ratio of average time for computer-based learning to average time for conventional study was 0.71. 38 The ratio of average time spent on learning materials for the CD-ROM group to that of the videotape group was 0.54, in agreement with previous findings of a reduction in learning time when using computer-based learning.
Our findings related to clinical experience are also similar to those of other researchers. Previous studies have found that rater reliability on Tinetti balance scores, 39 videotaped observational gait analysis, 40 and goniometric and visual measurement of forefoot position 41 is not influenced by the rater's clinical experience. Although clinical experience differed significantly between the two groups, a limitation of this study, no significant correlation was found between years of clinical experience in pediatric therapy and the percentage of aberrant ratings, overall rater consistency, or rater severity measures.
In general, the CD-ROM program is superior to the videotape training in its organization of information. Each item includes a text description followed by video clip(s) of performance, and comments explain why specific scores were given. The number of video clips, however, is limited: only one or two are included for each item. The 14 training videotapes, on the other hand, include infants with various ability levels, so the subjects in the videotape group had seen performances at almost every possible score for each TIMP item. The videotape training, however, is lengthy because each video shows a complete, unedited test, and most of the raters expressed frustration because no feedback was provided during the videotape scoring process.
Time spent on learning materials did not have a significant correlation with either the percentage of misfit ratings or overall rater consistency. Three subjects in the CD-ROM group, however, spent less than eight hours learning materials, and none of them passed the reliability criterion for percentage of aberrant ratings. It thus seems that at least eight hours are needed for CD-ROM training to achieve acceptable scoring reliability.
Time spent on learning materials was negatively correlated with rater severity measures. The most likely explanation is that those who spent less time learning materials might have missed seeing spontaneous movements they were expected to watch for throughout the testing. This would lead to more zero scores for certain items, which would be reflected in a greater severity measure.
No significant correlation between testing practice time and the three reliability measures was found in this study. All subjects were asked to practice the TIMP on 10 infants to become familiar with its administration. The average time recorded for practicing on infants, however, was 1.1 hours in the videotape group and 3.5 hours in the CD-ROM group; in neither group did subjects practice as much as suggested. The lack of correlation between practice and scoring reliability, as a result, might be due to insufficient practice in both groups. Additional investigation is needed to explore how the amount of practice affects rater reliability and whether practice can compensate for the limited number of video clips in the CD-ROM program.
All subjects were either satisfied or highly satisfied with the CD-ROM self-study program according to the feedback from the questionnaire. Some additional efforts, however, can be made to enhance the quality of the program. More video clips with better camera angles should be inserted, especially for the items most commonly misrated, although doing so might increase training time. Interaction between the user and the program should also be increased. For example, users would benefit from entering their own scores for the video clips before reading comments on the correct answers. A clock and a counter for the percentage of correct answers entered could also be added so that learners can keep track of their learning time and how well they have done in scoring the video clips.
This study has some limitations. First, the sample of subjects was small and one of convenience. Only one occupational therapist was recruited even though the TIMP is designed to be used by both physical therapists and occupational therapists. The number of years of clinical experience in pediatric therapy was significantly higher in the CD-ROM group than in the videotape group. Although no significant correlation was found between clinical experience and any of the three criterion measures, further investigation is needed to ensure that the CD-ROM program can help less experienced therapists learn how to score the TIMP. Additional investigation is also needed to explore the relationship between the amount of time spent practicing on infants and scoring reliability.
The ability to score videotapes reliably is not the same as the ability to score live performances. Knowing how to score is a cognitive task, whereas performing the test is a psychomotor task. Whether the therapists who learned to score the TIMP by using the CD-ROM program have the competence to administer and score the TIMP in a clinical situation remains unanswered. Additional investigation is needed to assess whether subjects who can score the TIMP videotapes reliably can also administer and score the TIMP reliably in real time. Finally, learning could be improved for either method by increasing the amount of feedback learners receive during training.
A CD-ROM self-study program was as effective as a workshop combined with scoring unedited videotapes for rater reliability training and was significantly more efficient in use of learner time.
We thank Thubi H. A. Kolobe, PhD, PT, and Mark Gelula, PhD, for assistance in the design and analysis of this study; Laura Zawacki, PT, PCS, Elizabeth T. Osten, MS, OT, and Celina Martin, PT, for assistance in videotaping; Thubi H. A. Kolobe, PhD, PT, Elizabeth T. Osten, MS, OT, and Maureen Lenke, OT, for assistance in reviewing and scoring the video clips in the CD-ROM program; Ed Garay, Chris Peterson, Sajjad Lateef, Tamara Mannins, Volker Kleinschmidt, and Mukund Rangarajan for consultation on constructing the CD-ROM program; Kari Mills and Tiffany Lange for assistance with graphic design of the CD-ROM program; Kari Mills for assistance with designing the feedback questionnaire; Mary E. Murney, MS, PT, PCS, and Vanessa M. Barbosa, MS, OT, for valuable editorial advice; and the parents of infants who allowed them to be photographed.
1. Campbell SK, Osten ET, Kolobe THA, et al. Development of the Test of Infant Motor Performance. Phys Med Rehabil Clin North Am. 1993; 4: 541–550.
2. Campbell SK, Kolobe THA, Osten ET, et al. Construct validity of the Test of Infant Motor Performance. Phys Ther. 1995; 75: 585–596.
3. Campbell SK. Test-retest reliability of the Test of Infant Motor Performance. Pediatr Phys Ther. 1999; 11: 60–66.
4. Campbell SK, Kolobe THA. Concurrent validity of the Test of Infant Motor Performance with the Alberta Infant Motor Scale. Pediatr Phys Ther. 2000; 12: 1–8.
5. Campbell SK, Hedeker D. Validity of the Test of Infant Motor Performance for discriminating among infants with varying risk for poor motor outcome. J Pediatr. 2001; 139: 546–551.
6. Maslow A. Motivation and Personality. New York: Harper & Row; 1970.
7. Knowles MS. Assessing needs and interests in program planning. In: Knowles MS, ed. The Modern Practice of Adult Education: From Andragogy to Pedagogy. Chicago: Association Press; 1980: 79–120.
8. Knowles M. The emergence of a theory of adult learning: andragogy. In: Knowles M, ed. The Adult Learner: A Neglected Species. Houston, Tex: Gulf Publishing Co; 1978: 27–59.
9. Knowles MS. Andragogy: an emerging technology for adult learning. In: Knowles MS, ed. The Modern Practice of Adult Education: From Andragogy to Pedagogy. Chicago: Association Press; 1980: 37–55.
10. Mast ME, van Atta MJ. Applying adult learning principles in instructional module design. Nurse Educator. 1986; 11: 35–39.
11. Piskurich TE. Self-Directed Learning: A Practice Guide to Design, Development, and Implementation. San Francisco: Jossey-Bass; 1993.
12. Ash CR. Applying principles of self-directed learning in the health professions. In: Brookfield S, ed. Self-Directed Learning: From Theory to Practice. San Francisco: Jossey-Bass; 1985: 63–74.
13. Knapper CK, Cropley AJ. Lifelong Learning and Higher Education. London: Kogan Page Ltd; 1991.
14. Smith WAS, Stroud MA. Distance education and new communications technologies. In: Knapper CK, ed. Expanding Learning Through New Communications Technologies. San Francisco: Jossey-Bass; 1982: 5–22.
15. van Reenen J. Adult learning and video training in health care. J Audio Media Med. 1990; 13: 143–145.
16. Russell DJ, Rosenbaum PL, Lane M, et al. Training users in the Gross Motor Function Measure: methodological and practice issues. Phys Ther. 1994; 74: 630–636.
17. Doble SE. Test-retest and inter-rater reliability of a process skills assessment. Occup Ther J Res. 1991; 11: 8–23.
18. Doble SE. Intra-rater reliability and concurrent validity of a process skills assessment. Sargent College of Allied Health Professions. Boston: Boston University; 1985.
19. Kommers PAM. Definitions. In: Kommers PAM, Grabinger S, Dunlap JC, eds. Hypermedia Learning Environments: Instructional Design and Integration. Mahwah, NJ: Lawrence Erlbaum; 1996: 1–11.
20. Wishnietsky DH. Hypermedia: The Integrated Learning Environment. Bloomington, Ind: Phi Delta Kappa Educational Foundation; 1992.
21. Sanford MK, Hazelwood SE, Bridges AJ. Effectiveness of computer-assisted interactive videodisc instruction in teaching rheumatology to physical and occupational therapy students. J Allied Health. 1996; 25: 141–148.
22. Toth-Cohen S. Computer-assisted instruction as a learning resource for applied anatomy and kinesiology in the occupational therapy curriculum. Am J Occup Ther. 1995; 49: 821–827.
23. Carey JR, Ellingham C, Chen YG. Computer-based solution to a clinical education problem in a physical therapy course. Phys Ther. 1986; 66: 1725–1729.
24. Barker SP. Comparison of effectiveness of interactive videodisc versus lecture-demonstration instruction. Phys Ther. 1988; 68: 699–703.
25. Dengler PE. Computer-assisted instruction and its use in occupational therapy education. Am J Occup Ther. 1983; 37: 255–259.
26. Farrow M, Sims R. Computer-assisted learning in occupational therapy. Aust Occup Ther J. 1987; 34: 53–58.
27. Aarvold J, Walton G. CD-ROM: towards a strategy for teaching and learning. Nurse Educ Today. 1992; 12: 458–463.
28. Hooper J, O'Connor J, Cheesmar R. Learning clinical biochemistry using multimedia interactive clinical cases. Clin Chim Acta. 1996; 248: 119–123.
29. Hartmann AC, Cruz PD. Interactive mechanisms for teaching dermatology to medical students. Arch Dermatol. 1998; 134: 725–728.
30. Andrew SM, Benbow EW. Conversion of a traditional image archive into an image resource on compact disk. J Clin Pathol. 1997; 50: 544–547.
31. Pastore G, Valentini V, Campioni P, et al. Telecommunications and multimedia systems in education: what developments for radiology? Rays. 1996; 21: 290–301.
32. Linacre JM. A User's Guide to Facets. Rasch Measurement Computer Program. Chicago: MESA Press; 1997.
33. Osten ET. Examination of the rater reliability of the Test of Infant Motor Performance. Department of Occupational Therapy. Chicago: University of Illinois; 1993.
34. Tarnow KG. Working with adult learners. Nurse Educator. 1979; 4: 34–40.
35. Cooper SS. Teaching tips. J Continuing Educ Nurs. 1986; 17: 104–105.
36. Istre SM. The art and science of successful teaching. Diabetes Educator. 1989; 15: 67–75.
37. Schmidt KL, Fisher JC. Effective development and utilization of self-learning modules. J Continuing Educ Nurs. 1992; 23: 54–59.
38. Kulik CLC, Kulik JA, Shwalb BJ. The effectiveness of computer-based adult education: a meta-analysis. J Educ Commun Res. 1986; 2: 235–252.
39. Cipriany-Dacko LM, Innerst D, Johannsen J, et al. Interrater reliability of the Tinetti balance scores in novice and experienced physical therapy clinicians. Arch Phys Med Rehabil. 1997; 78: 1160–1164.
40. Eastlack ME, Arvidson J, Snyder-Mackler L, et al. Interrater reliability of videotaped observational gait-analysis assessments. Phys Ther. 1991; 71: 465–472.
41. Somers DL, Hanson JA, Kedzierski CM, et al. Effective development and utilization of self-learning modules. J Continuing Educ Nurs. 1992; 23: 54–59.