Computer-Assisted Instruction

Revisiting Cognitive and Learning Styles in Computer-Assisted Instruction: Not So Useful After All

Cook, David A. MD, MHPE

doi: 10.1097/ACM.0b013e3182541286

Abstract

As health professions learners struggle to balance the demands of clinical duties and learning a growing volume of medical knowledge against reductions in available time, the need for efficient methods to facilitate learning has become critically important. Computer-assisted instruction (CAI) has been proposed as one possible solution. A recent meta-analysis of over 260 studies1,2 demonstrated that CAI can be effective, but much remains to be learned about how to optimally employ CAI.2,3 Although some authors have claimed that CAI is more efficient than noncomputer instruction, a meta-analysis found that the time required in Internet-based instruction is similar, on average, to that of traditional methods (pooled effect size = –0.10, P = .63).4 However, time outcomes varied widely, suggesting that course- and context-specific features play an important role in learning efficiency. This raises the question: How can we optimize CAI to enhance effectiveness and improve efficiency? Tailoring instruction to a learner’s individual characteristics is one possible solution. Among the many possible characteristics to which CAI might adapt, cognitive and learning styles (CLSs) have received much attention.

Learning styles are “general tendencies to prefer to process information in different ways.”(5p233) Hundreds of learning styles have been described, including the popular frameworks described by Kolb,6 Jung,7 Felder and Silverman,8 and Reichmann and Grasha.9 Cognitive styles comprise a distinct but related set of stable traits that learners employ in perceiving, processing, and organizing information. Dozens of cognitive styles have been described, each with its own theoretical framework, but most can be grouped in one of three broad clusters: field dependent–independent (wholist–analytic is similar), visualizer–verbalizer, and visual–haptic. Both learning and cognitive styles refer to how information is processed; the key difference is that learning styles refer to preferences (typically self-reported), whereas cognitive styles refer to actual mental operations (typically measured using more objective tests). The discussion below applies to both cognitive and learning styles, even though they are distinct in theory, and I refer to them jointly as CLSs.

Questioning Earlier Conclusions

In a previous review,10 I summarized the available evidence on CLSs in CAI with a focus on how CLSs could be used to tailor instruction. I used the conceptual framework of aptitude–treatment (i.e., aptitude–intervention) interaction to guide my interpretations. An aptitude–treatment interaction occurs when a student with attribute 1 (e.g., active learner) learns better with instructional approach A than with approach B, whereas a student with attribute 2 (e.g., reflective learner) learns better with instructional approach B. The implications for instruction are that if an aptitude–treatment interaction is present, then learning will be optimized by using approach A to teach students with attribute 1, and using approach B to teach students with attribute 2 (i.e., instructional adaptation). If all students learn better with approach B, there is no aptitude–treatment interaction and no need for adaptation. Confirming an aptitude–treatment interaction thus provides evidence that an instructional adaptation actually makes a difference.
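
Stated more formally (a standard regression framing added here for illustration only; the notation is mine, not the original review’s), the aptitude–treatment interaction can be written as

\[
Y = \beta_0 + \beta_1 S + \beta_2 T + \beta_3 (S \times T) + \varepsilon ,
\]

where Y is the learning outcome, S is the learner’s aptitude (e.g., a CLS score), T indicates the instructional approach (A vs. B), and β3 captures the aptitude–treatment interaction. Adaptation is warranted only when β3 differs meaningfully from zero; a main effect of style (β1) or of instructional approach (β2) alone does not justify tailoring instruction.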

In the earlier review,10 four CLS domains appeared frequently—namely, wholist–analytic (field dependent–independent, or global–sequential), verbal–visual, concrete–abstract (sensing–intuitive), and active–reflective (external–internal). Although the evidence was weak and inconsistent, it seemed that wholist–analytic and active–reflective styles showed promise as variables on which to base adaptation. However, after conducting several original research studies exploring hypothesized interactions, I now question these conclusions.

The purpose of the present article is to argue that CLSs do not make a substantive difference in CAI. To support this argument, I will first reinterpret the evidence in light of recently published studies and then outline several conceptual reasons why CLSs really do not matter. I will conclude with suggestions, both methodological and thematic, for further research in the field.

Summary of the Evidence

An updated search for evidence

To identify evidence not included in my previous review (more recently published or inadvertently omitted), I searched PubMed on August 12, 2011, using the terms (learning style or cognitive style) and (medical education or education, professional) and (computer-assisted instruction or computer-based instruction or computer-based learning or online or e-learning or Internet or [Web and learning]). I included all studies in which CLSs were measured quantitatively in the context of a CAI intervention, and for which course outcomes (learning tests, course grades, site use, or attitudes toward computer learning) were compared among learners with different styles. I made no restrictions based on year or language of publication. Of 154 studies identified in this search, 8 studies11–18 met these inclusion criteria and were not included in the previous review.10 I also examined the studies included in a recent systematic review of Web-based learning1,2 but found no additional studies relevant to the present review. The findings of the 8 newly identified studies are summarized in Table 1.

Table 1: Studies of Cognitive and Learning Styles and Computer-Assisted Instruction in Health Professions Education, 2006–2011*

Three studies compared different instructional designs and, thus, had the potential to show an aptitude–treatment interaction. One of these11 added self-assessment questions to an online tutorial and found that learners with the intermediate visual–verbal style had higher knowledge scores than those with the visual or verbal style. However, the association was not hypothesized and did not readily lend itself to theoretical explanations. More important, the association with style was the same regardless of instructional format (i.e., no interaction between instructional format and visual–verbal style), and there were no significant associations with the other three CLS dimensions. The other two studies12,13 stated a priori hypothesized relationships (interaction, or differential effect based on alignment/misalignment) between instructional design and CLSs, but none of these hypotheses were confirmed, and no other associations were identified.

The other five reports14–18 described four single-group studies (one report appeared to simply extend a previous study). Although each study found a statistically significant association with CLS, in all but one case the relationship was incidental rather than hypothesized. Moreover, without a comparison group, it was impossible to evaluate aptitude–treatment interaction.

In summary, none of the eight studies identified in this updated literature search demonstrated an aptitude–treatment interaction.

Reinterpretation of the evidence

In light of this less-than-impressive finding, it seemed reasonable to take another look at the evidence I had presented in 2005 and to focus on the prevalence of aptitude–treatment interactions. Of 16 health professions education studies identified in that review, only 9 had a comparison group. Because several of those 9 studies reported >1 CLS dimension, together they reported 16 different contrasts (i.e., 16 distinct statistical analyses relating CLSs to educational outcomes), and, of these, only 1 analysis (6%) showed an interaction between CLS and instructional approach. The other studies showed only main effect associations (i.e., all learners with a given style did better than those with another style, regardless of intervention) or no significant association.

Of 29 studies in non-health-professions education, 17 had a comparison group, and these reported 35 separate contrasts. Of these 35, only 8 analyses (23%) showed significant interactions.

Pooling all of these comparative studies together with the 3 new comparative studies identified (reporting 4 or 6 contrasts each, as shown in Table 1), there are 29 studies (9 + 17 + 3) reporting 65 contrasts. Of these, only 9 contrasts (14%) showed significant interactions between CLS and instructional approach. The low frequency of aptitude–treatment interactions seems particularly salient when considering that positive studies (those that find a significant effect) are more likely to be written up and published than those without statistically significant findings (i.e., publication bias). If aptitude–treatment interactions with CLSs exist, they seem to be infrequent and small in magnitude.

Other reviews of CLS research beyond CAI have not offered evidence to the contrary. In a comprehensive review of 61 studies exploring CLSs in medical education, Curry19 noted multiple associations with medical specialty choice, individual characteristics, and learning outcomes but made no comments to suggest the presence of aptitude–treatment interactions. In Jonassen and Grabowski’s5 review of individual differences, the only CLSs for which interactions were noted (and even these were infrequent) were field independence–field dependence (analogous to wholist–analytic styles) and external–internal locus of control.

Why Don’t CLSs Have More Effect?

Why don’t CLSs have a more substantial additive benefit in CAI? There are a number of reasons why they have not worked so far, including inadequate instruments for assessing CLSs, theoretical uncertainties, and flawed study designs. I will discuss each of these potentially fixable problems and related issues below. Yet, even if all of these were corrected, I do not believe that CLSs would play an important role in education, for the simple reason that instructional methods are more important.

Importance of instructional methods

Multiple authors across several fields, including Pashler et al,20 Merrill,21 and Jonassen and Grabowski,5 have proposed that the effect of CLSs is small relative to that of instructional methods. They argue that the greatest learning gains will come from using effective instructional methods and carefully aligning instructional methods with learning objectives; once this has been done, the incremental gain from CLS adaptation is minimal. Stating this in terms of the aptitude–treatment interaction model, these authors propose a main effect from instructional methods but little or no interaction with CLS. As noted above, associations between learning outcomes and CLSs are fairly frequent in published research, but interactions are rare. Of course, the absence of evidence does not necessarily mean that the effect is absent. The lack of evidence could be attributed to a paucity of studies designed to find interactions. However, existing evidence cannot disprove the primacy of instructional methods over CLSs and suggests that the influence of CLSs is at best weak and inconsistent (see Pashler et al20 for an extended discussion of this issue).

Inadequate instruments for assessing CLS

Several other problems afflicting CLS research are potentially fixable. Foremost among these is the fact that current CLS assessments are inadequate. Individuals are so unique, and our methods for assessing this individuality so crude, that we cannot accurately characterize the individual. Computers can adeptly measure learners’ prior knowledge and track their use patterns, and adaptation to these characteristics can be and has been successfully done.22 However, most other individual differences, including CLS, are poorly assessed using currently available methods.

Numerous instruments for measuring CLS have been described. Validity evidence to support these instruments’ scores nearly always includes the internal consistency of scale scores (i.e., Cronbach alpha), often includes a description of instrument development, and occasionally includes exploratory correlation with other outcomes, exploratory factor analysis, test–retest reliability, or details on scoring rubrics and standards. Although these are useful, they are not the most important sources of validity evidence. High reliability and strong correlation with performance outcomes do not guarantee meaningful interpretations and useful consequences.

Acknowledging the foundational importance of these evidence sources, Kane23(p44) proposes that “most of the evidence needed to evaluate a theory-based inference is that needed to evaluate the theory.” Thus, strengthening the theory itself is an important part of validation. Evidence accrues by evaluating theory-predicted consistencies among different measures of the same construct, by evaluating discrimination between different constructs (using advanced techniques such as confirmatory factor analysis and multitrait–multimethod matrices), and by using the theory to predict causal relationships (i.e., associations and interactions) and observing to see whether predictions are verified. Confirmatory evidence would support both the theory and the meaningful interpretations of CLS scores.

There is also a step beyond meaningful score interpretation—namely, the use of scores in making decisions. Kane23(p51) notes, “Decisions are evaluated in terms of their outcomes or consequences.” Indeed, the most important source of validity evidence is that of consequences.24 In medicine, a perfectly accurate diagnostic test may receive little use if the disease is untreatable. Likewise, the consequences of decisions and actions following an assessment of CLSs will determine in large part the value of that assessment. For example, evidence of consequences might derive from successful adaptation based on CLS assessment.

Challenges of theory-based prediction

Even assuming accurate assessment, meaningful CLS research must be more than trial and error; it requires a clear theoretical foundation. Exploring associations and interactions between interventions and CLSs without first having stated the predicted relations is prone to spurious observations, and statistically significant findings must be regarded with great skepticism (I speak from experience of having done this).11 Yet problems arise when using theory to predict CLS interactions.

First is the multiplicity of theories, constructs, and instruments developed to explain and measure CLSs. Jonassen and Grabowski5 described well over 30 styles in 1993, and the number has grown since then. Selecting the model(s) most appropriate for a given educational context can be a daunting task, made more difficult when multiple CLS frameworks are required—as is often the case. Simultaneously responding to multiple learner characteristics may enhance an adaptation’s performance, but different aptitudes may themselves interact in complex ways. This further complicates an already-difficult problem.

Second, in planning a hypothesized aptitude–treatment interaction, each intervention (i.e., treatment) must be designed to benefit learners with a specific style. Although conceptually simple, in practice this can be a formidable challenge. Theories are vague, and empiric evidence is virtually nonexistent to guide the planning of style-targeted instructional designs. After developing a course with sound instructional methods designed to achieve learning objectives, there is often little room for incremental improvements that would be theoretically predicted to favor one CLS over another. In fact, in my experience it nearly always seems appropriate to allow learners from either extreme of a given CLS to experience whatever new instructional approach I envision. There is an unavoidable tension: If the researcher plans too little difference between the interventions intended to target specific styles, there will be no significant interaction; if the difference is too great, it likely comes at the cost of instructional effectiveness in one arm and may disadvantage that group.

Third, most researchers have hypothesized that instructional designs should take advantage of CLS strengths and compensate for weaknesses. For example, reflective learners would be provided instruction that emphasized reflection, whereas active learners would learn using active instructional methods. However, there are two alternative perspectives worth considering. First, some view the nondominant CLS as a weakness and argue that, instead of tailoring instruction to accentuate the dominant style, teachers should design instruction to target and strengthen weaknesses (akin to weight training to build strong muscles). Related solutions include consciously attempting to change the style and teaching learners strategies to help them overcome style limitations. Second, several of the theories from which CLSs derive emphasize that the most effective learning involves balanced use of all styles rather than emphasis on one. For example, Kolb6 hypothesized a learning cycle with four stages corresponding to his four learning styles. Learners might prefer to enter the cycle at one stage, but effective learning requires passage through all four stages of the cycle.25 This underscores the primacy of instructional methods.

Economic and practical considerations

Finally, from a purely practical standpoint, collecting evidence to support specific interactions will be a daunting task. Aptitude–treatment interaction research is a complex endeavor, and rigorous research in this field will involve much experimentation and likely much failure. After more than a decade of aptitude–treatment interaction research (not restricted to CLSs), Cronbach concluded, “Almost no [aptitude–treatment interaction] effects were confirmed by multiple studies…. The evidence was negative.”26 My experience with CLSs in CAI was similar and led me to abandon my research program after six years of successively negative experiments.

Each experimental cycle will require identification of an appropriate style model, validation of needed instruments, development of at least two alternate instructional designs, prediction of the interaction, and then testing with a rigorous research design (see considerations on research below). Positive findings will need to be replicated to be sure they are not spurious. Even reproducibly positive findings may still not be worth implementing if the magnitude or scope of benefit is small. Further testing with different learners, different topics, and different contexts would be needed to extrapolate the findings beyond the situation of the original study. The final step—implementation of robust findings in actual practice—would be technically straightforward but would entail costs and barriers to adoption. This process would need to be repeated for each instruction-style permutation.

A final economic consideration involves the number of learners who benefit. Adaptation is most likely to benefit those with a strong tendency toward one style or another (e.g., the extremes of the spectrum). If “intermediate” learners, who might constitute 33% to 50% of target learners, do not gain from adaptive instruction, the cost-effectiveness of the entire enterprise declines accordingly.

In short, even if instructional methods did not dominate, and accurate assessment and theory-guided predictions were possible, the economic and logistic challenges associated with developing, testing, and implementing these adaptations appear prohibitive on a large scale.

Considerations in Conducting CLS Research

To those still anxious to embark on rigorous study of adaptation to CLS (or another aptitude), I strongly recommend a careful reading of Cronbach and Snow,27 who describe how to properly execute aptitude–treatment interaction research. I will summarize briefly a few salient precautions.

First, as noted above, adaptation is most likely to benefit those at the extremes of the aptitude spectrum. Yet, most research in the field includes participants with “intermediate” styles or (worse) dichotomizes learners using the midpoint of the scale. Researchers will be more likely to confirm hypothesized relationships when participants have been selected to clearly represent the style in question.

Second, because CLS research explores interactions rather than main effects, defensible results typically require relatively large samples (typically 200 or more participants, unless an “extreme-groups” design is employed); smaller studies are likely to be underpowered and may yield false-negative findings.
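
To see why samples of this size are needed, a rough simulation along the following lines can estimate power for a hypothesized interaction (a sketch only: the effect sizes, variable names, and use of Python with NumPy, pandas, and statsmodels are my assumptions, not drawn from any study cited here):

    # Rough, simulation-based power estimate for detecting a modest
    # CLS-by-treatment interaction (all values illustrative).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(seed=1)

    def interaction_power(n, beta_interaction=0.25, n_sims=500, alpha=0.05):
        """Proportion of simulated trials in which the interaction term
        reaches significance with n learners."""
        hits = 0
        for _ in range(n_sims):
            cls = rng.normal(size=n)                # continuous style score
            treatment = rng.integers(0, 2, size=n)  # format A (0) vs. B (1)
            score = (0.3 * treatment
                     + beta_interaction * cls * treatment
                     + rng.normal(size=n))          # outcome plus noise
            sim = pd.DataFrame({"score": score, "cls": cls, "treatment": treatment})
            fit = smf.ols("score ~ cls * treatment", data=sim).fit()
            if fit.pvalues["cls:treatment"] < alpha:
                hits += 1
        return hits / n_sims

    for n in (50, 100, 200, 400):
        print(n, interaction_power(n))

Running such a simulation across candidate sample sizes gives a direct, if rough, sense of how quickly power erodes in smaller studies.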

Third, in most cases, CLS studies should be analyzed using regression analysis (i.e., using the CLS measure as a continuous predictor variable, as done in Cook et al, 200913). The more familiar t test or analysis of variance should be reserved for extreme-groups designs.
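
A minimal sketch of such an analysis (hypothetical file and variable names, again assuming pandas and statsmodels) keeps the CLS score continuous and reads the aptitude–treatment interaction directly off a single coefficient rather than splitting learners at the scale midpoint:

    # Hypothetical analysis keeping CLS continuous rather than dichotomized.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per learner: 'score' (outcome), 'cls' (continuous style measure),
    # 'treatment' (0 = instructional format A, 1 = format B). File name is illustrative.
    data = pd.read_csv("cls_trial.csv")

    fit = smf.ols("score ~ cls * treatment", data=data).fit()

    # 'cls:treatment' is the aptitude-treatment interaction; 'cls' and
    # 'treatment' alone are main effects, which by themselves would not
    # justify adapting instruction to style.
    print(fit.params["cls:treatment"])
    print(fit.conf_int().loc["cls:treatment"])
    print(fit.summary())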

Fourth, interactions are highly complex. The optimal instructional design varies according to the topic, objectives, context, and instructor, in addition to students’ individual differences. All of these constitute potential interaction effects and make generalizable statements across settings difficult. In exploring aptitude–treatment interactions in CLS (or any other trait), it will be necessary to evaluate other relevant interactions (and power the study accordingly). Moreover, Cronbach28 eloquently illustrated that findings observed in a controlled setting do not always translate to practice and that success in a pilot run may not always replicate. Thus, although research in tightly controlled settings would be invaluable in advancing our understanding of aptitude–treatment interactions, promising results would need to be confirmed in authentic educational contexts, and preferably multiple times, before broad adoption.

Fifth, research based on invalid data is uninterpretable. Such is unfortunately the case for research using Riding’s Cognitive Styles Analysis (CSA) tool,29 whose scores have been shown to be unreliable30–33 and correlate poorly with other measures.34,35 Such invalidity challenges the conclusions of studies using the CSA tool10 (including my own)12 and highlights the imperative to establish validity early in the course of a research program.

Finally, quantitative research evaluates for aptitude–treatment interactions by comparing group performance (means or best-fit regression lines). However, when considering individual differences, group-level results may not tell the whole story: although such analyses provide the best available estimate of an individual learner’s response to a given intervention, the group mean may not predict the success or failure of any particular individual.

Do These Arguments Generalize?

The above arguments have focused on CAI. However, I expect most or all would apply to other applications of CLSs. Evidence supporting the use of CLSs in face-to-face instruction and other contexts consists almost exclusively of research looking at associations,19 with virtually no practical application for instruction or education administration and no indication that measuring and responding to styles makes a difference in the lives of students or teachers. Furthermore, the same caveats regarding instructional methods, assessments, study design, and logistics apply regardless of the instructional approach. Thus, I agree with previous authors that assessing and adapting to CLSs add little value to instruction.20,21

Conclusion and Next Steps

Educators perceive that individual differences influence learning, and they often invoke CLSs in their attempts to measure and classify such differences. Unfortunately, as illustrated by the evidence and arguments presented above, adaptation to learners’ CLSs is unlikely to enhance CAI. So, where do we go from here? I see at least three options.

First, we could accept that instructional methods dominate and ignore individual differences. For the vast majority of health professions learners, this approach would probably work adequately. However, given the power of modern computers, it seems desirable to harness this capacity to assess and adapt to the individual.

Second, we could turn to individual characteristics other than CLS. For example, learners’ prior knowledge of a topic has been shown to have a significant effect on (interaction with) performance in a course, and studies that adapt to prior knowledge show benefit with reasonable consistency.22 Another learner characteristic is motivation.36 Learner motivation is often discussed among educators, but rigorous medical education research involving this construct is rare, and I am not aware of any studies that explore motivation aptitude–treatment interactions in medical education CAI. Research using these, or other constructs, would need to consider the above issues regarding instrument validation and the study of aptitude–treatment interactions.

Third, we could let learners adapt for themselves. If we provide learners with multiple educational tools, all known to be effective, then perhaps we do not need to preoccupy ourselves with which ones they choose to use. This is particularly true if learners are held to a fixed level of achievement (i.e., a mastery learning model). Computers would still play a valuable role in both instruction (offering multiple tools and allowing individual selection) and in the assessment of mastery (with provision of feedback and reassessment as needed).

Acknowledgments: The author thanks M.M. Triola for his critical review of the manuscript.

Funding/Support: None.

Other disclosures: None.

Ethical approval: Not applicable.

References

1. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA. 2008;300:1181–1196
2. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in Internet-based learning for health professions education: A systematic review and meta-analysis. Acad Med. 2010;85:909–922
3. Cook DA. The research we still are not doing: An agenda for the study of computer-based learning. Acad Med. 2005;80:541–548
4. Cook DA, Levinson AJ, Garside S. Time and learning efficiency in Internet-based learning: A systematic review and meta-analysis. Adv Health Sci Educ Theory Pract. 2010;15:755–770
5. Jonassen DH, Grabowski BL. Handbook of Individual Differences, Learning, and Instruction. 1993 Hillsdale, NJ Lawrence Erlbaum
6. Kolb D. Experiential Learning: Experience as the Source of Learning and Development. 1984 Englewood Cliffs, NJ Prentice-Hall
7. Jung CG. Psychological Types. 1976 Princeton, NJ Princeton University Press
8. Felder RM, Silverman LK. Learning and teaching styles in engineering education. J Engineering Educ. 1988;78:674–681
9. Reichmann SW, Grasha AF. A rational approach to developing and assessing the construct validity of a student learning style scales instrument. J Psychol. 1974;87:213–223
10. Cook DA. Learning and cognitive styles in Web-based learning: Theory, evidence, and application. Acad Med. 2005;80:266–278
11. Cook DA, Thompson WG, Thomas KG, Thomas MR, Pankratz VS. Impact of self-assessment questions and learning styles in Web-based learning: A randomized, controlled, crossover trial. Acad Med. 2006;81:231–238
12. Cook DA, Gelula MH, Dupras DM, Schwartz A. Instructional methods and cognitive and learning styles in Web-based learning: Report of two randomised trials. Med Educ. 2007;41:897–905
13. Cook DA, Thompson WG, Thomas KG, Thomas MR. Lack of interaction between sensing–intuitive learning styles and problem-first versus information-first instruction: A randomized crossover trial. Adv Health Sci Educ Theory Pract. 2009;14:79–90
14. Halbert C, Kriebel R, Cuzzolino R, Coughlin P, Fresa-Dillon K. Self-assessed learning style correlates to use of supplemental learning materials in an online course management system. Med Teach. 2011;33:331–333
15. Hansen-Suchy K. Evaluating the effectiveness of an online medical laboratory technician program. Clin Lab Sci. 2011;24:35–40
16. McNulty JA, Espiritu B, Halsey M, Mendez M. Personality preference influences medical student use of specific computer-aided instruction (CAI). BMC Med Educ. 2006;6
17. McNulty JA, Sonntag B, Sinacore JM. Evaluation of computer-aided instruction in a gross anatomy course: A six-year study. Anat Sci Educ. January–February 2009;2:2–8
18. Svirko E, Mellanby J. Attitudes to e-learning, learning style and achievement in learning neuroanatomy by medical students. Med Teach. 2008;30:e219–227
19. Curry L. Review of learning style, studying approach, and instructional preference research in medical education. In: Riding RJ, Rayner SG, eds. International Perspectives on Individual Differences, Volume 1: Cognitive Styles. Stamford, Conn: Ablex Publishing Corporation; 2000:239–276
20. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning styles: Concepts and evidence. Psychol Sci Public Interest. 2008;9:105–119
21. Merrill MD. Instructional strategies and learning styles: Which takes precedence? In: Reiser R, Dempsey JV, eds. Trends and Issues in Instructional Design and Technology. Upper Saddle River, NJ: Merrill/Prentice Hall; 2002:99–106
22. Cook DA, Beckman TJ, Thomas KG, Thompson WG. Adapting Web-based instruction to residents’ knowledge improves learning efficiency: A randomized controlled trial. J Gen Intern Med. 2008;23:985–990
23. Kane MT. Validation. In: Brennan RL, ed. Educational Measurement. 4th ed. Westport, Conn: Praeger; 2006:17–64
24. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–166.e16
25. Armstrong E, Parsa-Parsi R. How can physicians’ learning styles drive educational planning? Acad Med. 2005;80:680–684
26. Cronbach LJ. Lee J. Cronbach. In: Lindzey G, ed. A History of Psychology in Autobiography. Vol VIII. Stanford, Calif: Stanford University Press; 1989:64–93
27. Cronbach LJ, Snow RE. Aptitudes and Instructional Methods: A Handbook for Research on Interactions. 1977 New York, NY Irvington Publishers
28. Cronbach LJ. Beyond the two disciplines of scientific psychology. Am Psychol. 1975;30:116–127
29. Riding RJ. Cognitive Styles Analysis: Research Administration. 2000 Birmingham, UK Learning and Training Technology
30. Cook DA. Scores from Riding’s cognitive styles analysis have poor test-retest reliability. Teach Learn Med. 2008;20:225–229
31. Parkinson A, Mullally AAP, Redmond JA. Test-retest reliability of Riding’s cognitive styles analysis test. Pers Individ Dif. 2004;37:1273–1278
32. Rezaei AR, Katz L. Evaluation of the reliability and validity of the cognitive styles analysis. Pers Individ Dif. 2004;36:1317–1327
33. Peterson ER, Deary IJ, Austin EJ. The reliability of Riding’s cognitive style analysis test. Pers Individ Dif. 2003;34:881–891
34. Cook DA, Smith AJ. Validity of index of learning styles scores: Multitrait-multimethod comparison with three cognitive/learning style instruments. Med Educ. 2006;40:900–907
35. Ong YW, Milech D. Comparison of the cognitive styles analysis and the style of processing scale. Percept Mot Skills. 2004;99:155–162
36. Cook DA, Thompson WG, Thomas KG. The motivated strategies for learning questionnaire: Score validity among medicine residents. Med Educ. 2011;45:1230–1240
37. Felder RM, Soloman BA. Index of learning styles. http://www4.ncsu.edu/unity/lockers/users/f/felder/public/ILSpage.html Accessed April 6, 2012
38. Martinez M. Development and Validation of the Successful Learning Orientation Questionnaire. 1998 Provo, Utah Brigham Young University
39. Briggs Myers I, Briggs KC. Myers-Briggs type indicator. http://www.cpp.com/products/mbti/index.asp Accessed April 6, 2012
40. Kolb D. Learning Style Inventory, Version 3. 1999 Boston, Mass Hay Group
41. Biggs J, Kember D, Leung DY. The revised two-factor study process questionnaire: R-SPQ-2F. Br J Educ Psychol. 2001;71:133–149
            © 2012 Association of American Medical Colleges