Student assessment involves balancing assessment aimed at making decisions about students and their progression (summative assessment) with assessment that provides students with feedback to enhance their learning (formative assessment). Formative assessment is performed in the spirit of “assessment for learning” rather than “assessment of learning.”1 By providing feedback and guidance to students, formative assessment has positive effects on learning and performance.2,3 It is an essential element of self-regulated learning4,5 and informed self-assessment.6 When done thoughtfully, it can be a catalyst for growth and development,1,7 reducing uncertainty and leading to more focused and efficient gains in skill and knowledge.8
Summative assessments typically occur at the end of a program or an experience and translate into a score or grade, allowing educators to compare learners and to determine whether they know enough and demonstrate competence to progress. However, when practicing summative assessment, we are acting far more as regulators than educators. Although summative assessment is certainly necessary, our assessment focus must be far broader. The educator’s role, accomplished via formative assessment, is to impart information, instill values, and inspire excellence and ongoing learning.
Formative Assessment in Light of Current Educational Trends
Self-regulated, lifelong learning; learner-centered curricula; and a focus on learning outcomes through competency-based assessments have been prominent themes in the curricular reform movement in medical education over the past decade.9,10 Competency-based assessment that links learning outcomes with specific learning objectives requires continuous and frequent assessment.11,12 Actively engaging the learner, through activities such as self-directed assessment-seeking behavior,13 in which students actively seek feedback on performance for the purpose of improvement, is considered an essential component of competency-based medical education.12 An assessment system that enables teachers to assist students in developing and achieving their learning goals is a vital component of a learner-centered curriculum, which addresses the needs of millennial learners, many of whom engage in independent, asynchronous learning.
The Case for a Formatively Focused Assessment System in Undergraduate Medical Education
The Liaison Committee on Medical Education (LCME) has mandated formative assessment as a requirement in undergraduate medical education (UME) through midclerkship and/or midcourse feedback to students for remediation purposes.14 To serve as a critical contributor to medical students’ education, however, formative assessment must be at the heart of their training, not just included to satisfy accreditation requirements or to ensure that everyone passes. Unlike other professional training cultures such as music and sports, in which feedback is expected, respected, and given regularly to all learners, we have not yet cultivated a true feedback culture in medical education.15,16
Formative assessments typically used today could be likened to a series of punch biopsies performed by independent physicians who do not communicate with each other. They are highly context dependent, often done on the fly, and focused on ensuring that the learner is doing well. These assessments do not go very deep, nor are they truly valued by the students because they do not “count for a grade.” They may uncover a potential problem, but remediation efforts are usually limited to the time of that teacher’s course. Information, negative or positive, is rarely passed on to future courses and instructors.
We suggest that assessment is most useful when it is part of an organized and ongoing process rather than a set of unrelated events. The process should generate information that (1) is systematically translated into detailed feedback that informs students about their performance, (2) leads to the development of specific plans for improvement supported and guided by faculty, (3) is subject to follow-through whereby students present evidence of progress, and (4) is part of a continuous cycle over time. Such a process would encourage the learner to engage in a personal, educational plan–do–study–act (PDSA) cycle (see Figure 1), a commonly used framework in quality improvement in medicine,17 and applicable to continuous improvement throughout all four years of medical school, into residency training, and beyond. In fact, this cycle is most effective when teacher and student collaborate in a dynamic relationship of teaching and learning informed by formative assessments.
The Next Accreditation System18,19 in graduate medical education (GME) integrates formative and summative assessments into one system, with formative data gathered systematically and shared deliberately with the learner over time, ultimately informing the final summative decision. Multiple assessments by multiple assessors inform clinical competency committees’ evaluations of a trainee’s progress every six months, and specialty-based milestones are used to give more granularity about the level of competence attained. Milestone decisions are informed by individual workplace-based assessments, which provide feedback to the trainee in the moment. Milestones are intended to serve as a framework for formative assessment20 and to promote a longitudinal, developmental approach, over the course of a residency program, to achieving competence and becoming ready for independent practice. This new assessment system shifts attention away from performance and instead emphasizes learning, much as we are advocating for UME assessment. A formative focus in UME better prepares students for residency training, which now demands ongoing formative assessment in which integrating feedback into practice is critical for developing competence and for improving quality of care and patient safety.
Challenges to Building a Formatively Focused Assessment System in UME
Educators in GME face numerous challenges in implementing a formatively focused assessment system, including the feasibility of frequent, accurate, and reliable workplace-based assessments. However, their summative duty—to certify that those completing their program are fit to be independent practitioners in their specialty—emphasizes the need to be absolutely certain that the assessment process identifies those who are not ready to provide high-quality care and ensure patient safety. In contrast, the challenges for assessment in UME are considerably more diverse; faculty might be uncertain about expected standards of competence and potentially less invested in learners who are not entering their field. Further, students often come to medical school from a culture that focuses on individual achievement and competitive advantage,21,22 and they may not be prepared to accept constructive feedback or to trust that anything but positive feedback will somehow put them at a “competitive disadvantage.” They may not see the value of formative assessment if it does not “count towards the grade.” Teachers in UME often view themselves narrowly as imparters of knowledge, leaving assessment to others. Compounding the problem, their exposure to students is typically brief, so that by the time teachers come to know their students’ strengths and shortcomings through some form of assessment, the students become someone else’s “gift,” if they are strong—or “burden,” if they have challenges.
Conceptual Underpinnings of a Formatively Focused Assessment System
An assessment system that emphasizes the formative can only thrive in a culture that embraces and supports improvement. Referring largely to individual orientations, psychologist Carol Dweck has distinguished between a “performance” orientation and a “learning” or “mastery” orientation.23,24 In a system that values the former, students have the goal of looking good, of making others think favorably of them. With such an orientation, students are prone to hide errors, mask or deny any uncertainty, and view feedback as punitive and generally to be avoided. In contrast, a learning orientation is one in which the student’s goal is to improve, to gain mastery over the subject or skill at hand. With a learning orientation, students would freely admit uncertainty in order to gain advice and counsel. Feedback would be welcomed as a way of ascertaining where and how to improve.
For formative assessment to be effective, a medical school can (and must) create a culture in which a learning orientation can thrive. This begins on day one of medical school, when students are told that they can take risks and expose both their strengths and challenges, and that principle is reinforced throughout the curriculum. In such an environment, educators care as much about students’ questions as their answers, the rationale for their responses as much as the responses themselves. A formatively focused assessment culture not only can thrive when a learning orientation exists but also, in the spirit of “assessment drives learning,” can facilitate its existence. However, as important as it is to tell students that inquiry, curiosity, and risk taking are valued, espousing the rhetoric of a learning environment in the presence of an assessment system that supports looking good creates a hidden curriculum25 that can be damaging.
Vygotsky’s26 theory of the Zone of Proximal Development (ZPD) is also useful in informing a formatively focused assessment system. The ZPD can be thought of as the “learning edge”: the knowledge or skills at the limits of a student’s competence, which challenge the student but are achievable with guidance. According to Vygotsky, a “knowledgeable other,” someone who has already mastered the area, needs to coach learners in their ZPD. Formative assessment is critical to this coaching. Empowering faculty to function as “knowledgeable others” in the ZPD, and building students’ trust that faculty will function in this way, would represent a real culture shift toward an educational system that encourages learning over performance.
Implementing a Formatively Focused Assessment System in UME
A systems approach to assessment design has been endorsed in the literature.27,28 Ideally, such a system would encompass the complementary functions of formative and summative assessment.27 In the following section, we review the elements, characteristics, infrastructure, and resources necessary for a formatively focused system.
Virtually all assessment approaches can be used in formative assessment, as long as they are specifically devised to emphasize providing feedback that encourages self-reflection and are used to direct, guide, and catalyze learning. For example, workplace-based assessments such as the American Board of Internal Medicine’s Mini-Clinical Evaluation Exercise29,30 can be a powerful tool for improving the clinical skills of students and residents—if at the end of each encounter, the faculty member and student engage in a brief discussion wherein the trainee offers a self-appraisal and the faculty member offers additional feedback, with the goal of creating a plan for improvement. Although simulation is often thought of as summative assessment, debriefing sessions can also help learners understand expected standards, improve their motivation to learn and their ability to self-assess, and promote deliberate practice.31 Even traditional forms of summative assessment such as multiple-choice question exams can be used formatively by encouraging learners to analyze their results to identify knowledge gaps and generate plans for closing those gaps.
List 1 highlights the characteristics of an institutional culture that would support a formatively focused assessment system, including central oversight and commitment to a longitudinal four-year approach. List 2 articulates attributes of this culture’s learner–teacher relationship. Embedded within this approach is the development of close, trusting student–teacher relationships focused on honest and specific feedback, reflection, and promotion of student development.
Institutional Characteristics of a Formatively Focused Assessment System
Institutionally, a formative assessment system should:
- Be organized, integrated, and comprehensive, having the characteristics of a coordinated and unified system.
- Be complementary to the summative assessment system. In mapping a system, educators should consider purpose and optimal use of formative and summative elements.
- Provide data and feedback in many different forms from a variety of sources.
- Have both central stewardship and local accountability. A designee of the medical school should oversee both formative and summative assessment systems to ensure that both assessment functions are serving to complement one another.
- Be seen as a continuous process over the learner’s entire tenure and implemented at multiple points in time.
- Include systematic collection and utilization of assessment data so that feedback and improvement discussions become part of a team effort rather than a private transfer of information between learner and teacher.
- Place responsibility for improvement on both learner and teacher, and hold both accountable for seeking and monitoring progress.
- Include learner sessions on how to effectively seek, receive, and use feedback.
- Include faculty development sessions so that teachers can learn how to engage as coaches, using appropriate motivational techniques to encourage self-improvement in learners.
Learner–Teacher Relationship in a Formatively Focused Assessment System
Interpersonally, the nature of the student–teacher relationship in a formative assessment system should:
- Be developmental. Markers or milestones must be laid out for both assessors and students so that they have a sense of the proper expectations and faculty can communicate how the student may reach the next level.
- Be learner centered. Assessment methods should relate to the student’s learning goals, and the student’s performance should be compared against external measures.
- Be improvement focused. Learners should be encouraged to work towards continuous improvement and to aspire to excellence, rather than accepting a test score, even one indicating only minimal competence, and then moving on to a new subject.
- Encourage student self-reflection. Students should be encouraged to take responsibility for assessing their own performance so as to improve skills in self-assessment and internalize skills for using feedback to improve performance.
- Draw on a broad range of assessment data, which encourage exploration of the learner’s thinking process and multiple dimensions of performance.
- Involve regularly scheduled feedback to the learner to close the loop. Feedback must be regularly scheduled rather than exclusively “on-the-fly,” and be substantive and specific to motivate the learner to continue improving.
- Encourage relationship building. Feedback should be given face to face, with a coaching focus so as to strengthen the bond between learner and teacher.
- Include follow-up to ensure that the learner is accountable for continuing to work on performance issues.
- Provide learners with directions and resources to improve, rather than just vague encouragement, and encourage the learners to identify their own strategies for improvement.
- Promote teacher self-reflection on the nature of the feedback conversation and ways of making it more effective.
The foundation of any competency-based assessment system is a set of defined expectations for achievement. While goals and objectives are required by the LCME, they are tied to levels of achievement either at the end of courses and clerkships or at the end of medical school. In a formatively focused assessment system where the development of competence is monitored longitudinally, it is essential that both learner and teacher identify milestones—conceived as markers of progress—in knowledge and skills across the curriculum. The RIME (reporter–interpreter–manager–educator) framework32 is an example of a developmental framework for assessing skills and providing feedback across clinical learning experiences in UME. Entrustable professional activities (EPAs) also can be used to guide undergraduate learning across the arc of the medical school curriculum.33 On this foundation of well-defined markers of achievement stand four pillars to support the formatively focused assessment system: faculty development, learner development, a longitudinal advising and coaching program, and a method for documentation.
Faculty development.
Building an effective student–teacher relationship is essential to optimal learning and can be taught explicitly.34 Faculty must learn to articulate the importance of both formative and summative assessment, as well as their role as assessors and coaches, as a critical step in promoting trust. Feedback strategies should be tailored to building the student–teacher relationship and challenging learners to self-assess and develop their own strategies for improvement. The ask–discuss–ask feedback framework35 is designed to achieve these goals. Faculty development should also focus on developing and disseminating a shared mental model for milestones and a time frame for achieving competence. Finally, faculty need to be trained in Vygotsky’s ZPD, to serve as the “knowledgeable other,” encouraging students to discover and share their “learning edge” and coaching them for improvement.
Learner development.
The process of training medical students to become engaged in the cycle of feedback, development, and growth begins with creating a learning/mastery culture where students feel safe, even rewarded, for acknowledging their limitations and seeking feedback. This should begin on entry and progress throughout their four years of medical school with increasing responsibility placed on the student for self-directed learning and self-assessment, and engaging in effective student–teacher relationships. Students should also receive specific training on how to seek and integrate feedback, calibrate their self-assessments,36,37 and engage in PDSA cycles informed by both summative and formative assessments. Finally, students should be assessed on their ability to receive and integrate feedback as a part of developing competence in practice-based learning and improvement critical to success in GME.
Longitudinal academic advising and coaching.
A longitudinal academic advising program, in which faculty members become closely invested in their advisees and students come to see faculty as valuable resources, is essential to our proposed system. Advisors would have access to students’ summative evaluations, as well as formative evaluations, and would be responsible for assisting them in developing individual learning plans (ILPs).38 This advising program should be complemented by an effective coaching culture in courses and clerkships, with a focus on promoting deliberate practice through ongoing formative assessment.39
Both advising and coaching programs require establishing a trusting relationship between teacher and learner. Ideally this is achieved through sustained, longitudinal relationships, more easily implemented in an advising program than in a clinical coaching program, given the short curricular rotation blocks for students and shorter inpatient blocks for faculty. Building trust and continuity in learning, teaching, and assessment has been addressed in the movement toward learning communities in UME40 and in longitudinal integrated clerkships.41,42 However, many schools have not found it feasible to structure longitudinal student–teacher experiences, and further, the challenge in passing on and conveying information about a medical student’s strengths, challenges, and competence to future teachers or course directors persists. Much controversy surrounds sharing information about students, or “forward feeding,” in UME. Some have warned about the dangers of self-fulfilling prophecies, labeling, and potential stigmatization, while others argue that when done correctly, sharing information for the student’s benefit contributes to meaningful longitudinal assessment and develops better doctors.43–45 In fact, forward feeding has been endorsed by a majority of internal medicine clerkship directors46 and may be a critical element of systematizing formative assessment throughout a student’s education and promoting accountability for learning outcomes.
One way to ensure continuity of learning and assessment is to charge both teacher and learner with carrying information forward. For example, in an interview at the end of a course, a student would share strengths, challenges, and learning goals and receive feedback from a faculty member. The student would then meet with his or her advisor to develop a learning plan for the next clerkship and arrive prepared to share it and implement it as part of the educational PDSA cycle. The teacher would be prepared to receive this information and use it to help build an effective, collaborative coaching relationship with the learner.
Documentation.
Required documentation of learning activities, competence achieved, and feedback given is essential in a formatively focused assessment system. While the transcript serves as a formal documentation of summative assessment, formative assessment is less likely to be documented longitudinally unless a learner is experiencing serious difficulties. Although used by many strictly for summative purposes, the portfolio can be used equally for documenting developing competence. Portfolios can be most effective when advisors review and assist in the interpretation of the evidence47 and work with learners to revise ILPs with an eye toward monitoring ongoing progress.48,49
Much of what we propose demands strong leadership and management of cultural change and a retooling of existing infrastructure, rather than a large outlay of capital for new programs. Markers of achievement of competence may be developed through national organizations, similar to the Association of American Medical Colleges’ development of UME EPAs.50 While recruiting faculty to serve as longitudinal academic advisors or coaches may require resources to support their time, the improved transfer of information may actually make teaching and learning more efficient, and stronger student–teacher relationships may make it more rewarding.
Formative student assessment must be seen as a continuous process involving several components forming a closed loop: collecting relevant, usable information; sharing that information between a student and all those with whom he or she interacts; developing self-directed (but closely overseen) learning goals along with guidance and resources for accomplishing these goals; and regularly following up to identify progress and any barriers to achieving those goals.
Although not often recognized, the parallels between the principles espoused here and those of quality improvement in clinical care are noteworthy. How different is a commitment to patient-centered care from a concern for learner-centered education? Continuity and coordination of care seem equally relevant when we talk about improvement of the student as well as recovery of the patient. Assessment and quality care both work best when all of the elements of the system are in sync, and when we have teamwork among professionals working together rather than relying on the skills of isolated individuals. It is crucial in both patient-centered care and learner-centered education to know how to conduct successful handoffs and engage in constant monitoring. In essence, quality formative assessment of medical students is little more than the practice of good medicine, and we should be as committed to it in the educational sphere as we are in the clinical. It is this approach that will lead to physicians who will learn and grow throughout their careers and, thus, provide the best care to their patients.
1. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33:478–485.
2. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. 1986. Englewood Cliffs, NJ: Prentice Hall.
3. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112.
4. Nicol DJ, Macfarlane-Dick D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud High Educ. 2006;31:199–218.
5. Clark I. Formative assessment: Assessment is for self-regulated learning. Educ Psychol Rev. 2012;24:205–249.
6. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: A conceptual model. Acad Med. 2010;85:1212–1220.
7. Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 conference. Med Teach. 2011;33:206–214.
8. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008;78:153–189.
9. Cooke M, Irby DM, Sullivan W, Ludmerer KM. American medical education 100 years after the Flexner report. N Engl J Med. 2006;355:1339–1344.
10. Cooke M, Irby DM, O’Brien BC, Shulman LS. Educating Physicians: A Call for Reform of Medical School and Residency. 2010. San Francisco, Calif: Jossey-Bass.
11. Iobst WF, Sherbino J, Cate OT, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32:651–656.
12. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32:676–682.
13. Eva KW, Regehr G. “I’ll never play professional football” and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28:14–19.
14. Liaison Committee on Medical Education. Functions and structure of a medical school: Standards for accreditation of medical school programs leading to the M.D. degree. March 2014. http://www.lcme.org/publications.htm. Accessed February 4, 2016.
15. Archer JC. State of the science in health professional education: Effective feedback. Med Educ. 2010;44:101–108.
16. Watling C, Driessen E, van der Vleuten CP, Vanstone M, Lingard L. Beyond individualism: Professional culture and its influence on feedback. Med Educ. 2013;47:585–594.
17. Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128:651–656.
18. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366:1051–1056.
19. Caverzagie KJ, Iobst WF, Aagaard EM, et al. The internal medicine reporting milestones and the next accreditation system. Ann Intern Med. 2013;158:557–559.
20. Holmboe ES. Realizing the promise of competency-based medical education. Acad Med. 2015;90:411–413.
21. Brieger GH. The plight of premedical education: Myths and misperceptions—part I: The “premedical syndrome.” Acad Med. 1999;74:901–904.
22. Gunderman RB, Kanter SL. Perspective: “How to fix the premedical curriculum” revisited. Acad Med. 2008;83:1158–1161.
23. Dweck C. Self-Theories: Their Role in Motivation, Personality, and Development. 2013. Philadelphia, Pa: Taylor & Francis/Psychology Press.
24. Mangels JA, Butterfield B, Lamb J, Good C, Dweck CS. Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model. Soc Cogn Affect Neurosci. 2006;1:75–86.
25. Hafferty FW. Beyond curriculum reform: Confronting medicine’s hidden curriculum. Acad Med. 1998;73:403–407.
26. Vygotsky L. Thought and Language. 1962. Cambridge, Mass: MIT Press.
27. van der Vleuten CP, Dannefer EF. Towards a systems approach to assessment. Med Teach. 2012;34:185–186.
28. Dijkstra J, Van der Vleuten CP, Schuwirth LW. A new framework for designing programmes of assessment. Adv Health Sci Educ Theory Pract. 2010;15:379–393.
29. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558–561.
30. Pelgrim EA, Kramer AW, Mokkink HG, Van der Vleuten CP. Quality of written narrative feedback and reflection in a modified mini-clinical evaluation exercise: An observational study. BMC Med Educ. 2012;12:97.
31. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81.
32. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74:1203–1207.
33. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
34. Haidet P, Stein HF. The role of the student–teacher relationship in the formation of physicians. The hidden curriculum as process. J Gen Intern Med. 2006;21(suppl 1):S16–S20.
35. Andolsek K, Padmore J, Hauer KE, Holmboe E. Clinical Competency Committees: A Guide for Programs. Chicago, Ill: Accreditation Council for Graduate Medical Education; January 2015. https://www.acgme.org/acgmeweb/Portals/0/ACGMEClinicalCompetencyCommitteeGuidebook.pdf. Accessed February 7, 2016.
36. Bing-You RG, Bertsch T, Thompson JA. Coaching medical students in receiving effective feedback. Teach Learn Med. 1998;10:228–231.
37. Algiraigri AH. Ten tips for receiving feedback effectively in clinical practice. Med Educ Online. 2014;19. http://med-ed-online.net/index.php/meo/article/view/25141. Accessed February 4, 2016.
38. Li ST, Burke AE. Individualized learning plans: Basics and beyond. Acad Pediatr. 2010;10:289–292.
39. Gifford KA, Fall LH. Doctor coach: A deliberate practice approach to teaching and learning clinical skills. Acad Med. 2014;89:272–276.
40. Ferguson KJ, Wolter EM, Yarbrough DB, Carline JD, Krupat E. Defining and describing medical learning communities: Results of a national survey. Acad Med. 2009;84:1549–1556.
41. Hirsh DA, Holmboe ES, ten Cate O. Time to trust: Longitudinal integrated clerkships and entrustable professional activities. Acad Med. 2014;89:201–204.
42. Hauer KE, O’Brien BC, Hansen LA, et al. More is better: Students describe successful and unsuccessful experiences with teachers differently in brief and longitudinal relationships. Acad Med. 2012;87:1389–1396.
43. Pangaro L. “Forward feeding” about students’ progress: More information will enable better policy. Acad Med. 2008;83:802–803.
44. Jussim L, Harber KD. Teacher expectations and self-fulfilling prophecies: Knowns and unknowns, resolved and unresolved controversies. Pers Soc Psychol Rev. 2005;9:131–155.
45. Cleary L. “Forward feeding” about students’ progress: The case for longitudinal, progressive, and shared assessment of medical students. Acad Med. 2008;83:800.
46. Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: Results of a national survey. Acad Med. 2008;83:876–881.
47. Driessen EW, van Tartwijk J, Overeem K, Vermunt JD, van der Vleuten CP. Conditions for successful reflective use of portfolios in undergraduate medical education. Med Educ. 2005;39:1230–1235.
48. Friedman Ben David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE medical education guide no. 24: Portfolios as a method of student assessment. Med Teach. 2001;23:535–551.
49. Dannefer EF, Henson LC. The portfolio approach to competency-based assessment at the Cleveland Clinic Lerner College of Medicine. Acad Med. 2007;82:493–502.
50. Association of American Medical Colleges. Core entrustable professional activities for entering residency. 2014. https://www.aamc.org/initiatives/coreepas/. Accessed February 7, 2016.