Academic Medicine: March 2006 - Volume 81 - Issue 3
Faculty

Implementing the Logic Model for Measuring the Value of Faculty Affairs Activities

Otto, Ann K. PhD; Novielli, Karen MD; Morahan, Page S. PhD


Author Information

Dr. Otto is associate vice president, Faculty Affairs and Institutional Effectiveness, Northeastern Ohio Universities College of Medicine, Rootstown, Ohio.

Dr. Novielli is associate dean, Faculty Affairs and Faculty Development, Jefferson Medical College, Philadelphia, Pennsylvania.

Dr. Morahan is co-director, ELAM Program, and professor, Microbiology and Immunology, Drexel University College of Medicine, Philadelphia, Pennsylvania.

Correspondence should be addressed to Dr. Novielli, Jefferson Medical College, 1025 Walnut Street, Suite 123, Philadelphia, PA 19107-5083; telephone: (215) 955-2361; fax: (215) 923-9276; e-mail: 〈karen.novielli@jefferson.edu〉.


Abstract

In today’s environment of increasing accountability in higher education and health care, it is critical that the administrative units of a medical school demonstrate the added value of their activities to the school’s mission and identify which of those activities yield the greatest return on investment. This is particularly important for administrative units whose activities may not be considered essential to the basic functioning of the medical school. For example, admissions would likely be considered an essential administrative unit, while faculty development might be considered nonessential. Effective measurement systems serve two purposes: they guide decision making throughout the organization, and they serve as a basis for evaluating performance. This article describes the use of the program logic model to measure the contribution of faculty affairs and development offices to the recruitment, retention, and development of a medical school’s teaching faculty, an outcome central to the mission of the medical school. The process of developing and rewarding faculty for teaching is used to illustrate the application of this method in linking the activities of faculty affairs and development offices to outcomes that are important to the medical school.

Faculty are increasingly recognized as the medical school’s most important asset.1 Moreover, faculty recruitment is a major expense at academic health centers; a recent study documented that costs related to faculty and staff turnover may consume as much as 5% of the annual operating budget of an academic medical center.2 Studies are beginning to demonstrate that institutions can achieve real cost savings by retaining and developing current teaching, research, and clinical faculty.3 One school recently calculated that the approximate total cost of a failed tenure bid by a basic science faculty member (i.e., the costs of recruiting the faculty member and providing phased support over six years) is $864,500.4 Other schools have discussed the “hidden” costs of dealing with faculty problems, such as reduced patient satisfaction when clinical faculty lack effective communication skills, and the need to bring in consultants to resolve poorly managed conflicts and unresolved issues.5 A recent study in corporate America has proposed the concept of “presenteeism”: the costs related to employees who are present at work but whose productivity suffers because they are no longer intellectually and emotionally engaged.6 Medical schools are certainly not exempt from this problem. However, while institutions invest heavily to attract the best faculty members to their schools, there is no evidence that they pay equal attention to keeping these faculty members satisfied and productive; that is, they do not protect their “sunk” costs.4,7 Therefore, administrators and leaders in academic medicine need to pay increased attention to managing faculty retention and development.

The successful management of faculty development for teaching, as well as of recognition and rewards for teaching, is of particular concern to medical schools and faculty development offices. The availability and well-being of talented teaching faculty are being challenged by a number of factors, and the problem is especially critical in research-intensive medical schools.8,9 Among these factors is a projected shortage of faculty. Bland and Bergquist suggest that there will not be enough qualified young investigators in the pipeline to supply personnel for the current changes in medical research and education until the year 2007 or later.10 The need for faculty development is exacerbated by the demand for educational talent and time during extensive curricular reform.11 These pressures have led to increased interest in establishing centralized offices of faculty affairs and development.12


Background

Some offices of faculty affairs and development have attempted to establish outcomes measures to show the value of a school’s faculty investment in this era of reduced resources and increased accountability; however, these centralized functions often lack appropriate resources and integration in medical schools, and thus can be vulnerable to budget decreases.4,13 For example, fewer than half of medical schools have any centralized office that deals with faculty development.12 Of these, most have been established recently, and few offer comprehensive professional development throughout the academic lifespan.4,14 A survey of these offices indicated that they lack experience in evaluating success factors and have limited ability to demonstrate their value.12 Beyond reporting faculty satisfaction and numbers of people served, few offices have developed measures of effectiveness. Most institutions have found it difficult to show improved performance on indicators that stakeholders value, in part because of the difficulty of aligning disparate stakeholders’ views in order to identify leverage points for systematically enhancing the performance of an institution.15

Tools such as the balanced scorecard,16 the logic model,17 and mission-based management,18 as described in the following paragraphs, can greatly enhance the participatory role of disparate stakeholders in reaching consensus on the desired outcomes for offices of faculty affairs and development, and thus enhance the usefulness of evaluation as a management and organizational learning mechanism. While these tools vary, they share essential common elements that address questions such as: What is the institutional priority (e.g., is there evidence of a gap in faculty or student performance)? What programs or services might effectively address that priority? What would a successful faculty development program look like, and what would the benefits of the activity or intervention be? What are the indicators or metrics of success, and who will collect them? The metrics should focus on particular measures of faculty success, in language that brings clarity to vague concepts. For many in higher education, understanding, documenting, measuring, and managing processes is difficult.19 Process-based approaches such as the balanced scorecard and the logic model facilitate acquisition of the information necessary to make the tough decisions and trade-offs required in today’s environment. The usefulness of these models lies in clarifying the desired outcomes and identifying the exact measurements or indicators necessary to provide concrete evidence. Such evaluation will:

* encourage a clear, consistent, and shared view of strategy at different levels of the organization;

* promote organizational learning about what activities, processes, and behaviors make a difference in the achievement of strategic objectives;

* communicate organizational values; and

* be a basis for feedback and adjustment of organizational behaviors.20

The balanced scorecard methodology has been particularly useful in broadening the assessment of organizational effectiveness and excellence beyond the fiscal dimension, by measuring the performance of an office or department from four perspectives: financial, customer satisfaction, internal business processes, and learning and growth.16 The theory suggests that improved financial performance is a natural outcome of balancing these other important goals.21 The number of institutions of higher education and medical centers using the balanced scorecard model increases yearly.22–25 The adoption of well-delineated outcomes measures may be especially attractive to medical school faculty, who are trained to use evidence-based approaches to solve problems. The goal of the scorecard is to link each department’s or function’s activities to institutional priorities.26,27 In academic health centers, some larger clinical units have implemented the model.28–30 To our knowledge, however, there has been little attention to testing the usefulness of this approach for assessing educational and service entities such as faculty affairs offices.

The program logic model is a measurement approach widely used by service and educational programs.17,31 Both the Kellogg Foundation and the United Way have made this model the cornerstone of program evaluation for their initiatives; it appears to be particularly useful for service-oriented initiatives such as educational or social services. The model links outcomes with program activities or processes and with the theoretical assumptions or principles of an institution’s initiative. It facilitates clear thinking and responsible program management, helping administrators keep a balanced perspective on both the big picture and the component parts (see Table 1). In medical schools, use of the logic model would encourage faculty development designers to move beyond measuring activities (e.g., problem-based learning tutor workshops) and outputs (e.g., the number of faculty attending these workshops) to measuring outcomes (e.g., the number of faculty who are skilled in and who use problem-based learning tutoring, as measured by student and peer ratings).
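To make these distinctions concrete, the chain from activity to indicators can also be written down as a simple structured record. The following Python sketch is purely illustrative; the class and field names (LogicModelEntry, activity, outputs, outcomes, indicators) are hypothetical conveniences rather than terms prescribed by the logic model literature, and the sketch simply encodes the problem-based learning example above.

from dataclasses import dataclass
from typing import List

# Purely illustrative sketch of one "row" of a logic model.
# The model prescribes only the conceptual chain
# activity -> outputs -> outcomes -> indicators; the names here are ours.
@dataclass
class LogicModelEntry:
    activity: str           # what the office does
    outputs: List[str]      # direct, countable products of the activity
    outcomes: List[str]     # benefits the activity is intended to produce
    indicators: List[str]   # measurable evidence that the outcomes occurred

pbl_tutor_workshops = LogicModelEntry(
    activity="Problem-based learning (PBL) tutor workshops",
    outputs=[
        "Number of workshops offered",
        "Number of faculty attending",
    ],
    outcomes=[
        "Faculty are skilled in, and use, PBL tutoring",
    ],
    indicators=[
        "Student ratings of tutors",
        "Peer ratings of tutors",
    ],
)

In this framing, an evaluation plan is simply a set of such entries whose indicators the school commits to collecting; the discipline lies in not stopping at the outputs.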


Activity-based costing, or mission-based management,18 in which the costs of individual teaching and learning activities are determined, can be combined with assessment processes such as the logic model to compare the cost of a process with its quality. Massy predicts that the next frontier in quality improvement will be linking assessment results to budget decisions and to the reward system for faculty and staff.25 Doing so enables discussions about costs versus benefits and provides baseline information and measures for improving faculty performance.

These broad theories and approaches may be applied specifically to the evaluation of faculty development. We recommend using the logic model to measure the effectiveness of faculty affairs and development activities in enhancing faculty recruitment, development, retention, recognition, and reward for medical education.


Assessment

We chose to assess faculty development because of its centrality to the mission of medical schools. Our reasons for selecting the logic model were twofold: its wide use in evaluating service-oriented programs, and our prior experience with this approach in designing, implementing, and evaluating curricular change projects and leadership development programs.32

We attended the 2004 Association of American Medical Colleges (AAMC) Faculty Affairs Professional Development Conference and discussed faculty development assessment with other medical educators. In a workshop devoted to this topic, conference attendees affirmed the importance of demonstrating the effectiveness of faculty affairs offices and functions. The logic model was discussed as one tool for measuring the effectiveness of faculty affairs and development offices in achieving the goals of the medical college. We then convened, several times via conference call, a subgroup of conference participants representing eight medical schools to discuss and refine the logic model for faculty affairs offices and to identify indicators for appropriately recognizing and rewarding teaching.

This discussion was based on an original template for a logic model that one of the authors (PSM) had used successfully in evaluating medical education projects.32 Drawing on Barbara Butterfield’s work at the University of Michigan,33 we modified it into an enhanced template for reviewing the outcomes of a profession’s or department’s contributions to a strategic plan. The modified template provides a method to link an institution’s mission statement and strategic plan to the activities and programs of its faculty affairs and development office, thus increasing the visibility and credibility of these offices as key assets of the institution.


Recommendations

Medical schools must maintain a cadre of skilled educators in order to fulfill their educational mission. This mission requires faculty with the desire to teach, the skills to teach, and rewards and resources for teaching. The model template shown in Figure 1 represents a menu of activities, outputs, and outcomes (benefits), with associated indicators, that medical schools could use to measure the effectiveness of a faculty affairs and development office in improving the recruitment, development, retention, recognition, and reward of teaching faculty. This menu should be viewed as representative rather than exhaustive, because a given medical school could identify additional items. Each medical school should select the few critical items viewed as most important in its own context. Use of the logic model facilitates thinking through the entire faculty development initiative so that the leadership and other stakeholders can reach consensus about the specific beneficial outcomes desired and how to measure them.


Figure 1 demonstrates the importance of distinguishing among activities, outputs, and desired outcomes (with associated metrics). For example, many faculty affairs and development offices administer faculty development programs (an activity) to improve the teaching skills of their faculty. While it is common to enumerate the number of sessions offered and the number of faculty who attend them (outputs) as evidence of the productivity of faculty affairs and development offices, these activities do not necessarily achieve the desired outcome of improved teaching skills. Indicators that the desired outcome (improved teaching skills) is being achieved could include student and faculty evaluations of teaching, students’ achievement on tests, faculty satisfaction with their teaching role, and faculty evaluations of the usefulness of the faculty development program in improving their teaching. Such indicators would document quantitatively the benefits of the faculty development unit’s activities to the faculty and students, as well as to the medical school.

The logic model, through explicit discussion of the spectrum of possible outcomes, facilitates reaching consensus about which of these indicators are important for a particular medical school. For example, one medical school might conclude that faculty satisfaction with the teaching role is a critical desired outcome (in order to maintain a stable group of teaching faculty), while another might conclude instead that improving student achievement on tests is the key desired outcome (in order to attract more students). A variety of possible inputs, activities, outputs, outcomes, and indicators that might be considered are detailed in List 1.


A second common example of faculty development facilitated by faculty affairs offices is the establishment of clinician-educator tracks to reward teaching excellence through promotion. However, it does not necessarily follow that this activity produces the desired outcome. Outputs could include the number of faculty appointed to and promoted on this track. Outcome indicators could include the number of faculty who achieve senior ranks in this track (thereby shifting the organizational climate toward greater valuation of teaching faculty), the number of faculty who transfer into the clinician-educator track, faculty satisfaction with the promotion criteria for this track, and faculty satisfaction with their teaching role. Which of these outcomes is most important would likely vary from one medical school to another.
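Under the same illustrative encoding sketched earlier (reusing the hypothetical LogicModelEntry class defined above), this second example might be recorded as follows; which indicators a school actually collects would depend on the outcomes it judges most important.

# Continues the illustrative sketch above; all names remain hypothetical.
clinician_educator_track = LogicModelEntry(
    activity="Clinician-educator promotion track",
    outputs=[
        "Number of faculty appointed to the track",
        "Number of faculty promoted on the track",
    ],
    outcomes=[
        "Organizational climate places greater value on teaching faculty",
        "Teaching faculty are satisfied and retained",
    ],
    indicators=[
        "Number of faculty achieving senior ranks in the track",
        "Number of faculty transferring into the track",
        "Faculty satisfaction with the track's promotion criteria",
        "Faculty satisfaction with their teaching role",
    ],
)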


Discussion and Lessons Learned

The logic model we present here for measuring faculty development’s role in improving teaching can be used in several ways. First, a single school can use the template as a starting point for discussing and identifying particular outcomes and associated measures regarding the faculty teaching mission (or other missions of the faculty affairs office) that are most important to that particular school. New offices of faculty affairs and faculty development could use this model to guide their initial program planning. Second, several schools could use the model in collaboration by completing the process, sharing the matrix, and comparing the common and unique outcomes and indicators identified by each school. Third, practicality and efficacy in implementation of the measurement systems could be compared across medical schools. Finally, this model can be adapted by any administrative unit of a medical school (e.g., student affairs, admissions) that desires to link outcomes with the school’s mission and strategic plan.

Faculty affairs and development officers often ask whether the efficacy of faculty development can be demonstrated without the tedious, time-consuming, and expensive work of internal evaluation, for example by extrapolating evidence of effectiveness from profit-driven organizations, which stress staff development.34 Unfortunately, in a budget crunch, internally derived, empirical, quantitative evidence is necessary to persuade stakeholders to continue the investment in centralized functions such as faculty affairs and development.

Our discussions with attendees at the AAMC Faculty Affairs Group led us to the following conclusions about implementing any type of program evaluation measurement process. First, a time investment is required to learn about evaluation processes and to decide which might be most useful in a given situation. Considerable dialogue was required to reach a common understanding of the important differences among activities, outputs, and outcomes. Time is also needed for stakeholders to move beyond measuring activities and outputs and to reach clarity about the desired outcomes. Second, faculty affairs and development practitioners must make program evaluation a top priority. The tendency under resource constraints is to decide that we cannot dedicate time to program evaluation processes, which often require long-term effort and resources before end results can be measured. Program evaluation, however, is essential if sustainability and critical involvement in institutional priorities for faculty development are to be maintained through budget crunches and changes in medical school leadership. Finally, greater clarity in metrics is possible, and it is fostered by collaboration. We predict that as more institutions join these types of efforts, additional, richer metrics will emerge. The challenge is to develop metrics that are less tedious, time-consuming, and expensive to collect and analyze.

The development of practical, common tools such as the logic model can facilitate the sharing of information and measurements among our institutions. By defining outcomes that are strategic and clearly measurable, this process can enhance the visibility and credibility of faculty affairs and development offices and demonstrate their value to the institutional mission, thereby promoting their sustainability amid economic uncertainty.


References

1 Whitcomb ME. The medical school’s faculty is its most important asset. Acad Med. 2003;78:117–18.

2 Waldman JD, Kelly F, Arora S, Smith HL. The shocking costs of turnover in health care. Health Care Manage Rev. 2004;29(1):2–7.

3 Wingard DL, Garman KA, Reznik V. Facilitating faculty success: outcomes and cost benefit of the UCSD National Center of Leadership in Academic Medicine. Acad Med. 2004;79(10 suppl):S9–S11.

4 Grigsby RK. Why do faculty development? 16 December 2004. Personal communication; response on AAMC Faculty Affairs Listserv.

5 Apted J. Why do faculty development? 16 December 2004. Personal communication; response on AAMC Faculty Affairs Listserv.

6 Hemp P. Presenteeism: At Work - But Out of It. Harv Bus Rev. 2004;82(10):49–58.

7 Wenger DCK. Conducting a cost-benefit analysis of faculty development programs: its time has come. Acad Phys Sci. 2003;1(May/June):5–7.

8 Thibault GE, Neill JM, Lowenstein DH. The Academy at Harvard Medical School: nurturing teaching and stimulating innovation. Acad Med. 2003;78:673–81.

9 Aron DC, Aucott JN, Papp KK. Teaching awards and reduced departmental longevity: kiss of death or kiss goodbye? What happens to excellent clinical teachers in a research intensive medical school? Med Educ Online. 2000;5(3).

10 Bland CJ, Bergquist B. The vitality of senior faculty members. ASHE-ERIC Higher Education Report. 1997;7(7).

11 Cooke M, Irby DM, Debas HT. The UCSF Academy of Medical Educators. Acad Med. 2003;78:666–72.

12 Morahan PS, Gold JS, Bickel J. Status of faculty affairs and faculty development offices in U.S. medical schools. Acad Med. 2002;77:398–401.

13 Gruppen LD, Frohna AZ, Anderson RM, Lowe KD. Faculty development for educational leadership and scholarship. Acad Med. 2003;78:137–41.

14 Nieman LZ, Donoghue GD, Ross LL, Morahan PS. Implementing a comprehensive approach to managing faculty roles, rewards, and development in an era of change. Acad Med. 1997;72:496–504.

15 Seymour D. Once Upon a Campus: Lessons for Improving Quality and Productivity in Higher Education. Phoenix: Oryx Press, 1995.

16 Kaplan RS, Norton DP. The Balanced Scorecard: Translating Strategy into Action. Boston: Harvard Business School Press, 1996.

17 W. K. Kellogg Foundation. Logic Model for Program Evaluation. Battle Creek, MI: W. K. Kellogg Foundation 〈http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf〉. Accessed 9 December 2005.

18 D’Alessandri RM, Albertsen P, Atkinson BF, et al. Measuring contributions to the clinical mission of medical schools and teaching hospitals. Acad Med. 2000;75:1231–37.

19 West Engelkemeyer S. Resources for Managing Our Institutions in These Turbulent Times. Change. 2004;53(January/February):56.

20 Becker BE, Huselid MA, Ulrich D. The HR Scorecard: Linking People, Strategy, and Performance. Boston: Harvard Business School Press, 2001.

21 Stewart AC, Carpenter-Hubin J. The balanced scorecard: beyond reports and rankings. Plan Higher Educ. 2000(Winter):37–42.

22 Kaplan RS, Norton DP. The Strategy-Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment. Boston: Harvard Business School Press, 2001.

23 Carpenter-Hubin J, Hornsby EE. Making Measurement Meaningful. Association for Institutional Research Annual Forum, 2003.

24 Shapiro LT, Nunez WJ. Strategic planning synergy. Plan Higher Educ. 2001;30:27–34.

25 Massy WF. Honoring the Trust: Quality and Cost Containment in Higher Education. Bolton, MA: Anker Publishing, 2003.

26 Rimar S, Morahan PS, Richman RC. The balanced scorecard: strategy and performance for academic health centers. In: Proceedings of the 2000 Forum on Emerging Issues, 2000.

27 Inamdar SN, Kaplan RS, Jones MLH, Menitoff R. The balanced scorecard: a strategic management system for multi-sector collaboration and strategy implementation. Qual Manag Health Care. 2000;8:21–39.

28 Jones ML, Filip SJ. Implementation and outcomes of a balanced scorecard model in women’s services in an academic health care institution. Qual Manag Health Care. 2000;8:40–51.

29 Rimar S, Garstka SJ. The “Balanced Scorecard”: development and implementation in an academic clinical department. Acad Med. 1999;74:114–22.

30 Rimar S. Strategic planning and the balanced scorecard for faculty practice plans. Acad Med. 2000;75:1186–88.

31 United Way of America. Measuring Program Outcomes: A Practical Approach. United Way of America, 1996 (supported by grants from the Ewing Marion Kauffman and W. K. Kellogg Foundations).

32 Norcini J, Burdick W, Morahan P. The FAIMER Institute: creating international networks of medical educators. Med Teach. 2005;27:214–18.

33 Butterfield B. Measuring human resources contributions in higher education is crucial for organizational success 〈http://www.cupahr.org/newsroom/news_archive.asp〉. CUPA-HR News Online. November 2005;32(11).

34 Doyle LL. Why do faculty development? 2004. Personal communication; response on AAMC Faculty Affairs Listserv.


© 2006 Association of American Medical Colleges
