Fahey, Patrick MD; Cruz-Huffmaster, Donabelle MHA; Blincoe, Thomas MBA; Welter, Chris; Welker, Mary Jo MD
The past two decades have witnessed substantial growth in ambulatory health care services owned and delivered by academic medical centers (i.e., teaching hospitals). With a need to keep inpatient beds filled and the concomitant trend of managed care, hospital administrators have built or purchased what have been called primary care networks. Administrators of academic medical centers have also felt the need for such networks to enhance their institutions’ teaching opportunities in the ambulatory setting, particularly in primary care. For example, most medical schools now have a required clerkship in family medicine. The past several years have witnessed the dismantling of many of these networks because of losses incurred at many network sites, as evidenced by the “Allegheny Bankruptcy.”1 Reasons for the losses included overbuilding, paying physicians fixed salaries rather than compensating them according to productivity, declining reimbursement, outsourcing of formerly profitable laboratory work, and more.
Several other important factors lay behind the development of primary care networks, in addition to the need for ambulatory teaching sites and for referrals to the tertiary care hospital(s) at the medical center. To be sure, the relative importance of these factors has varied considerably across medical centers. They include the need to serve a specified geographic area, such as an underserved rural area or an impoverished community adjacent to the medical center. Many institutions bought the practices of experienced primary care physicians in their service areas; these physicians’ finances were being battered by higher overhead, and payor reimbursement was simply not keeping up. This was occurring just as their aging patients needed a medical center more than ever. Another factor in some settings was the increased importance of health services research: ownership of such practices can foster community-based research, given the larger amounts of data a network can provide compared with a single practice.
Many administrators have continued to operate such networks under the justification of what has come to be called downstream revenue. This term refers to revenue captured by the hospital when patients from these ambulatory sites use any of the hospital’s facilities, including radiology, cardiology imaging, inpatient settings, the emergency department, and others. (So diverse has hospitals’ ownership of facilities become that many hospitals now call themselves “health systems,” “medical centers,” or the like.) Given the large fixed costs of operating a health system, picking up additional revenue is usually financially beneficial. Though some may argue that all additional revenues flow straight to the bottom line, clearly most revenues generated enter the system before the break-even point. Every physician specialty group in a system might claim that its particular contribution is the one on the margin and is thus more valuable to the system.
Academic medical centers are health systems historically staffed by nonprimary care specialists. They typically adapted to the managed care/health maintenance organization era by quickly establishing primary care networks. Economic challenges for academic medical centers include the teaching mission, sicker patients, inefficient practice styles, and more.2 These challenges have forced academic medical centers to evaluate their financial activities more closely. As an example, Cleverly describes a financial dashboard as one method to quickly and effectively capture key financial indicators critical to the overall performance of an organization.3 With this heightened focus on monitoring financial performance, financial data for primary care networks beyond merely the dollars generated on-site became essential. Our study, in some respects, examined the financial dashboard concept in ambulatory care.
The literature contains little data on downstream revenue. One of the first studies involved a single family practice center at the University of Washington in the late 1980s. Schneeweiss and colleagues4 used the term multiplier effect in reporting that, for every dollar billed by a primary care practitioner, $6.40 flows downstream to a sponsoring institution and its specialty consultants. The assumption that all referrals go to one institution obviously has less validity in a large metropolitan area with multiple health care systems and a primary care network with far-flung sites than for an office relatively close to a hospital without nearby competition. One concern worth noting comes from Green and Fryer, who commented that if family practice as a specialty tries to justify itself by downstream revenue, it becomes subject to minimization by others who require a solid business plan with revenues exceeding expenses.5
The Ohio State University Medical Center (OSUMC) in Columbus has mirrored national trends in many respects. Besides traditional training programs in primary care residencies such as general internal medicine and family medicine, the institution made some large leaps into the ambulatory arena. In 1986, it purchased seven local and regional urgent care centers; after many years, a strategy was developed to gradually convert these to primary care offices, with some consolidation along the way. A second large leap occurred about 10 years later with the building of a family practice network of eight offices (various openings and closings occurred over time, as expected) and then a smaller number of offices for general internists. This second leap was actually bigger than the first, since the numbers of physicians and exam rooms at these sites were, in most cases, larger than at the urgent care centers. Not unexpectedly, given national trends seen earlier in other cities, fiscal losses occurred at these sites, even when they were “fully engaged” with good collection rates. It was decided to further evaluate how much downstream revenue was flowing into the institution.
As the 2003-04 academic year began, the OSUMC Primary Care Network (hereafter, “the Network”) consisted of 11 family practice centers, 5 general internal medicine offices, 2 occupational medicine offices, and 1 sports medicine/family practice site. These 19 sites are all in Franklin County, Ohio (consisting mostly of Columbus and its suburbs). The 76 physicians employed or contracted (part time and full time), plus 104 resident physicians, handle about 250,000 outpatient visits annually. (This visit number excludes the 43,000 visits to the sports medicine/family practice site, since its downstream data were not available, as noted later in the description of our research methodology.) Prior years have shown fiscal losses in the millions. Such losses are large, but Walker points out that losses of $30,000 to $100,000 per provider are not uncommon for hospital-employed primary care physicians.6 Some reasons for this were cited earlier; in fiscal terms, one reason is that net revenues were $100,000 greater in physician-owned practices than in hospital-owned practices, according to 1998 data from the Medical Group Management Association.7
The institutional review board of our medical center determined that this research did not require review, noting that, because our data were de-identified, the study did not involve human subjects. No designated funding supported this project.
The substantial operational losses incurred by the Network prompted its leadership to initiate the downstream revenue project. Two of us (TB, MJW), representing the Network leadership, worked in collaboration with OSUMC’s Planning and Business Development group to design and develop a model for evaluating how much downstream revenue was flowing into OSUMC and its specialist physician groups. Central to the successful development of the model was access to OSUMC’s Information Warehouse. The Planning and Business Development group (including DC-H and CW) worked with the Information Warehouse group to capture and analyze the data. This data warehouse contains a large volume of detailed and aggregated health care and financial patient data, enabling us to expand our downstream revenue analysis to include not only gross revenue (billed charges) but also net revenue (net collections or payments) and direct contribution margin (net revenue less direct costs). The direct contribution margin referred to in this analysis is the amount available to pay for the indirect costs (i.e., the general administrative or overhead cost allocation from OSUMC) and thus to provide any profit after the direct costs have also been paid. Revenues generated outside OSUMC’s billing systems are obviously not captured by our methodology; this is acceptable, given the goal of determining the contribution of the Network to the sponsoring academic medical center.
The downstream revenue model captures five revenue streams coming from the Network: stream I reflects revenues from Network-physician-attended inpatient admissions; stream II reflects revenues from Network-physician-ordered outpatient tests/procedures; streams III and IV reflect revenues from specialist-ordered outpatient tests and procedures and specialist-attended inpatient admissions, respectively, determined to be a result of previous referrals from Network physicians; and stream V reflects specialist professional fees generated as a result of activities from streams III and IV. The revenues of the first four streams go to the medical center, while revenues of the fifth stream go to the specialists. The section below provides a more detailed description of each revenue stream.
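The five-stream taxonomy above can be restated schematically in a small lookup structure; the sketch below is only an illustration of the classification, and its field names are ours, not part of the model as implemented at OSUMC.

```python
# Schematic restatement of the five downstream revenue streams described in
# the text. Keys and field names are illustrative, not an actual OSUMC schema.
STREAMS = {
    "I":   {"source": "Network-physician-attended inpatient admissions",
            "recipient": "medical center"},
    "II":  {"source": "Network-physician-ordered outpatient tests/procedures",
            "recipient": "medical center"},
    "III": {"source": "specialist-ordered outpatient tests/procedures "
                      "traced to a prior Network referral",
            "recipient": "medical center"},
    "IV":  {"source": "specialist-attended inpatient admissions "
                      "traced to a prior Network referral",
            "recipient": "medical center"},
    "V":   {"source": "specialist professional fees arising from the "
                      "activities in streams III and IV",
            "recipient": "specialists"},
}

# Streams I-IV accrue to the medical center; stream V accrues to specialists.
medical_center_streams = [k for k, v in STREAMS.items()
                          if v["recipient"] == "medical center"]
```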
Time frame and data collection
The time frame for calculating the downstream revenue was 12 months, from July 1, 2003, through June 30, 2004, recognizing that some tests and procedures ordered would not be completed, and some patients would not be discharged, until the next fiscal year. But this factor is balanced out by tests and procedures done early in the year but ordered the prior year, or by patients discharged early in the year but admitted the prior year. Sports medicine division data were excluded, as such data were not available.
For streams I through IV, the OSUMC Information Warehouse was queried for patient encounters that satisfied the following main criterion: the admission or the outpatient test or procedure must have occurred at an OSUMC-owned hospital or laboratory during the study year. In addition to the Information Warehouse as a source, data were also collected from our University Reference Laboratories to capture additional patient encounters that were generated at OSUMC-owned outpatient laboratories and that qualify for stream II. For stream V, the Specialist Professional Fees, the data were queried from a different database that pulls billing data from OSUMC’s faculty practice plans.
In order to classify the patient encounters into the different revenue streams, a series of data conditions and algorithms was applied. This is discussed further in the succeeding sections.
Stream I and stream II
An encounter was considered to be in streams I or II if the care was associated directly with a Network physician. This means that a Network physician must have either discharged the patient (stream I) or have ordered the outpatient test/procedure (stream II).
Stream III and stream IV
To capture streams III and IV, we first had to establish that the specialist-associated patient encounter was the result of a previous Network referral. Ideally, at any registration or admission, a patient would be asked for his or her referring physician and primary care physician, and this information would enable straightforward tracking of downstream revenue. However, several issues prevented this tracking from being fully implemented. For example, when asked at registration, patients could not always correctly identify their referring physicians, prompting registrars often to simply skip the information altogether. Consequently, correctly qualifying an encounter as downstream presented several challenges. We therefore found it necessary to develop a proxy system to identify previous Network referrals for each patient based on the following: (1) whether the patient had prior clinic visits with a Network physician, (2) the length of time between the most recent primary care visit and the succeeding specialist encounter, and (3) the relationship between the primary care physician and the patient.
To look for the related primary care visits, we collected Network visit data from the Information Warehouse between July 1, 2002 and June 30, 2004. Notice that we included the 12 months prior to the study period in searching for related Network visits. This recognizes that it may take time to schedule a specialist referral and that a patient, after being referred by a primary care physician to a specialist, very often has several specialist visits prior to a hospitalization.
Next, these data were matched with the patient encounters taken from the OSUMC Information Warehouse using the patient’s medical record number. The medical record number matching guaranteed that the patient had been seen in the Network prior to the specialist-associated encounter; however, it could still be argued that an encounter might have occurred without the Network, especially if the preceding Network visit occurred too far in the past. For example, a patient may see his or her primary care physician for allergy concerns and three months later go to a cardiologist for minor chest pains. This could be the result of a phone call and referral from the primary care physician, or the patient could have initiated the visit to the cardiologist. To address this uncertainty, a conservative system of weighting was developed and applied to the revenues and expenses. The weights reflect the probability that the specialist-associated patient encounter was generated by a Network referral.
The first system of weighting applied was the time weight. The concept was built on the assumption that if the time between the encounter and the most recent Network visit was relatively close, the likelihood that the encounter is related to that visit is higher. The time weights used for streams III and IV are shown in Table 1. Note that the time intervals are lengthened to one year in stream IV to allow for multiple specialist visits to occur following a primary care referral and prior to an admission by the specialist. For example, let us determine how much of the revenue will be considered downstream for an encounter in which the time difference between the most recent Network visit and the specialist-ordered outpatient test or procedure is 25 days. Since the time difference is within 90 days, we are to apply a time weight of 1.00 to the revenue, effectively crediting the Network for its full weight (downstream revenue = OSUMC revenue × 1.00). However, if the difference is between 91 and 180 days, only half of the revenue will be considered downstream (downstream revenue = OSUMC revenue × 0.50). Here we assume that there is a 50% chance the Network was not responsible for the specialist visit. For anything beyond 180 days, the revenue will be excluded.
In addition to the time weight, a relational weight is also applied to stream IV to further adjust the revenue conservatively. This step accounts for and quantifies the relationship between the Network physician and the patient: the encounter receives full credit as related to a preceding Network visit only if the patient was seen by a Network physician more than once prior to the admission. Table 2 shows the relational weights assigned for stream IV. For example, suppose a patient was admitted 90 days after his or her last Network visit and had been seen only once by a Network physician prior to the admission. Based on Table 1, we apply a time weight of 1.00. In addition, since the patient was seen only once, we apply a relational weight of 0.50, as shown in Table 2. Thus, only half of the revenue will be considered downstream (downstream revenue = OSUMC revenue × 1.00 × 0.50). Had the same patient been seen more than once, the Network would be given full credit for the revenue (downstream revenue = OSUMC revenue × 1.00 × 1.00).
Recall the earlier example in which a primary care physician sees a patient for allergies and the patient then sees a cardiologist for chest pains three months later (more specifically, 95 days later, when some kind of test or procedure was done). The revenue from this procedure would receive a weighting of 0.50 (Table 1), since roughly half the time such revenue is the result of a referral from a primary care physician. If this patient were admitted the following week, 103 days after the primary care visit, the inpatient revenue would be given a time weight of 0.75 (Table 1). If this patient had seen the primary care physician only once in the prior year, this revenue would then receive a relational weight of 0.50 (Table 2), bringing the total weight for the inpatient encounter to 0.375 (0.50 × 0.75) of the total OSUMC revenue.
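The two-part weighting logic can be sketched as a short function. The breakpoints below reproduce only the values explicitly stated in the text and its worked examples; the full Tables 1 and 2 are not reproduced here, so the stream IV band beyond 180 days (0.50 out to one year) is an illustrative assumption.

```python
# Sketch of the weighting applied to specialist-associated encounters
# (streams III and IV). Breakpoints reflect the values stated in the text;
# the stream IV 181-365 day weight of 0.50 is an assumption, not Table 1.

def time_weight(stream: str, days_since_network_visit: int) -> float:
    """Probability weight based on time since the most recent Network visit."""
    d = days_since_network_visit
    if stream == "III":        # specialist-ordered outpatient tests/procedures
        if d <= 90:
            return 1.00
        if d <= 180:
            return 0.50
        return 0.0             # beyond 180 days: excluded
    if stream == "IV":         # specialist-attended inpatient admissions
        if d <= 90:
            return 1.00
        if d <= 180:
            return 0.75        # matches the 103-day example in the text
        if d <= 365:           # intervals lengthened to one year for stream IV
            return 0.50        # assumed value for this band
        return 0.0
    raise ValueError("weighting applies only to streams III and IV")

def relational_weight(stream: str, prior_network_visits: int) -> float:
    """Stream IV only: discount if the patient saw a Network physician once."""
    if stream == "IV" and prior_network_visits <= 1:
        return 0.50
    return 1.00

def downstream_credit(revenue: float, stream: str, days: int,
                      prior_visits: int) -> float:
    """Revenue credited to the Network after both weights are applied."""
    return (revenue * time_weight(stream, days)
            * relational_weight(stream, prior_visits))
```

For the worked examples above, an outpatient test 95 days after the Network visit is credited at 0.50, and an admission 103 days after, for a patient seen only once, is credited at 0.75 × 0.50 = 0.375.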
The data for stream V were processed in a fashion very similar to those of stream III. Medical record numbers were matched between the Network and the specialist’s billing records. After this filter was applied, the weighting system applied in stream III helped account for the uncertainty in the relationship between the specialist’s fees and the preceding Network visit. The specialist’s professional fees include both inpatient and outpatient gross revenue (billed charges); net revenues were derived by estimates based on the adjusted collection rate.
A quality assurance approach to validate downstream revenues was used, tracking the top encounters by revenue in each hospital and each revenue stream against key variables such as length of stay, discharge diagnosis, and primary procedure. The assumption was that these variables should account for a substantial amount of the revenue. If they did not, then these revenues were excluded from the study.
The results of the downstream analysis are shown in Table 3. The brief financial highlights for the Network itself in the year of our study include (1) net (operating) revenue of $18.9 million, (2) operating expense of $27.2 million, and (3) the difference, an operating loss of $8.3 million. The Network highlights do not include financial data from the sports medicine division, as noted earlier, and do not include the OSUMC overhead assigned to the Network. Several items merit mention. The downstream direct contribution margin of $14 million from the Network-attended admissions and Network-ordered tests alone (streams I and II) is nearly twice the $8.3 million operating loss in the Network. Quite significant is the roughly $64 million in gross revenue from the outpatient testing and procedures ordered by the specialists. The combined inpatient hospital gross revenue in streams I and IV is about $157 million. The downstream net revenue ($114.7 million) in all components flowing to the medical center (streams I to IV) is 6.1 times the net revenue ($18.9 million) in the Network. The direct contribution margin of $51.9 million far outweighed the $8.3 million operating loss in the Network.
When the downstream specialist professional fees (stream V) were added to the medical center revenues, the downstream gross revenues exceeded $300 million. To further illustrate the Network’s contribution to OSUMC, we measured how much OSUMC gained downstream for each $1.00 invested in the Network (Figure 1). The key result is that OSUMC gained $6.30 in downstream contribution margin for each $1.00 of Network investment.
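The two multiples just cited follow directly from the dollar figures reported above; the brief sketch below assumes nothing beyond those stated amounts.

```python
# Reproducing the reported multiples from the figures stated in the text.
# All amounts are in millions of dollars.

network_net_revenue = 18.9            # Network net (operating) revenue
network_operating_loss = 8.3          # the "investment" in the Network
downstream_net_revenue = 114.7        # streams I-IV, to the medical center
downstream_contribution_margin = 51.9  # streams I-IV, net revenue less direct costs

# Downstream net revenue relative to Network net revenue
net_revenue_multiple = round(downstream_net_revenue / network_net_revenue, 1)

# Downstream contribution margin per $1.00 of Network operating loss
investment_multiple = round(downstream_contribution_margin
                            / network_operating_loss, 1)

print(net_revenue_multiple)   # 6.1
print(investment_multiple)    # 6.3
```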
This study demonstrated the large contribution a large primary care network makes to the revenue stream of an academic medical center. Over time, it appears, the multiple inherent in the multiplier effect described earlier persists. Even excluding all the revenue generated by specialist referrals and care in streams III, IV, and V, the downstream direct contribution margin generated by the Network alone outweighed the Network operating loss by nearly $6 million.
One may look at the data from another perspective. Given approximately 250,000 patient visits annually, the overall downstream net revenue across all five streams is roughly $550 per office visit. More narrowly, assessing only the flow to the medical center (excluding the specialist fees in stream V), the downstream gross revenues (gross charges) are about $1,000 per office visit, the net revenues about $460 per office visit, and the direct contribution margin about $200 per office visit. As noted in Table 3, the direct contribution margin is net revenue less direct costs. To establish downstream profit or loss, the various indirect expenses incurred by the medical center would have to be applied against the direct contribution margin.
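The per-visit figures for the medical center flows (streams I to IV) follow from the annual visit count and the amounts reported earlier; the sketch below uses only those stated numbers.

```python
# Per-visit downstream figures for streams I-IV (medical center flows only),
# using the annual visit count and dollar amounts stated in the text.

visits = 250_000                          # annual Network outpatient visits
downstream_net_revenue = 114.7e6          # streams I-IV net revenue
downstream_contribution_margin = 51.9e6   # streams I-IV contribution margin

net_per_visit = round(downstream_net_revenue / visits)        # ~$460 per visit
margin_per_visit = round(downstream_contribution_margin / visits)  # ~$200 per visit
```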
The magnitude of the multiple is similar to that associated with the older data cited previously. Variations in the multiple may arise from several sources. For example, a family practice with many pediatric patients would logically refer fewer patients to an adult-focused academic medical center than would a family practice or internal medicine practice with an older population. Another reason for a difference in the magnitude of the multiple is that sites in one study may perform more on-site tests (lab, X-ray, even stress testing) than can be done at the primary care sites in another study. Still another consideration is that there may be regional differences in reimbursement of hospital-based tests and procedures versus primary care office expenses.
A more recent evaluation of institutional revenue generated by a primary care network in an academic medical center was reported by Saultz et al.8 Charges from 56,459 patients from 10 clinics were evaluated over a six-month period. Primary care charges were $7,243,312; specialty faculty charges were $8,825,611; and hospital billing system charges from nonprimary care services totaled $43,559,741. The latter figure, approximately six times the primary care charges, is very similar to that from the Washington study. Charges outside the university could not be captured because the data source was the university’s billing system. All of these clinics involved faculty members who were teaching students and residents, representing approximately 30 full-time equivalent physicians in four primary care specialties (including obstetrics–gynecology).
This is, to our knowledge, the first study to use the concept of weighting downstream revenues. Without a reliable, accurate system for tracking both primary care and referring physicians throughout the medical center, the weighting concept serves as a proxy. This proxy conservatively estimates the probability that downstream revenue was generated by a primary care physician. Although the proxy estimates were developed with guidance from primary care physicians, they were created conservatively to ensure that OSUMC leadership would focus on the implications of the downstream revenue and not on the subjectivity of the weighting process. This weighting methodology almost certainly underestimates the true downstream impact of the primary care network.
Potential future studies might evaluate the validity of the proxy. For instance, a chart review analysis might help determine the validity of this weighting concept, both from the perspective of how the time frames should be broken down and from the perspective of how likely it is that a specialist-associated patient encounter with a specific time differential is attributable to a primary care physician. Alternatively, a patient-focused survey might help determine whether being a patient in the Network plays a role in the use of the sponsoring academic medical center’s specialists and facilities.
There are numerous reasons why such primary care networks have not been the “profit centers” on their own that private practices typically have been. These factors were not part of our study, but they include potentially higher employee salary and benefit costs as part of an institution’s cost structure, ancillary services performed at the medical center rather than on-site, a changing payor mix due to the mission of the medical center, and increased teaching obligations of the faculty, which decrease patient care time and revenue.
Another issue in recent years is that primary care networks are often referred to as “loss leaders” or, less charitably, as financial black holes requiring subsidies to make up the losses. Given the large amount of revenue generated for a sponsoring academic medical center, many leaders in the field would prefer the term “investment” in primary care to “operating loss,” “subsidy,” or other such terms, which may be perceived to carry negative connotations. Note that a significant multiple holds not only for downstream net revenue versus Network net revenue (6.1) but also for downstream contribution margin versus the “investment” (operating loss) in the Network (6.3). Thus, we have used the term “investment,” noting that its more familiar definition here is “operating loss.” Even though we have introduced a deliberately conservative weighting methodology (for the first time, to our knowledge), a significant multiple persists.
There are important noneconomic aspects of primary care network development and dissolution not covered here. When networks have been scaled back, there have often been negative feelings toward the medical center not only by the physicians involved, but also by the patients and their communities. This can reverse the very positive feelings toward the academic medical center created by the primary care practices when they were first developed or purchased. Such a reversal to the negative side is often the case in urban communities.
The authors gratefully acknowledge the contributions of Armin Rahmanian, MHA, and Irina Belieaeva, MHA, in the development of the model described.