Adjusting and Censoring Electronic Monitoring Device Data: Implications for Study Outcomes

Fennie, Kristopher P MPH, PhD*; Bova, Carol A RN, PhD, ANP†; Williams, Ann B RN, EdD*

JAIDS Journal of Acquired Immune Deficiency Syndromes:
doi: 10.1097/01.qai.0000248336.97814.2f

Summary: Electronic monitoring device (EMD) data are widely used to measure adherence in HIV medication adherence research. EMD data represent an objective measure of adherence and arguably provide more valid data than other methods such as self-reported measures, pill counts, and drug level concentration. Moreover, EMD data are longitudinal, include many measurements, and yield a rich data set. This article illustrates potential pitfalls associated with this measurement technique, including lack of clarity associated with EMD data, and the extent to which adherence outcomes are affected by data management decisions. Recommendations are given regarding what information should be included in publications that report results based on EMD data so as to facilitate comparisons between studies.

Author Information

From the *Yale University School of Nursing, New Haven, CT; and †Graduate School of Nursing, University of Massachusetts, Worcester, MA.

Supported by grants from the National Institutes of Health (NIH)/National Institute of Nursing Research (NINR) 5R01NR04744-03, NIH/National Institute of Allergy and Infectious Diseases R01AI057043, NIH/NINR F32 NR07500, and Yale School of Medicine General Research Center M01RR00125.

Reprints: Kristopher P. Fennie, MPH, PhD, Yale School of Nursing, PO Box 9740, New Haven, CT 06536 (e-mail:


Electronic monitoring devices (EMDs) are “the reference standard for adherence measurement because of their precision and sophistication.”1 EMD data provide an objective assessment of medication-taking behavior collected continuously over time.2 EMD data add depth to adherence assessment, providing information not obtainable through self-report, pill count, biologic, or clinical data.

The primary assumption underlying EMD measurement is that each cap opening represents a pill-taking event. In some widely recognized instances, however, this assumption is violated. For example, patients may remove multiple doses from the medication bottle at a single time so as to carry pills more conveniently while away from home (pocket dosing). Use of pill boxes is a major issue in evaluating the validity of EMD data; a patient may use a pill box and not tell his or her primary care provider or, if in a study, the research staff. A cap may also become damaged or otherwise malfunction. Other factors that can affect the validity of EMD data include periods when a subject is not responsible for his or her medication taking (eg, hospitalization, incarceration, drug treatment programs), provider-ordered treatment discontinuation, and, in some circumstances, subject-initiated treatment discontinuation.

Summary measures of EMD data rely on longitudinal counts and timing of events.3 The most commonly reported summary measure of EMD data is the percentage of prescribed doses taken; other common summaries include the percentage of days with correct dosing and therapeutic coverage. Percentage of drug holidays, percentage of days of underdosing or overdosing, percentage of doses with correct timing, and percentage of days with correct doses and timing of doses are specialized summary measures also available based on EMD data. Some of these measures, especially those involving time, require knowledge of the pharmacodynamics and pharmacokinetics of the prescribed drugs, namely, the half-life of the drug. Not all these adherence measures are useful in a general context; they were derived for specific purposes focusing on a particular aspect of adherence.

Summary measures of EMD data fail to make full use of the available data. New methods for analyzing EMD data and novel applications of existing methods that do take advantage of the richness of EMD data are available, however. Girard et al4 use a 2-stage hierarchic Markov model to assess EMD adherence data. The first stage is conditional on individual random effects and individual specific dose time. It assumes that the probability of an individual taking a medication dose at the correct time is dependent on related covariates and the immediate prior dose but is independent of other dosing times. Second, it assumes that the difference between observed and expected dose timing is normally distributed and conditioned on the covariates and dose times. During the second stage, they fit a model based on maximum likelihood to the data over time. This method does not model the EMD data directly but, rather, a transformation of the EMD data, which is a function of a data-based estimate of the basic dosing time of an individual.
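
As a rough sketch of this idea (our own toy simulation, not the model of Girard et al; all probabilities are invented for illustration), dose-taking with first-order dependence on the previous dose can be simulated as a two-state Markov chain:

```python
import numpy as np

def simulate_markov_dosing(n_doses, p_after_taken, p_after_missed, seed=0):
    """Simulate dose-taking where the probability of taking the next
    dose depends only on whether the previous dose was taken,
    mirroring the first-stage Markov assumption."""
    rng = np.random.default_rng(seed)
    taken = [True]  # assume the first monitored dose is taken
    for _ in range(n_doses - 1):
        p = p_after_taken if taken[-1] else p_after_missed
        taken.append(rng.random() < p)
    return np.array(taken)

# Invented transition probabilities: missing a dose makes the next
# dose more likely to be missed as well.
doses = simulate_markov_dosing(1000, p_after_taken=0.95, p_after_missed=0.60)
```

Under these transition probabilities, the long-run proportion of doses taken is 0.60/(0.60 + 0.05), about 92%, even though short runs of missed doses occur.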

Knafl et al5 suggest an adaptive Poisson regression modeling approach to characterize adherence patterns at the individual and group levels. The approach consists of partitioning the observation period into distinct intervals, computing pill-taking event counts for each interval, and modeling the counts, using Poisson regression, in terms of time elapsed since baseline. A nonparametric curve is fitted to the data using a rule-based heuristic search for the best-fitting model. The resulting curves can then be classified into adherence pattern types.
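
A greatly simplified sketch of the interval idea (with fixed breakpoints chosen by hand rather than the rule-based adaptive search, and fabricated counts): within each interval, the maximum-likelihood Poisson rate for the daily event counts is simply the interval mean.

```python
import numpy as np

def interval_rates(events, breakpoints):
    """Piecewise-constant Poisson fit: within each interval, the
    maximum-likelihood daily event rate is the mean count.  A
    stand-in for the adaptive partition search of Knafl et al."""
    rates = []
    edges = [0] + list(breakpoints) + [len(events)]
    for lo, hi in zip(edges[:-1], edges[1:]):
        rates.append(float(np.mean(events[lo:hi])))
    return rates

# Fabricated daily opening counts: steady BID adherence, then decay.
events = np.array([2] * 14 + [1] * 7 + [0] * 7)
print(interval_rates(events, [14, 21]))  # [2.0, 1.0, 0.0]
```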

Dunsiger et al6 propose an alternative method to characterize adherence patterns. Again, using nonparametric methods, they classify adherence patterns by first using smoothing splines to summarize individual EMD data over time, resulting in a curve representing average medication-taking behavior over time, and then applying a k-means clustering technique to group like patterns into distinct adherence patterns such as strong sustained, strong but unsustained, steady poor, steady very poor, and progressive decay.
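
The following sketch substitutes a moving average for the smoothing splines and uses a minimal k-means implementation; the subjects and their patterns are fabricated for illustration.

```python
import numpy as np

def smooth(series, w=7):
    """Moving-average smoothing, a simple stand-in for the
    smoothing splines used by Dunsiger et al."""
    kernel = np.ones(w) / w
    return np.convolve(series, kernel, mode="same")

def kmeans(curves, k, iters=20):
    """Minimal deterministic k-means on smoothed curves; returns
    a cluster label per subject."""
    centers = curves[np.linspace(0, len(curves) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(curves[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = curves[labels == j].mean(axis=0)
    return labels

# Fabricated daily counts: two sustained adherers, two with decay.
subjects = np.array([
    [2] * 28, [2] * 28,
    [2] * 14 + [0] * 14, [2] * 14 + [0] * 14,
], dtype=float)
smoothed = np.apply_along_axis(smooth, 1, subjects)
labels = kmeans(smoothed, k=2)
# Sustained and decaying subjects fall into different clusters.
```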

Liu et al7 suggest linear and nonlinear methods to evaluate adherence as a predictor of biologic response. They use a repeated-measures linear mixed model to model the relation of adherence and viral load over time, controlling for other factors. To test the potentially nonlinear relation between adherence and viral load, they use restricted cubic spline functions.

Vrijens et al8 carry out a time-dependent continuation ratio model to analyze the effect of adherence on virologic outcome, defined as a 4-level ordinal response variable. This method attempts to use complex data in a minimally collapsed form to attain better estimates of the effect of adherence on virologic response.

Measurement of HIV medication adherence is becoming more complex as novel methods become available. As the complexity increases, the adherence measures become more sensitive to data noise. An understanding of the EMD data management and analysis methods used in a clinical trial is fundamental to interpretation of the trial results. The purpose of this article is to illustrate the potential lack of clarity associated with EMD data and the extent to which adherence outcomes are affected by data management decisions. We demonstrate, by example, the challenge of measurement validity and how data adjustment, particularly censoring, influences the validity of the adherence measurement. We conclude with recommendations regarding key information to incorporate into published reports that include results based on EMD data.

Data used in these examples are fabricated to illustrate the dilemmas of deciding how to examine EMD data. For all examples, the highly active antiretroviral therapy (HAART) regimen is assumed to include twice-daily dosing of lamivudine, which is monitored with an EMD, during the period February 2, 2005 to March 20, 2005. Two summary adherence measures are calculated:

Percent of prescribed doses taken, which is calculated as the number of doses taken over the total number of doses prescribed:3

\[ \frac{K_i}{P_i} \times 100 = \frac{\sum_{j=1}^{M_i} a_{ij}}{\sum_{j=1}^{M_i} f_{ij}} \times 100 \tag{1} \]

where Mi is the monitoring period (in days), aij is the number of administrations taken by the i-th subject on the j-th day, fij is the prescribed (daily) dosing frequency, Ki is the total number of doses taken over the monitoring period Mi, and Pi is the total number of prescribed doses over the monitoring period Mi.

Percentage of days with correct number of doses taken:3

\[ \frac{D_i}{M_i} \times 100 = \frac{\sum_{j=1}^{M_i} I\{a_{ij} = f_{ij}\}}{M_i} \times 100 \tag{2} \]

where Mi is the monitoring period (in days), I{} is a conditional 0,1 indicator function of adherent or nonadherent dosing on a day, aij is the number of administrations taken by the i-th subject on the j-th day, fij is the prescribed (daily) dosing frequency, and Di is the total number of days with correct dosing during the monitoring period Mi.
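
The two summary measures above (Equations 1 and 2) translate directly into code; the 10-day twice-daily record here is fabricated:

```python
import numpy as np

def pct_doses_taken(a, f):
    """Equation 1: total openings over total prescribed doses,
    as a percentage."""
    return 100.0 * a.sum() / f.sum()

def pct_days_correct(a, f):
    """Equation 2: percentage of monitored days on which exactly
    the prescribed number of doses was taken (a_ij == f_ij)."""
    return 100.0 * np.mean(a == f)

# Fabricated 10-day record for a twice-daily (BID) regimen.
a = np.array([2, 2, 1, 2, 0, 2, 2, 3, 2, 2])  # daily opening counts
f = np.full(10, 2)                             # prescribed frequency

print(round(pct_doses_taken(a, f), 1))   # 90.0
print(round(pct_days_correct(a, f), 1))  # 70.0
```

Note that the day with 3 openings counts toward Equation 1 but is not a "correct dosing" day under Equation 2.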

Adjustment of data is a broad term meaning that the observed data are changed by commission or omission of events based on external sources such as diary data or self-report. Changes generally take place when data are imported into a statistical package. In this article, “censoring” refers to a type of adjustment in which time ranges or data points are omitted from analysis because the events contained therein are not considered valid. Censoring generally involves changing the denominator or observation period.
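
A minimal sketch of censoring as defined here (dates and counts fabricated): flagged days are removed from both the numerator and the denominator, shrinking the observation period rather than counting the gap as nonadherence.

```python
import numpy as np

def censor(a, f, start, end):
    """Return copies of the daily-count arrays with days in
    [start, end) dropped from both numerator and denominator."""
    keep = np.ones(len(a), dtype=bool)
    keep[start:end] = False
    return a[keep], f[keep]

# Fabricated 20-day BID record with a 7-day gap (days 8-14).
a = np.array([2] * 8 + [0] * 7 + [2] * 5)
f = np.full(20, 2)

uncensored = 100 * a.sum() / f.sum()   # gap counted as missed doses
ac, fc = censor(a, f, 8, 15)
censored = 100 * ac.sum() / fc.sum()   # gap removed from denominator
print(uncensored, censored)  # 65.0 100.0
```

The two numbers answer different questions: adherence over the whole monitoring period versus adherence during the time the subject was attempting to take medication.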

Pocket Dosing

Figures 1 through 3 represent EMD data of 3 subjects who admit to pocket dosing on weekends. The pattern is seen clearly in Figures 1 and 2. Subject A in Figure 1 attempts to account for the pocket dosing by opening the cap 4 extra times on Friday, whereas subject B in Figure 2 makes no attempt to “adjust” his or her EMD cap. Examination of the times of cap openings for these 2 subjects would likely show events at evenly spaced intervals. Based on this information and the subjects' self-report of weekend pocket dosing, some researchers would adjust the EMD data. Adjusting the data for subject A would entail moving the extra events on Fridays to the weekend by changing the dates and times of the data. Adjusting the data for subject B would entail adding events on Saturday and Sunday.

For subject A, adjusting the data would not change the summary measure “pills taken over pills prescribed” (100% adherence unadjusted and adjusted) but would change the summary measure “days with correct doses taken” (55% adherence unadjusted vs. 100% adherence adjusted; Table 1). In the case of subject B, however, who did not manipulate the cap openings, adjusting the data would change the summary measure pills taken over pills prescribed (70% adherence unadjusted vs. 100% adherence adjusted) and the summary measure days with correct doses taken (70% adherence unadjusted vs. 100% adherence adjusted). In these 2 situations, a case could be made for adjusting the data because information from the patient interview substantiates the pocket dosing. Is it justifiable to adjust the data regardless of whether additional data support the adjustment? The patterns are clear, but which measure best represents true adherence? Finally, it is problematic to apply adjustments to these data when using summary measures that rely on exact timing of events (eg, percentage of doses with correct timing).

Figure 3 shows data for subject C, who also claims to pocket dose. A clear pattern is not discernible from the data. For example, there is an opening on Sunday, March 6, 2005, which calls into question the pocket-dosing report, because the subject reports pocket dosing on all weekends. This admission by the subject is not consistent with the EMD data, and one cannot determine which weekends the subject chose to pocket dose. The unadjusted summary measure pills taken over pills prescribed is 30% and that of percent days with correct doses is 17%. How does one adjust the data? According to diary or self-report, we would assume 100% adherence on weekends, because that is what was assumed with subjects A and B. This, however, would likely overestimate the adherence. One might choose to adjust only those weekends when there were openings on the prior Friday. The most conservative approach would be to use unadjusted data for subject C. Moreover, because adherence is so low, the decision to adjust the data for the summary measures may be moot.

These 3 examples suggest that adjusting may be more appropriate for one subject than for another; deciding on a case-by-case basis, however, introduces bias. More important, adjusting outcome (adherence) data for some subjects but not for others necessarily creates a bias that cannot be measured.

Excessive Openings per Day

Consider the adherence patterns of the subjects represented in Figures 4 through 6. These patterns are based on actual observations.9 Figure 4 shows data for usually adherent subject D. On February 23 and March 12, however, there are 14 and 25 openings, respectively. In the interview and diaries, there was no mention of overdosing. The times of the openings are not scattered throughout the day but are concentrated in the afternoon. The unadjusted summary measure total pills taken over total pills prescribed is 133%, and the unadjusted percent days with correct dosing is 87%. Note that in the latter measure, February 23 and March 12 are not counted as days with correct dosing because there was an excess of openings on those days. It is improbable that the subject truly took 14 and 25 doses in 1 day; therefore, adjusting by truncating the number of openings on those days to 2 seems reasonable. Possible reasons for the large number of openings include allowing the cap to jiggle when carried about, which can mechanically cause a false reading, and playing with the cap by repeatedly opening and closing it.
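
The truncation adjustment described for subject D can be sketched as follows (fabricated counts): daily openings are capped at the prescribed frequency, on the assumption that the excess events are not true doses.

```python
import numpy as np

def truncate_openings(a, f):
    """Cap each day's opening count at the prescribed frequency,
    treating openings beyond f_ij as cap-play or mechanical
    false reads rather than extra doses."""
    return np.minimum(a, f)

# Fabricated week with two days of implausible opening counts.
a = np.array([2, 2, 14, 2, 2, 25, 2])
f = np.full(7, 2)

raw = 100 * a.sum() / f.sum()
adjusted = 100 * truncate_openings(a, f).sum() / f.sum()
print(raw, adjusted)  # 350.0 100.0
```

Truncation removes the implausible excess but also erases any information those extra events might have carried, which is why the decision should be reported explicitly.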

Figure 5 shows data for adherent subject E, who also has days on which more than 2 openings are recorded, but only to a maximum of 4 openings per day. The unadjusted summary measure total pills taken over total pills prescribed is 111%, and for percent days with correct dosing, the unadjusted summary measure is 81%. (Note, again, that days with >2 openings are not counted as correct dosing days.) Without additional information, it is impossible to know whether these data accurately represent true doses, because the subject may have impaired thought processes or be confused about the prescribed dose.10 If interview and diary data are available, they may or may not support overdosing. In this example, deciding how to adjust is more problematic.

The pattern in Figure 6 differs subtly from the pattern in Figure 5 in that many of the dates with more than 2 cap-opening events are preceded by dates with <2 events, suggesting that subject F may be catching up on missed doses. This possible explanation could be explored by looking at diary or interview data, and perhaps even by examining timing of doses. The unadjusted summary measure pills taken over pills prescribed is 89%, and the measure percent days with correct dosing is only 51%. Adjusting the data in this example probably would not be representative of therapeutic effectiveness.

The differences between unadjusted and adjusted summary measures are often minimal and do not significantly affect overall adherence. Nevertheless, there is currently no consensus on a protocol for adjusting EMD data. Some researchers consider the situations described here as “noise,” assume the noise is equally distributed across arms in a randomized trial, and do not adjust the data at all. Others adjust in some situations and not in others. At what point does one decide to adjust the data: >2 openings, 4 openings, or 8 openings? These decisions are rarely discussed in the methods sections of published papers.

Consider subjects G and H in Figures 7 and 8. They each report that they entered residential drug treatment and were not responsible for administering their medication for some time during the months before the study interview. They recalled that the date was “toward the end of February.” Determining the exact date of admission is a challenge when the subject is interviewed after leaving treatment and the rehabilitation center or other institution is not willing to divulge admission and discharge dates. For the purposes of this illustration, we posit that subjects G and H both entered residential drug treatment on February 28 and no longer had access to their medication with the EMD.

Based on the EMD data, subject G was relatively consistent with taking medication at the frequency of at least 1 dose per day for the first 2 weeks, missed some doses in week 3 (perhaps indicating increased drug use leading up to going into treatment), and then stopped using the EMD entirely after February 28. It would seem reasonable to censor the data at February 28 as a period when the subject was not responsible for his or her medication-taking behavior.

Applying the same logic to the data for subject H would lead to the assumption that subject H entered residential treatment on February 21. In reality, the subject was intensively using drugs during that period and was nonadherent to her antiretroviral regimen. Without adjusting the data, the adherence, as measured by pills taken over pills prescribed, is 35% for subject G and 21% for subject H. Adherence measured by percent days with correct dosing is 19% for subject G and 13% for subject H.

For maximum accuracy, it is appropriate to adjust or censor the data for those periods when the subject is not in control of taking his or her medication. The challenge is in determining the correct dates to censor. Interview and diary data are frequently vague or inexact. The data management decisions made can have a significant impact on the results. For example, with accurate information about the date that subject H entered residential treatment, the data would be censored for the period beginning February 28. In that instance, the summary measure pills taken over pills prescribed would result in 37% adjusted adherence, and the summary measure percent days with correct dosing would be 22%. In the absence of verifiable information, however, adjusting the data based on interpretation of the pattern in the EMD data would lead to the conclusion that the data should be censored a full week early, on February 21. As a result, subject H's adherence would be registered as 53% of pills taken over pills prescribed and the number of days of correct dosing would be 32%. These figures are considerably higher than those based on the correct date, February 28. Table 2 shows comparisons of the summary measures under different circumstances for subject H.

Finally, Figure 9 presents data from subject I, who demonstrates variation in his or her medication taking, including 2.5 weeks of no cap readings. Interview and diary data do not suggest any situation in which the subject was not in control of his or her medications. If the subject admits, through diary or interview data, to missing those doses, the data would not be adjusted and the adherence would be 43% if calculated using the number of pills taken over the number of pills prescribed and 30% if calculated using the percent days with correct dosing.

More than likely, however, supplemental data do not provide this information. In that case, some researchers would consider the series of days with no cap openings as a treatment discontinuation and censor those data. Censoring the data between February 14 and March 2 results in adherence rates of 67% and 47% using pills taken over pills prescribed and percent days with correct dosing, respectively. Table 3 allows comparisons of the measures with and without censoring. The adherence changes dramatically.

Censoring large numbers of days or weeks and labeling them as treatment discontinuation11 is a way to deal with problematic data patterns in which a researcher does not know the circumstances during a given period. Using all data, some of which might be noise, can lead to spurious results or bias toward the null in testing differences between groups. By censoring the data, an attempt is made to look at adherence during the time that subjects are taking their medication. The distinction between using censored and noncensored data with respect to measurement is that they measure 2 different constructs. With censored data, one is measuring adherence during periods when a subject is attempting to and (partially) succeeding at taking medication. With uncensored data, however, one is measuring whether the EMD cap was opened and assuming that such an event is related to actual pill-taking behavior.

DISCUSSION

The purpose of presenting these results is 2-fold: to illustrate the complexities of EMD data and the extent to which reported adherence results are influenced by data management decisions, and to demonstrate, by example, issues of measurement and validity. It is not that one specific method of handling EMD data is better than another when calculating adherence; rather, it is what is being measured that differs. For precisely this reason, researchers need to be clear, when presenting studies, about how data are managed and adherence is calculated so that what is being measured is evident.

We argue that censoring or adjusting EMD data affects estimates of adherence. The extent to which this is true depends on the type of analysis. Traditional analyses of EMD adherence data rely on a summary measure as described by Equation 1 or Equation 2, where dose timing or uncovered hours are not considered. When adherence values are collapsed into more clinically meaningful categories of adherent versus nonadherent, defined as 95% or greater of medications taken12 or, with the advent of newer potent regimens, 85% or greater,13 censoring or adjusting EMD data has a minimal effect on adherence values. An exception would be if adjustment leads to a subject being classified in a different adherence group. In the previous examples, censoring data would not change the results if the analysis used a dichotomized adherence variable. Other summary measures, however, such as the percent of uncovered minutes, may be more sensitive to censoring even when dichotomized.
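
For example, for subject I in Figure 9, censoring moves the continuous measure from 43% to 67%, yet the dichotomized classification is unchanged at an 85% cutoff:

```python
def dichotomize(adherence_pct, threshold=85.0):
    """Collapse a continuous adherence percentage into a binary
    adherent / nonadherent classification."""
    return adherence_pct >= threshold

# Subject I (Figure 9): uncensored 43%, censored 67%; the binary
# classification is unchanged by the censoring decision.
print(dichotomize(43.0), dichotomize(67.0))  # False False
```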

Using adherence data as a continuous measure, whether used as an outcome4,14,15 or a covariate to predict biologic outcome,16-18 is more sensitive to censoring and adjustment. In this context, detailing how adjustment of data is made is paramount to proper interpretation of results.

A limitation of summary measures of EMD data is that the data are not used to their full potential. As newly emerging methods of analyzing adherence data as outcome and predictor variables become familiar, researchers need to pay increasing attention to protocols for data adjustment and censoring. For example, methods to identify adherence patterns, such as the adaptive Poisson regression of Knafl et al5 and the smoothing splines and k-means clustering of Dunsiger et al,6 can be expected to be sensitive to censoring and adjustment, particularly if large periods of time are censored, such as a nonprescribed treatment holiday. This is particularly important when adherence patterns are classified and related to medication-taking behavior and/or behavioral types. Likewise, results based on techniques that use multiple adherence measures, such as the composite adherence scoring suggested by Liu et al,19 and studies that use multiple adherence measures separately are also affected by censored or adjusted adherence measures.

Through example and discussion, we have attempted to demonstrate that EMD data are excellent for measuring adherence but that one needs to consider how the data are handled and the effect that handling has on adherence values. In accordance with the advice of Bova,20 one needs to be conceptually clear about what is being measured. At the most basic level, EMD data measure the number and time of cap openings, which is a proxy measure for the number and timing of doses taken per subject. At best, one can assume only a strong correlation between EMD-measured adherence and actual pill taking. What EMD data truly measure is a willingness to be monitored and compliance with all EMD procedures, such as bringing the EMD bottle and cap in to be read, opening the bottle only when ingesting the medication, and taking the correct number of pills prescribed. Specificity with respect to measuring true adherence is compromised because of these assumptions. Additional issues of drug holidays, pocket dosing, use of ancillary diary and interview data, and censoring or adjusting adherence data to account for these factors compound the assumptions one is making about EMD data. Without a clear algorithm and method for adjusting data, there is a significant risk of bias in one group compared with another.

It is important to recognize that these fabricated data are illustrative and based on anecdotal evidence. A limitation of this report is that it is based on individual case scenarios and does not use cohort or clinical trial data. The degree to which these scenarios would affect population estimates and strengths of association is not known but would depend largely on how frequently the previously described scenarios occur. Nevertheless, what is important is to minimize error: decreasing error leads to more accurate measures of the strength and magnitude of associations.

It is not clear which methods are best for analyzing EMD data, although many agree that the number of uncovered hours or minutes is the best summary measure; nor is it clear how best to censor and adjust data. There are, however, many techniques available, and many valid reasons to prefer one method over another for determining adherence and for censoring and adjusting data. Thus, there is no single right or wrong method. It is important, nevertheless, that in publishing results, researchers be clear about their methods and the conceptual underpinnings of their choices.

We suggest a checklist to consider when publishing studies using EMD data. Items in this checklist should be addressed in the article so that it is clear how data were handled in the study. This concept is based on the consolidated standards of reporting trials (CONSORT) statement developed in the mid-1990s to improve transparency of randomized clinical trials in the published literature.21 The CONSORT group recommends using a checklist of items and a flow diagram to help authors in reporting their results.

For such a system to work, adherence researchers need to agree to implement such a checklist into their own publications. The utility of a checklist is dependent on monitoring the literature to ensure that the items to include are useful and add to the quality of the published studies. As with the CONSORT statement, this needs to be an iterative process whereby the checklist can evolve in a valid and meaningful manner. Discussions of and changes to the EMD adherence checklist can take place annually at adherence conferences.

We suggest, as a starting point, elements that would be useful in publications to make cross-comparisons among studies. Table 4 lists the items to be addressed when reporting results from a study using EMD data. Many of the items presented in Table 4 commonly are stated in articles; some of the items relate indirectly to EMD data but, nevertheless, are important to include. The main purpose of including these elements is to enhance understanding by the reader of how data management and analysis were carried out in any given study so that comparisons among studies can be better made and to aid in determining generalizability of the study results.

REFERENCES

1. Farmer KC. Methods for measuring and monitoring medication regimen adherence in clinical trials and clinical practice. Clin Ther. 1999;21:1074-1090.
2. Dunbar-Jacob J, Erlen JA, Schlenk EA, et al. Adherence in chronic disease. Annu Rev Nurs Res. 2000;18:48-90.
3. Sereika SM, Dunbar-Jacob J. Analysis of electronic event monitored adherence. In: Burke LE, Ockene IS, eds. Compliance in Healthcare and Research. Armonk, NY: Futura Publishing Company; 2001:139-162.
4. Girard P, Blaschke TF, Kastrissios H, et al. A Markov mixed effect regression model for drug compliance. Stat Med. 1998;17:2313-2333.
5. Knafl GJ, Fennie KP, Bova C, et al. Electronic monitoring device event modelling on an individual-subject basis using adaptive Poisson regression. Stat Med. 2004;23:783-801.
6. Dunsiger S, Hogan J, Gifford A. Classification of longitudinal drug adherence patterns from MEMS cap time series data. Presented at: National Institute of Mental Health/International Association of Physicians in AIDS Care International Conference on HIV Treatment Adherence; 2006; Jersey City.
7. Liu H, Miller LG, Hays RD, et al. Predictors of virologic responses to HAART: dose timing, overall adherence, and genotypic sensitivity. Presented at: National Institute of Mental Health/International Association of Physicians in AIDS Care International Conference on HIV Treatment Adherence; 2006; Jersey City.
8. Vrijens B, Goetghebeur E, de Klerk E, et al. Modelling the association between adherence and viral load in HIV-infected patients. Stat Med. 2005;24:2719-2731.
9. Bova CA, Fennie KP, Knafl GJ, et al. Use of electronic monitoring devices to measure antiretroviral adherence. Practical considerations. AIDS Behav. 2005;9:103-110.
10. Lehman HP, Benson JO, Beninger PR, et al. A five-year evaluation of reports of overdose with indinavir sulfate. Pharmacoepidemiol Drug Saf. 2003;12:449-457.
11. Moss AR, Hahn JA, Perry S, et al. Adherence to highly active antiretroviral therapy in the homeless population in San Francisco: a prospective study. Clin Infect Dis. 2004;39:1190-1198.
12. Friedland GH, Williams A. Attaining higher goals in HIV treatment: the central importance of adherence. AIDS. 1999;13(Suppl 1):S61-S72.
13. Bangsberg DR, Weiser S, Guzman D, et al. 95% adherence is not necessary for viral suppression to <400 copies/mL in the majority of individuals on NNRTI regimens. Presented at: 12th Conference on Retroviruses and Opportunistic Infections; 2005; Boston.
14. Wagner GJ. Predictors of antiretroviral adherence as measured by self-report, electronic monitoring, and medication diaries. AIDS Patient Care STDS. 2002;16:599-608.
15. Golin CE, Liu H, Hays RD, et al. A prospective study of predictors of adherence to combination antiretroviral medication. J Gen Intern Med. 2002;17:812-813.
16. Oyugi JH, Byakika-Tusiime J, Charlebois ED, et al. Multiple validated measures of adherence indicate high levels of adherence to generic HIV antiretroviral therapy in a resource-limited setting. J Acquir Immune Defic Syndr. 2004;36:1100-1102.
17. Fletcher CV, Testa MA, Brundage RC. Four measures of antiretroviral medication adherence and virologic response in AIDS Clinical Trials Group Study 359. J Acquir Immune Defic Syndr. 2005;40:301-306.
18. Remien RH, Stirratt MJ, Dolezal C, et al. Couple-focused support to improve HIV medication adherence: a randomized controlled trial. AIDS. 2005;19:807-814.
19. Liu H, Golin CE, Miller LG, et al. A comparison study of multiple measures of adherence to HIV protease inhibitors. Ann Intern Med. 2001;134:968-977.
20. Bova CA. One measurement challenge: getting conceptually clear. Presented at: Enhancing Adherence Conference; 2005; New Haven.
21. Moher D, Schulz KF, Altman DG, et al. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med. 2001;134:657-662.

medication adherence; electronic monitoring device; measurement; censoring; data adjustment

© 2006 Lippincott Williams & Wilkins, Inc.