When conducting clinical trials relating to the performance of contact lenses or lens care systems, it is important to monitor their impact on the clinical trial participants. In most cases, the data collected include participants' observations of their own ocular condition or visual performance and subjective ratings of factors such as comfort, vision, and satisfaction with the system under test. These observations and measures are usually collected at specified intervals, both over the course of a day and on numerous days during the clinical trial. This allows comparisons to be made between measurement time points, typically against a control, to determine the impact of the test lens or care system. These daily assessments are important because dryness and discomfort increase diurnally for both non-lens wearers and contact lens wearers and continue to be a significant issue for many patients.1–3 Despite many years of work on developing new contact lens materials and designs, lens discomfort, particularly at the end of the day, remains the primary reason for discontinuation of contact lens wear.4,5 Understanding the rate of change in ocular surface comfort, and how it differs between patients, may lead to a better understanding of how to resolve these problems.6
Subjective assessment ratings can be scored using Visual Analog Scales,7–9 numerical scales,10 or Likert scales.11 All are tools designed to collect clinical trial participant-related subjective data in a quantifiable way; to allow for inter- or intraparticipant comparison; and to establish whether a difference between products is apparent.12 These methods are usually incorporated into a study diary or presented as paper forms, which the participant subsequently presents to the researcher at the next scheduled visit, or participants may report their ratings by telephone at the end of the day on which the assessments are undertaken.13 All of these methods suffer from potential inaccuracy because there is no way to verify that the participant is recording their ratings at the specified time points, even when electronic devices such as personal data assistants are incorporated.14 Stone et al.15 reported that compliance with recording data in a study paper diary is poor, with 90% of participants "hoarding" their responses to one time point, such that the data may be questionable. They were able to conclude this because they used paper-based diaries that were electronically time-stamped when an entry was actually made.
Although use of the telephone to report information on specific study days ensures the data relate to the correct day, ensuring the data are actually collected at the correct times during that day remains problematic, as the participant has to be trusted to follow the clinical trial instructions relating to the collection times. Having a research coordinator phone the clinical trial participants at specific times over the collection day would address this issue, but it would be costly and possibly intrusive to the participant.
The importance of collecting time-sensitive data has been widely acknowledged, and electronic technology has been incorporated into clinical trials across a broad spectrum of research fields.16–21 Methods adopted have included sending short message service (SMS) messages to participants' cell phones22,23 or providing participants with a personal digital assistant (PDA), such as a Palm Handheld or the Apple Newton, with software installed that allows the completion of the scales.15–20
Although both of these methods are potential improvements over diaries or paper-based scales, neither can control when the participant decides to complete the data collection, nor can either provide a timely reminder if the data are not completed. Morgan et al.23 used SMS messaging to collect data and reported that up to 93% of study participants responded within a reasonable time period (30 min) of the scheduled time point. Unfortunately, the use of SMS is limited because it is not possible to confirm delivery, or the time of delivery, of messages. Plowright et al.22 reported response rates, using text messaging, of between 76 and 82%, but the times of responses were not reported.
In an attempt to overcome the above-mentioned limitations, an electronic device was needed that could receive time-stamped e-mail, allow an immediate response to that e-mail, either by reply or by redirection to an internet-enabled database, and be able to monitor the time of the response. The BlackBerry (Research in Motion, Waterloo, ON, Canada) was identified as a possible option.
A web-based [Get On Line Data (GOLD)] database system, custom-developed at the Centre for Contact Lens Research (CCLR), University of Waterloo, was adapted for use with the BlackBerry PDA to notify clinical trial participants to complete their subjective assessments.24 The participant was prompted by an e-mail the day before the scheduled assessments and then at the times specified on the day of assessment. This e-mail contained a hyperlink to a webpage where the ratings were entered. If data were not entered within a 1-hour time window, a second e-mail was automatically sent to the participant's BlackBerry as a reminder, to encourage a timely response. Examples of the BlackBerry interface are shown in Fig. 1.
After data entry, a second time window of 20 min was set to allow reentry or editing of the recorded values by the participant in case a mistake was inadvertently made. Incorporating the BlackBerry required the custom development of a series of software applications:
- Software A, a customized database where the study designs could be defined, including the allotment of identification codes, the variables to be collected, and the order of collection (Fig. 2).
- Software B, a customized scheduling database for the study coordinator to enter the individual participants' actual schedule for study measure visits (Fig. 3).
- Software C, a customized e-mail-generating software application that monitors both Software A and Software B to generate the e-mails containing the hyperlinks that are sent to each participant's BlackBerry.
- Software D, a customized software application that monitors data entry, records activity (or lack of activity), prompts Software C to generate a reminder e-mail if data have not been entered, and closes the GOLD database entry once the requested data are completed.
- Software E, a customized webpage generator that creates the user interface into which the data are entered (Fig. 1).
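The prompt-and-reminder behavior described above (an e-mail at the scheduled time, a reminder if no entry arrives within 1 hour, and a 20-minute window for editing a submitted entry) can be sketched as a simple state decision. This is an illustrative sketch only; the function and state names are hypothetical and do not reflect the actual GOLD implementation.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the monitoring logic described for Softwares C and D.
# REMINDER_WINDOW and EDIT_WINDOW follow the durations stated in the text;
# all names here are hypothetical, not the actual GOLD code.
REMINDER_WINDOW = timedelta(hours=1)
EDIT_WINDOW = timedelta(minutes=20)

def next_action(prompt_time, entry_time, now):
    """Decide what the monitoring process should do at time `now`.

    prompt_time: when the e-mail containing the hyperlink was sent
    entry_time:  when the participant submitted data, or None if not yet
    """
    if entry_time is None:
        if now - prompt_time >= REMINDER_WINDOW:
            return "send_reminder"   # reminder e-mail to the BlackBerry
        return "wait_for_entry"
    if now - entry_time < EDIT_WINDOW:
        return "allow_edit"          # participant may still correct a mistake
    return "close_entry"             # entry is locked in the database

# Example: prompt at 08:00 with no entry by 09:05 triggers a reminder.
t0 = datetime(2009, 5, 1, 8, 0)
print(next_action(t0, None, t0 + timedelta(minutes=65)))
```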
Before the commencement of any study involving the BlackBerry, full ethics review was undertaken, and a section relating to the responsibilities and use of the device was included in the informed consent letter. This technical note describes the results of using BlackBerry technology with the GOLD system in four clinical trials conducted at the CCLR.
Design of the Clinical Trials
A total of 205 participants were recruited into four separate clinical trials and provided with BlackBerrys. Ten percent of these participants were existing BlackBerry users. Regardless of previous experience, all study participants received training to ensure that they could adequately enter the study data using the device. The four clinical trial designs differed slightly, but all required data to be collected via a subjective rating scale at different time points during the day and over the course of the clinical trial (Table 1). In clinical trials 1, 2, and 3, participants entered and transmitted their data at three specific time intervals on specific days during the clinical trial. In clinical trial 4, participants collected data while completing two tasks on prearranged evenings using a BlackBerry. One task was completed at a local café and the other was completed at the participant's home. These tasks involved completing a subjective questionnaire about visual satisfaction in a specific environment and completing a visual task (reading text of varying size and contrast) on the BlackBerry.
Response to Requests to Complete Subjective Ratings
The maximum number of e-mails that could theoretically be sent (Table 1) was not achieved, as data from incomplete clinical trial participation are also included in this technical note, and a number of participants were discontinued from each clinical trial or phase prematurely. The actual number of e-mails sent to study BlackBerrys was 11,081. Of these, the participants responded to 10,806, which represents an overall response rate of 97.5% (Table 1).
A measure of compliance to the request to enter their ratings was taken as the number of responses received within a “reasonable” time of the request, either on the same day or within 1 hour of the request. This was found to be within the range of 38.5 to 96.6% for the different studies reported (Table 2).
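The two measures reported above, the overall response rate and the timeliness of each response, can be illustrated with a short computation. The thresholds (same day, within 1 hour) follow the text; the classification function and its name are hypothetical, added only to make the definitions concrete.

```python
from datetime import datetime

# Illustrative computation of the measures reported in this technical note.
# Thresholds follow the text; function names are hypothetical.

def timeliness(request, response):
    """Classify a response relative to its e-mail request time."""
    if response is None:
        return "no_response"
    if (response - request).total_seconds() <= 3600:
        return "within_1h"      # within 1 hour of the request
    if response.date() == request.date():
        return "same_day"       # later that day
    return "late"

# Overall response rate from the totals reported above:
sent, answered = 11081, 10806
rate = 100 * answered / sent
print(f"{rate:.1f}%")   # 97.5%
```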
The summary of the utilization of the BlackBerrys in the four clinical trials reported in this technical note suggests that their internet connectivity aids the collection of clinical trial participant rating data at various times during the day and in different real-world locations. This appears to address some of the limitations previously reported with the use of study diaries14 and the uncertainty associated with the delivery of text messages.
Our data also suggest that although most participants provided ratings on the required day (78.7 to 100%), not all responded within a reasonable time frame (38.5 to 47.3%), some still choosing to delay their response to the e-mail request. This tendency of participants to delay their responses is a problem suspected by other investigators using other methods of data collection13 and confirmed here.
For clinical trials 2 and 3, where data collection was time critical, the response rate was marginally higher (78.7 to 88.9%) than reported in clinical trials where SMS messaging via a cell phone was used (76 to 82%).22 Unfortunately, these data were not available for clinical trial 1, because it was conducted before a software upgrade provided this capability. Not all participants responded in a timely manner, however, and this is likely a result of the complexity and frequency of the responses required in these clinical trials. Virtually all participants did respond at some time point (96.9 to 99.8%), suggesting that although they were committed to their clinical trial participation, other activities interfered with their ability to respond promptly. In clinical trial 4, where participants were asked to complete questions relating to an activity they were actually doing at the time, the response rate was high (100%), indicating that when the participant had time scheduled for completion of the task, compliance was absolute.
The next generation of contact lenses will have to be designed to resolve end-of-day discomfort and provide adequate presbyopic correction. Despite many improvements in materials and lens designs, discomfort and visual performance still limit successful lens wear and remain key reasons for lens discontinuation.4 Understanding when discomfort develops, and under what circumstances, may help researchers resolve this problem. Insight regarding how well a multifocal lens design enables optimal visual function in the real world is of tremendous value. Collecting time-sensitive data relating to comfort and vision in a controlled manner can only add to our knowledge of these issues. There are potentially several other symptoms that could be time or environment sensitive, for which the ability to collect information at specific time points would be of particular value. The use of smartphones to collect such information will be of great benefit.
This technical note highlights the impact that a web-enabled data delivery system, together with the technology incorporated in BlackBerrys, can have on collecting time-sensitive data. The series of clinical trials reported here shows the benefits of using the BlackBerry to collect data via a web-enabled system.
We thank Mr. Trevor German and Ms. Grace Dong for their significant work in the development and management of this communication system.
Centre for Contact Lens Research
School of Optometry, University of Waterloo
200 University Avenue West
Waterloo, Ontario, Canada N2L 3G1
1.Begley CG, Caffery B, Chalmers RL, Mitchell GL. Use of the dry eye questionnaire to measure symptoms of ocular irritation in patients with aqueous tear deficient dry eye. Cornea 2002;21:664–70.
2.Chalmers RL, Begley CG. Dryness symptoms among an unselected clinical population with and without contact lens wear. Cont Lens Anterior Eye 2006;29:25–30.
3.Fonn D, Situ P, Simpson T. Hydrogel lens dehydration and subjective comfort and dryness ratings in symptomatic and asymptomatic contact lens wearers. Optom Vis Sci 1999;76:700–4.
4.Fonn D. Targeting contact lens induced dryness and discomfort: what properties will make lenses more comfortable. Optom Vis Sci 2007;84:279–85.
5.Richdale K, Sinnott LT, Skadahl E, Nichols JJ. Frequency of and factors associated with contact lens dissatisfaction and discontinuation. Cornea 2007;26:168–74.
6.Woods CA, Richter D, Fonn D. Rate of change of comfort in symptomatic and asymptomatic lens wearers. Optom Vis Sci 2008;85: E-abstract 80115.
7.McCormack HM, Horne DJ, Sheather S. Clinical applications of visual analogue scales: a critical review. Psychol Med 1988;18:1007–19.
8.Brennan NA, Efron N. Symptomatology of HEMA contact lens wear. Optom Vis Sci 1989;66:834–8.
9.du Toit R, Pritchard N, Heffernan S, Simpson T, Fonn D. A comparison of three different scales for rating contact lens handling. Optom Vis Sci 2002;79:313–20.
10.Papas EB, Schultz BL. Repeatability and comparison of visual analogue and numerical rating scales in the assessment of visual quality. Ophthalmic Physiol Opt 1997;17:492–8.
11.Likert R. A technique for the measurement of attitudes. Arch Psychol 1932;22:1–55.
12.Carta A, Braccio L, Belpoliti M, Soliani L, Sartore F, Gandolfi SA, Maraini G. Self-assessment of the quality of vision: association of questionnaire score with objective clinical tests. Curr Eye Res 1998;17:506–11.
13.Dumbleton K, Keir N, Moezzi A, Feng Y, Jones L, Fonn D. Objective and subjective responses in patients refitted to daily-wear silicone hydrogel contact lenses. Optom Vis Sci 2006;83:758–68.
14.Green AS, Rafaeli E, Bolger N, Shrout PE, Reis HT. Paper or plastic? Data equivalence in paper and electronic diaries. Psychol Methods 2006;11:87–105.
15.Stone AA, Shiffman S, Schwartz JE, Broderick JE, Hufford MR. Patient compliance with paper and electronic diaries. Control Clin Trials 2003;24:182–99.
16.Stratton RJ, Stubbs RJ, Hughes D, King N, Blundell JE, Elia M. Comparison of the traditional paper visual analogue scale questionnaire with an Apple Newton electronic appetite rating system (EARS) in free living subjects feeding ad libitum. Eur J Clin Nutr 1998;52:737–41.
17.Stubbs RJ, Hughes DA, Johnstone AM, Rowley E, Reid C, Elia M, Stratton R, Delargy H, King N, Blundell JE. The use of visual analogue scales to assess motivation to eat in human subjects: a review of their reliability and validity with an evaluation of new hand-held computerized systems for temporal tracking of appetite ratings. Br J Nutr 2000;84:405–15.
18.Kreindler D, Levitt A, Woolridge N, Lumsden CJ. Portable mood mapping: the validity and reliability of analog scale displays for mood assessment via hand-held computer. Psychiatry Res 2003;120:165–77.
19.Jamison RN, Gracely RH, Raymond SA, Levine JG, Marino B, Herrmann TJ, Daly M, Fram D, Katz NP. Comparative study of electronic vs. paper VAS ratings: a randomized, crossover trial using healthy volunteers. Pain 2002;99:341–7.
20.Whybrow S, Stephen JR, Stubbs RJ. The evaluation of an electronic visual analogue scale system for appetite and mood. Eur J Clin Nutr 2006;60:558–60.
21.Granqvist S. Enhancements to the Visual Analogue Scale, VAS, for listening tests. TMH-QPSR 1996;37:61–5.
22.Plowright AJ, Morgan PB, Maldonado-Codina C, Moody KJ. An investigation of ocular comfort in contact lens wearers, spectacle wearers and non-wearers. Optom Vis Sci 2008;85:E-abstract 85050.
23.Morgan PB, Maldonado-Codina C, Chatterjee N, Moody KJ. Elicitation of subjective responses via sms (text) messaging in contact lens clinical trials. Optom Vis Sci 2007;84:E-abstract 075143.
24.Woods CA, Cumming B. The impact of test medium on use of visual analogue scales. Eye Contact Lens 2009;35:6–10.