CIN: Computers, Informatics, Nursing: July 2014 - Volume 32 - Issue 7
doi: 10.1097/CIN.0000000000000061
Feature Article

Using a Content Analysis to Identify Study Eligibility Criteria Concepts in Cancer Nursing Research



Author Information

Author Affiliations: College of Nursing, University of Utah, Salt Lake City (Drs Guo, Sward, Beck, and Wong); School of Nursing, University of Maryland, Baltimore (Dr Staggers); and Biomedical Informatics, University of Utah, Salt Lake City (Dr Frey).

This study was supported by a research award in 2011 from the Gamma Rho Chapter of Sigma Theta Tau International at University of Utah.

The authors have disclosed that they have no significant relationship with, or financial interest in, any commercial companies pertaining to this article.

Corresponding author: Jia-Wen Guo, PhD, RN, 2000 East 10 South, Salt Lake City, UT 84112.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site.

Abstract


The aims of this study were to (1) identify and categorize study eligibility criteria concepts used in cancer nursing randomized controlled trials and (2) determine the extent to which a previously identified set of study eligibility criteria concepts, based primarily on medical randomized controlled trials, was represented in cancer nursing randomized controlled trials. A total of 145 articles of cancer nursing randomized controlled trials indexed in PubMed or the Cumulative Index to Nursing and Allied Health Literature and published in English from 1986 to 2010 were screened, and 114 were eligible. Directed content analysis was conducted until data saturation was achieved. Forty-three concepts categorized into eight domains were extracted from 49 articles published in 27 different journals. Most of the concepts identified were related to the health status, treatment, and demographics domains. Although many concepts matched the previously identified study eligibility concepts based on medical research, new concepts may need to be added to fully represent cancer nursing research. This study provides a solid foundation for future work mapping the concepts to existing standardized terminologies to identify which systems can be adopted. Nursing researchers can use these eligibility criteria concepts as a guideline in structuring the eligibility criteria for their studies.

More than 1.6 million new cases of cancer were expected to be diagnosed in the US in 2012.1 Approximately 1600 people were expected to die of cancer each day; however, 67% of people with cancer now survive at least 5 years beyond diagnosis.1 Nurses are the most numerous workers in the healthcare system and play an important role in the quality of care for cancer patients. Oncology nurses are challenged to manage the long-term health consequences of cancer2 as cancer increasingly changes from an acute to a chronic condition.1 To provide better quality of care to cancer patients, clinical research is needed to develop and provide evidence for effective nursing interventions.

A randomized controlled trial (RCT) is considered to produce the highest level of evidence because the design of an RCT minimizes bias.3 According to the Cochrane Glossary,4 an RCT is an experiment comparing two or more interventions through a randomization procedure such as randomly allocating participants. Many challenges exist in conducting RCTs because of the strict study design. Inadequate sample size associated with a low participant recruitment rate is a major challenge in conducting high-quality RCTs5 and is often seen in cancer nursing RCTs.6–10 A study with an insufficient sample size may lack the statistical power to detect intervention effects and thus yield nonsignificant findings.7 Without effective recruitment strategies, researchers take longer than planned to obtain the desired sample size,11–13 and studies may even be terminated because of slow recruitment.14

Study eligibility criteria describe the characteristics of the people who may benefit from the study intervention.15 Using study eligibility criteria as keywords to query local electronic health records (EHRs) for potentially eligible participants can increase the efficiency of screening for RCTs and exclude ineligible participants more precisely than manual chart review can.16 Using this method, study enrollment rates can be improved.17 However, interoperability issues exist when using study eligibility criteria across different EHRs, and standardized terminology is needed to address this challenge. Many standardized terminologies exist, such as the Unified Medical Language System, Systematized Nomenclature of Medicine–Clinical Terms, and the National Cancer Institute’s Thesaurus, but it is unknown which can be used for study eligibility criteria in RCTs, especially cancer nursing RCTs. One initial step toward identifying which existing standardized terminologies could be adopted for study eligibility criteria is to identify the concepts embedded within current study eligibility criteria and then to represent those concepts in an agreed-upon manner. A concept, defined as a type of idea or type of object, represents a category of information.18 Little was known about which concepts are present in the study eligibility criteria of cancer nursing RCTs. The present study was designed to identify concepts embedded in the text of study eligibility criteria from cancer nursing RCTs.
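The automated screening idea described above can be sketched in a few lines: structured eligibility criteria act as filters over patient records exported from an EHR. This is an illustrative assumption, not the study's method; all field names (age, diagnosis, language) are hypothetical.

```python
# Illustrative sketch only (not from the study): filtering exported EHR
# records against structured eligibility criteria, as in automated
# participant screening. All field names here are hypothetical.

def is_potentially_eligible(record, criteria):
    """Return True if a patient record satisfies every criterion."""
    if not (criteria["min_age"] <= record["age"] <= criteria["max_age"]):
        return False
    if record["diagnosis"] not in criteria["diagnoses"]:
        return False
    if record["language"] not in criteria["languages"]:
        return False
    return True

criteria = {
    "min_age": 18, "max_age": 80,
    "diagnoses": {"breast cancer", "lung cancer"},
    "languages": {"English"},
}

records = [
    {"id": 1, "age": 54, "diagnosis": "breast cancer", "language": "English"},
    {"id": 2, "age": 16, "diagnosis": "lung cancer", "language": "English"},
    {"id": 3, "age": 61, "diagnosis": "melanoma", "language": "English"},
]

eligible = [r["id"] for r in records if is_potentially_eligible(r, criteria)]
print(eligible)  # [1]
```

The interoperability problem arises exactly here: without standardized terminology, "breast cancer" may be stored under different names or codes in each EHR, so the same filter cannot be reused across systems.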

Background


Several challenges must be addressed before EHR data can be used for study eligibility screening, chief among them interoperability. First, study eligibility criteria are usually represented as narrative data within a study protocol, which is readable by humans but not readily amenable to computer processing. To use study eligibility criteria for automated participant screening, narrative data must be translated into a representation that can be processed by a computer. Second, even within a single organization, the same type of data may be stored in various formats or named differently in multiple heterogeneous databases. Searching for eligible participants across EHRs poses an even more difficult challenge because of interoperability issues. Interoperability is the ability of systems to exchange and make meaningful use of information and requires standard, computable data representations.19 Standard representations are particularly important when searching, integrating, and exchanging data among the many databases that make up a hospital information system.20 Currently, there is no standardized terminology for study eligibility criteria.

Standard, computable representations for study eligibility criteria are increasingly needed.19 Work has begun on studying eligibility criteria content.21–26 For example, the Eligibility Rule Grammar and Ontology,25 the Agreement on Standardized Protocol Inclusion Requirements for Eligibility,24 and the Clinical Research Filtered Query project22 are studying the complexity, structure, and concepts embedded in study eligibility criteria. Identifying concepts within the text of study eligibility criteria is one of the crucial tasks for developing data standards. A set of study eligibility criteria concepts has been identified from RCTs registered in, a publicly accessible clinical trial registry administered by the National Library of Medicine.23 However, those concepts may not appropriately describe cancer nursing research. Only trials subject to Food and Drug Administration oversight (such as drug or device studies) are required to be reported in the database.27 Since nursing research often focuses on aspects of care such as symptom management, quality of life, and disease and disability prevention,28 many nursing RCTs would not be mandated to be included in Therefore, it is unclear whether the set of concepts identified from can adequately describe cancer nursing research. Thus far, efforts lack a nursing research perspective and have not evaluated the representation of eligibility concepts in published journal articles, which are an important source of evidence for clinical practice in the care of cancer patients.

Study Purpose


To facilitate the development of standardized terminology for eligibility criteria, the purpose of this study was to identify and categorize eligibility criteria concepts used in cancer nursing RCTs, as represented in published journal articles. A secondary purpose was to determine the extent to which a previously identified set of study eligibility criteria concepts23 represents the eligibility criteria used in nursing RCTs.

Methods


Design and Sample

This descriptive study was based on published journal articles involving RCTs in cancer nursing research. MEDLINE via PubMed and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were chosen as representative databases encompassing most clinical nursing publications. For the search strategy, randomized controlled trial, neoplasm, and nursing were used as keywords. For MEDLINE, the Medical Subject Heading term controlled clinical trial was used to expand the search, and for CINAHL, the headings controlled clinical trial and clinical trials were additional search terms. Several search strategies were combined to include more potentially eligible journal articles (see Document, Search Strategy for Literature, Supplemental Digital Content 1).
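The actual combined strategies are given in Supplemental Digital Content 1. As a rough illustration only, a keyword-plus-heading combination of the terms named above might look like the following; these query strings are a reconstruction, not the authors' published searches.

```python
# Illustrative only: reconstructed boolean queries in PubMed and EBSCOhost
# syntax combining the keywords and subject headings described above. The
# authors' actual strategies are in Supplemental Digital Content 1; these
# strings are assumptions, not the published searches.
pubmed_query = (
    "neoplasms[MeSH Terms] AND nursing[MeSH Terms] AND "
    "(randomized controlled trial[Publication Type] "
    "OR controlled clinical trial[Publication Type])"
)
cinahl_query = (
    '(MH "Neoplasms+") AND (MH "Nursing+") AND '
    '((MH "Clinical Trials+") OR "randomized controlled trial")'
)
print(pubmed_query)
```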

The search was conducted on September 10, 2010. There was no limitation on year of publication, to maximize the number of articles retrieved. Articles were included if they (1) involved a nurse as participant, researcher, or intervention team member; (2) were published in English; (3) indicated the study design was an RCT; (4) were related to the topic of cancer; (5) described details about eligibility criteria; and (6) were identified as nursing research based on the description of the National Institute of Nursing Research, that is, “(a) build the scientific foundation for clinical practice, (b) prevent disease and disability, (c) manage and eliminate symptoms caused by illness, or (d) enhance end-of-life and palliative care.”28 Articles retrieved by the keyword search were screened in stages. The titles and abstracts were reviewed to exclude articles that were clearly ineligible. After screening the title and abstract, potentially eligible articles were randomly divided into two sets using SPSS software (version 19; IBM-SPSS, Chicago, IL). One set was used for this study, and the second set was reserved for future analyses.
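The random split into an analysis set and a reserved set can be sketched as follows. The study used SPSS; this is an equivalent illustration in Python, with placeholder article IDs and set sizes taken from the Results (145 and 135 of 280 articles).

```python
import random

# Sketch of the random division described above (the study used SPSS;
# this is an equivalent illustration). Article IDs are placeholders.
random.seed(0)  # arbitrary seed, for reproducibility of the sketch

articles = list(range(1, 281))            # 280 potentially eligible articles
random.shuffle(articles)
study_set, reserved_set = articles[:145], articles[145:]

print(len(study_set), len(reserved_set))  # 145 135
```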

Only full-text articles retrievable for free from the publisher or the University of Utah library were included, based on the premise that these represent the most likely pool of articles that would be retrieved by a nurse looking for evidence for clinical practice. Studies that were secondary analysis or duplicate studies were excluded, but primary sources of those studies were included if the primary source article could be located and retrieved. The retained articles were assigned a random sequence for content analysis.

Content Analysis

This study used directed content analysis, in which coding begins with predetermined codes based on a theory or previous studies so that the findings can be compared between studies.29 The initial codebook in the present study was based on a prior study by Luo et al,23 who identified 27 study eligibility criteria concepts in six domains from RCTs registered in ATLAS.ti version 6.0 (Scientific Software Development GmbH, Berlin, Germany), a qualitative data analysis software package, was used for coding the data.

The process of the content analysis was based on methods outlined by Krippendorff.30 Paragraphs including the description of study eligibility criteria, usually found under the “Subject” or “Method” section, were the unit of data collection. The unit of analysis was the study eligibility criterion, which can be a word, phrase, or sentence. Individual words and phrases have higher reliability for content analysis than sentences or larger units do31 and were selected when possible. Phrases were the format most frequently seen in the journal articles. During the analysis, if the unit of analysis could not be categorized to an existing code, a new code was created following the rules suggested by Cimino32—one term per idea, one meaning per term, with explicit unambiguous definitions.

As with many qualitative studies, the sample size (ie, number of articles) was determined by reaching the point of saturation. Francis et al33 suggested that the initial analysis sample (the sample size for the first round of analysis) and stopping criteria should be determined prior to conducting the data analysis. Stopping criteria determine how many additional sampling units need to be analyzed after no new codes are uncovered. The point of data saturation in this study was based on an initial sample of 25 articles with a random sequence, with sampling continuing until no new concept emerged from five consecutive journal articles.
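The stopping rule above (an initial sample of 25 articles, then continue until five consecutive articles yield no new concept) can be expressed as a small procedure. The concept lists in the toy example are hypothetical, not the study's actual coding output.

```python
# Sketch of the saturation stopping rule described above: analyze an
# initial sample, then stop once a run of consecutive articles yields no
# new concept. Toy data only; not the study's coding output.

def articles_until_saturation(concepts_per_article, initial=25, window=5):
    """Return (articles analyzed, distinct concepts found) at saturation."""
    seen = set()
    no_new_streak = 0
    for i, concepts in enumerate(concepts_per_article, start=1):
        new = set(concepts) - seen
        seen |= set(concepts)
        no_new_streak = 0 if new else no_new_streak + 1
        if i >= initial and no_new_streak >= window:
            return i, len(seen)
    return len(concepts_per_article), len(seen)

# Toy stream: the first 3 articles introduce concepts, the rest repeat one.
stream = [["age"], ["age", "sex"], ["diagnosis"]] + [["age"]] * 30
print(articles_until_saturation(stream, initial=5, window=3))  # (6, 3)
```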

Identified concepts were then grouped into domains (categories of concepts); for example, age and sex were grouped together as the demographics domain. Similar concepts were combined, broad concepts were decomposed, and some concepts were moved to other domains or renamed.

Validity and Reliability

Face validity was evaluated by two nurse researchers with more than 5 years of experience conducting nursing RCT projects. The content experts separately reviewed all concepts, definitions, and domains. Iterative rounds of review and revision continued until agreement was achieved. Two PhD students with experience participating in research projects were asked to independently code eligibility criteria concepts from 20 randomly selected articles that were not used to extract concepts. Agreement about the units selected for coding and analysis was measured. Intercoder reliability was assessed by calculating Cohen κ using SPSS, version 19. Disagreements between coders were discussed until 100% agreement was achieved.

Results


Sample Characteristics

Based on the search keywords, 639 articles were returned, 475 from PubMed and 189 from CINAHL (25 articles were found in both databases; 475 + 189 − 25 = 639). Of the 639 potentially eligible articles, 72 (11%) were excluded because full-text articles were not available. After screening the title and abstract, 280 potentially eligible articles remained and were randomly divided into two sets, one containing 145 articles for this study and the second containing 135 articles for future analyses. Among the 145 articles, 110 met the eligibility criteria, and four were added because the primary sources of four secondary analysis articles could be retrieved. A total of 114 articles remained in the data set (Figure 1). The retained articles were assigned a random sequence for content analysis.
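The sample flow arithmetic reported above can be checked directly (note that the original text's "475 + 198 − 25" was a typo for 189):

```python
# Arithmetic check of the sample flow figures reported above.
retrieved = 475 + 189 - 25        # PubMed + CINAHL - overlap
assert retrieved == 639

after_screening = 280             # remaining after title/abstract review
study_set, reserved = 145, 135
assert study_set + reserved == after_screening

met_criteria = 110
primary_sources_added = 4
final_sample = met_criteria + primary_sources_added
print(final_sample)  # 114
```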

Figure 1
Concept Identification

During the content analysis, the initial 25 articles yielded 34 concepts. The analysis was halted after 49 articles because data saturation was achieved. In total, 43 concepts were extracted from the 49 journal articles published in 27 different journals between 1989 and 2010. The definition and example of each concept are shown in Table 1. All 43 concepts were grouped into eight domains (Table 1).

Table 1

Eight domains of concepts were constructed: Demographics (k = 12), Diagnostic Testing (k = 3), Ethical Consideration (k = 3), Health Status (k = 12), Lifestyle (k = 3), Treatment (k = 3), Health Care Service (k = 2), and Study Design (k = 5). The number of domains covered in each journal article ranged from 3 to 6 (mean, 4.2 ± 1.0). Regarding the frequency of the concepts in each domain, concepts from Health Status (38.0%), Demographics (24.7%), and Treatment (20.6%) were seen most frequently, followed by Ethical Consideration (5.3%), Study Design (4.3%), Diagnostic Testing (3.5%), Health Care Service (2.4%), and Lifestyle (1.2%).
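The per-domain concept counts (k) reported above can be checked against the 43-concept total:

```python
# Check that the reported per-domain concept counts sum to 43 concepts
# across 8 domains (figures taken from the paragraph above).
domains = {
    "Demographics": 12,
    "Diagnostic Testing": 3,
    "Ethical Consideration": 3,
    "Health Status": 12,
    "Lifestyle": 3,
    "Treatment": 3,
    "Health Care Service": 2,
    "Study Design": 5,
}
total_concepts = sum(domains.values())
print(len(domains), total_concepts)  # 8 43
```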

The frequency with which concepts appeared in the journal article eligibility criteria was evaluated. In the 49 journal articles, 443 units of analysis (eligibility criterion statements) were selected. Codes were applied 490 times because some criterion statements were coded with more than one concept. Therapeutic Procedure (17.8%) and Disease or Disorder (17.6%) were coded most frequently; another five concepts were coded more than 20 times (>4.9%): Language Comprehension (6.5%), Symptom or Sign (6.5%), Age (5.9%), Patient Indicator (5.3%), and Malignant Neoplasm Stage or Status (5%). Seventeen concepts were coded fewer than three times, and 10 concepts were coded only once (Figure 2).

Figure 2
Validity and Reliability

Face validity for the 43 concepts was achieved by two nurse researchers after three iterative rounds of review and revision. For testing intercoder reliability, the two coders independently selected 308 and 297 units for coding; 224 units were selected by both. Of these 224 units, 194 (86%) were coded as the same concept. The Cohen κ was 0.74. Disagreements between coders were discussed until 100% agreement was achieved. Some concept definitions were revised and updated to reduce misinterpretation and misunderstanding.
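The reliability figures above are internally consistent. Cohen κ is defined as κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement; given the reported p_o and κ, the implied p_e can be recovered (the full κ computation in the study used the per-concept marginals, which are not reported here).

```python
# Check of the intercoder reliability figures above. Cohen kappa is
# kappa = (p_o - p_e) / (1 - p_e); given the reported observed agreement
# and kappa, recover the chance agreement p_e the statistic implies.
common_units = 224          # units selected by both coders
agreements = 194            # units coded as the same concept
kappa = 0.74                # reported Cohen kappa

p_o = agreements / common_units        # observed agreement, ~0.866
p_e = (p_o - kappa) / (1 - kappa)      # implied chance agreement

print(round(p_o, 3), round(p_e, 3))  # 0.866 0.485
```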

Comparison of Study Eligibility Criteria Concepts Between Studies

Original concepts from the study of Luo et al23 were modified through renaming, combining, decomposing, and moving concepts to other domains (Figure 3). Five concepts (Age, Allergy, Ethnicity, Life Expectancy, and Organ or Tissue Status) from the study of Luo et al23 were applicable without any change in name or definition. Two concepts (Bedtime and Receptor Status) were not found in the nursing RCT articles and were omitted, and the remainder of the original concepts were modified (Figure 3). Preference and Compliance With the Protocol were merged into Research Participation Agreement Indicator because both indicate a person’s willingness to participate in a study as designed. The concept of Disease, Symptom, or Sign23 was decomposed into Disease or Disorder and Symptom or Sign because many nursing studies focus specifically on symptom management.

Figure 3

Some nursing studies included family members or other nonpatients as participants. Instead of the original concept of Special Patient Characteristics,23 this study articulated specific characteristics including Patient Indicator and Relationship to Patient (capturing information about whether a participant was a patient). Other specific characteristics were Training or Educational Experience and Occupation, reflecting social roles.

Enrollment in Other Studies was renamed Enrollment Status and placed in the domain of Study Design. Literacy23 was renamed Language Comprehension to reflect how it was used in the studies—as an indicator of a person’s ability to participate in the study. Capacity23 was decomposed into Mental Health, Physical Capability, and Research Participation Capability and moved to the domain of Health Status.
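The modifications described above can be summarized as a mapping from the original concepts of Luo et al to this study's concepts. The entries below are a partial, illustrative restatement of the renamings, merges, decompositions, and omissions named in the text (the full mapping is in Figure 3).

```python
# Partial sketch of the concept revisions described above, as a mapping
# from Luo et al.'s concepts to this study's concepts. Illustrative
# restatement of the text only; the complete mapping is in Figure 3.
revisions = {
    # unchanged
    "Age": ["Age"],
    "Life Expectancy": ["Life Expectancy"],
    # omitted (not found in the nursing RCT articles)
    "Bedtime": [],
    "Receptor Status": [],
    # merged into one concept
    "Preference": ["Research Participation Agreement Indicator"],
    "Compliance With the Protocol": ["Research Participation Agreement Indicator"],
    # decomposed into finer concepts
    "Disease, Symptom, or Sign": ["Disease or Disorder", "Symptom or Sign"],
    "Capacity": ["Mental Health", "Physical Capability",
                 "Research Participation Capability"],
    # renamed
    "Enrollment in Other Studies": ["Enrollment Status"],
    "Literacy": ["Language Comprehension"],
}

omitted = [old for old, new in revisions.items() if not new]
print(omitted)  # ['Bedtime', 'Receptor Status']
```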

Discussion


This study identified eligibility criteria concepts and domains from published cancer nursing RCT research. Forty-three concepts were identified and categorized into eight domains. Although 27 concepts were coded fewer than three times, this low frequency should not be interpreted as a measure of their importance among all concepts.

The Demographics and Health Status domains included the largest numbers of concepts. This is not surprising because that information is common across most types of studies. The Treatment domain had only three concepts, but these were frequently used. The type of cancer treatment or surgery is related to cancer patients’ symptoms or outcomes, so researchers address it in the recruitment process.

The findings confirmed that the six domains from the study of Luo et al,23 which focused mainly on medical research, were also found in nursing RCT articles. However, differences in research interests between nursing and medical studies may account for why Receptor Status and Bedtime were not found in this study. Receptor status may be a consideration in a pharmaceutical study.34 Although Receptor Status could potentially be used in cancer nursing research, this concept can be captured by Laboratory Procedure or Result. A statement like “wake time after sleep onset” or “occurring 3 nights per week for 3 months” was more likely a measure of sleep disturbance than of a person’s sleep-related lifestyle; hence, such statements were more appropriately coded as Symptom or Sign rather than Bedtime. Moreover, sleep/wake disturbance is one of the common symptoms in cancer patients and has been studied principally in nursing studies.35,36

Additional domains were uniquely created in this study. The Health Care Service domain, including Health Care Organization and Health Insurance, can describe the context in which patient care is provided. Some study eligibility criteria were related to the study design. For example, the Cancer Surveillance System program37 was used as a specific source for identifying potential participants. Moreover, more than 30% of nursing research uses the telephone or other media such as tape, CD, or DVD as part of an intervention,38 so participants need to have access to and know how to use that equipment. Accordingly, concepts like Recruitment Source and Study Intervention are needed to capture this information.

Informed Consent or Informed Assent was often found in the eligibility criteria statements; we debated whether it was truly part of the eligibility criteria because the consent process often happens after the determination of eligibility.39 We decided to keep this concept in the list for future refinement. The ability to consent, however, is considered part of the eligibility criteria, and these data can be represented by Research Participation Agreement Indicator, Eligibility Approved By, Mental Health, or Research Participation Capability.

A few concepts were abstract, requiring complicated evaluation or human interpretation, but were seen often enough to retain. For example, studies may require that participants have cancer without further detail (Evidence of Disease), be able to communicate with researchers (Language Comprehension), and be able to participate in an intervention (Research Participation Capability).

Symptom management is one of the main foci in cancer nursing research, and Symptom or Sign was one of the concepts most frequently found. More than one symptom is often investigated in nursing research because symptoms like fatigue and depression are related to each other or occur together. One of the content experts, with many years of research experience in symptom management, suggested creating symptom cluster concepts. A concept needs a clear definition, however, and the definition of and relationships among symptom clusters in cancer patients are still being developed.40

Strengths and Limitations

This study is the first of its kind focusing on cancer nursing research. It confirmed that around 93% (25/27) of the study eligibility criteria concepts identified from the medical literature23 are also found in the nursing literature. The other 7% (2/27) were not found, suggesting they do not apply to cancer nursing RCTs. Importantly, modifications to many of the shared concepts were needed. This study also captured concepts related to study design that are in fact relevant to study eligibility criteria but were not shown in previous studies.

The use of published journal articles as the sample was simultaneously a strength and a limitation of this study. The strength was showing what concepts of study eligibility criteria were present in published journal articles, an important source of knowledge for evidence-based practice, and how they were presented. The limitation was that some study eligibility criteria information may not have been captured because there is no uniform format across journal articles, the study eligibility criteria may appear in more than one place in an article, and the published study description may not precisely reflect details of the study protocol.41 There is also a potential for selection bias. Studies that were not clearly described as RCTs were excluded from this analysis; however, we minimized the risk of eliminating eligible articles by reviewing the full text of articles. This study used two online journal databases, PubMed and CINAHL, and may have missed articles indexed in other databases, but these two databases have been found to provide comprehensive coverage of nursing journals.42

Finally, articles without readily accessible full text were excluded from this study. This was an intentional part of the design. Full-text articles, especially open access ones, may reach more readers because information seekers tend to use information that is more available to them.43 Although 40% (257/639) of the keyword search results lacked accessible full text, only approximately 11% (72/639) of all retrieved articles appeared to be eligible based on title and abstract.

Implications for Nursing

This study provides support for the development of a standard representation for cancer nursing RCT eligibility criteria. Developing a standard, computable representation of eligibility criteria is important not only for interoperability but also for appropriately applying research findings to clinical settings. Nursing informaticists can map these concepts to existing standardized terminologies to determine which is most likely to be adopted. The mapping results can then be used to design information systems that support querying the EHR for potentially eligible participants, particularly with respect to interoperability. Nursing researchers can use standard eligibility criteria as a guideline in structuring the eligibility criteria for future studies. Although the concepts identified in this study were generated from cancer nursing RCTs, the 42 concepts other than Malignant Neoplasm Stage or Status, which is specific to cancer research, can potentially be used as a set of common data elements for nursing research.

Conclusion


Efforts to standardize the representation of study eligibility information based on medical research19,43,44 may not sufficiently represent nursing research. Nursing research has unique foci and research interests, and current representations need to be adapted to accommodate nursing research criteria. This study provides a solid foundation for achieving the goal of using information systems to support high-quality clinical nursing research by standardizing eligibility criteria for cancer nursing studies.

Acknowledgments

We thank Jacqueline Eaton, MS, and Djin Lyn Lai, BS, BSN, RN, for assisting with evaluation of intercoder reliability. This study was supported by a research award in 2011 from the Gamma Rho Chapter of Sigma Theta Tau International at University of Utah.

References


1. American Cancer Society. Cancer Facts and Figures 2012. Atlanta, GA: American Cancer Society; 2012.

2. Demark-Wahnefried W, Aziz NM, Rowland JH, Pinto BM. Riding the crest of the teachable moment: promoting long-term health after the diagnosis of cancer. J Clin Oncol. 2005; 23 (24): 5814–5830.

3. Resnik L, Liu D, Hart DL, Mor V. Benchmarking physical therapy clinic performance: statistical methods to enhance internal validity when using observational data. Phys Ther. 2008; 88 (9): 1078–1087.

4. The Cochrane Collaboration. Glossary of terms in the Cochrane Collaboration. Version 4.2.5. Accessed January 14, 2014.

5. Foy R, Parry J, Duggan A, et al. How evidence based are recruitment strategies to randomized controlled trials in primary care? Experience from seven studies. Fam Pract. 2003; 20 (1): 83–92.

6. Dyar S, Lesperance M, Shannon R, Sloan J, Colon-Otero G. A nurse practitioner directed intervention improves the quality of life of patients with metastatic cancer: results of a randomized pilot study. J Palliat Med. 2012; 15 (8): 890–895.

7. Guo Y, Logan HL, Glueck DH, Muller KE. Selecting a sample size for studies with repeated measures. BMC Med Res Methodol. 2013; 13: 100.

8. Koller A, Miaskowski C, De Geest S, Opitz O, Spichiger E. Results of a randomized controlled pilot study of a self-management intervention for cancer pain. Eur J Oncol Nurs. 2013; 17 (3): 284–291.

9. Mehling WE, Lown EA, Dvorak CC, et al. Hematopoietic cell transplant and use of massage for improved symptom management: results from a pilot randomized control trial. Evid Based Complement Alternat Med. 2012; 2012: 450150. doi:10.1155/2012/450150.

10. Scura KW, Budin W, Garfing E. Telephone social support and education for adaptation to prostate cancer: a pilot study. Oncol Nurs Forum. 2004; 31 (2): 335–338.

11. Burke ME, Albritton K, Marina N. Challenges in the recruitment of adolescents and young adults to cancer clinical trials. Cancer. 2007; 110 (11): 2385–2393.

12. Knobf MT, Juarez G, Lee SY, Sun V, Sun Y, Haozous E. Challenges and strategies in recruitment of ethnically diverse populations for cancer nursing research. Oncol Nurs Forum. 2007; 34 (6): 1187–1194.

13. Logsdon MC, Gohmann S. Challenges and costs related to recruitment of female adolescents for clinical research. J Pediatr Nurs. 2008; 23 (5): 331–336.

14. Sprigg N, Willmot MR, Gray LJ, et al. Amphetamine increases blood pressure and heart rate but has no effect on motor recovery or cerebral haemodynamics in ischaemic stroke: a randomized controlled trial (ISRCTN 36285333). J Hum Hypertens. 2007; 21 (8): 616–624.

15. Rothwell PM. External validity of randomised controlled trials: “to whom do the results of this trial apply?” Lancet. 2005; 365 (9453): 82–93.

16. Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. J Am Med Inform Assoc. 2009; 16 (6): 869–873.

17. Embi PJ, Jain A, Clark J, Bizjack S, Hornung R, Harris CM. Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med. 2005; 165 (19): 2272–2277.

18. de Keizer NF, Abu-Hanna A, Zwetsloot-Schonk JH. Understanding terminological systems, I: terminology and typology. Methods Inf Med. 2000; 39 (1): 16–21.

19. Weng C, Tu SW, Sim I, Richesson R. Formal representation of eligibility criteria: a literature review. J Biomed Inform. 2010; 43 (3): 451–467.

20. Richesson RL, Krischer J. Data standards in clinical research: gaps, overlaps, challenges and future directions. J Am Med Inform Assoc. 2007; 14 (6): 687–696.

21. Bhattacharya S, Cantor MN. Analysis of eligibility criteria representation in industry-standard clinical trial protocols. J Biomed Inform. 2013; 46 (5): 805–813.

22. Koisch J, Mead C, Velezis M. Clinical research functional query. Accessed October 4, 2011.

23. Luo Z, Johnson SB, Weng C. Semi-automatically inducing semantic classes of clinical research eligibility criteria using UMLS and hierarchical clustering. AMIA Annu Symp Proc. 2010; 2010: 487–491.

24. Niland J. ASPIRE: Agreement on Standardized Protocol Inclusion Requirements for Eligibility. 2007. Accessed April 24, 2014.

25. Tu S, Peleg M, Carini S, Bobak M, Rubin D, Sim I. The Eligibility Rule Grammar and Ontology (ERGO). University of California. Last modified October 11, 2011. Accessed October 4, 2011.

26. Weng C, Wu X, Luo Z, Boland MR, Theodoratos D, Johnson SB. EliXR: an approach to eligibility criteria extraction and representation. J Am Med Inform Assoc. 2011; 18 (suppl 1): i116–i124.

27. Prayle AP, Hurley MN, Smyth AR. Compliance with mandatory reporting of clinical trial results on cross sectional study. BMJ. 2012; 344: d7373.

28. National Institute of Nursing Research. National Institute of Nursing Research: mission. National Institutes of Health. April 5, 2012. Accessed April 14, 2012.

29. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005; 15 (9): 1277–1288.

30. Krippendorff K. Content Analysis: An Introduction to Its Methodology. 2nd ed. Thousand Oaks, CA: Sage; 2004.

31. Insch GS, Moore JE, Murphy LD. Content analysis in leadership research: examples, procedures, and suggestions for future use. Leadersh Q. 1997; 8 (1): 1–25.

32. Cimino JJ. Desiderata for controlled medical vocabularies in the twenty-first century. Methods Inf Med. 1998; 37 (4–5): 394–403.

33. Francis JJ, Johnston M, Robertson C, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010; 25 (10): 1229–1245.

34. Gianni L, Dafni U, Gelber RD, et al. Treatment with trastuzumab for 1 year after adjuvant chemotherapy in patients with HER2-positive early breast cancer: a 4-year follow-up of a randomised controlled trial. Lancet Oncol. 2011; 12 (3): 236–244.

35. Davidson JR, MacLean AW, Brundage MD, Schulze K. Sleep disturbance in cancer patients. Soc Sci Med. 2002; 54 (9): 1309–1321.

36. Hearson B, Sawatzky JA. Sleep disturbance in patients with advanced cancer. Int J Palliat Nurs. 2008; 14 (1): 30–37.

37. McCorkle R, Benoliel JQ, Donaldson G, Georgiadou F, Moinpour C, Goodell B. A randomized clinical trial of home nursing care for lung cancer patients. Cancer. 1989; 64 (6): 1375–1382.

38. Conn VS, Cooper PS, Ruppar TM, Russell CL. Searching for the intervention in intervention research reports. J Nurs Scholarsh. 2008; 40 (1): 52–59.

39. Gross CP, Mallory R, Heiat A, Krumholz HM. Reporting the recruitment process in clinical trials: who are these patients and how did they get there? Ann Intern Med. 2002; 137 (1): 10–16.

40. Xiao C. The state of science in the study of cancer symptom clusters. Eur J Oncol Nurs. 2010; 14 (5): 417–434.

41. Blümle A, Meerpohl JJ, Rucker G, Antes G, Schumacher M, von Elm E. Reporting of eligibility criteria of randomised trials: cohort study comparing trial protocols with subsequent articles. BMJ. 2011; 342: d1828.

42. Allen MP, Jacobs SK, Levy JR. Mapping the literature of nursing: 1996–2000. J Med Libr Assoc. 2006; 94 (2): 206–220.

43. Davis PM, Lewenstein BV, Simon DH, Booth JG, Connolly MJ. Open access publishing, article downloads, and citations: randomised controlled trial. BMJ. 2008; 337: a568.

44. Tu SW, Peleg M, Carini S, et al. A practical method for transforming free-text eligibility criteria into computable criteria. J Biomed Inform. 2011; 44 (2): 239–250.


Keywords: Content analysis; Eligibility determination; Neoplasms; Nursing informatics; Randomized controlled trial

Supplemental Digital Content


© 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins.


