According to Tyndall et al,1 writing development is a key component of professional nurse identity. Therefore, it is a necessary component of nursing education from the associate to the doctoral level. A review of US and international research reveals that this value is a priority in other health professions as well. There are multiple published studies regarding writing development interventions for health science students and faculty in pharmacy,2 medicine,3,4 interdisciplinary health professions,5,6 and nursing.7-11 The focus of this study is writing development in doctor of nursing practice (DNP) students.
Although the challenge of writing skill development is widely documented in the literature, investigators rarely define or operationalize the concept of scholarly or scientific writing, terms that are often used interchangeably. Gopen and Swan12 described scientific writing as a form of discourse in which the author presents new knowledge in the context of previous findings in a structured, predictable format. This is the term and definition that will be used to represent the writing skills discussed for DNP students in this article. Fundamental writing skills (such as grammar, word choice, and organization) are a component of scientific writing, but selection and use of references, critical appraisal, and synthesis are also important skills relevant to scientific writing. Nursing faculty are challenged to evaluate student mastery of health science writing without an effective tool to distinguish and measure skill development.
Faculty-Identified Writing Deficiencies
In a survey,13 DNP program directors identified writing deficiencies in their student populations at rates ranging from 5% to nearly 100%. Further, 87% of faculty reported they were somewhat to very dissatisfied with the final DNP scholarly project.
Descriptions of faculty-identified writing skill deficits varied. Some investigators described generally “poor scholarly writing skills,”13,14 whereas others identified specific skill deficits such as reference style format and low information literacy (the ability to find, appraise, and use research evidence).8 A study of health science students (nursing included) identified that plagiarism, lack of citations, and grammar and punctuation errors were prevalent.15 Viewed against Gopen and Swan's12 definition, these studies capture only a portion of the skills required of DNP students, and there is no consensus on how those skills should be identified and measured.
Student-Identified Writing Deficiencies
In studies that identified nursing students' barriers to writing development, participants noted affective and skills-related challenges. In a survey of baccalaureate nursing students, participants identified grammar, format, organization, and flow as their greatest writing obstacles.8 Likewise, students who spoke English as a second language stated that writing assignments were overwhelming due to the additional time needed to read, understand, and translate content.9,11 Similar to native English-speaking students, they also reported poor knowledge regarding approaches to read and review literature, use citations, present ideas logically, proofread, organize, and use appropriate academic tone.
Among graduate nursing students (MSN, DNP, PhD) and faculty writers, many struggled with getting started and figuring out what to write. They expressed a desire for more writing instruction to guide improvement (ie, more time for instruction, faculty feedback, and clear grading criteria).11,16,17 In several other studies, participants stated that they wanted feedback commensurate with effort, individualized and content-focused (vs grammar, punctuation, and structure alone), balanced positive and negative (to avoid demoralizing students), and early enough to allow for revision.17-20 These data indicate that there is a lack of clarity for both students and faculty regarding expectations for student scientific writing performance.
Although rubrics have been shown to aid in achieving faculty consensus, findings conflict regarding their reliability, validity, and effectiveness for improving student performance. In some studies, nursing student writing performance was evaluated solely on general essay writing criteria (sentence and paragraph structure, content, word choice, fluency, organization, format, grammar, and punctuation).21,22 Others focused on scientific writing skills particular to the content and conventions of health science writing, measuring reference use, paraphrasing, direct quotes, and plagiarism.14,15,23-25 Although some rubrics resulted in decreased grade inflation,14 high internal consistency ratings,24 and interrater reliability,23 the instruments were not readily applicable to DNP students due to the course-specific nature of the rubric,14,24 criteria that were too broad to distinguish specific skills,14,25 or the lack of advanced scientific writing skills, such as synthesis and critical appraisal.23
Roush and Tesoro26 developed the Doctor of Nursing Practice Project Critical Appraisal Tool (DNP-PCAT) to assess DNP Project quality and rigor. It is a 141-point, 16-component instrument with 14 focused on content and 2 pertaining to writing. While the heavy content emphasis is appropriate to evaluate quality and rigor, its usefulness for distinguishing writing skill strengths and deficits is limited for DNP faculty and students.
Existing rubrics for DNP students are lacking in 1 or more of the following: a comprehensive set of scientific writing skills, adequate distinction of skills, a clear description of performance standards, or broad applicability across courses or programs. This study's purpose was to examine scientific writing skill using a standardized essay writing rubric, the City University of New York Assessment Test of Writing (CATW),27 and the Scientific Writing Assessment (SWA), developed by the principal investigator. The specific aims were to (1) identify and rank skills within DNP student writing samples, (2) compare interrater reliability for both rubrics, and (3) examine concurrent validity of the CATW and SWA.
This study used a cross-sectional, descriptive design. Two investigators reviewed a single sample of 27 DNP Project papers. Each investigator (rater 1 and rater 2) independently evaluated and scored each paper using 2 rubrics.
A power analysis was conducted using G*Power28 with a moderate effect size (0.50), α = .05, and power = 0.80. The estimated sample size was 26. Thirty writing samples were selected (from a list of 131 available) to include additional papers for rater training.
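The article reports a G*Power estimate of 26 but does not specify the test family or options selected, on which the exact figure depends. As one hedged illustration (not necessarily the authors' calculation), a Fisher z approximation for the sample size needed to detect a correlation of 0.50 can be sketched as follows:

```python
from math import atanh, ceil

from scipy.stats import norm


def n_for_correlation(rho, alpha=0.05, power=0.80, two_tailed=True):
    """Approximate n to detect a correlation of rho (Fisher z method).

    Illustrative only: G*Power's result depends on the chosen test
    family, so this need not reproduce the article's estimate of 26.
    """
    z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) / atanh(rho)) ** 2 + 3)


print(n_for_correlation(0.50))                    # two-tailed
print(n_for_correlation(0.50, two_tailed=False))  # one-tailed
```

The approximation converts ρ to Fisher's z scale, where the sampling distribution is approximately normal with variance 1/(n − 3).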
A random-number generator was used to select the DNP Project papers from an online repository in the order they appeared on the list. Papers were downloaded, and author names and academic institutions were obscured to limit bias in the review process. The repository is not peer reviewed; authors pay a small fee to upload their DNP Projects as a means of dissemination, and the papers are publicly accessible on the organization's website. This study received institutional review board approval.
City University of New York Assessment Test of Writing
The CATW was selected as the essay writing rubric because of its detailed instructions for use, scoring, and interpretation. It is a standardized essay writing test used to evaluate student readiness for collegiate writing.27 Students are required to read a passage and to compose an essay in 90 minutes based on a given prompt. The 5 scoring categories are as follows: (1) critical response (summary, discussion, and providing examples), (2) development (explanation of ideas), (3) structure (organization), (4) language use: sentences and word choice, and (5) language use: grammar, usage, and mechanics. Scores range from 1 to 6 in each category, with 6 representing the highest level of skill. Possible scores range from 5 to 30.
The CATW Information for Students user guide is available online to help students prepare for the assessment, interpret scores, and review examples of poor and exemplary writing performance.27 A review of written communication assessments found no published reliability or validity data for the CATW.29
Scientific Writing Assessment
The SWA is an instrument developed by the principal investigator to measure both general and scientific writing skill proficiency. The SWA is available for academic use (Supplemental Digital Content, Instrument, http://links.lww.com/NE/A801). To establish face validity, it was refined with input from an interdisciplinary panel of educators (nursing, social work, and English) with expertise in scientific writing. The SWA includes 3 broad categories: (1) fundamental skills; (2) information literacy and integrity; and (3) organization, conceptualization, and critical analysis.
Each broad category is divided into 4 or 5 individual skills. Fundamental skills consist of grammar, punctuation, and spelling; format and style; adherence to standard structure/rubric; and concise, nonredundant presentation of information. Information literacy and integrity consist of substantive content, use of primary sources, paraphrasing, and selection of scholarly sources. Organization, conceptualization, and critical analysis consist of clear and narrow focus, organization/use of headings, organization/logical flow, critical appraisal, and evidence synthesis.
The full instrument contains a total of 13 individual items, each with a score range of 1 to 5 (maximum possible = 65). For the item labeled adheres to standard structure/rubric, the raters applied the content-focused components of the DNP-PCAT26 as a model structure (introduction, methods, etc) for DNP Project papers in lieu of a specific university or program rubric. This is the first study to apply the SWA to evaluate scientific writing skill in DNP students and to evaluate reliability and validity.
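The scoring structure described above (13 items in 3 subscales, each item scored 1 to 5, maximum 65) can be encoded as a simple data structure. The snake_case item names below are paraphrases of the article's skill labels, not official identifiers from the instrument:

```python
# Hypothetical encoding of the SWA structure as described in the article:
# 13 items across 3 subscales, each scored 1-5 (maximum total = 65).
SWA_SUBSCALES = {
    "fundamental_skills": [
        "grammar_punctuation_spelling", "format_and_style",
        "adherence_to_structure_rubric", "concise_nonredundant",
    ],
    "information_literacy_integrity": [
        "substantive_content", "primary_sources",
        "paraphrasing", "scholarly_source_selection",
    ],
    "organization_conceptualization_critical_analysis": [
        "clear_narrow_focus", "use_of_headings", "logical_flow",
        "critical_appraisal", "evidence_synthesis",
    ],
}


def subscale_totals(item_scores):
    """Sum per-subscale and overall totals from a dict of item scores (1-5)."""
    totals = {name: sum(item_scores[item] for item in items)
              for name, items in SWA_SUBSCALES.items()}
    totals["overall"] = sum(totals.values())
    return totals
```

With this layout, a perfect paper (all items scored 5) totals 65, matching the maximum reported in the article.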
Electronic copies of the downloaded files were shared between the investigators using a private Google drive folder. The initial 3 papers (in order of retrieval from the database) in the folder were used for training purposes to clarify skill performance expectations for both raters within the CATW and SWA rubrics.
After discussion of the initial papers, the investigators independently reviewed and scored each of the remaining 27 using the CATW and SWA. When all scores were recorded by both raters (on hard or electronic copies), the data were entered into the Statistical Package for the Social Sciences (IBM Corp, Armonk, New York) for storage and analysis.
Total scores, means, SDs, and percentages for the CATW and SWA were calculated. Additionally, means and SDs were calculated for individual items for both rubrics and SWA subscales. To determine overall skill performance, both raters' mean scores were combined for each SWA item, and the combined means were ranked (high to low).
The overall percentage that raters match in their assignment of scores on writing samples is one of several methods to establish interrater reliability. Either exact match (percent at which raters assign the same score) or adjacent match (raters score within 1 point above or below) can be used.30 The percentage that raters matched was determined for each DNP Project paper for all CATW and SWA individual items and evaluated based on a minimum standard of 70%.31
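The exact and adjacent match calculations described above can be sketched as follows; the score lists are invented for illustration, not taken from the study data:

```python
def agreement_rates(scores_1, scores_2, tolerance=1):
    """Return (exact, adjacent) percentage agreement between two raters.

    Exact: identical scores. Adjacent: scores within +/- tolerance points.
    """
    pairs = list(zip(scores_1, scores_2))
    exact = 100 * sum(a == b for a, b in pairs) / len(pairs)
    adjacent = 100 * sum(abs(a - b) <= tolerance for a, b in pairs) / len(pairs)
    return exact, adjacent


# Hypothetical item scores for 10 papers from two raters
rater_1 = [5, 4, 4, 3, 5, 5, 4, 3, 5, 4]
rater_2 = [5, 4, 3, 3, 4, 5, 2, 3, 5, 5]
exact, adjacent = agreement_rates(rater_1, rater_2)
meets_standard = adjacent >= 70  # minimum standard applied in the study
```

Adjacent agreement is always at least as high as exact agreement, since an exact match trivially falls within the tolerance.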
In the absence of a scientific writing rubric, general essay writing rubrics are often used. The CATW is a standardized test used for more than a decade to assess student readiness for the rigors of college writing.27 Although the SWA includes skills that extend beyond general essay writing, it also includes items that align with those of the CATW. Therefore, if the SWA correlates at least moderately with the CATW, the SWA can also be considered an acceptable measure of fundamental writing skills.
Overall and Individual Skill Performance
The overall mean scores for the SWA were 54.7 ± 4.8 (rater 1) and 54.5 ± 4.3 (rater 2). The mean SWA subscale scores for raters 1 and 2 were equal for information literacy and integrity and organization, conceptualization, and critical analysis, with a mean difference of 1.4 points for fundamental skills (Table 1).
Table 1 - Descriptive Statistics for CATW and SWA Total and Subscale Scores (N = 27). Subscale rows (for each rater, R1 and R2): fundamental skills; information literacy and integrity; organization, conceptualization, and critical analysis. Abbreviation: R, rater. aPercentage of possible points for full instrument or subscales.
For both raters' individual CATW and SWA item means, see Supplemental Digital Content Table 1, http://links.lww.com/NE/A802. A cutoff value of 0.5 points was used to compare rater discrepancy for individual items. There was a rater discrepancy of more than 0.5 points for 3 of 5 CATW items (60.0%), including development of ideas, structure of response, and sentence and word choice. There was a rater discrepancy of more than 0.5 points for 4 of 13 SWA items (30.8%), including the use of primary sources, source selection, clear and narrow focus, and organization/logical flow, indicating greater overall agreement for the SWA on individual items.
Supplemental Digital Content Table 2, http://links.lww.com/NE/A803, includes the raters' combined means and percentage of total scores on the 13 SWA items. The skills were ranked from highest to lowest to visualize scientific writing skill strengths and needs for development among the writing samples. Three skills had a combined mean of less than 4.0 (80% of the total possible) and included critical appraisal, concise/nonredundant presentation, and use of primary sources. Among the highest-ranked skills at 4.5 or greater (90%) were substantive content; adherence to standard structure/rubric; paraphrasing, grammar, punctuation, and style; and clear and narrow focus.
The results for interrater agreement are listed in Supplemental Digital Content Table 3, http://links.lww.com/NE/A804, including the percentage that raters agreed exactly or within 1 point of each other (adjacent agreement). Adjacent agreement greatly exceeded exact agreement for both rubrics, and the SWA achieved higher adjacent agreement (82.3%) than the CATW (69.6%). When examining the match among the SWA subscale scores, fundamental skills had the highest match (92.6%), and information literacy and integrity yielded the lowest (75.0%). Nonetheless, this lowest value still exceeded the overall adjacent match for the CATW.
To further examine the difference between raters, t tests were conducted. To evaluate the rubrics on the same scale of measurement, the percentage of the total points possible was calculated for each rater and used in the t test analysis (Table 2). There was a significant difference between raters on the CATW (R1: 86.3 ± 14.9, R2: 77.8 ± 8.2, P = .01), but there was no significant difference on the SWA (R1: 84.1 ± 7.3, R2: 83.8 ± 6.6, P = .88).
Table 2 - t Tests for Percentage of Total CATW and SWA Scores (N = 27)
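The article does not state whether the t tests were paired; because both raters scored the same 27 papers, a paired test is the natural sketch. The percentage scores below are simulated to mimic the reported means, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated percentage-of-total scores for 27 papers; values are invented,
# loosely mimicking the reported rater means (R1 higher than R2 on the CATW)
rater_1 = rng.normal(86, 15, 27)
rater_2 = rater_1 - rng.normal(8, 5, 27)  # rater 2 systematically lower

# Paired t test: the same 27 papers are scored by both raters
t_stat, p_value = stats.ttest_rel(rater_1, rater_2)
```

A paired test evaluates the per-paper score differences, which is why a consistent rater offset yields a small P value even when the two score distributions overlap.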
Association of CATW and SWA
Concurrent validity was examined using Spearman ρ correlation analysis to evaluate the relationship between each rater's score on the CATW and SWA. Similar to the t test analysis, percentages were used in the correlation analysis to examine associations between each rubric overall and the SWA subscales. The correlation matrix for rater 1 appears in Supplemental Digital Content Table 4, http://links.lww.com/NE/A805, and rater 2's values appear in Supplemental Digital Content Table 5, http://links.lww.com/NE/A806. The correlations ranged from 0.56 to 0.75, representing moderate to strong associations for both raters between the CATW and SWA rubrics.
SWA subscale associations with the CATW were also examined for raters 1 and 2. The SWA's fundamental skills (R1: r = 0.50, P < .01; R2: r = 0.65, P < .01) and organization, conceptualization, and critical analysis (R1: r = 0.58, P < .01; R2: r = 0.65, P < .01) subscales were most strongly associated with the overall CATW. The weakest association for both raters was the information literacy and integrity subscale (R1: r = 0.20, ns; R2: r = 0.45, P < .05).
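As a usage sketch of the correlation analysis above, scipy computes Spearman ρ directly on paired scores; the 5 paired percentage values below are invented for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical percentage scores for 5 papers on each rubric
catw = [70, 75, 82, 88, 95]
swa = [75, 72, 85, 80, 96]

# Spearman rho correlates the rank orders of the two score sets,
# so it captures monotonic (not necessarily linear) association
rho, p_value = spearmanr(catw, swa)
```

With these invented values the rank orders mostly agree (two adjacent swaps), giving ρ = 0.80, in the strong range reported for the CATW-SWA totals.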
This study found that, despite the common use of general essay writing rubrics in nursing education and research, a scientific writing rubric was more effective for distinguishing the specific skills required for health science writing and achieved higher rater consistency. When focusing on specific skills, the raters identified several strengths, including substantive content, clear and narrow focus, paraphrasing, and adherence to structure/rubric. These findings may differ from those of Dols et al13 because the sample of papers used in this study reflects the culminating work of the DNP program: students likely either developed the skills they had lacked through coursework or received editing assistance from faculty mentors. Yet, Roush and Tesoro26 also examined completed DNP Projects from a repository and reported that students lacked the ability to communicate clearly through writing. This could further indicate that the DNP-PCAT's 2 components dedicated to scientific writing evaluation are too broad to identify specific skills that need development. Using the SWA, raters were able to identify a variety of writing strengths and weaknesses.
In this study, raters agreed that 10 of 13 SWA skills were evident in 80% or more of the DNP student writing samples. However, 3 skills were not consistently demonstrated: critical appraisal, concise/nonredundant presentation, and use of primary sources. Critical appraisal was lacking in the majority of papers, with most writers merely summarizing the content of cited studies. Writers also frequently repeated information in various sections of the paper and inappropriately cited secondary sources without acknowledging the primary source. The raters retrieved 1 randomly selected source cited within each paper's introduction and identified several of these errors in the sample. Because not every source was inspected, this error may have been more prevalent than observed. Simply providing didactic training is insufficient to address these deficits. Instead, it is recommended that faculty inspect students' sources (particularly those used frequently to represent broad or unrelated content areas), provide detailed rubrics that identify the content required in each subsection of a paper, and model statements that summarize and critically appraise research findings for students to emulate.
The results of several reviews suggest additional best practices for nursing student writing development.32-34 They include integrated writing activities and instruction throughout the curriculum, faculty training to clarify performance expectations and increase rater consistency, scaffolded writing assignments (with reflexive feedback and student revision), and individualized, content-specific feedback that clearly links to evaluation criteria (ie, rubrics). This study adds the knowledge that, although nursing programs might be successful in developing many skills, critical appraisal, use of primary sources, and concise, nonredundant presentation of information are areas of weakness that need further development.
The results of this study should be considered in the context of its limitations. The DNP Project papers reviewed were obtained from a national repository in which authors from a variety of institutions chose to upload their work. It is possible that these projects do not reflect the breadth of all DNP Projects. However, of the papers available in the repository, the investigators used random selection to obtain an unbiased sample. Additionally, because papers were obtained from various institutions, expectations for structure and organization varied. Nonetheless, despite differing requirements, interrater reliability remained above acceptable standards. Finally, the SWA subscale with the lowest interrater agreement was information literacy and integrity, particularly source selection and use of primary sources. It is possible that further training could improve consistency within this category.
The SWA is a reliable and valid rubric that can be applied in any nursing or health science course that features a scientific writing assignment. It can be a useful tool for faculty to educate, coach, and evaluate students' scientific writing performance, particularly with regard to the 13 individual writing skills identified in the instrument. Previously developed and tested rubrics for nursing and other health science students focused on content requirements for a specific assignment or general essay writing skills. These are insufficient to guide writing development for DNP students. However, the SWA's focus on individual writing skills is recommended to aid faculty in identifying writers' strengths and addressing deficits effectively. Future research is recommended to test the instrument's reliability and validity in larger samples and at various levels of nursing education.
The authors thank Sulekha Anand, PhD; Laurie Drabble, PhD; Brian Gothberg; and Thomas Moriarty, EdD, for their contributions to this study.
1. Tyndall DE, Flinchbaugh KB, Caswell NI, Scott ES. Threshold concepts in doctoral education: a framework for writing development in novice nurse scientists. Nurse Educ. 2019;44(1):38–42. doi:10.1097/NNE.0000000000000535
2. Franks AM. Design and evaluation of a longitudinal faculty development program to advance scholarly writing among pharmacy practice faculty. Am J Pharm Educ. 2018;82(6):6556. doi:10.5688/ajpe6556
3. Al-Imari L, Yang J, Pimlott N. Peer-support writing group in a community family medicine teaching unit: facilitating professional development. Can Fam Physician.
4. Lemay M, Encandela J, Sanders L, Reisman A. Writing well: the long-term effect on empathy, observation, and physician writing through a residency writers' workshop. J Grad Med Educ. 2017;9(3):357–360. doi:10.4300/JGME-D-16-00366.1
5. Fernandez E, Garcia AM, Seres E, Bosch F. Students' satisfaction and perceived impact on knowledge, attitudes and skills after a 2-day course in scientific writing: a prospective longitudinal study in Spain. BMJ Open. 2018;8(1):e018657. doi:10.1136/bmjopen-2017-018657
6. Vishwanatha JK, Jones HP. Implementation of the Steps Toward Academic Research (STAR) fellowship program to promote underrepresented minority faculty into health disparity research. Ethn Dis. 2018;28(1):3–10. doi:10.18865/ed.28.1.3
7. Durham ML, Yingling C, Hershberger PE. Accelerating improvement of a doctor of nursing practice project proposal course using quality improvement methods. J Nurs Educ. 2019;58(5):306–311. doi:10.3928/01484834-20190422-11
8. McMillan LR, Raines K. Using the “write” resources: nursing student evaluation of an interdisciplinary collaboration using a professional writing assignment. J Nurs Educ. 2011;50(12):697–702. doi:10.3928/01484834-20110930-01
9. Sailsman S, Rutherford M, Tovin M, Cianelli R. Cultural integration online: the lived experience of English-as-a-second-language RN-BSN nursing students learning in an online environment. Nurs Educ Perspect. 2018;39(4):221–224. doi:10.1097/01.NEP.0000000000000301
10. Tai HC, Pan MY, Lee BO. Effects of attributional retraining on writing performance and perceived competence of Taiwanese university nursing students. Nurse Educ Today. 2016;44:66–73. doi:10.1016/j.nedt.2016.05.008
11. Weaver R, Jackson D. Evaluating an academic writing program for nursing students who have English as a second language. Contemp Nurse. 2011;38(1–2):130–138. doi:10.5172/conu.2011.38.1-2.130
12. Gopen G, Swan J. The science of scientific writing. Am Sci.
13. Dols JD, Hernandez C, Miles H. The DNP Project: quandaries for nursing scholars. Nurs Outlook. 2017;65(1):84–93. doi:10.1016/j.outlook.2016.07.009
14. Bickes JT, Schim SM. Righting writing: strategies for improving nursing student papers. Int J Nurs Educ Scholarsh. 2010;7:Article 8. doi:10.2202/1548-923X.1964
15. El Tantawi M, Al-Ansari A, Sadaf S, AlHumaid J. Evaluating the English language scientific writing skills of Saudi dental students at entry level. East Mediterr Health J. 2016;22(2):148–153. doi:10.26719/2016.22.2.148
16. Bazrafkan L, Shokrpour N, Yousefi A, Yamani N. Management of stress and anxiety among PhD students during thesis writing: a qualitative study. Health Care Manag (Frederick). 2016;35(3):231–240. doi:10.1097/HCM.0000000000000120
17. Giles TM, Gilbert S, McNeill L. Nursing students' perceptions regarding the amount and type of written feedback required to enhance their learning. J Nurs Educ. 2014;53(1):23–30. doi:10.3928/01484834-20131209-02
18. Agius NM, Wilkinson A. Students' and teachers' views of written feedback at undergraduate level: a literature review. Nurse Educ Today. 2014;34(4):552–559. doi:10.1016/j.nedt.2013.07.005
19. Gazza EA, Shellenbarger T, Hunker DF. Developing as a scholarly writer: the experience of students enrolled in a PhD in nursing program in the United States. Nurse Educ Today. 2013;33(3):268–274. doi:10.1016/j.nedt.2012.04.019
20. Sethares KA, Morris NS. Learning about and benefiting from peer review: a course assignment for doctoral students at two different universities. J Nurs Educ. 2016;55(6):342–344. doi:10.3928/01484834-20160516-07
21. Hirschey R, Rodgers C, Hockenberry M. A program to enhance writing skills for advanced practice nurses. J Contin Educ Nurs. 2019;50(3):109–114. doi:10.3928/00220124-20190218-05
22. Miller LC, Russell CL, Cheng AL, Zembles S. Testing the efficacy of a scaffolded writing intervention with online degree-completion nursing students: a quasi-experimental design. Nurse Educ Pract. 2018;32:115–121. doi:10.1016/j.nepr.2018.06.011
23. Minnich M, Kirkpatrick AJ, Goodman JT, et al. Writing across the curriculum: reliability testing of a standardized rubric. J Nurs Educ. 2018;57(6):366–370. doi:10.3928/01484834-20180522-08
24. Cyr PR, Smith KA, Broyles IL, Holt CT. Developing, evaluating and validating a scoring rubric for written case reports. Int J Med Educ. 2014;5:18–23. doi:10.5116/ijme.52c6.d7ef
25. Turbow DJ, Werner TP, Lowe E, Vu HQ. Norming a written communication rubric in a graduate health science course. J Allied Health.
26. Roush K, Tesoro M. An examination of the rigor and value of final scholarly projects completed by DNP nursing students. J Prof Nurs. 2018;34(6):437–443. doi:10.1016/j.profnurs.2018.03.003
27. City University of New York Assessment Test in Writing (CATW). Student Handbook. New York, NY: City University of New York; 2010.
28. Faul F, Erdfelder E, Buchner A, Lang AG. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods.
29. Sparks J, Song Y, Brantley W, Liu O. Assessing Written Communication in Higher Education: Review and Recommendations for Next-Generation Assessment. Princeton, NJ: Educational Testing Service; 2014:1–52. doi:10.1002/ets2.12035
30. Jonsson A, Svingby G. The use of scoring rubrics: reliability, validity and educational resources. Educ Res Rev. 2007;2:131–141. doi:10.1016/j.edurev.2007.05.002
31. Huot B. Reliability, validity, and holistic scoring: what we know and what we need to know. Coll Compos Comm.
32. Gazza EA, Hunker DF. Facilitating scholarly writer development: the writing scaffold. Nurs Forum. 2012;47(4):278–285. doi:10.1111/j.1744-6198.2012.00275.x
33. Oermann MH, Leonardelli AK, Turner KM, Hawks SJ, Derouin AL, Hueckel RM. Systematic review of educational programs and strategies for developing students' and nurses' writing skills. J Nurs Educ. 2015;54(1):28–34. doi:10.3928/01484834-20141224-01
34. Tuvesson H, Borglin G. The challenge of giving written thesis feedback to nursing students. Nurse Educ Today. 2014;34(11):1343–1345. doi:10.1016/j.nedt.2014.07.003