The purpose of this descriptive study was to evaluate the use of a previously validated, online, interactive wound assessment and wound care clinical pathway in a group of RNs. Specific aims were to (a) evaluate the proportions of correct, partially correct, and incorrect algorithmic decisions and dressing selections; (b) compare response rates between nurses who were and were not wound care certified; and (c) evaluate the pathway's ease of use, educational value, and applicability in clinical practice.
Participants were recruited using convenience and snowball sampling methods. Four hundred eighteen nurses completed all 15 assessments; nearly half held a bachelor's degree in nursing (189, 45%), more than two-thirds worked in inpatient acute care settings (277, 68%), and 293 (70%) were not certified in wound care.
After providing written informed consent and completing the participant demographics form, participants assessed 15 photographs of wounds with accompanying moisture descriptions and completed an algorithm and dressing selection for each. All responses were collected anonymously by the program. Existing retrospective program data were also downloaded, and data from nurses who completed all assessments were extracted and analyzed. Descriptive statistics were used to analyze all variables. Selection outcomes and survey responses of nurses who were and who were not wound care certified were compared using a 2-sample Student t test assuming unequal variances. Individual responses for the first 6 wounds were compared to the last 6 wounds using a paired t test.
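For readers unfamiliar with the unequal-variance (Welch) form of the 2-sample t test used here, the sketch below shows the computation with Welch-Satterthwaite degrees of freedom. It is a minimal illustration only; the score values are hypothetical and are not the study's data, and the function name `welch_t` is an arbitrary choice.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic assuming unequal variances (Welch),
    with Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical algorithm scores (% correct) for two groups -- illustration only.
certified = [89, 90, 88, 92, 91]
not_certified = [77, 78, 76, 79, 80]
t, df = welch_t(certified, not_certified)
```

In practice such an analysis would typically be run with a statistics package rather than by hand; the point of the sketch is that the degrees of freedom are estimated from the two sample variances rather than pooled.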
The mean (M) proportions of fully or partially correct (operationally defined as safe but not fully correct) algorithm and dressing choices were 81% (SE: 0.88, 95% confidence level: 1.73) and 78.1% (SE: 0.70, 95% confidence level: 1.39), respectively. Wound care–certified nurses had higher mean algorithm scores than those who were not certified (M: 89.2%, SE: 1.27 vs M: 77.8%, SE: 1.10, P < .001). Most incorrect or partially correct choices were attributable to incorrect assessment of necrotic tissue (n = 845, 58%). The difference between fully correct first 6 and last 6 algorithm choices was statistically significant (M: 310, SE: 0.02 vs M: 337, SE: 9.32, P = .04). On a Likert scale of 1 (not at all) to 5 (very), average scores for ease of program and algorithm use, educational value, and usefulness for clinicians ranged from M: 4.14, SE: 0.08 to M: 4.22, SE: 0.08.
Results suggest that the algorithm is valid and has potential educational value. Initial evaluation also suggests that program refinements are needed. Evaluation of participant responses indicated potential problems with the definitions used for necrotic tissue or assessment knowledge deficits. Results also substantiate the importance of instructional design and of testing online education programs. More research is needed to uncover potential gaps in nurses' wound care knowledge that may hamper adoption of evidence-based practices and to develop effective, evidence-based education-delivery techniques.
The CE test for this article is available online only at the journal website, jwocnonline.com, and the test can be taken online at NursingCenter.com/CE/JWOCN.
Lia van Rijswijk, DNP, RN, CWCN, Associate Dean, Undergraduate Programs, Thomas Edison State University, W. Cary Edwards School of Nursing, and Clinical Editor, Wound Management and Prevention, Newtown, Pennsylvania.
Correspondence: Lia van Rijswijk, DNP, RN, CWCN, Associate Dean, Undergraduate Programs, Thomas Edison State University, W. Cary Edwards School of Nursing, 111 West State Street, Trenton, NJ 08608 (email@example.com).
The author received a scholarship grant from the Independence Blue Cross Foundation and Web support to conduct the study from ConvaTec.