Skin tears (STs) are often painful, acute wounds resulting from trauma to the skin, and they are largely preventable.1–6 When assessing STs, it is important to classify the extent of injury to guide management. Payne and Martin7 established the first classification system, and almost 2 decades later, Carville et al8 established the Skin Tear Audit Research system; however, neither system gained widespread acceptance. An international survey in 2011 by LeBlanc et al9 indicated a preference by healthcare professionals for a user-friendly, simple classification system.
In an effort to redirect awareness toward this largely unheeded healthcare issue, an International Skin Tear Advisory Panel (ISTAP) (Table 1) of 12 internationally recognized key opinion leaders convened to establish consensus statements on the prevention, prediction, assessment, and treatment of STs. This resulted in the development and publication of 12 key consensus statements and a definition for STs.10 A subsequent meeting of the ISTAP in December 2011 resulted in the development of a new classification system based on existing systems.7,8 Content validity was established based on a thorough review of the ST classification literature.
The ISTAP consensus panel defined ST as follows: “A skin tear is a wound caused by shear, friction, and/or blunt force, resulting in separation of skin layers. A skin tear can be partial-thickness (separation of the epidermis from the dermis) or full-thickness (separation of both the epidermis and dermis from underlying structures).”10
Initially, the ISTAP group developed 12 consensus statements for the prevention, prediction, assessment, and treatment of STs.10 This was supplemented with the development of the ISTAP classification system. To achieve the goal of simplicity, 3 types of STs were identified and described (Figure 1).
The panel members submitted 74 ST photographs. The ISTAP members collected the digital photographs with consent, based on their healthcare setting’s policies and procedures. Individual subjects or their spokesperson consented to the photographs being used for the classification validation study and for teaching purposes in the future. No individual identifiers were visible in the photographs. Photographs that were previously taken with informed consent and that were the property of an ISTAP team member were accepted into the photograph bank with copyright assigned to the ISTAP group. One researcher (K.L.) selected the highest-quality photographs with equal representation of the 3 types to test the internal validity of the classification system (n = 30).
The photographs were then distributed to the panel members to internally validate the proposed Skin Tear Classification System. The panel was directed to classify the 30 test photographs by ST type without referring to the classification document, both to blind the participants and to test the simplicity of the classification system (time point 1 [TP1]). Intrarater (test-retest) reliability was assessed 2 months later using the same photographs and procedure (time point 2 [TP2]). The external validity of the system was then tested on a sample of 327 individuals, who were likewise directed to classify the same 30 photographs by ST type without referring to the classification document.
Data were analyzed to examine the percentage agreement on the type of ST depicted in each photograph. Interrater reliability was established using the Fleiss κ test.11 Test-retest (intrarater) reliability was assessed using the Cohen κ test, interpreted as satisfactory or not satisfactory, with the point of discrimination being 0.70.12
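For readers unfamiliar with the statistic, the Fleiss κ used here can be sketched in a few lines. The 4-rater example below is illustrative only (the study panel comprised 12 members rating 30 photographs); the formula follows Fleiss.11

```python
# Minimal sketch of the Fleiss kappa statistic (Fleiss, 1971), which
# measures agreement among many raters classifying the same items.
# The matrix below is toy data, NOT the study data: each row is one
# photograph, each column an ST type (1, 2, 3), and each cell counts
# how many of 4 hypothetical raters chose that type.

def fleiss_kappa(ratings):
    """ratings: one row per item; each row holds per-category counts
    summing to the (constant) number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_total = n_items * n_raters

    # Mean per-item observed agreement, P-bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items

    # Chance agreement P_e from the marginal category proportions.
    n_cats = len(ratings[0])
    p_e = sum(
        (sum(row[j] for row in ratings) / n_total) ** 2
        for j in range(n_cats)
    )
    return (p_bar - p_e) / (1 - p_e)

# 5 photographs rated by 4 raters into 3 ST types (toy data).
example = [
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [0, 3, 1],
    [0, 0, 4],
]
print(round(fleiss_kappa(example), 3))  # prints 0.695
```

Values near 1 indicate agreement well beyond chance; the chance-agreement term comes from how often each category is used overall, so a κ of 0.545 (as reported below for the healthcare professionals) reflects genuine, though imperfect, shared classification.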
The ISTAP consensus panel consisted of 12 international healthcare professionals. At TP1 and TP2, the data indicated a substantial agreement on the classification of the 30 images by ST type, according to the Landis and Koch interpretation11,13 (Fleiss κ TP1 = 0.619, TP2 = 0.653). Test-retest or intrarater reliability indicated satisfactory agreement between TP1 and TP2 (Cohen κ = 0.877).
Following this step, the tool and photographs were sent to a study group of 327 participants, comprising 303 healthcare professionals and 24 nonnursing subjects (Table 2). The sample consisted of registered nurses; registered practical nurses, licensed vocational/practical nurses, and certified nursing assistants; and nonnurses from Canada, the United States, Brazil, the United Kingdom, and China.
Because there were only 24 nonnursing subjects in the sample, they were excluded from the primary analysis. Of the 303 healthcare professionals, complete data were available for 190 subjects (62.7%). The data indicated a moderate level of agreement on classification of STs by type (Fleiss κ = 0.545).
Interrater reliability based on wound care expertise was established using the Fleiss κ statistic. The level of agreement for the ISTAP on the 30 test ST photographs was substantial (Fleiss κ = 0.653).12 A moderate level of agreement was demonstrated for both the RN group and the non–registered nurse group (Fleiss κ = 0.555 and 0.480, respectively). Only a fair level of agreement was found for the nonnursing subjects (Fleiss κ = 0.338) (Table 3).
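The qualitative labels applied to these κ values (fair, moderate, substantial) follow the Landis and Koch benchmarks;13 a small helper makes the mapping explicit. The band edges encode the commonly cited scale rather than cutoffs stated in this article.

```python
# Landis and Koch (1977) descriptive benchmarks for kappa, as commonly
# cited: <0 poor, 0-0.20 slight, 0.21-0.40 fair, 0.41-0.60 moderate,
# 0.61-0.80 substantial, 0.81-1.00 almost perfect.

def landis_koch(kappa):
    """Map a kappa value to its Landis and Koch descriptive label."""
    if kappa < 0:
        return "poor"
    bands = [
        (0.81, "almost perfect"),
        (0.61, "substantial"),
        (0.41, "moderate"),
        (0.21, "fair"),
        (0.00, "slight"),
    ]
    for lower, label in bands:
        if kappa >= lower:
            return label

# Kappa values reported in the text, labeled per Landis and Koch.
for value in (0.653, 0.545, 0.480, 0.338):
    print(value, landis_koch(value))
```

Running this on the reported values reproduces the levels of agreement stated in the text: substantial for the ISTAP panel, moderate for both nurse groups, and fair for the nonnursing subjects.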
The primary objective of the ISTAP was to develop and validate a widely accepted Skin Tear Classification System and establish a common language for the documentation of STs. Such developments are paramount to future research related to the prediction, prevention, assessment, and treatment of these unique, yet understudied, wounds. This is particularly true because STs are often misdiagnosed as pressure ulcers.
The results from the validation study demonstrated substantial intrarater reliability for the expert panel. Interrater reliability was moderate for the licensed nurses and fair for the nonnursing subjects. The expert panel demonstrated a higher level of agreement than did the healthcare professional group, which, in turn, demonstrated higher agreement than did the nonhealthcare group. These differences were attributed to the level of expertise and familiarity with the classification system, although further investigation would be required to confirm this explanation.
It is proposed that if individuals were given access to the classification system as a reference, the levels of agreement would be even greater. The high level of agreement would appear to be a testament to the simplicity and ease of use of the classification system.
At present, the classification system is available and has been tested only in the English language. Given that the study was conducted in a variety of countries, the researchers presumed a fair degree of generalizability. It is acknowledged, however, that further testing and validation with larger numbers of both healthcare and nonhealthcare professionals across different settings and countries are required. In addition, translation into a variety of commonly used languages will facilitate implementation globally.
From this study, it was apparent that there were a number of other gaps in the literature; therefore, the ISTAP recommends that further research be conducted. Examples include prevalence studies across different healthcare settings to determine the true extent of the problem and to firmly establish the need for the wound care community to focus on these complex acute wounds. The development of a valid and reliable ST risk assessment tool applicable to all healthcare settings is also needed, as are studies to determine best practices for the prevention and treatment of STs. In addition, it would be helpful to identify unpreventable ST situations as a protective measure for healthcare systems.
The expert panel established the ISTAP Skin Tear Classification System with the goal of raising the global healthcare community’s awareness of STs. It is envisioned that the acceptance and utilization of a common language and classification system for STs will facilitate best practices and research in this area. Development of an internationally recognized and validated classification system for STs is an important first step to facilitate the development of international guidelines for the prevention, prediction, assessment, and management of STs.
1. LeBlanc K, Christensen D, Cook J, Culhane B, Gutierrez O. Pilot study of the prevalence of skin tears in a long-term care facility in Eastern Ontario, Canada. J Wound Ostomy Continence Nurs 2011; in press.
2. White M, Karam S, Cowell B. Skin tears in frail elders: a practical approach to prevention. Geriatr Nurs 1994; 15 (2): 95–9.
3. LeBlanc K, Christensen D, Orstead H, Keast D. Best practice recommendations for the prevention and treatment of skin tears. Wound Care Canada 2008; 6 (8): 14–32.
4. Carville K, Lewin G. Caring in the community: a prevalence study. Prim Intent 1998; 6: 54–62.
5. Malone M, Rozario N, Gavinski M, Goodwin J. The epidemiology of skin tears in the institutionalized elderly. J Am Geriatr Soc 1991; 39: 591–5.
6. Carville K, Smith JA. Report on the effectiveness of comprehensive wound assessment and documentation in the community. Prim Intent 2004; 12: 41–8.
7. Payne RL, Martin ML. The epidemiology and management of skin tears in older adults. Ostomy Wound Manage 1990; 26: 26–37.
8. Carville K, Lewin G, Newall N, et al. STAR: a consensus for skin tear classification. Prim Intent 2007; 15 (1): 18–28.
9. LeBlanc K, Baranoski S, Regan M. International 2010 Skin Tear Survey, presented at the International Skin Tear Advisory Panel meeting, January 27-28, 2011, Orlando, Florida.
10. LeBlanc K, Baranoski S; Skin Tear Consensus Panel Members. Skin tears: state of the science: consensus statements for the prevention, prediction, assessment, and treatment of skin tears. Adv Skin Wound Care 2011; 24 (9 Suppl): 2–15.
11. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull 1971; 76: 378–82.
12. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968; 70: 213–20.
13. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159–74.