Social Media: Is the Message Reaching the Plastic Surgery Audience?

Chen, Austin D.; Ruan, Qing Zhao M.D.; Bucknor, Alexandra M.B.B.S., M.Sc.; Chattha, Anmol S. M.D.; Bletsis, Patrick P. B.Sc.; Furnas, Heather J. M.D.; Lee, Bernard T. M.D., M.B.A., M.P.H.; Lin, Samuel J. M.D., M.B.A.

Plastic and Reconstructive Surgery: September 2019 - Volume 144 - Issue 3 - p 773-781
doi: 10.1097/PRS.0000000000005988
Plastic Surgery Focus: Special Topics

Background: The aim of this study was to assess readability of articles shared on Twitter and analyze differences between them to determine whether messages and written posts are at reading levels comprehended by the general public.

Methods: Top-rated #PlasticSurgery tweets (per Twitter algorithm) in January of 2017 were reviewed retrospectively. Text from tweeted links to full, open-access, and society/institutional patient information articles were extracted. Readability was analyzed using the following established tests: Coleman-Liau, Flesch-Kincaid, FORCAST Readability Formula, Fry Graph, Gunning Fog Index, New Dale-Chall Formula, New Fog Count, Raygor Readability Estimate, and Simple Measure of Gobbledygook Readability Formula. Ease-of-reading was analyzed using the Flesch Reading Ease Index.

Results: Of 234 unique articles, there were 101 full journal (43 percent), 65 open-access journal (28 percent), and 68 patient information (29 percent) articles. When compared using the Simple Measure of Gobbledygook Readability Formula, full and open-access journal articles attained similar mean reading levels of 17.7 and 17.5, respectively (p = 0.475). In contrast, patient information articles had a significantly lower mean readability level of 13.9 (p < 0.001). Plastic surgeons posted 128 articles (55 percent) and non–plastic surgeon individuals posted 106 articles (45 percent). Mean readability levels between the two were 16.2 and 16.9, respectively (p < 0.001). All tweeted articles were above the sixth-grade recommended reading level.

Conclusions: Readability of #PlasticSurgery articles may not be appropriate for many American adults. Consideration should be given to improving readability of articles targeted toward the general public to optimize delivery of social media messages.

Boston, Mass.; and Stanford, Calif.

From the Division of Plastic and Reconstructive Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School; and the Division of Surgery, Department of Surgery, Stanford University.

Received for publication January 27, 2018; accepted February 14, 2019.

Presented at Plastic Surgery The Meeting 2017, Annual Meeting of the American Society of Plastic Surgeons, in Orlando, Florida, October 6 through 10, 2017.

Disclosure: The authors have no financial disclosures to make.

Samuel J. Lin, M.D., M.B.A., 110 Francis Street, Suite 5A, Boston, Mass. 02215, sjlin@bidmc.harvard.edu, Instagram: @drsamuellin, Twitter: @Dr_SamuelLin

The integration of social media into everyday life is accompanied by an unprecedented rise in their use by the medical profession.1 As a result of the seemingly limitless reach of these digital, Web-based platforms, this use has come in the form of education, research, and patient care, including campaigns to raise awareness of cancer.2,3 Various aspects of plastic surgery have been known to receive widespread media attention, and in recent years, social media have been recognized as a tool to facilitate engagement with the general public.4–7 Platforms including Facebook (Facebook, Inc., Menlo Park, Calif.), Twitter (Twitter, Inc., San Francisco, Calif.), Snapchat (Snap, Inc., Santa Monica, Calif.), and Instagram (Instagram, Menlo Park, Calif.) each possess unique ways to share visual and text content.

The publication “#PlasticSurgery” outlined the use of the Twitter hashtag in the specialty, noting that 46.5 percent of tweets were posted by plastic surgeons, with posts most often related to celebrity use of plastic surgery or aesthetic plastic surgery.4 The authors also found that the Twitter user population was interested in evidence-based medicine research articles and patient safety information, primarily used open-access journals to find plastic surgery information, and felt that the primary role of plastic surgeons on social media should be education.4 Given these findings, the plastic surgery community, inclusive of board-certified plastic surgeons, the American Society of Plastic Surgeons, and the journal Plastic and Reconstructive Surgery, has an opportunity to unite to provide members of the public with a steady stream of information by actively sharing articles under #PlasticSurgery.

Previous studies assessing the readability of patient information articles written for the lay reader have described the grade levels of the articles greatly exceeding the recommended sixth-grade reading level for medical information.8–17 In the field of plastic surgery, studies such as those by Vargas et al. on patient information regarding breast reconstruction and liposuction reported that the material was too difficult for patients to comprehend.10,11 With the potential increase in sharing of articles and research on social media, it is important to ensure that information shared is understandable by a wide readership.

We aimed to analyze the readability of articles posted under #PlasticSurgery on Twitter in January of 2017, the month following publication of the article entitled “#PlasticSurgery,” by investigating overall readability, with a breakdown of readability for full journal, open-access journal, and patient information articles. A secondary aim was to analyze the readability of a passage of our article, demonstrating the feasibility of improving its readability.

METHODS

An Internet search was conducted for all consecutive top tweets containing #PlasticSurgery on Twitter in January of 2017. This period was selected because it followed publication of the article entitled “#PlasticSurgery.”4 In this retrospective review, top tweets were selected through an algorithm designed by Twitter to represent the most relevant posts on the subject based on, among other factors, the popularity of the tweet. We conducted the search after turning off filters and disabling location, cookies, and user account information to minimize bias, as previously performed in other Web-based search studies.10–15 Tweets with unique links to full (non–open-access) journal, open-access journal, and patient information articles from plastic surgery societies or institutions were extracted. Articles were downloaded as plain text into separate Microsoft Word 2011 documents (Microsoft Corp., Redmond, Wash.); each was edited to exclude images, videos, figures, captions, advertisements, references, links, disclaimers, and acknowledgements.
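
The exclusions described above were performed manually in Word. Purely as an illustration, a minimal scripted sketch of the same kind of cleanup applied to a plain-text export might look like the following; the function name and filtering patterns are ours and are not part of the study’s workflow.

```python
# Minimal sketch of the manual cleanup described above, applied to a plain-text
# export; the function name and filtering patterns are illustrative only.
import re

def strip_non_prose(text):
    """Drop links, figure/table captions, and numbered reference lines so that
    only body prose is submitted for readability scoring."""
    text = re.sub(r"https?://\S+", "", text)  # remove embedded links
    kept = []
    for line in text.splitlines():
        if re.match(r"\s*(Figure|Fig\.|Table)\s+\d+", line):  # caption lines
            continue
        if re.match(r"\s*\d+\.\s", line):  # numbered reference entries
            continue
        kept.append(line)
    return "\n".join(kept)
```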

Readability software has been widely used to assess the delivery of health care information, providing increasingly sophisticated analysis of reading grade level for text-based content and enabling patient information to be evaluated against patient literacy rates.18 National recommendations hold that a sixth-grade reading level is most appropriate for health care–related material presented to the American general public, as patients are often found to read three grade levels below their educational level.16,17 The American Medical Association has noted limited literacy in one-quarter of all patients.19

Readability data were extracted using Readability Studio Professional Edition v2012.1 software (Oleander Software, Ltd., Vandalia, Ohio) and grouped by content and by the identity of the person tweeting the link (“tweeter”). Readability grade level and ease-of-reading scores were analyzed and subcategorized into full journal, open-access, and patient information article types. Readability grade level represents the educational level an individual needs to read the selected text; ease-of-reading is a complementary metric in which a higher score indicates easier text. The scores for articles posted by plastic surgeons were compared with those posted by non–plastic surgeon individuals to better isolate articles meant for the general public, given that the latter would more likely share articles intended for the lay reader. The reading grade level and ease-of-reading scores for the first paragraph of our Discussion section were also analyzed, along with the scores after editing it to improve readability.
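
To make this grouping concrete, a minimal sketch of the kind of tabulation described above is shown below, using pandas; the column names and values are hypothetical stand-ins for the extracted readability output, not the study’s actual data fields.

```python
# Hypothetical sketch of grouping readability scores by article type and by
# tweeter identity; column names and values are illustrative placeholders.
import pandas as pd

scores = pd.DataFrame({
    "article_type": ["full_journal", "open_access", "patient_info", "patient_info"],
    "tweeter": ["plastic_surgeon", "non_surgeon", "plastic_surgeon", "non_surgeon"],
    "smog_grade": [17.7, 17.5, 13.8, 14.2],
})

# Mean reading grade level by article type and by tweeter identity.
print(scores.groupby("article_type")["smog_grade"].mean())
print(scores.groupby("tweeter")["smog_grade"].mean())
```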

Readability grade level was assessed using established tests: Coleman-Liau, Flesch-Kincaid, FORCAST Readability Formula, Fry Graph, Gunning Fog Index, New Dale-Chall Formula, New Fog Count, Raygor Readability Estimate, and Simple Measure of Gobbledygook Readability Formula.10–15 The Simple Measure of Gobbledygook Readability Formula, scored as a reading grade level from 0 to 19 and above, was used to compare readability between article types and between tweeter identities. The Flesch Reading Ease Index, with a scale of 0 to 100, was used to compare ease-of-reading between article types and between tweeter identities, with a lower score indicating more difficult text. The Flesch-Kincaid test for reading grade level and the Flesch Reading Ease Index for ease-of-reading were used to assess the first paragraph of our Discussion, before and after editing. The 0- to 100-point Flesch Reading Ease scale breaks down into 0 to 29 (very difficult), 30 to 49 (difficult), 50 to 59 (fairly difficult), 60 to 69 (standard), 70 to 79 (fairly easy), 80 to 89 (easy), and 90 to 100 (very easy).
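
For reference, these indices have published formula definitions. The Python sketch below (not the implementation used by Readability Studio) illustrates how the Flesch Reading Ease, Flesch-Kincaid grade level, and Simple Measure of Gobbledygook scores are computed from counts of sentences, words, and syllables; the syllable counter is a rough heuristic, and the helper names are ours.

```python
import math
import re

def count_syllables(word):
    """Rough vowel-group heuristic; commercial readability software uses
    dictionary-based syllable counts, so treat this as an approximation."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability_scores(text):
    """Compute three of the indices named above from raw text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    # Flesch Reading Ease: 0-100 scale, higher = easier to read.
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    # Flesch-Kincaid reading grade level.
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
    # Simple Measure of Gobbledygook (SMOG) grade.
    smog = 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
    return {"reading_ease": fre, "flesch_kincaid_grade": fkgl, "smog_grade": smog}

def ease_band(score):
    """Map a Flesch Reading Ease score to the difficulty bands used in this study."""
    cutoffs = [(90, "very easy"), (80, "easy"), (70, "fairly easy"),
               (60, "standard"), (50, "fairly difficult"), (30, "difficult")]
    for lower, label in cutoffs:
        if score >= lower:
            return label
    return "very difficult"

print(readability_scores("Plastic surgery information should be easy to read."))
```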

Statistical analyses were performed using IBM SPSS Version 21.0 (IBM Corp., Armonk, N.Y.). Comparisons of reading grade levels and ease of reading were conducted using analysis of variance and independent t tests for overall differences and specific differences between groups, respectively. A value of p < 0.05 was deemed statistically significant.
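
As a point of reference, comparisons of this kind can be reproduced outside SPSS. The short sketch below uses SciPy’s one-way analysis of variance and independent-samples t test on hypothetical score lists; the numbers are placeholders, not study data.

```python
# Hypothetical sketch of the comparisons described above, using SciPy in place
# of SPSS; the score lists are placeholders, not the study's data.
from scipy import stats

full_journal = [17.5, 18.2, 16.9, 17.8]   # SMOG grades, invented for illustration
open_access = [17.1, 17.9, 17.4, 17.6]
patient_info = [13.2, 14.5, 13.8, 14.0]

# One-way analysis of variance for an overall difference across article types.
f_stat, p_overall = stats.f_oneway(full_journal, open_access, patient_info)

# Independent-samples t test for a specific pairwise difference.
t_stat, p_pair = stats.ttest_ind(full_journal, patient_info)

print(f"ANOVA p = {p_overall:.3f}; full journal vs. patient information p = {p_pair:.3f}")
```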

RESULTS

Overall Readability

In total, 234 unique articles were extracted from Twitter in January of 2017, including 101 (43 percent) full journal, 65 (28 percent) open-access journal, and 68 (29 percent) patient information articles (Fig. 1). Figure 2 shows the breakdown of readability scores by specific test and by article type, with significant differences in scores across the readability tests for full journal (p < 0.001), open-access journal (p < 0.001), and patient information articles (p < 0.001). Readability scores of full journal, open-access journal, and patient information articles were 15.9 (FORCAST, 12.1; Simple Measure of Gobbledygook Readability Formula, 17.7), 15.8 (FORCAST, 12.0; Simple Measure of Gobbledygook Readability Formula, 17.5), and 12.5 (New Fog Count, 10.3; Simple Measure of Gobbledygook Readability Formula, 13.9), respectively. Within the distribution of Simple Measure of Gobbledygook Readability Formula scores, there were significant differences between full journal, open-access journal, and patient information articles (p < 0.001) (Fig. 3). Full and open-access journal articles attained similar mean reading levels of 17.7 ± 1.5 and 17.5 ± 1.5, respectively (p = 0.475). In contrast to full and open-access journal articles, patient information articles had significantly lower mean readability levels of 13.9 ± 1.9 (p < 0.001 and p < 0.001, respectively).

Overall Ease-of-Reading

There were significant differences in Flesch Reading Ease Index scores between full journal, open-access journal, and patient information articles (p < 0.001). Full and open-access journal articles attained similar mean ease-of-reading scores: 20.7 ± 11.4 (very difficult) versus 22.0 ± 11.6 (very difficult) (p = 0.500). In contrast, patient information articles had a significantly higher mean ease-of-reading score of 45.2 ± 12.2 (difficult) when compared with full and open-access journal articles (p < 0.001 for both) (Fig. 4).

Readability for Tweets by Plastic Surgeons versus Non–Plastic Surgeon Individuals

Of the total unique articles, 128 (55 percent) were posted by plastic surgeons and 106 (45 percent) were posted by non–plastic surgeon individuals. The distribution of article types tweeted by plastic surgeons versus non–plastic surgeon individuals was 48 (38 percent) versus 51 (48 percent) full journal articles, 31 (24 percent) versus 35 (33 percent) open-access journal articles, and 49 (38 percent) versus 20 (19 percent) patient information articles, respectively (Figs. 5 and 6). Using the Simple Measure of Gobbledygook Readability Formula, the readability of articles tweeted by plastic surgeons was 16.2 ± 2.6, compared with 16.9 ± 1.9 for those tweeted by non–plastic surgeon individuals (p = 0.021). By article type, Simple Measure of Gobbledygook Readability Formula scores for plastic surgeon versus non–plastic surgeon tweets were 17.7 ± 1.7 versus 17.7 ± 1.3 for full journal articles (p = 0.907), 17.9 ± 1.4 versus 17.2 ± 1.5 for open-access articles (p = 0.093), and 13.8 ± 2.0 versus 14.2 ± 1.4 for patient information articles (p = 0.432) (Fig. 7).

Ease-of-Reading for Tweets by Plastic Surgeons versus Non–Plastic Surgeon Individuals

Overall ease-of-reading, as determined by the Flesch Reading Ease Index, was 30.1 ± 18.1 (difficult) for plastic surgeon tweets and 25.9 ± 12.8 (very difficult) for non–plastic surgeon individual tweets (p = 0.045). By article type, ease-of-reading for plastic surgeon versus non–plastic surgeon tweets was 20.5 ± 13.4 (very difficult) versus 20.9 ± 9.4 (very difficult) for full journal articles (p = 0.854), 19.7 ± 11.8 (very difficult) versus 23.9 ± 11.2 (very difficult) for open-access articles (p = 0.145), and 45.9 ± 13.4 (difficult) versus 43.5 ± 8.3 (difficult) for patient information articles (p = 0.477) (Fig. 8).

Reading Grade Level and Ease-of-Reading for a Passage of Our Article

As determined by Flesch-Kincaid and Flesch Reading Ease Index scores, reading grade level and ease-of-reading scores of the first paragraph of our Discussion were 19 and 13 (very difficult), respectively (Fig. 9). After editing the passage, reading grade level and ease-of-reading scores were 8.1 and 56 (fairly difficult), respectively (Fig. 10).

DISCUSSION

Plastic surgeons have made use of a range of social media platforms to advance the specialty and improve patient care. Although most popular social media platforms encourage the sharing of information using images and videos, Twitter has been particularly noted for its ability to rapidly disseminate a range of information through hashtags, short messages, and links, while also initiating discussion.4–7 Respondents to a Twitter poll expressed great interest in plastic surgery journal articles and patient information,4 but studies have shown that health care information is often too difficult for the public to understand.8–17 It has become apparent that popular types of posts include before-and-after photographs, doctors’ blogs, and videos of treatments, underlining the importance of creating patient education content that the lay public can easily understand.20 In our study, we found that information shared under #PlasticSurgery, whether full journal, open-access, or patient information articles, is largely too difficult for the general public to comprehend, as determined by readability grade level and ease-of-reading scores. This finding held true regardless of whether articles were shared by plastic surgeons or non–plastic surgeon individuals.

In our analysis, full and open-access journal articles were similarly difficult to read, whereas patient information articles, consisting of those from online blogs, newspapers, magazines, or institutional websites, were significantly easier. These findings may be expected, given that patient-oriented content is tailored toward the general public, whereas journal articles, with technically challenging language, are tailored toward health care professionals. Still, it is important to recognize that, although patient information readability is significantly lower than that of journal articles, it remains much higher than the recommended sixth-grade reading level. This is consistent with the readability scores of online patient information reported for multiple previously studied plastic surgery topics.10–13

Although Twitter has been mentioned as a key tool to educate the public, plastic surgeons may find the platform particularly valuable for sharing information with other health professionals and may consequently tweet a high proportion of journal articles.5,7 Interestingly, plastic surgeons were found to tweet similar distributions of article types, whereas non–plastic surgeon individuals were found to tweet a higher proportion of full journal articles than open-access journal articles. As the number of board-certified plastic surgeons tweeting #PlasticSurgery information has grown, it has been promising to see the relatively even proportions of shared article types, potentially signifying that plastic surgeons are targeting the lay public and medical professionals with equal consideration. More surprising is that non–plastic surgeon individuals appear to be active sharers of journal article links, a fact that reemphasizes the public’s interest in plastic surgery evidence-based literature. It also raises the question of whether members of the general public who engage with the plastic surgery community on social media have higher educational attainment and a greater ability to comprehend higher grade-level material.

The lower proportion of full journal articles tweeted by plastic surgeons compared with non–plastic surgeon individuals may explain the more difficult readability of articles tweeted by the latter. When the readability of articles posted by plastic surgeons and non–plastic surgeon individuals was compared within each article type, no discernible differences were detected. These findings indicate that the difficult reading level of material posted on social media is likely related to the source material rather than to the tweeting individual. If so, the onus is on plastic surgeons to use their outreach thoughtfully and present information that can be understood by the general public.

The question then becomes whether it is possible to write readable material while maintaining the same message. As demonstrated with a passage of our own article, we were able to do so by following the readability software’s suggestions, primarily through shortening sentences and using simpler words with fewer syllables. It is important to note that each readability test is unique; therefore, different tests may yield different scores, as seen in our results. For reference, the Simple Measure of Gobbledygook Readability Formula has been recognized as most useful for health care information, whereas the Flesch-Kincaid tests are among the oldest and most commonly used for general text.21 Plastic surgeons may consider using such software as a guide to write patient information articles at appropriate reading levels.8,17,22 They may also use it to write patient-oriented summaries of scientific articles, whether published with the original article or as an adjunct to a social media post, now that patients have become so involved in the decision-making process of their health care.23 Another potential application is the creation of open-access online platforms that present simplified scientific findings for general consumption.
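
As an illustration of this workflow, the sketch below scores an invented sentence before and after simplification, using the open-source textstat package (assumed available) rather than Readability Studio; the example text is ours, not the passage analyzed in this study.

```python
# Illustrative before/after readability check using the open-source textstat
# package (an assumption; this study used Readability Studio). The sentences
# are invented examples, not the passage analyzed in this article.
import textstat

original = ("Plastic surgeons have utilized a multiplicity of social media "
            "platforms to disseminate specialty-specific information to the "
            "general population.")
simplified = ("Plastic surgeons use many social media sites. "
              "They use them to share facts with the public.")

for label, text in [("original", original), ("simplified", simplified)]:
    grade = textstat.flesch_kincaid_grade(text)   # reading grade level
    ease = textstat.flesch_reading_ease(text)     # higher = easier
    print(f"{label}: grade {grade:.1f}, reading ease {ease:.1f}")
```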

However, as seen in our edited passage, the language is simplistic. Moreover, although we reached the recommended eighth-grade reading level for general material, we were still unable to reach the recommended sixth-grade reading level for health care information, despite not including complex medical terminology. Given the challenges of creating readable material, it may be important also to stress other options. These include presenting information for the public in a different form, such as images and videos, especially with platforms such as YouTube and Instagram reported as most popular in the cosmetic patient population.20 It remains to be seen whether these media are better understood by the general public and how they can be presented in a more comprehensible manner.

Limitations

Limitations of our study include potential human error in the manual search of Twitter for data collection. Furthermore, we did not subanalyze articles by topic of interest because of the limited total number of collected articles and the loss of power that subanalyses would entail. With regard to general public (or patient) literacy, the social media and plastic surgery populations may differ from those described in national guidelines. In addition, readability is only one aspect of assessing whether the message is reaching the audience; other factors include whether shared links are actually clicked and whether the linked articles are, in fact, read. Although these could potentially be studied, the data are only accessible through individual Twitter accounts.

CONCLUSIONS

The plastic surgery community has taken bold steps to revolutionize the delivery of health care information through tweeting article links featuring evidence-based medicine and patient information articles. However, there may be a gap between what is being shared and how likely the intended audience is to understand it. Difficult readability may limit the message’s ability to reach the audience; future studies may consider how best to tackle this problem, whether by using readability software to guide in writing or using other media such as images or videos.

REFERENCES

1. Prestin A, Vieux SN, Chou WY. Is online health activity alive and well or flatlining? Findings from 10 years of the Health Information National Trends Survey. J Health Commun. 2015;20:790–798.
2. Katz MS, Utengen A, Anderson PF, et al. Disease-specific hashtags for online communication about cancer care. JAMA Oncol. 2016;2:392–394.
3. Falisi AL, Wiseman KP, Gaysynsky A, Scheideler JK, Ramin DA, Chou WS. Social media for breast cancer survivors: A literature review. J Cancer Surviv. 2017;11:808–821.
4. Branford OA, Kamali P, Rohrich RJ, et al. #PlasticSurgery. Plast Reconstr Surg. 2016;138:1354–1365.
5. Humphries LS, Curl B, Song DH. #SocialMedia for the academic plastic surgeon: Elevating the brand. Plast Reconstr Surg Glob Open. 2016;4:e599.
6. Rohrich RJ. So, do you want to be Facebook friends? How social media have changed plastic surgery and medicine forever. Plast Reconstr Surg. 2017;139:1021–1026.
7. Gould DJ, Grant Stevens W, Nazarian S. A primer on social media for plastic surgeons: What do I need to know about social media and how can it help my practice? Aesthet Surg J. 2017;37:614–619.
8. Powers BJ, Trinh JV, Bosworth HB. Can this patient read and understand written health information? JAMA. 2010;304:76–84.
9. Sentell TL, Halpin HA. Importance of adult literacy in understanding health disparities. J Gen Intern Med. 2006;21:862–866.
10. Vargas CR, Kantak NA, Chuang DJ, Koolen PG, Lee BT. Assessment of online patient materials for breast reconstruction. J Surg Res. 2015;199:280–286.
11. Vargas CR, Ricci JA, Chuang DJ, Lee BT. Online patient resources for liposuction: A comparative analysis of readability. Ann Plast Surg. 2016;76:349–354.
12. Seth AK, Vargas CR, Chuang DJ, Lee BT. Readability assessment of patient information about lymphedema and its treatment. Plast Reconstr Surg. 2016;137:287e–295e.
13. Ricci JA, Vargas CR, Chuang DJ, Lin SJ, Lee BT. Readability assessment of online patient resources for breast augmentation surgery. Plast Reconstr Surg. 2015;135:1573–1579.
14. Phillips NA, Vargas CR, Chuang DJ, Lee BT. Readability assessment of online patient abdominoplasty resources. Aesthetic Plast Surg. 2015;39:147–153.
15. Aliu O, Chung KC. Readability of ASPS and ASAPS educational web sites: An analysis of consumer impact. Plast Reconstr Surg. 2010;125:1271–1278.
16. Weiss B. Health Literacy: A Manual for Clinicians. Chicago: American Medical Association and American Medical Foundation; 2003.
17. MedlinePlus. How to write easy-to-read health materials. Available at: http://www.nlm.nih.gov/medlineplus/etr.html. Accessed April 23, 2017.
18. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav. 2006;33:352–373.
19. Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: Report of the Council on Scientific Affairs. JAMA. 1999;281:552–557.
20. Sorice SC, Li AY, Gilstrap J, Canales FL, Furnas HJ. Social media and the plastic surgery patient. Plast Reconstr Surg. 2017;140:1047–1056.
21. Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: Application, results, and recommendations. Res Social Adm Pharm. 2013;9:503–516.
22. Wallace LS, Cassada DC, Rogers ES, et al. Can screening items identify surgery patients at risk of limited health literacy? J Surg Res. 2007;140:208–213.
23. Matros E, Yueh JH, Bar-Meir ED, Slavin SA, Tobias AM, Lee BT. Sociodemographics, referral patterns, and Internet use for decision-making in microsurgical breast reconstruction. Plast Reconstr Surg. 2010;125:1087–1094.
Copyright © 2019 by the American Society of Plastic Surgeons