Abstract
Introduction
Patients frequently turn to online resources when weighing decision-making factors for aesthetic procedures. High-quality online medical content is an essential supplement to clinical education, helping patients understand the risks, benefits, and appropriateness of their desired procedure. This study examines the breadth and readability of online blepharoplasty information to elucidate its educational utility.
Methods
A depersonalized Google search was conducted using the Startpage search engine with the key phrases "blepharoplasty decision making factors," "eye lift decision making factors," and "eyelid lift decision making factors." The first three pages of results for each search term, totaling 90 links, were screened. Data were extracted on decision-making factors, subspecialty, surgeon gender, and readability.
Results
Twenty-six websites met inclusion criteria for analysis. Thirteen websites were plastic surgery practices, five otolaryngology (ENT), five ophthalmology/oculoplastic, one oral and maxillofacial surgery (OMFS), and two mixed-specialty practices. Most blepharoplasty webpages identified belonged to private practices and to male surgeons, and half represented subspecialties other than plastic surgery. Thirteen common decision-making factors were identified. The factors addressed most often across all texts were recovery, followed by cosmetic and functional goals; genetic factors were discussed least. Average readability exceeded a 12th-grade reading level, with no significant differences in mean readability among subspecialties.
Conclusion
This study characterizes the online blepharoplasty information that US-based practices provide for patient education. No appreciable differences in the decision-making factors addressed were found across surgeon gender, subspecialty, or readability, highlighting consistency among surgeons. However, most websites fell short of recommended readability standards, emphasizing the need for clearer patient-facing information.
No Level Assigned
This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.