BACKGROUND
Pressure ulcers (PUs), also called pressure injuries (PIs), negatively affect patients' health and pose a substantial economic burden on society. Accurate staging is key to the treatment of PUs. Deep learning (DL) algorithms based on convolutional neural networks (CNNs) have achieved good classification performance on images of complicated skin diseases and therefore have the potential to improve diagnostic accuracy in staging PUs.
OBJECTIVE
We explored the potential of applying different CNN algorithms, namely AlexNet, VGGNet16, GoogLeNet, and ResNet 18, to PU staging, aiming to provide an effective tool to assist in evaluation.
METHODS
PU images from patients, covering stage Ⅰ, stage Ⅱ, stage Ⅲ, stage Ⅳ, unstageable, and suspected deep tissue injury (SDTI), were collected at a tertiary hospital in China. To ensure class balance, we randomly selected an equal number of images for each stage to form the image dataset and then enlarged the sample size through data augmentation. The resulting images were divided into training, validation, and test sets in a ratio of 6:2:2. Subsequently, we trained AlexNet, GoogLeNet, VGGNet16, and ResNet 18 on the training set to develop staging models.
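A minimal sketch of this pipeline in PyTorch is shown below. Only the six-class setup, the 6:2:2 split, and the four architectures come from the study; the specific augmentation transforms, folder layout, hyperparameters, and use of ImageNet-pretrained weights are illustrative assumptions.

```python
# Illustrative sketch only: augmentations, hyperparameters, and pretrained
# weights are assumptions, not the study's reported settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# Basic augmentation/normalization; the study's "data augmentation" details are unspecified.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subfolder per PU stage (6 classes).
dataset = datasets.ImageFolder("pu_images/", transform=train_tf)

# 6:2:2 split into training, validation, and test sets, as described in METHODS.
n = len(dataset)
n_train, n_val = int(0.6 * n), int(0.2 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(42),
)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet 18 with its final layer replaced for the 6 PU stages;
# AlexNet, GoogLeNet, and VGGNet16 can be swapped in analogously.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 6)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):  # number of epochs is an assumption
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```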
RESULTS
We collected 821 raw PU images with the following distribution across stages: stage Ⅰ (113), stage Ⅱ (113), stage Ⅲ (186), stage Ⅳ (108), unstageable (118), and SDTI (113). From these, 100 images per stage were selected, and a total of 3000 images were obtained after augmentation. The images were divided into training, validation, and test sets in a ratio of 6:2:2. Among all the CNN models, ResNet 18 demonstrated the highest accuracy (0.9333), precision (0.987), recall (0.933), and F1 score (0.959). AlexNet, GoogLeNet, and VGGNet16 achieved accuracies of 0.896, 0.75, and 0.625, respectively; their precision values were 0.97, 0.95, and 0.953, their recall values were 0.896, 0.75, and 0.953, and their F1 scores were 0.935, 0.83, and 0.953, respectively.
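For context, the reported F1 scores follow the standard relation F1 = 2 x precision x recall / (precision + recall); the short check below reproduces ResNet 18's F1 from its reported precision and recall.

```python
# Check: F1 = 2*P*R/(P+R) using ResNet 18's reported precision and recall.
precision, recall = 0.987, 0.933
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.959, matching the reported F1 score
```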
CONCLUSIONS
The CNN-based models demonstrated a strong ability to classify PU images, which may promote highly efficient, low-cost PU diagnosis and staging.