Abstract
This paper introduces an innovative method for fine-tuning a larger multi-label model for abnormality detection, utilizing a smaller trainer and advanced knowledge distillation techniques. It delves into the effects of fine-tuning on various abnormalities, noting varied improvements based on the Original Model's performance in specific tasks. The experimental setup, optimized for on-device inference and fine-tuning with limited computational resources, demonstrates moderate yet promising enhancements in model performance post-fine-tuning. Key insights from the study include the importance of aligning the µ-Trainer's behavior with the Original Model and the influence of hyper-parameters such as batch size on fine-tuning outcomes. The research acknowledges limitations, including the limited exploration of loss functions for multi-label models and constraints in architectural design, suggesting potential avenues for future investigation. While the proposed Naive Continual Fine-tuning Process is in its early stages, it highlights the potential for long-term model personalization. Moreover, using weight transfer exclusively for fine-tuning strengthens user privacy protection through on-device fine-tuning, without transferring data or gradients to the server. Although performance improvements after fine-tuning are modest, the fine-tuned layers represent only a small fraction of the total weights: 0.7% in the Original Model and 1.6% in the µ-Trainer. This study establishes a foundational framework for advancing personalized model adaptation, on-device inference, and fine-tuning while emphasizing the importance of safeguarding data privacy in model development.
Publisher
Cold Spring Harbor Laboratory