Abstract
Federated deep learning is a method for training a deep neural network model on vast amounts of privacy-sensitive patient-related data without exchanging the data itself. A known vulnerability of federated model training is the interception of gradients exchanged during training. In this work, we show that it is feasible to train a global model for segmenting an oropharyngeal tumor in a simulated federated consortium without sharing any gradients, exchanging only the local model weights. We investigate the effects of federated averaging of model weights on model performance and describe the evolution of the model during federated learning. We show that a model trained by federated learning is functionally equivalent to a centrally trained one. In conclusion, the preferred mode of federated deep learning was synchronous federated averaging of partial models at the end of every epoch. As future work, we surmise that segmentation performance could be significantly improved by using multi-modality co-registered images, such as PET and CT, in federated deep learning for automated segmentation, using the current work as a basis.
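The synchronous weight-averaging step described above can be sketched as follows. This is an illustrative FedAvg-style aggregation, not the authors' exact implementation; the function name and the dataset-size weighting are assumptions for the sake of the example.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-layer model weights across clients.

    Each client's contribution is weighted by its local dataset size,
    as in standard FedAvg (an assumption here, not confirmed by the text).
    """
    total = sum(client_sizes)
    avg = {}
    for layer in client_weights[0]:
        avg[layer] = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
    return avg

# Two simulated clients, each holding one "layer" of weights
c1 = {"conv1": np.array([1.0, 2.0])}
c2 = {"conv1": np.array([3.0, 4.0])}
avg = federated_average([c1, c2], [1, 1])
print(avg["conv1"])  # [2. 3.]
```

In a synchronous scheme, each client uploads its weights at the end of every local epoch, the server computes this average, and the averaged weights are broadcast back as the starting point for the next round.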