An Effective Med-VQA Method Using a Transformer with Weights Fusion of Multiple Fine-Tuned Models
Published: 2023-08-28
Issue: 17
Volume: 13
Page: 9735
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Al-Hadhrami, Suheer 1,2; Menai, Mohamed El Bachir 1; Al-Ahmadi, Saad 1; Alnafessah, Ahmad 3
Affiliation:
1. College of Computer and Information Sciences, King Saud University, P.O. Box 2614, Riyadh 13312, Saudi Arabia
2. Computer Engineering Department, Hadhramout University, Al Mukalla 10587, Yemen
3. King Abdulaziz City for Science and Technology, Riyadh 11442, Saudi Arabia
Abstract
Visual question answering (VQA) is the task of generating or predicting a natural-language answer to a question about a visual image. VQA is an active field that combines two branches of AI: natural language processing (NLP) and computer vision. Medical VQA is still at an early stage and requires substantial effort and exploration before it reaches practical use. This paper proposes two models that employ recent vision and NLP transformers which outperform the state of the art (SOTA) but have not yet been applied to medical VQA. The ELECTRA-base transformer is used for textual feature extraction, whereas the Swin transformer is used for visual feature extraction. In SOTA medical VQA, the final model is typically chosen as either the model with the highest validation accuracy or the last model produced during training. The first proposed model, the best-value-based model, is selected on the basis of the highest validation accuracy. The second, the greedy-soup-based model, sets its parameters through a greedy soup technique that fuses the weights of multiple fine-tuned models: the parameters of a candidate model are merged into the soup only when they improve validation accuracy. The greedy-soup-based model outperforms the best-value-based model, and both proposed models outperform the SOTA, whose accuracy is 83.49%. The greedy-soup-based model is further optimized over batch size and learning rate; during this optimization, seven additional models exceed the SOTA accuracy. The best model, trained with a learning rate of 1.0 × 10⁻⁴ and a batch size of 16, achieves an accuracy of 87.41%.
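The greedy-soup weight fusion mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal, illustrative version assuming PyTorch-style state_dicts; the checkpoint list, the `evaluate` function, and the data-loader name are hypothetical placeholders rather than the authors' actual code, which fuses fine-tuned ELECTRA + Swin VQA checkpoints in this spirit.

```python
# Minimal sketch of greedy-soup model fusion (assumption: PyTorch state_dicts).
import copy
import torch


def average_state_dicts(state_dicts):
    """Element-wise average of state_dicts that share identical keys/shapes."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg


def greedy_soup(checkpoints, model, evaluate, val_loader):
    """Fuse fine-tuned checkpoints greedily by validation accuracy.

    `checkpoints` is a list of state_dicts sorted by validation accuracy
    (best first); `evaluate(model, val_loader)` returns an accuracy score.
    """
    soup = [checkpoints[0]]                  # start from the best single model
    model.load_state_dict(checkpoints[0])
    best_acc = evaluate(model, val_loader)

    for candidate in checkpoints[1:]:
        trial = average_state_dicts(soup + [candidate])
        model.load_state_dict(trial)
        acc = evaluate(model, val_loader)
        if acc >= best_acc:                  # keep only ingredients that help
            soup.append(candidate)
            best_acc = acc

    model.load_state_dict(average_state_dicts(soup))
    return model, best_acc
```

The key design point is that a candidate checkpoint is retained only if averaging it into the soup does not hurt validation accuracy, so the final fused weights are at least as good on the validation set as the single best checkpoint.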
Funder
Deanship of Scientific Research at King Saud University through the initiative of DSR Graduate Students Research Support
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by: 1 article