Funder: National Natural Science Foundation of China