1. Brack, M., Friedrich, F., Hintersdorf, D., Struppek, L., Schramowski, P., Kersting, K.: SEGA: instructing text-to-image models using semantic guidance. In: Thirty-seventh Conference on Neural Information Processing Systems (2023). https://openreview.net/forum?id=KIPAIy329j
2. Cugny, R., Aligon, J., Chevalier, M., Roman Jimenez, G., Teste, O.: AutoXAI: a framework to automatically select the most adapted XAI solution. In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 315–324 (2022)
3. Galli, A., Marrone, S., Moscato, V., Sansone, C.: Reliability of explainable artificial intelligence in adversarial perturbation scenarios. In: Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G.M., Mei, T., Bertini, M., Escalante, H.J., Vezzani, R. (eds.) Pattern Recognition. ICPR International Workshops and Challenges, pp. 243–256. Springer, Cham (2021)
4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
5. Hedström, A.: Explainable Artificial Intelligence: How to Evaluate Explanations of Deep Neural Network Predictions using the Continuity Test. Master's thesis, KTH, School of Electrical Engineering and Computer Science (EECS) (2020)