Affiliation:
1. FATİH SULTAN MEHMET VAKIF ÜNİVERSİTESİ
Abstract
This paper introduces a novel approach for efficiently extracting dominant colors from online fashion images. The method addresses the challenges of overlapping-object detection and computationally expensive processing by combining K-means clustering and graph-cut techniques into a single framework, which incorporates an adaptive weighting strategy to improve color extraction accuracy. In addition, it presents a two-phase fashion apparel detection method based on YOLOv4, alongside a U-Net architecture for clothing segmentation that precisely separates clothing items from the background and other elements. Experimental results show that K-means combined with YOLOv4 outperforms K-means combined with the U-Net model. These findings suggest that YOLOv4 and the U-Net architecture are effective for complex image segmentation tasks in online fashion retrieval and image processing, particularly in the rapidly evolving e-commerce environment.
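The abstract describes dominant colors being extracted with K-means from clothing regions isolated by a detector or segmenter. As a rough, non-authoritative illustration of that K-means step only, the sketch below clusters the pixels inside a hypothetical clothing mask and reports the largest clusters as dominant colors; the function name, the use of scikit-learn, and the mask input are assumptions, and the paper's graph-cut and adaptive-weighting components are not reproduced here.

```python
# Minimal sketch: dominant colors via K-means on a masked clothing region.
# Assumptions: scikit-learn and NumPy; the mask comes from some detector or
# segmenter (e.g. YOLOv4 or U-Net in the paper's setup). This is NOT the
# authors' implementation (no graph cut, no adaptive weighting).
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb: np.ndarray, mask: np.ndarray, k: int = 5):
    """Return the k dominant RGB colors inside `mask` and their pixel shares."""
    pixels = image_rgb[mask].reshape(-1, 3).astype(np.float32)   # N x 3 clothing pixels
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)                # pixels per cluster
    order = np.argsort(counts)[::-1]                             # largest cluster first
    colors = km.cluster_centers_[order].round().astype(np.uint8) # k x 3 color table
    shares = counts[order] / counts.sum()                        # fraction of region
    return colors, shares

# Example usage on a random stand-in image with an all-true mask:
if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    mask = np.ones((64, 64), dtype=bool)
    cols, shares = dominant_colors(img, mask, k=3)
    print(cols, shares)
```

In the paper's full framework, these raw cluster shares would presumably be adjusted further by the adaptive weighting strategy mentioned in the abstract.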