Affiliation:
1. College of Computer Science and Technology, Hengyang Normal University, Hengyang, China
2. Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, China
Abstract
Text-based hair design (TBHD) is an innovative approach that uses text instructions to craft hairstyle and colour, and is valued for its flexibility and scalability. However, improving the generation quality and editing accuracy of TBHD algorithms remains a key research challenge, in part because existing models fall short in their alignment and fusion designs. We therefore propose a new layered multimodal fusion network, ETBHD-HMF, which decouples the input image and hair text information into layered hair colour and hairstyle representations. Within this network, a channel enhancement separation (CES) module is proposed to enhance important signals and suppress noise in the text representation obtained from CLIP, thereby improving generation quality. Building on this, we develop weighted mapping fusion (WMF) sub-networks for hair colour and hairstyle. Each sub-network applies mapper operations to the input image and text representations to acquire joint information, then selectively merges the image representation with this joint information across style layers using weighted operations, ultimately achieving fine-grained hairstyle designs. Additionally, to improve editing accuracy and quality, we design a modality alignment loss that refines and optimizes the network's information transmission and integration. Experiments on the CelebA-HQ dataset demonstrate that the proposed model achieves superior overall performance in generation quality, visual realism, and editing accuracy: ETBHD-HMF (27.8 PSNR, 0.864 IDS) outperforms HairCLIP (26.9 PSNR, 0.828 IDS), a 3% gain in PSNR and a 4% gain in IDS.
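To make the fusion pipeline concrete, the sketch below illustrates the two core ideas from the abstract in PyTorch: a CES-style gating module that reweights the channels of a CLIP text embedding (enhancing important signals, suppressing noise), and a WMF-style sub-network that maps the image and text representations into joint information and blends it back into the per-layer image latent with a learned weight. This is a minimal sketch under stated assumptions: the module names, the 512-dimensional embeddings, the sigmoid gating, and the scalar blend weight are illustrative choices, not the paper's specified implementation.

```python
import torch
import torch.nn as nn


class ChannelEnhancementSeparation(nn.Module):
    """Hypothetical CES sketch: a learned per-channel gate in [0, 1]
    amplifies informative channels of the CLIP text embedding and
    attenuates noisy ones. The gating design is an assumption."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Scale each channel by its learned importance weight.
        return text_emb * self.gate(text_emb)


class WeightedMappingFusion(nn.Module):
    """Hypothetical WMF sketch for one style layer: a mapper produces
    joint image-text information, which is merged with the image
    representation via a learned weighted operation."""

    def __init__(self, img_dim: int = 512, txt_dim: int = 512):
        super().__init__()
        self.mapper = nn.Sequential(
            nn.Linear(img_dim + txt_dim, img_dim),
            nn.LeakyReLU(0.2),
        )
        # Learned scalar blend weight (initial value is arbitrary).
        self.alpha = nn.Parameter(torch.tensor(0.1))

    def forward(self, img_latent: torch.Tensor,
                text_emb: torch.Tensor) -> torch.Tensor:
        joint = self.mapper(torch.cat([img_latent, text_emb], dim=-1))
        # Weighted merge of image representation and joint information.
        return img_latent + self.alpha * joint


# Usage with dummy tensors in place of real CLIP/StyleGAN features.
ces = ChannelEnhancementSeparation()
wmf = WeightedMappingFusion()
text = ces(torch.randn(1, 512))            # denoised text embedding
fused = wmf(torch.randn(1, 512), text)     # fused per-layer latent
```

In the full network, separate WMF sub-networks of this kind would operate on the hair-colour and hairstyle layers, so that each attribute is edited at the style depth where it is represented.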