
HairManip: High quality hair manipulation via hair element disentangling

Type:
Journal article
Authors:
Zhao, Huihuang;Zhang, Lin;Rosin, Paul L.;Lai, Yu-Kun;Wang, Yaonan
Corresponding author:
Zhao, HH
Affiliations:
[Zhao, Huihuang; Zhao, HH; Zhang, Lin] Hengyang Normal Univ, Sch Comp Sci & Technol, Hengyang 421002, Peoples R China.
[Zhao, Huihuang; Wang, Yaonan] Hunan Univ, Natl Engn Lab Robot Visual Percept & Control Techn, Changsha, Peoples R China.
[Lai, Yu-Kun; Rosin, Paul L.] Cardiff Univ, Sch Comp Sci & Informat, Cardiff, Wales.
Corresponding institution:
[Zhao, HH] Hengyang Normal Univ, Sch Comp Sci & Technol, Hengyang 421002, Peoples R China.
Language:
English
Keywords:
Hair editing networks;Image generation;Hair manipulation;Generative adversarial networks;Deep learning
Journal:
Pattern Recognition
ISSN:
0031-3203
Year:
2024
Volume:
147
Pages:
110132
Funding:
National Natural Science Foundation of China [61772179]; Hunan Provincial Natural Science Foundation of China [2020JJ4152, 2022JJ50016]; The 14th Five-Year Plan "Key Disciplines and Application-oriented Special Disciplines of Hunan Province"; Postgraduate Scientific Research Innovation Project of Hunan Province [CX20231265, Xiangjiaotong [2022] 351]; [CX20221285]
Institutional attribution:
This university is the first and corresponding institution.
Abstract:
Hair editing is challenging due to the complexity and variety of hair materials and shapes. Existing methods employ reference images or user-painted masks to edit hair and have achieved promising results. However, discrepancies in color and shape between the source and target hair can occasionally produce unrealistic results. Therefore, we propose a new hair editing method named HairManip, which decouples the hair information from the input source image into shape and color components. We then train hairstyle and hair color editing sub-networ...
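The abstract describes decoupling hair information into separate shape and color components before editing each with its own sub-network. As a rough conceptual illustration only (not the paper's actual GAN-based sub-networks), the toy sketch below disentangles a hair region into a binary shape mask and a mean-color component, then recombines them with a new target color; the `hair_mask` input is a hypothetical segmentation mask, which the paper would obtain by other means:

```python
import numpy as np

def disentangle_hair(image, hair_mask):
    """Split the hair region into a shape component (binary mask)
    and a color component (mean RGB over the masked pixels)."""
    shape = hair_mask.astype(bool)        # shape component, (H, W)
    color = image[shape].mean(axis=0)     # color component, (3,)
    return shape, color

def recombine(image, shape, color):
    """Paint a (possibly edited) color back into the hair region,
    leaving the rest of the image untouched."""
    out = image.copy()
    out[shape] = color
    return out

# Toy 4x4 RGB image with a 2x2 "hair" region in the top-left corner.
img = np.zeros((4, 4, 3), dtype=float)
mask = np.zeros((4, 4), dtype=int)
mask[:2, :2] = 1
img[:2, :2] = [0.8, 0.4, 0.2]            # brownish hair pixels

shape, color = disentangle_hair(img, mask)
edited = recombine(img, shape, [0.1, 0.1, 0.9])  # recolor hair to blue
```

In HairManip the analogous components are learned representations edited by trained sub-networks rather than a mask and a mean color, but the separation of "where the hair is" from "what color it is" is the same disentangling idea.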
