
One-Shot Motion Talking Head Generation with Audio-Driven Model

Publication type:
Journal article
Authors:
Peng Tang;Huihuang Zhao;Weiliang Meng;Yaonan Wang
Author affiliations:
[Peng Tang; Huihuang Zhao] Hengyang Normal University, College of Computer Science and Technology, Hengyang, 421002, Hunan, China
[Weiliang Meng] University of Chinese Academy of Sciences, School of Artificial Intelligence, Beijing, 100049, China
[Yaonan Wang] Hunan University, National Engineering Laboratory for Robot Visual Perception and Control Technology, Changsha, 410082, Hunan, China
Language:
English
Journal:
Expert Systems with Applications
ISSN:
0957-4174
Year:
2025
Pages:
129344
CRediT authorship contribution statement:
Peng Tang: Conceptualization, Methodology, Software, Formal analysis, Writing – original draft. Huihuang Zhao: Supervision, Project administration, acquisition, Writing – review & editing. Weiliang Meng: Data curation, Validation, Visualization, Investigation. Yaonan Wang: Supervision, Resources, Writing – review & editing.
Institutional attribution:
This university is the first affiliated institution
Department:
College of Computer Science and Technology
Abstract:
Exciting achievements have been made in audio-driven talking head generation. However, while existing methods excel at generating a talking head from a frontal identity image, they do not yield satisfactory results when generating a talking head with head pose from a side-view identity image. To address this limitation, a concise and effective approach is proposed in this work. Our method efficiently generates talking head videos using a side-view face image as the identity. It uses facial features and head posture to predict frontal keypoints, during which facial expression fea...
