Nian WANG1,
Xuyang HU1,
Fan ZHU2,
Jun TANG1,
1. School of Electronic Information Engineering, Anhui University, Hefei 230031, China
2. Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
Funds: The National Natural Science Foundation of China (61772032)
Details
Author biographies: Nian WANG: male, born in 1966, professor, Ph.D., mainly engaged in research on pattern recognition and image processing
Xuyang HU: male, born in 1995, master's student, research interests include image generation and 3D reconstruction
Fan ZHU: male, born in 1987, Ph.D., mainly engaged in research on computer vision
Jun TANG: male, born in 1977, professor, Ph.D., mainly engaged in research on pattern recognition and computer vision
Corresponding author: Jun TANG, tangjunahu@163.com
CLC number: TN911.73; TP301.6
Publication History
Received: 2019-12-09
Revised: 2020-05-26
Available online: 2020-06-22
Published: 2020-12-08
Single-view 3D Reconstruction Algorithm Based on View-aware
Nian WANG1, Xuyang HU1,
Fan ZHU2,
Jun TANG1,
1. School of Electronic Information Engineering, Anhui University, Hefei 230031, China
2. Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
Funds: The National Natural Science Foundation of China (61772032)
Abstract
Abstract: Although projecting a 3D shape onto a 2D view appears irreversible because a dimension is discarded, interest in 3D reconstruction techniques is growing rapidly across vertical industries, from visualization to computer-aided geometric design. Traditional 3D reconstruction algorithms based on object depth maps or RGB images can achieve satisfactory results in some respects, but they still face several problems: (1) the mapping from 2D views to 3D shapes is learned in a brute-force manner; (2) the appearance differences of an object observed from different viewpoints are not handled; (3) images of the object from multiple viewpoints are required. This paper proposes an end-to-end View-Aware 3D (VA3D) reconstruction network that addresses the above problems. Specifically, VA3D consists of a multi-neighbor-view synthesis sub-network and a 3D reconstruction sub-network. The multi-neighbor-view synthesis sub-network generates images of multiple neighboring viewpoints from the source view of the object, and an adaptive fusion module is introduced to resolve the blurring and distortion that arise during viewpoint translation. The 3D reconstruction sub-network uses a recurrent neural network to recover the 3D shape of the object from the synthesized multi-view sequence. Extensive qualitative and quantitative experiments on the ShapeNet dataset show that VA3D effectively improves single-view 3D reconstruction results.
Keywords: View-aware/
3D reconstruction/
Viewpoint translation/
End-to-end neural network/
Adaptive fusion
Abstract: While projecting 3D shapes to 2D images is irreversible due to the dimension discarded during projection, interest in 3D reconstruction techniques is growing rapidly across various vertical industries, from visualization to computer-aided geometric design. Traditional 3D reconstruction approaches based on depth maps or RGB images can synthesize visually satisfactory 3D objects, but they generally suffer from several problems: (1) the 2D-to-3D mapping is learned in a brute-force manner; (2) the appearance differences of objects observed from different viewpoints are not handled; (3) multiple images from distinctly different viewpoints are required. In this paper, an end-to-end View-Aware 3D (VA3D) reconstruction network is proposed to address the above problems. In particular, VA3D includes a multi-neighbor-view synthesis sub-network and a 3D reconstruction sub-network. The multi-neighbor-view synthesis sub-network generates multiple neighboring-viewpoint images from the source view of the object, and an adaptive fusion module is added to resolve blurring and distortion in viewpoint translation. The 3D reconstruction sub-network introduces a recurrent neural network to recover the 3D shape of the object from the synthesized multi-view sequence. Extensive qualitative and quantitative experiments on the ShapeNet dataset show that VA3D effectively improves single-view 3D reconstruction results.
Key words: View-aware/
3D reconstruction/
Viewpoint translation/
End-to-end neural network/
Adaptive fusion
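
For readers who want a concrete picture of the pipeline the abstract describes, below is a minimal PyTorch sketch of a VA3D-style model: a view-synthesis sub-network that produces several neighboring views from one source image, an adaptive fusion step that blends each synthesized view with the source via a learned per-pixel mask, and a GRU-based reconstruction sub-network that aggregates the view sequence into a voxel grid. All module names, layer sizes, the number of synthesized views, and the 32x32x32 voxel resolution are illustrative assumptions, not the authors' exact architecture.

# Minimal, self-contained sketch of a VA3D-style pipeline (assumed architecture).
import torch
import torch.nn as nn

class ViewSynthesizer(nn.Module):
    """Generates K neighboring-view images from a single RGB source view."""
    def __init__(self, k_views=4):
        super().__init__()
        self.k_views = k_views
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # One decoder shared across target viewpoints; the viewpoint index is
        # injected as an extra constant feature channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + 1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        feat = self.encoder(x)                        # (B, 64, H/4, W/4)
        views = []
        for k in range(self.k_views):
            idx = torch.full_like(feat[:, :1], float(k) / self.k_views)
            views.append(self.decoder(torch.cat([feat, idx], dim=1)))
        return torch.stack(views, dim=1)              # (B, K, 3, H, W)

class AdaptiveFusion(nn.Module):
    """Blends each synthesized view with the source view via a learned mask,
    a stand-in for the adaptive fusion module named in the abstract."""
    def __init__(self):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, synth_views, source):
        fused = []
        for k in range(synth_views.size(1)):
            v = synth_views[:, k]
            m = self.mask_net(torch.cat([v, source], dim=1))   # per-pixel weight
            fused.append(m * v + (1 - m) * source)
        return torch.stack(fused, dim=1)

class RecurrentReconstructor(nn.Module):
    """Aggregates the multi-view sequence with a GRU and decodes a voxel grid."""
    def __init__(self, voxel_res=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, 256))
        self.gru = nn.GRU(256, 256, batch_first=True)
        self.voxel_head = nn.Linear(256, voxel_res ** 3)
        self.voxel_res = voxel_res

    def forward(self, view_seq):                      # (B, K, 3, H, W)
        b, k = view_seq.shape[:2]
        feats = self.cnn(view_seq.flatten(0, 1)).view(b, k, -1)
        _, h = self.gru(feats)                        # last hidden state
        occ = torch.sigmoid(self.voxel_head(h[-1]))   # occupancy probabilities
        return occ.view(b, self.voxel_res, self.voxel_res, self.voxel_res)

if __name__ == "__main__":
    src = torch.randn(2, 3, 64, 64)                   # one source view per object
    synth = ViewSynthesizer()(src)
    fused = AdaptiveFusion()(synth, src)
    voxels = RecurrentReconstructor()(fused)
    print(voxels.shape)                               # torch.Size([2, 32, 32, 32])

The three modules mirror the division of labor in the abstract (view synthesis, adaptive fusion, recurrent reconstruction); the actual VA3D network, losses, and training procedure are described in the full paper.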
PDF full-text download link:
https://jeit.ac.cn/article/exportPdf?id=1704bbfa-7736-479f-97a5-57ac90fad3c5