Light Field All-in-focus Image Fusion Based on Edge Enhanced Guided Filtering
Yingchun WU, Yumei WANG, Anhong WANG, Xianling ZHAO
School of Electronic and Information Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China
Funds: The National Natural Science Foundation of China (61601318), The Shanxi Science Foundation of Applied Foundational Research (201601D021078), The Fund of Shanxi Key Subjects Construction, The Collaborative Innovation Center of Internet+3D Printing in Shanxi Province, The Key Innovation Team of Shanxi 1331 Project, The Scientific and Technological Innovation Team of Shanxi Province (201705D131025), The Youth Foundation of Taiyuan University of Science and Technology (20132023), The Foundation of China Scholarship Council
Details
About the authors: Yingchun WU: Female, born in 1984, Associate Professor. Research interests: light field information acquisition and processing, optical 3D sensing
Yumei WANG: Female, born in 1995, Master's student. Research interests: optical information acquisition and processing
Anhong WANG: Female, born in 1972, Professor. Research interests: video communication, image recognition, 3D data analysis and understanding
Xianling ZHAO: Female, born in 1978, Lecturer. Research interests: light field information acquisition and processing, optical 3D sensing
Corresponding author: Yumei WANG, 1954569241@qq.com
CLC number: TN911.73
Publication history
Received: 2019-09-17
Revised: 2020-07-13
Published online: 2020-07-22
Issue date: 2020-09-27
Abstract: Affected by the geometric calibration accuracy of the light field camera's micro-lenses, the decoding error of the 4D light field in the angular direction causes a loss of edge information in the integrated refocused images, which in turn reduces the accuracy of all-in-focus image fusion. This paper proposes a light field all-in-focus image fusion algorithm based on edge-enhanced guided filtering: the final all-in-focus image is obtained by applying multi-scale decomposition to the digitally refocused images and guided-filtering optimization to the feature-layer decision maps. Compared with traditional fusion algorithms, the proposed method compensates for the edge information loss caused by the 4D light field calibration error: an edge-layer extraction step is added to the multi-scale decomposition of the refocused images to enhance high-frequency information, and a multi-scale image evaluation model is established to optimize the guided-filtering parameters of the edge layer, yielding a higher-quality light field all-in-focus image. Experimental results show that the proposed method effectively improves the edge intensity and perceptual sharpness of the all-in-focus image without significantly reducing its similarity to the original images.
Key words: 4D light field / All-in-focus image fusion / Guided filtering / Edge enhancement / Parameter optimization
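To make the fusion pipeline described in the abstract concrete, the sketch below shows a minimal guided-filtering-based all-in-focus fusion for a stack of refocused images, with a simple edge re-injection step. It is an illustrative approximation rather than the paper's exact method: the focus measure, the guided_filter and fuse_all_in_focus helpers, and the radius, eps and edge_gain values are assumptions, and the paper's multi-scale decomposition and parameter-optimization model are reduced here to fixed parameters.

```python
# Minimal sketch: guided-filtering fusion of a refocused image stack with a
# simple edge layer, assuming the stack was already rendered from the 4D light
# field. Not the authors' exact algorithm; parameter values are illustrative.
import cv2
import numpy as np

def guided_filter(guide, src, radius, eps):
    """Plain grey-guide guided filter built from box filters."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)
    mean_I, mean_p = mean(guide), mean(src)
    var_I = mean(guide * guide) - mean_I * mean_I
    cov_Ip = mean(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)

def fuse_all_in_focus(refocused, radius=8, eps=1e-3, edge_gain=0.5):
    """refocused: list of float32 grayscale images in [0, 1], each focused at a different depth."""
    stack = np.stack(refocused, axis=0)                       # (N, H, W)

    # 1. Focus measure per image: locally averaged absolute Laplacian response.
    focus = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F)), (7, 7), 0)
        for img in refocused
    ], axis=0)

    # 2. Hard decision map: index of the sharpest image at each pixel.
    decision = np.argmax(focus, axis=0)

    # 3. Refine each binary weight map with the guided filter, using its own
    #    refocused image as the guide, then renormalize the weights.
    weights = np.stack([
        guided_filter(refocused[k], (decision == k).astype(np.float32), radius, eps)
        for k in range(len(refocused))
    ], axis=0)
    weights = np.clip(weights, 0, None)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8

    # 4. Weighted fusion of the refocused stack.
    fused = (weights * stack).sum(axis=0)

    # 5. Simple edge layer: detail removed by guided-filter smoothing is
    #    re-injected with a gain (a stand-in for the paper's optimized edge layer).
    base = guided_filter(fused, fused, radius, eps)
    return np.clip(fused + edge_gain * (fused - base), 0, 1)
```

In the paper, the guided-filtering parameters of the edge layer are chosen by the multi-scale image evaluation model rather than fixed by hand as in this sketch.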
Full-text PDF:
https://jeit.ac.cn/article/exportPdf?id=b267b7cf-d9d0-4ad7-aa62-870451596a37