Fan LIU1, Xiaopeng PEI2, Jing ZHANG3, Zehua CHEN1
1. College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China
2. College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
3. College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
Funds: The National Natural Science Foundation of China (61703299, 61402319, 61403273), The Shanxi Province Natural Science Foundation (201601D202044)
Details
About the authors: Fan LIU: Female, born in 1982, Ph.D., lecturer. Her research interests include remote sensing image processing and machine learning.
Xiaopeng PEI: Male, born in 1991, master's student. His research interests include granular computing and remote sensing image processing.
Jing ZHANG: Female, born in 1994, master's student. Her research interest is remote sensing image processing.
Zehua CHEN: Female, born in 1974, Ph.D., professor. Her research interests include granular computing and knowledge engineering, intelligent information processing and intelligent control, and machine vision and industrial big data.
Corresponding author: Zehua CHEN, zehuachen@163.com
CLC number: TP751
Publication history
Received: 2018-03-21
Revised: 2018-08-13
Available online: 2018-08-31
Published: 2018-12-01
Remote Sensing Image Fusion Based on Optimized Dictionary Learning
Fan LIU1, Xiaopeng PEI2, Jing ZHANG3, Zehua CHEN1
1. College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China
2. College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
3. College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
Funds: The National Natural Science Foundation of China (61703299, 61402319, 61403273), The Shanxi Province Natural Science Foundation (201601D202044)
Abstract
Abstract: To improve the fusion quality of panchromatic and multispectral images, a remote sensing image fusion method based on optimized dictionary learning is proposed. First, image blocks taken from a classic image database are used as training samples and clustered with K-means; according to the clustering result, blocks that are numerous and highly similar are moderately pruned to reduce the number of training samples. The pruned samples are then trained to obtain a universal dictionary, in which similar dictionary atoms and rarely used dictionary atoms are marked. These marked atoms are replaced by normalized panchromatic image blocks that differ most from the original sparse model, yielding an adaptive dictionary. The adaptive dictionary is used to sparsely represent the intensity component obtained from the IHS transform of the multispectral image and the source panchromatic image; the modulus-maxima coefficients in the sparse coefficients of each image block are separated out as maxima sparse coefficients, and the remaining coefficients are called residual sparse coefficients. Different fusion rules are applied to the maxima and residual sparse coefficients to preserve more spectral information and spatial detail, and the fused image is finally obtained by the inverse IHS transform. Experimental results show that, compared with traditional methods, the proposed method produces fused images with better subjective visual quality and superior objective evaluation indices.
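The training stage described above boils down to three steps: prune redundant training patches with K-means, learn a universal dictionary, and swap similar or rarely used atoms for the panchromatic patches that the dictionary explains worst. The Python sketch below illustrates that flow under stated assumptions: scikit-learn's KMeans and MiniBatchDictionaryLearning stand in for the paper's trainer, and the cluster count, per-cluster cap, coherence threshold, usage threshold, and function names are illustrative choices, not the paper's settings.

```python
# Illustrative sketch (not the authors' code) of training-sample pruning and
# adaptive-dictionary construction. All thresholds and helper names are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def prune_training_patches(patches, n_clusters=64, keep_per_cluster=200, seed=0):
    """Cluster training patches with K-means and keep only a limited number of
    patches from each oversized (i.e. highly redundant) cluster."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(patches)
    rng = np.random.default_rng(seed)
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        if idx.size > keep_per_cluster:            # large cluster of similar blocks
            idx = rng.choice(idx, keep_per_cluster, replace=False)
        kept.append(idx)
    return patches[np.concatenate(kept)]

def adaptive_dictionary(train_patches, pan_patches, n_atoms=256,
                        coherence_thr=0.95, usage_thr=5):
    """Train a universal dictionary, then replace similar and rarely used atoms
    with normalized panchromatic patches that the dictionary reconstructs worst."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm='omp',
                                          transform_n_nonzero_coefs=5)
    codes = learner.fit_transform(train_patches)   # sparse codes of training set
    D = learner.components_.copy()                 # atoms as unit-norm rows

    usage = np.count_nonzero(codes, axis=0)        # how often each atom is used
    gram = np.abs(D @ D.T)
    np.fill_diagonal(gram, 0)
    similar = gram.max(axis=1) > coherence_thr     # atoms nearly duplicating another
    to_replace = np.flatnonzero(similar | (usage < usage_thr))

    # Panchromatic patches that differ most from the original sparse model
    # (largest residual after sparse coding) become the new atoms.
    pan_codes = learner.transform(pan_patches)
    residual = np.linalg.norm(pan_patches - pan_codes @ D, axis=1)
    worst = np.argsort(residual)[::-1][:to_replace.size]
    new_atoms = pan_patches[worst]
    new_atoms /= np.linalg.norm(new_atoms, axis=1, keepdims=True) + 1e-12
    D[to_replace] = new_atoms
    return D
```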
Keywords: Remote sensing image fusion/
K-means clustering/
Adaptive dictionary/
Sparse representation/
Fusion rule
Abstract: In order to improve the fusion quality of panchromatic and multispectral images, a remote sensing image fusion method based on optimized dictionary learning is proposed. Firstly, K-means clustering is applied to image blocks in the image database, and image blocks with high similarity are partially removed to improve training efficiency. While a universal dictionary is obtained, the similar dictionary atoms and rarely used dictionary atoms are marked for subsequent replacement. Secondly, the similar and rarely used dictionary atoms are replaced by the panchromatic image blocks that differ most from the original sparse model, yielding an adaptive dictionary. Furthermore, the adaptive dictionary is used to sparsely represent the intensity component and the panchromatic image; the modulus-maxima coefficients in the sparse coefficients of each image block are separated out as maxima sparse coefficients, and the remaining coefficients are called residual sparse coefficients. Then, each part is fused by a different fusion rule to preserve more spectral and spatial detail information. Finally, the inverse IHS transform is employed to obtain the fused image. Experiments demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than its counterparts.
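As a companion to the training sketch above, the following is a minimal Python sketch of the fusion stage. It assumes a simplified generalized-IHS intensity substitution in place of the full IHS forward/inverse transform, and stand-in fusion rules (max-absolute selection for the modulus-maxima part, plain averaging for the residual part); patch size, sparsity level, and the helper names are likewise assumptions rather than the paper's exact choices.

```python
# Illustrative sketch (not the authors' code) of the fusion stage: intensity
# extraction, sparse coding over the adaptive dictionary D, separation of each
# patch's modulus-maxima coefficient, stand-in fusion rules, and detail injection.
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def fuse_intensity(ms, pan, D, patch=(8, 8), n_nonzero=5):
    """ms: (H, W, B) multispectral image; pan: (H, W) panchromatic image;
    D: adaptive dictionary whose rows are atoms of length patch[0]*patch[1]."""
    intensity = ms.mean(axis=2)                    # generalized-IHS intensity

    ip = extract_patches_2d(intensity, patch).reshape(-1, patch[0] * patch[1])
    pp = extract_patches_2d(pan, patch).reshape(-1, patch[0] * patch[1])
    a = sparse_encode(ip, D, algorithm='omp', n_nonzero_coefs=n_nonzero)
    b = sparse_encode(pp, D, algorithm='omp', n_nonzero_coefs=n_nonzero)

    # Split each patch's coefficients into its modulus-maxima coefficient and the rest.
    def split(c):
        m = np.zeros_like(c)
        rows = np.arange(c.shape[0])
        idx = np.abs(c).argmax(axis=1)
        m[rows, idx] = c[rows, idx]
        return m, c - m
    a_max, a_res = split(a)
    b_max, b_res = split(b)

    # Stand-in rules: keep the stronger modulus-maxima coefficient, average the rest.
    fused_max = np.where(np.abs(a_max).sum(1, keepdims=True) >=
                         np.abs(b_max).sum(1, keepdims=True), a_max, b_max)
    fused_res = 0.5 * (a_res + b_res)

    fused_patches = (fused_max + fused_res) @ D
    new_i = reconstruct_from_patches_2d(
        fused_patches.reshape(-1, patch[0], patch[1]), intensity.shape)

    # Inverse (generalized) IHS: inject the intensity change into every band.
    return ms + (new_i - intensity)[..., None]
```

A call such as fused = fuse_intensity(ms, pan, D), with D coming from the training sketch, would produce the pansharpened multispectral image under these assumptions.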
Key words: Remote sensing image fusion/
K-means clustering/
Adaptive dictionary/
Sparse representation/
Fusion rule
Full-text PDF download:
https://jeit.ac.cn/article/exportPdf?id=6d6f37c4-6b94-4b58-90c5-123b2958809c