
Few-Shot Learning Based on Self-Attention and Auto-Encoder


Authors: Ji Zhong (冀中), Chai Xingliang (柴星亮)
Unit: School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Abstract: Few-shot learning (FSL) aims to improve the generalization ability of a model so that new categories can be classified from only a few available samples, which significantly reduces the cost of collecting and annotating data and of training deep models. Most existing metric-learning-based FSL methods focus on adapting the model to a particular metric space and pay little attention to improving the discriminability of the sample feature representations themselves. When samples are scarce, fully exploiting the information in each sample becomes even more important. Based on the observation that different feature maps represent the same category with different strength, a channel self-attention method is proposed that assigns larger weights to the feature channels that are more expressive of the class, rebalancing the feature maps to make the sample representations more discriminative. To mine more information from the easily obtained samples, the concept of a "space prototype" is introduced. Meanwhile, inspired by the auto-encoder, a method is designed that leverages the information of all samples to rectify the class prototypes and thereby improve their accuracy. As a parameter-free augmented feature extractor, the proposed channel self-attention method effectively alleviates the weak model transferability (over-fitting) problem that is widespread in FSL; it is compatible with many existing FSL methods, further improves their performance, and shows good generalization ability. When the two methods are applied to prototypical networks, they bring considerable gains over the original model on the two mainstream few-shot classification benchmarks miniImageNet and CUB under three classification settings. In particular, when the domain gap between the training and test sets is large, the proposed method achieves a 10.23% absolute and a 17.04% relative performance improvement over the original method, demonstrating its effectiveness.
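The abstract describes the channel self-attention only at a high level. Below is a minimal PyTorch sketch of one way a parameter-free channel reweighting could look, assuming each channel's weight is derived from how strongly its pooled response varies across the samples of an episode; the variance criterion, the function name, and the scaling choice are illustrative assumptions, not the paper's exact formulation.

import torch

def channel_self_attention(features):
    # features: (N, C, H, W) feature maps from a frozen backbone.
    # Returns channel-reweighted features of the same shape.
    # Hypothetical parameter-free weighting: channels whose pooled
    # responses vary more across samples (and so are presumably more
    # class-discriminative) receive larger softmax weights.
    desc = features.mean(dim=(2, 3))            # (N, C) channel descriptors
    score = desc.var(dim=0, unbiased=False)     # (C,) cross-sample variance
    # Normalize to attention weights; scale by C to keep magnitudes stable.
    weights = torch.softmax(score, dim=0) * features.size(1)
    return features * weights.view(1, -1, 1, 1)

Because the weighting introduces no learnable parameters, it can be wrapped around any pretrained feature extractor without retraining, which is consistent with the transferability claim in the abstract.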
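Likewise, a hedged sketch of the auto-encoder-inspired prototype correction: assuming prototypes are support-set class means as in prototypical networks, a small linear auto-encoder fit on the embeddings of all available samples can pull the prototypes toward the episode's data manifold. The PrototypeRectifier class, its bottleneck size, and the transductive use of query embeddings are hypothetical choices for illustration, not the paper's method.

import torch
import torch.nn as nn

class PrototypeRectifier(nn.Module):
    # Illustrative auto-encoder-style prototype correction (assumption).
    def __init__(self, dim, bottleneck):
        super().__init__()
        self.encoder = nn.Linear(dim, bottleneck)
        self.decoder = nn.Linear(bottleneck, dim)

    def fit(self, embeddings, steps=100, lr=1e-2):
        # Fit the auto-encoder to reconstruct all sample embeddings.
        opt = torch.optim.Adam(self.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            recon = self.decoder(torch.relu(self.encoder(embeddings)))
            nn.functional.mse_loss(recon, embeddings).backward()
            opt.step()

    def rectify(self, prototypes):
        # Project prototypes through the learned encode/decode mapping,
        # moving them toward the manifold spanned by the whole episode.
        with torch.no_grad():
            return self.decoder(torch.relu(self.encoder(prototypes)))

# Usage in an N-way K-shot episode (shapes are assumptions):
# support: (n_way, k_shot, dim) embeddings; queries: (n_query, dim)
# prototypes = support.mean(dim=1)                  # prototypical networks
# rectifier = PrototypeRectifier(dim=support.size(-1), bottleneck=64)
# rectifier.fit(torch.cat([support.flatten(0, 1), queries], 0).detach())
# prototypes = rectifier.rectify(prototypes)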
Keywords: few-shot learning (FSL); image classification; machine learning; channel self-attention; auto-encoder

PDF full text: http://xbzrb.tju.edu.cn/#/digest?ArticleID=6610