
Collaborative Parameter Update Based on Average Variance Reduction of Historical Gradients


Tao XIE 1,
Chunjiong ZHANG 2,
Yongjian XU 3
1. Wisdom Education Institute of College of Education, Southwest University, Chongqing 400715, China
2. College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
3. College of Computers and Information Science, Southwest University, Chongqing 400715, China
Funds: The National Natural Science Foundation of China (61807027)

Detailed information
Author biographies: Tao XIE: male, born in 1983, Ph.D., associate professor; research interests: data mining, adaptive recommender systems, machine learning
Chunjiong ZHANG: male, born in 1990, Ph.D. candidate; research interests: machine learning, wireless sensor networks, distributed robust optimization
Yongjian XU: male, born in 1997, M.S. candidate; research interests: image retrieval, distributed systems
Corresponding author: Tao XIE, xietao@swu.edu.cn
CLC number: TP391

Publication history

Received: 2020-01-16
Revised: 2020-06-20
Available online: 2020-07-23
Published: 2021-04-20



Abstract: The Stochastic Gradient Descent (SGD) algorithm estimates the gradient from a single randomly chosen sample, which introduces large variance, slows convergence, and makes training unstable. A distributed SGD based on average variance reduction, called DisSAGD, is proposed. The method updates the parameters of a machine learning model using variance reduction based on the average of historical gradients; it requires neither full-gradient computation nor additional storage, and shares parameters across nodes through an asynchronous communication protocol. To address the "update staleness" problem of global parameter distribution, a learning rate with an acceleration factor and an adaptive sampling strategy are adopted: on the one hand, when the parameters deviate from the optimal value, the acceleration factor is increased to speed up convergence; on the other hand, when one worker node runs faster than the others, more samples are drawn for its next iteration, so that the node has more time to compute its local gradient. Experiments show that DisSAGD significantly reduces the waiting time between loop iterations, converges faster than the baseline methods, and achieves near-linear speedup in distributed clusters.
Key words: Gradient descent; Machine learning; Distributed cluster; Adaptive sampling; Variance reduction
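
To make the kind of update described in the abstract more concrete, below is a minimal, single-node Python/NumPy sketch of variance reduction using an average of stored historical gradients, in the style of SAGA. It is an illustration under assumptions, not the paper's DisSAGD implementation: the paper describes DisSAGD as avoiding full-gradient computation and extra storage and as running asynchronously across worker nodes with adaptive sampling, none of which is modeled here. The loss function (least squares), the acceleration-factor heuristic, and all names (saga_style_sgd, eta, accel) are illustrative assumptions.

```python
# A minimal, single-node sketch of SAGA-style variance reduction using the
# average of stored historical gradients. Illustrative only; not the DisSAGD
# algorithm from the paper.
import numpy as np

def grad_i(w, X, y, i):
    """Gradient of the squared loss on sample i: (x_i . w - y_i) * x_i."""
    return (X[i] @ w - y[i]) * X[i]

def saga_style_sgd(X, y, eta=0.01, accel=1.5, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # Table of historical gradients, one slot per sample, plus their mean.
    hist = np.array([grad_i(w, X, y, i) for i in range(n)])
    hist_mean = hist.mean(axis=0)
    for _ in range(epochs):
        for _ in range(n):
            j = rng.integers(n)
            g_new = grad_i(w, X, y, j)
            # Variance-reduced direction: fresh gradient, corrected by the
            # stored gradient for sample j and the average of all stored gradients.
            direction = g_new - hist[j] + hist_mean
            # Crude stand-in for the abstract's "acceleration factor": take a
            # larger step while the update direction is still large; the
            # threshold 1.0 is an arbitrary placeholder, not the paper's rule.
            step = eta * (accel if np.linalg.norm(direction) > 1.0 else 1.0)
            w -= step * direction
            # Refresh the running mean incrementally, then replace the stored
            # gradient for sample j.
            hist_mean += (g_new - hist[j]) / n
            hist[j] = g_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    w_hat = saga_style_sgd(X, y)
    print("estimation error:", np.linalg.norm(w_hat - w_true))
```

The key line is the variance-reduced direction g_new - hist[j] + hist_mean: subtracting the stored gradient for the sampled point and adding the historical average keeps the step unbiased while its variance shrinks as the stored gradients converge, which is the general effect the abstract attributes to averaging historical gradients.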



Full-text PDF download:

https://jeit.ac.cn/article/exportPdf?id=1e03f394-9e6a-45f4-9311-d83d57d90754