Author biographies:
Meng ZHANG: male, born 1964, researcher; research interests: digital signal processing, deep learning algorithms, and hardware acceleration
Jingwei ZHANG: male, born 1997, M.S. candidate; research interest: deep learning hardware accelerator design
Guoqing LI: male, born 1991, Ph.D. candidate; research interests: computer vision and deep learning hardware accelerator design
Ruixia WU: female, born 1996, M.S. candidate; research interest: deep learning algorithms
Xiaoyang ZENG: male, born 1972, professor; research interest: high-energy-efficiency Systems-on-Chip (SoC)
Corresponding author: Jingwei ZHANG, zhangjingwei@seu.edu.cn
CLC number: TN79.1
Publication history
Received: 2021-01-04
Revised: 2021-04-21
Published online: 2021-04-29
Issue date: 2021-06-18
Efficient Hardware Optimization Strategies for Deep Neural Networks Acceleration Chip
Meng ZHANG1, Jingwei ZHANG1, Guoqing LI1, Ruixia WU1, Xiaoyang ZENG2
1. National ASIC Engineering Center, School of Electronic Sci. and Eng., Southeast University, Nanjing 210096, China
2. National ASIC Key Laboratory, Fudan University, Shanghai 200433, China
Funds: The National Key R&D Program of China (2018YFB2202703); the Natural Science Foundation of Jiangsu Province (BK20201145)
Abstract: Lightweight neural networks deployed on low-power platforms are effective solutions in Artificial Intelligence (AI) and Internet of Things (IoT) domains such as Unmanned Aerial Vehicle (UAV) detection and autonomous driving. However, with limited resources, building a Deep Neural Network (DNN) accelerator that achieves both high accuracy and low latency is very challenging. This paper proposes a series of efficient hardware optimization strategies: a stackable shared Processing Engine (PE) balances the inconsistent data-reuse and memory-access patterns of different convolution types; adjustable loop parallelism and channel augmentation effectively expand the access bandwidth between the accelerator and external memory and improve the computing efficiency of shallow DNN layers; and a preloading workflow raises the overall parallelism of the heterogeneous system. Verified on a Xilinx Ultra96 V2 board, these hardware optimization strategies effectively improve DNN acceleration chip designs such as iSmart3-SkyNet and SkrSkr-SkyNet. The results show that the optimized accelerator processes 78.576 frames per second at an energy cost of 0.068 J per image.
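The channel-augmentation idea mentioned in the abstract can be made concrete with a small sketch. The following is a minimal NumPy illustration assuming a space-to-depth style packing (folding each 2×2 spatial neighborhood into the channel dimension), which is one common way SkyNet-style designs widen shallow layers so a wide memory port stays busy; the paper's exact hardware mapping may differ.

```python
import numpy as np

def channel_augment(img, block=2):
    # Space-to-depth style packing: fold each block x block spatial
    # neighborhood into the channel dimension. The first conv layer then
    # sees block*block times more input channels, so an accelerator's
    # wide channel-parallel datapath is better utilized on shallow layers.
    c, h, w = img.shape
    assert h % block == 0 and w % block == 0
    out = img.reshape(c, h // block, block, w // block, block)
    out = out.transpose(0, 2, 4, 1, 3)          # (c, by, bx, h', w')
    return out.reshape(c * block * block, h // block, w // block)

x = np.random.rand(3, 160, 320).astype(np.float32)
y = channel_augment(x)
print(y.shape)  # (12, 80, 160)
```

Note the transform is lossless: every input pixel survives, only its position in the channel/height/width layout changes.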
Key words: Deep Neural Networks (DNN); Object detection; Neural network accelerator; Low power consumption; Hardware optimization
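To illustrate the shared-PE strategy, here is a hypothetical software model (not the paper's RTL) of one engine serving both the 3×3 depthwise and 1×1 pointwise convolutions that dominate SkyNet-style networks; the function name, modes, and shapes are illustrative assumptions.

```python
import numpy as np

def shared_pe(feat, weights, mode):
    """Software model of one PE reused for both convolution types.

    mode "dw": 3x3 depthwise conv (valid padding), weights shape (c, 3, 3)
    mode "pw": 1x1 pointwise conv,                 weights shape (c_out, c)
    """
    c, h, w = feat.shape
    if mode == "dw":
        out = np.zeros((c, h - 2, w - 2), dtype=feat.dtype)
        for i in range(3):                 # the same channel-parallel MAC
            for j in range(3):             # pattern serves both modes
                out += weights[:, i, j, None, None] * \
                       feat[:, i:h - 2 + i, j:w - 2 + j]
        return out
    if mode == "pw":
        # A 1x1 conv is a matrix product across the channel dimension
        return np.tensordot(weights, feat, axes=([1], [0]))
    raise ValueError(mode)

f = np.random.rand(8, 10, 10).astype(np.float32)
w_dw = np.random.rand(8, 3, 3).astype(np.float32)
w_pw = np.random.rand(16, 8).astype(np.float32)
dw = shared_pe(f, w_dw, "dw")
pw = shared_pe(f, w_pw, "pw")
print(dw.shape, pw.shape)  # (8, 8, 8) (16, 10, 10)
```

In hardware terms, the point of stacking and sharing is that one MAC array handles both modes even though depthwise convolution reuses pixels within a channel while pointwise convolution reuses them across channels.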
Full-text PDF download:
https://jeit.ac.cn/article/exportPdf?id=e9c5238d-7319-4f09-be16-6c3d80d0af98