
Embracing the era of neuromorphic computing





In recent years, deep learning has made tremendous achievements in computer vision, natural language processing, human-machine games and other fields, where artificial intelligence can reach or even surpass human-level performance. Behind these successes, however, serious challenges in the underlying hardware hinder the further development of artificial intelligence. As the remarkable Moore's law slows down and the energy cost of the von Neumann bottleneck becomes unaffordable, current accelerator chips struggle to handle ever-growing volumes of data, especially in power-constrained scenarios. These significant challenges have led to a natural upsurge in exploring new computing paradigms, i.e., a computational scientific revolution[1]. Such a paradigm is not expected to replace the von Neumann architecture, which has served well in the past, but to form an important complement for the growing class of computing problems and applications, e.g. those in big data and artificial intelligence, that the previous architecture can no longer handle.



Candidates for the new computing paradigm include in-memory computing, quantum computing and neuromorphic computing, each of which can solve certain important problems more effectively than classical computing systems, although they have demonstrated only a limited scope of application and accuracy to date. Among them, if we want to build on the success of deep learning and further pursue general, efficient and brain-like intelligence, neuromorphic computing, which tightly combines architecture, algorithms, circuits and devices, is the paradigm to develop. From this view, deep learning is only a precursor to the approaching era of neuromorphic computing.



It has been about three decades since Carver Mead drew inspiration from the human brain and first proposed the concept of neuromorphic computing[2]. It takes advantage of analog signals to imitate the electrical properties of synapses and neurons as basic computing elements, and assembles them into functional systems following simplified rules of brain operation. Our brains use spikes to transmit and process information and run on the edge of chaos, so they have incredibly rich computational dynamics as well as powerful capabilities for spatiotemporal integration. Since the introduction of neuromorphic computing, many impressive exploratory works have been completed, such as IBM's TrueNorth[3] and Intel's Loihi[4]. However, a research consensus on neuromorphic computing has not yet been established. From the device perspective, synapses and neurons composed of multiple transistors are obviously costly, which restricts further scaling. Fortunately, emerging devices such as memristors can imitate synapses and neurons directly through the internal physical dynamics of single cells, and thus hold great promise for neuromorphic hardware. These devices are compatible with current semiconductor technology and can be used to construct both deep learning accelerators and neuromorphic computing systems (Fig. 1). From the algorithm perspective, spike-based neural network models are immature compared with state-of-the-art artificial neural networks on existing benchmarks and tasks[5]. Nevertheless, it should be noted that existing effective algorithms are all tailored to classical computing systems, and the advancement of neuromorphic computing requires its own algorithms and benchmarks. In this sense, the two computing paradigms are incommensurable.
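To make the device-level idea concrete, the following is a minimal behavioral sketch of a memristive synapse in Python; the update rule (a saturating conductance change between G_min and G_max) and all parameter values are illustrative assumptions rather than measurements of any particular device.

```python
import numpy as np

class MemristiveSynapse:
    """Behavioral model: conductance moves between g_min and g_max under
    potentiating or depressing pulses, with the saturating (nonlinear)
    update typical of ion-migration devices. All parameters are assumed."""

    def __init__(self, g_min=1e-6, g_max=1e-4, nonlinearity=3.0, step=0.05):
        self.g_min, self.g_max = g_min, g_max
        self.nl, self.step = nonlinearity, step
        self.g = g_min

    def pulse(self, potentiate=True):
        # Normalized internal state in [0, 1]
        x = (self.g - self.g_min) / (self.g_max - self.g_min)
        if potentiate:
            dx = self.step * np.exp(-self.nl * x)            # shrinks near g_max
        else:
            dx = -self.step * np.exp(-self.nl * (1.0 - x))   # shrinks near g_min
        x = float(np.clip(x + dx, 0.0, 1.0))
        self.g = self.g_min + x * (self.g_max - self.g_min)
        return self.g

syn = MemristiveSynapse()
weights = [syn.pulse(potentiate=True) for _ in range(50)]  # potentiation curve
```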






Figure 1. (Color online) A possible roadmap of neuromorphic computing.




Neuromorphic devices are essentially memristive devices whose resistance can be changed through internal physical states and external electrical stimulation, which naturally corresponds to synapses with adjustable weights. Various emerging devices based on ion migration, phase transition, spin and ferroelectricity have been shown to achieve excellent modulation. For deep learning accelerators, ideal neuromorphic devices should combine high state precision, low variation, long retention, good linearity and a large dynamic range. However, current neuromorphic devices cannot deliver all of these properties at once. For example, memristors based on ion migration have unavoidable variations, and devices based on phase transition suffer from conductance drift. In some interesting cases, these imperfections can instead be used as a computing resource. The nonlinearity of conductance modulation can accelerate the simulated annealing process in a transiently chaotic neural network for solving various optimization problems[6]. Moreover, the stochasticity of device conductance can be utilized as the random matrix in direct feedback alignment, reducing the training cost of neural networks (Fig. 1)[7].
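As a concrete illustration of the last point, below is a minimal Python sketch of direct feedback alignment on a toy task; the fixed random matrix B, which in hardware could be provided by stochastic device conductances, is assumed here, as are the network sizes and the synthetic labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))   # trained forward weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback, never updated

X = rng.normal(size=(200, n_in))
Y = np.stack([X[:, 0] > 0, X[:, 0] <= 0], axis=1).astype(float)  # toy labels

lr = 0.05
for epoch in range(50):
    for x, y in zip(X, Y):
        a1 = W1 @ x
        h = np.tanh(a1)
        e = W2 @ h - y                   # output error
        # DFA: project the error through the fixed random matrix B
        # instead of backpropagating through W2.T
        dh = (B @ e) * (1.0 - np.tanh(a1) ** 2)
        W2 -= lr * np.outer(e, h)
        W1 -= lr * np.outer(dh, x)
```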



For spike-driven neuromorphic computing, which involves the coding and representation of temporal information, neuromorphic devices should be able to process sequential spikes and respond distinctly to their timing. Spike-timing-dependent plasticity (STDP), which gives synapses computational significance, can be realized locally by a pair of connected volatile and nonvolatile memristors[8]. Furthermore, the leaky integrate-and-fire dynamics that characterize neurons can be realized by a volatile device together with its intrinsic capacitance[9]. The internal dynamics of devices, especially those exhibiting Mott phase transitions, can provide powerful computational functions such as chaotic neurons[10] and high-order dynamics[11]. These devices are naturally suitable for implementing spiking neural networks, and it will be appealing to find out how far computational complexity can ultimately be pushed by relying on device dynamics. Such efforts may, in practice, trade the simplicity of computing elements in the Turing machine framework for the rich complexity available in the neuromorphic computing framework.
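For illustration, the following minimal Python sketch combines a leaky integrate-and-fire neuron with a pair-based STDP rule; the time constants, threshold and learning amplitudes are assumed for demonstration and do not correspond to any specific device in Refs. [8, 9].

```python
import numpy as np

# Illustrative constants (assumed, not device-derived); time step of 1 unit
dt, tau_m, v_th, v_reset = 1.0, 20.0, 1.0, 0.0
tau_plus, tau_minus = 20.0, 20.0
a_plus, a_minus = 0.01, 0.012

def run_lif_stdp(pre_spikes, w, t_total=200):
    """Simulate one LIF neuron driven by one plastic synapse of weight w."""
    v, last_pre, last_post = 0.0, -np.inf, -np.inf
    post_spikes = []
    for t in range(t_total):
        v += dt * (-v / tau_m)                 # leaky decay of the membrane
        if t in pre_spikes:
            v += w                             # integrate the input spike
            last_pre = t
            # depression: this pre spike arrives after the last post spike
            w -= a_minus * np.exp(-(t - last_post) / tau_minus)
        if v >= v_th:                          # fire and reset
            post_spikes.append(t)
            v = v_reset
            last_post = t
            # potentiation: this post spike follows the last pre spike
            w += a_plus * np.exp(-(t - last_pre) / tau_plus)
    return post_spikes, w

spikes, w_final = run_lif_stdp(pre_spikes={5, 10, 15, 60, 65, 70}, w=0.5)
```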



Recently, it has been proved that a machine with neuromorphic completeness can solve any Turing-computable problem through approximation[12]. Its basic computing operations are vector-matrix multiplication and thresholding. A crossbar array of neuromorphic devices can compute the vector-matrix multiplication, the most computationally intensive part, with great efficiency. This computing method relies on Ohm's law and Kirchhoff's law, and the nonvolatile nature of memristors helps avoid frequent memory access. However, three major problems remain to be settled. First, current peripheral circuits, such as analog-to-digital converters, are not efficient enough, which may eat into the advantages that neuromorphic devices bring; it is therefore necessary to design specialized converters for specific classes of neuromorphic computing applications. Second, there is not yet a true spatial, dataflow-driven architecture for neuromorphic devices. Third, a neuromorphic device array offers little opportunity for hardware multiplexing, since the stored weights remain in place waiting to be used, and mapping a specific neural network onto limited hardware resources is often accompanied by mixed-precision quantization. It is thus worthwhile to develop EDA tools for automated deployment of different neural network models, in which different layers are arranged for hardware multiplexing.
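A minimal sketch of this idealized crossbar read, assuming simple conductance quantization and additive read noise as stand-ins for device nonidealities, may clarify the operation:

```python
import numpy as np

rng = np.random.default_rng(1)

def crossbar_vmm(G, v, levels=32, g_max=1e-4, read_noise=0.01):
    """Idealized crossbar read: I = G^T v (Ohm's law per cell, Kirchhoff
    current summation per column), with quantized conductances and additive
    read noise as simple stand-ins for device nonidealities (assumed)."""
    Gq = np.round(G / g_max * (levels - 1)) / (levels - 1) * g_max
    i_out = Gq.T @ v
    return i_out + rng.normal(0.0, read_noise * g_max, size=i_out.shape)

G = rng.uniform(0.0, 1e-4, size=(64, 32))  # 64 word lines x 32 bit lines
v = rng.uniform(-0.2, 0.2, size=64)        # read voltages on the word lines
currents = crossbar_vmm(G, v)              # one analog multiply-accumulate
```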



Although neuromorphic computing can achieve higher efficiency on some tasks and is Turing-equivalent to existing computing paradigms, its unique superiority is still unclear. Some studies have made preliminary explorations. Since spike-based representations encode time efficiently, neuromorphic hardware can detect the synchronization of spike sequences on fine time scales among noisy signals[9]. Furthermore, volatile and oscillating devices can serve as neuron groups for reservoir computing in the automatic generation of patterns[13, 14]. Oscillating devices can also realize microwave neural processing and broadcasting with great robustness[15]. More complex brain functions, such as consciousness, emotion and attention, remain important research topics in computational neuroscience, and their mechanisms are still elusive. Among them, working memory, a dynamic mode of information storage and processing in the brain, is assumed to underlie future advanced functions like attention and can be implemented on neuromorphic hardware (Fig. 1). A proper symmetric distribution of synaptic weights can form a continuous attractor neural network, in which the neuronal population code always represents states on a continuous curve or plane. It therefore has a significant advantage in processing dynamic spatiotemporal information compared with classical discrete storage. By introducing working memory into neuromorphic hardware, the computing paradigm may expand from the integration of storage and computing at the device level to structured storage at the system level. Furthermore, it is promising to replace a mathematically complex function, such as attention in the Transformer[16], with the internal dynamics of single devices. Exploring such dynamic spatiotemporal intelligence is beneficial for efficiently combining algorithms with physical devices, similar to what our brain does (Fig. 1).
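As a sketch of the continuous attractor idea, the following Python snippet builds a ring attractor with symmetric, distance-dependent weights; after a transient cue is removed, a bump of activity persists as a working-memory trace. All parameters are illustrative assumptions.

```python
import numpy as np

n = 128
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2.0 * np.pi - d)              # distance on the ring
# Symmetric weights: local excitation plus global inhibition (assumed values)
W = 8.5 * np.exp(-d ** 2 / 0.5) - 2.0

r = np.exp(-(theta - np.pi) ** 2 / 0.1)         # transient cue at angle pi
for _ in range(300):                            # cue removed; bump persists
    r += 0.1 * (-r + np.clip(W @ r / n, 0.0, 1.0))
print(theta[np.argmax(r)])                      # memory stays near pi
```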



In the more-than-Moore era, it is meaningful to make a transition in computing paradigm, to work out the tightly entangled theories, methods and standards, and to set benchmarks. As for neuromorphic computing, we believe that emerging neuromorphic devices will eventually trigger a radical shift in the computing paradigm. The new paradigm may first play a role in selected areas, e.g. edge computing with ultra-low power consumption, but will eventually lead to more capable computing systems, higher intelligence and vast applications.




Acknowledgements




This work was supported by the National Key R&D Program of China (2017YFA0207600), National Outstanding Youth Science Fund Project of National Natural Science Foundation of China (61925401), PKU-Baidu Fund Project (2019BD002), National Natural Science Foundation of China (92064004, 61927901, 61421005, 61674006), and the 111 Project (B18001). Y. Y. acknowledges the support from the Fok Ying-Tong Education Foundation, Beijing Academy of Artificial Intelligence (BAAI), and the Tencent Foundation through the XPLORER PRIZE.


