Welcome to the Beijing Academy of Agriculture and Forestry Sciences Institutional Knowledge Base!

An FPGA implementation of Bayesian inference with spiking neural networks

Document type: Foreign-language journal article

Authors: Li, Haoran 1; Wan, Bo 2; Fang, Ying 4; Li, Qifeng 6; Liu, Jian K. 7; An, Lingling 1

Author affiliations: 1.Xidian Univ, Guangzhou Inst Technol, Guangzhou, Peoples R China

2.Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China

3.Key Lab Smart Human Comp Interact & Wearable Techn, Xian, Peoples R China

4.Fujian Normal Univ, Coll Comp & Cyber Secur, Fuzhou, Peoples R China

5.Fujian Normal Univ, Digital Fujian Internet Of Thing Lab Environm Moni, Fuzhou, Peoples R China

6.Beijing Acad Agr & Forestry Sci, Res Ctr Informat Technol, Natl Engn Res Ctr Informat Technol Agr, Beijing, Peoples R China

7.Univ Birmingham, Sch Comp Sci, Birmingham, England

Keywords: spiking neural networks; probabilistic graphical models; Bayesian inference; importance sampling; FPGA

Journal: FRONTIERS IN NEUROSCIENCE (Impact Factor: 4.3; 5-Year Impact Factor: 5.2)

ISSN:

Year/Volume: 2024, Vol. 17

Pages:

Indexed in: SCI

Abstract: Spiking neural networks (SNNs), brain-inspired models that process information with spikes, offer low computational complexity and efficient energy consumption. There is a growing trend to design dedicated hardware accelerators for SNNs to overcome the limitations of running them on the traditional von Neumann architecture. Probabilistic sampling is an effective modeling approach for implementing SNNs that simulate how the brain performs Bayesian inference. However, sampling consumes considerable time, so dedicated hardware implementations of SNN sampling models are in high demand to accelerate inference. Here, we design an FPGA-based hardware accelerator that speeds up the execution of SNN algorithms through parallelization. We use streaming pipelining and array partitioning to accelerate model operations with the least possible resource consumption, and we use the Python productivity for Zynq (PYNQ) framework to migrate the model to the FPGA while increasing operation speed. We verify the functionality and performance of the hardware architecture on the Xilinx Zynq ZCU104. The experimental results show that the proposed hardware accelerator for the SNN sampling model significantly improves computing speed while maintaining inference accuracy. In addition, performing Bayesian inference with spiking neural networks through the PYNQ framework fully exploits the high performance and low power consumption of FPGAs in embedded applications. Taken together, our FPGA implementation of Bayesian inference with SNNs has great potential for a wide range of applications and is well suited to implementing complex probabilistic model inference in embedded systems.
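The importance-sampling approach to Bayesian inference named in the keywords can be illustrated with a minimal software sketch. This is not the authors' SNN or FPGA design; it is a plain NumPy example of self-normalized importance sampling on a toy Gaussian model, with all model parameters chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative, not from the paper): infer the mean mu of a
# Gaussian likelihood with known sigma, under a standard-normal prior.
observations = np.array([0.8, 1.1, 0.9, 1.3])
sigma = 0.5

# Draw samples from the prior, which serves as the proposal distribution.
n_samples = 100_000
mu_samples = rng.normal(loc=0.0, scale=1.0, size=n_samples)

# Importance weight of each sample is the data likelihood under that mu
# (the prior/proposal ratio cancels because the proposal *is* the prior).
log_w = np.sum(
    -0.5 * ((observations[None, :] - mu_samples[:, None]) / sigma) ** 2,
    axis=1,
)
w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
w /= w.sum()                     # self-normalize the weights

# Weighted average of the samples estimates the posterior mean of mu.
posterior_mean = np.sum(w * mu_samples)
```

The per-sample weight computation is embarrassingly parallel, which is the property the paper's streaming-pipeline and array-partitioning optimizations exploit on the FPGA.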
