
LUT Neural Networks

Background

  1. The basic building block of an FPGA is the LUT (look-up table). If a LUT is viewed as a logic compute unit (see the sketches after this list):
    1. The LUT's truth table can encode static weights.
    2. Some of the LUT's inputs can carry dynamic weights.
  2. The FPGA's reconfigurability, compared with a fixed-function AI processor:
    1. Some or all of the dynamic weights (operands) can be baked into static truth tables, greatly improving PPA (power, performance, area), as sketched below.
    2. Different models and parameters can be supported by reconfiguring the FPGA's logic cells.
    3. This only suits inference: during training the weights cannot, in principle, be made static, so the PPA advantage is limited.
  3. Using FPGA LUTs as the core compute unit of an AI chip:
    1. helps standardize the hardware.
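
A minimal sketch of the idea in point 1, in Python. Assumptions not in the original note: a binarized neuron in the style of references 3–5 (weights and activations in {-1, +1}, encoded here as bits 0/1) and hypothetical helper names. With the weight bits routed in as ordinary LUT inputs (dynamic weights), each 1-bit "multiply" is an XNOR, and the whole neuron is just a Boolean function, so its truth table is exactly what a LUT stores.

```python
from itertools import product

def binarized_neuron(acts, weights):
    """BNN-style neuron: XNOR each activation with its weight, count the
    matches (popcount), and threshold at half the fan-in.
    Bit encoding: 0 stands for -1, 1 stands for +1."""
    matches = sum(1 for a, w in zip(acts, weights) if a == w)  # XNOR + popcount
    return 1 if 2 * matches >= len(acts) else 0

def truth_table(n_inputs, func):
    """Enumerate func over every input pattern -- the table a LUT would store."""
    return [func(bits) for bits in product((0, 1), repeat=n_inputs)]

# Dynamic weights: a 6-input LUT can hold a 3-input neuron if the 3 weight
# bits are also fed in as LUT inputs (inputs = 3 activations + 3 weights).
dyn_table = truth_table(6, lambda bits: binarized_neuron(bits[:3], bits[3:]))
print(len(dyn_table))  # 64 entries, but only 3 of the 6 inputs are activations
```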
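
A second sketch for point 2 (constant-folding the weights), again in Python and self-contained; the 64-bit packing is only an analogy to the INIT value of a Xilinx 6-input LUT (reference 7), whose exact bit ordering is not reproduced here. Once the trained weights are fixed, evaluating the neuron over every activation pattern yields a truth table in which the weights no longer occupy any inputs, so all 6 LUT inputs carry activations; switching to a different model or parameter set just means recomputing these bits and reconfiguring the LUT.

```python
from itertools import product

def binarized_neuron(acts, weights):
    """Same BNN-style neuron as above: XNOR + popcount + threshold
    (bit 0 encodes -1, bit 1 encodes +1)."""
    matches = sum(1 for a, w in zip(acts, weights) if a == w)
    return 1 if 2 * matches >= len(acts) else 0

def folded_lut_table(weights):
    """Constant-fold a fixed 6-element weight vector into a 6-input LUT:
    evaluate the neuron for all 2**6 activation patterns and pack the outputs
    into one 64-bit word (bit i = output for the i-th pattern in enumeration
    order), analogous to a LUT6 INIT value."""
    word = 0
    for i, acts in enumerate(product((0, 1), repeat=6)):
        word |= binarized_neuron(acts, weights) << i
    return word

# Two different weight vectors ("models") give two different table words;
# swapping models rewrites the truth table instead of rebuilding a datapath.
print(hex(folded_lut_table((1, 1, 0, 1, 0, 0))))
print(hex(folded_lut_table((0, 0, 1, 1, 1, 1))))
```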

References

  1. BinaryBrain
    https://github.com/ryuz/BinaryBrain

  2. バイナリニューラルネットとハードウェアの関係 (Binary Neural Networks and Their Relationship to Hardware)
    https://www.slideshare.net/kentotajiri/ss-77136469

  3. BinaryConnect: Training Deep Neural Networks with binary weights during propagations
    https://arxiv.org/pdf/1511.00363.pdf

  4. Binarized Neural Networks
    https://arxiv.org/abs/1602.02505

  5. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
    https://arxiv.org/abs/1602.02830

  6. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
    https://arxiv.org/abs/1603.05279

  7. Xilinx UltraScale Architecture Configurable Logic Block User Guide
    https://japan.xilinx.com/support/documentation/user_guides/ug574-ultrascale-clb.pdf