FPGA-based low-latency, low-power stream processing AI

Domenik Helms, Mark Kettner, Behnam Razi Perjikolaei, Lukas Einhaus, Christopher Ringhofer, Gregor Schiele
European Workshop on On-Board Data Processing
The latency and power consumption of an embedded neural network application are usually dominated by the access time and the energy cost per memory access. From a technical point of view, the hundreds of thousands of look-up tables (LUTs) of a field-programmable gate array (FPGA) are nothing more than small but fast and energy-efficiently accessible memory blocks. If the accesses to block memory can be reduced or, as in our case, avoided altogether, the resulting neural network computes much faster and at far lower energy cost.
June 2021
article
LUTNet: An energy-efficient AI network of elementary lookup tables