Implementation of Artificial Neural Networks (ANN) in hardware is needed to fully exploit their inherent parallelism. The presented work focuses on configuring a Field-Programmable Gate Array (FPGA) to realize the activation function used in an ANN. The computation of a nonlinear activation function (AF) is one of the factors that constrains either the silicon area or the computation time. The most popular AF is the log-sigmoid function, which can be realized in digital hardware in several ways: equation approximation, a Lookup Table (LUT) based approach, and Piecewise Linear Approximation (PWL), to mention a few. A two-fold approach to optimizing the resource requirement is presented here. First, fixed-point computation (FXP), which needs minimal hardware compared with floating-point computation (FLP), is adopted. Second, the PWL approximation of the AF, while offering higher precision, is shown to consume less silicon area than the LUT-based AF. Experimental results for the computation are presented.
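To illustrate the PWL idea described above, the following sketch approximates the log-sigmoid with a few linear segments whose slopes are negative powers of two, so that in hardware each multiplication reduces to a shift. The segmentation shown is a commonly cited PLAN-style scheme chosen for illustration; it is an assumption, not necessarily the exact segmentation used in the paper.

```python
import math

def pwl_sigmoid(x: float) -> float:
    """Piecewise linear approximation of the log-sigmoid 1/(1+exp(-x)).

    Illustrative PLAN-style segmentation (assumed, not the paper's exact
    scheme). Slopes are 0.25, 0.125, and 0.03125 (negative powers of two),
    so a fixed-point FPGA datapath can replace each multiply with a shift.
    """
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0                       # saturation region
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375    # multiply = right shift by 5
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625        # multiply = right shift by 3
    else:
        y = 0.25 * ax + 0.5           # multiply = right shift by 2
    # Only |x| is approximated; the symmetry sigmoid(-x) = 1 - sigmoid(x)
    # recovers the negative half, halving the stored segment count.
    return y if x >= 0.0 else 1.0 - y

# Compare against the exact log-sigmoid at a few sample points.
for x in (-4.0, -1.0, 0.0, 0.5, 2.0, 4.0):
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.2f}  pwl={pwl_sigmoid(x):.5f}  exact={exact:.5f}")
```

Because every slope and breakpoint is exactly representable in a short fixed-point format, the same table translates directly into adders and shifters on an FPGA, which is the source of the area saving over a LUT holding many sampled sigmoid values.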
Cited by (since 1996): 4; Conference Code: 76231
V. Saichand, M. Nirmala Devi, S. Arumugam, and N. Mohankumar, "FPGA Realization of Activation Function for Artificial Neural Networks", in Proc. IEEE 8th International Conference on Intelligent Systems Design and Applications (ISDA 2008), vol. 3, Taiwan, 2008, pp. 159-164.