Methods originally developed for communication systems are widely used in computational neuroscience to understand the information representation and processing performed by neurons and neural circuits in the brain. The information-theoretic quantities entropy and mutual information (MI) have been used in neuroscience as metrics for estimating the efficiency of information representation by neurons. Here, these quantities are used to measure the stimulus discrimination reliability of cerebellar granule neurons using simulated response trains produced by a multi-compartmental model of a Wistar rat neuron. With roughly 10^11 granule neurons in the cerebellum, understanding spatio-temporal processing in such structures demands efficient, fast algorithms. Since the serial version of the algorithm contained multiple estimation loops whose run time grew considerably with problem size, we re-implemented the MI algorithm on GPGPU hardware as an efficient way of parallelizing the MI computations. Task-level parallelism and GPU-specific optimizations were used to reduce the processing time, and the GPGPU estimates showed a 15X speedup over the CPU version of the algorithm. To understand learning inside the cerebellar circuit, synaptic plasticity conditions were simulated in the neuron model. We quantified the stimulus discrimination reliability of granule neurons under control, LTP and LTD conditions, and the analysis revealed that the stimulus discrimination capability of the neuron increased during the high-plasticity state.
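The abstract does not reproduce the estimator itself, but the MI between stimuli and discretized neural responses is conventionally computed with a plug-in (histogram) estimate over the joint stimulus-response distribution. The sketch below is a minimal serial illustration of that general technique, not the paper's GPGPU implementation; the function name and the discrete response encoding are assumptions for illustration only.

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of MI (in bits) between discrete stimulus labels
    and discretized response words.

    Illustrative sketch only: assumes responses have already been binned
    into discrete symbols (e.g. spike-count words per trial).
    """
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    mi = 0.0
    for s in np.unique(stimuli):
        p_s = np.mean(stimuli == s)                      # P(s)
        for r in np.unique(responses):
            p_r = np.mean(responses == r)                # P(r)
            p_sr = np.mean((stimuli == s) & (responses == r))  # P(s, r)
            if p_sr > 0:
                mi += p_sr * np.log2(p_sr / (p_s * p_r))
    return mi
```

The nested loops over stimulus and response symbols are precisely the kind of estimation loops that scale poorly on a CPU as the number of trials and response words grows, which motivates mapping the per-symbol probability computations onto GPU threads.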
Manjusha Nair, Prasanth Madhu, Vyshnav Mohan, Arathi G. Rajendran, Bipin G. Nair, and Shyam Diwakar, "GPGPU implementation of information theoretic algorithms for the analysis of granular layer neurons", in 2015 International Conference on Computing and Network Communications (CoCoNet), IEEE, 2015.