Publication Type:

Conference Paper

Source:

2015 International Conference on Computing and Network Communications (CoCoNet), IEEE (2015)

URL:

https://ieeexplore.ieee.org/document/7411162/

Keywords:

Brain, cerebellar circuit, Cerebellar granule neurons, Communication systems, Complexity theory, Computational neuroscience, entropy, Firing, GPGPU, GPGPU hardware, GPGPU implementation, GPU optimizations, granular layer neuron analysis, graphics processing units, Information processing, information representation, information theoretic algorithms, information theoretic quantities, Instruction Sets, LTD conditions, LTP conditions, MI computation algorithm, multicompartmental model, mutual information, neural chips, neural circuits, neural nets, Neurons, neuroscience, Parallel processing, process time, simulated response trains, spatiotemporal phenomena, spatiotemporal processing, spike train analysis, stimulus discrimination reliability, synaptic plasticity conditions, task-level parallelism, Wistar rat neuron

Abstract:

Methods originally developed for communication systems are widely used in computational neuroscience to understand the information representation and processing performed by neurons and neural circuits in the brain. The information-theoretic quantities entropy and mutual information (MI) have been used in neuroscience as metrics to estimate the efficiency of information representation by neurons. These quantities are used here to measure the stimulus discrimination reliability of cerebellar granule neurons using simulated response trains produced by a multi-compartmental model of a Wistar rat neuron. With ~10^11 granule neurons in the cerebellum, understanding spatio-temporal processing in such structures demands efficient, fast algorithms. Since the serial version of the algorithm had multiple estimation loops whose process time grew considerably with problem size, we re-implemented the MI algorithm on GPGPU hardware as an efficient way of parallelizing the MI computations. Task-level parallelism and GPU optimizations were used to improve the process time. Estimates on GPGPUs showed a 15× speedup over the CPU version of the algorithm. To understand learning inside the cerebellar circuit, synaptic plasticity conditions were simulated in the neuron model. We were able to quantify the stimulus discrimination reliability of granule neurons under control, LTP, and LTD conditions, and the analysis revealed that the stimulus discrimination capability of the neuron increased in the high-plasticity state.
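The paper does not list its implementation, but the core quantity it parallelizes, mutual information between stimulus and response, is commonly estimated with a plug-in (histogram) estimator over trials. The sketch below is a minimal serial illustration of that idea, not the authors' GPGPU code; the function name and the toy stimulus/response data are illustrative only.

```python
import numpy as np

def plugin_mutual_information(stimuli, responses):
    """Plug-in (naive histogram) estimate of I(S;R) in bits.

    stimuli, responses: equal-length 1-D integer sequences, one entry per
    trial (e.g. a stimulus label and a binned spike count). Illustrative
    sketch only; real spike-train analyses need bias correction for
    limited trial counts.
    """
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    n = len(stimuli)
    # Build the joint probability table from co-occurrence counts.
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1)
    joint /= n
    # Marginals p(s) and p(r).
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    # Sum p(s,r) * log2( p(s,r) / (p(s) p(r)) ), treating 0 log 0 as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (ps * pr))
    return float(np.nansum(terms))

# Perfectly discriminable responses: MI equals the stimulus entropy (1 bit
# for two equiprobable stimuli).
mi = plugin_mutual_information([0, 0, 1, 1], [2, 2, 7, 7])
```

The nested loops over stimulus/response bins and over repeated estimations (e.g. across neurons or plasticity conditions) are what the paper maps onto GPU task-level parallelism.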

Cite this Research Publication

Manjusha Nair, Prasanth Madhu, Vyshnav Mohan, Arathi G Rajendran, Dr. Bipin G. Nair, and Dr. Shyam Diwakar, “GPGPU implementation of information theoretic algorithms for the analysis of granular layer neurons”, in 2015 International Conference on Computing and Network Communications (CoCoNet), IEEE, 2015.