
Course Detail

Course Name Emerging Architectures for Machine Learning
Course Code 25VL756
Program M. Tech. in VLSI Design
Credits 3
Campus Amritapuri, Coimbatore, Bengaluru, Chennai

Syllabus

Unit 1

Overview of Machine Learning and Deep Learning Models – Algorithm to hardware translation – Bitwidth – Fixed Point and Floating Point representations – Precision Effects
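As an illustrative sketch of the precision effects this unit covers (not part of the official syllabus), the snippet below quantizes a few weights to a signed fixed-point grid and shows how the error shrinks as fractional bits are added. The function name and Q-format choices are assumptions for illustration; it assumes NumPy is available.

```python
import numpy as np

def quantize_fixed_point(x, int_bits, frac_bits):
    """Round x onto a signed fixed-point grid with int_bits + frac_bits total bits
    (sign included in the integer field)."""
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))      # most negative code
    hi = 2 ** (int_bits + frac_bits - 1) - 1     # most positive code
    codes = np.clip(np.round(x * scale), lo, hi)
    return codes / scale

weights = np.array([0.7071, -0.3333, 0.0123, 1.5])
for frac in (3, 7, 15):
    q = quantize_fixed_point(weights, int_bits=2, frac_bits=frac)
    err = np.max(np.abs(weights - q))
    print(f"Q2.{frac}: max abs quantization error = {err:.6f}")
```

Increasing `frac_bits` halves the quantization step each time, which is the basic trade-off between bitwidth (hardware cost) and numerical precision studied in this unit.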

Unit 2

Least Mean Square Algorithm – Case Studies – Neural Network Implementations – Trade-offs
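A minimal sketch of the Least Mean Square (LMS) algorithm named above, assuming NumPy: an adaptive FIR filter identifies an unknown 4-tap filter from its noisy output. The filter length, step size, and signal lengths are illustrative assumptions, not values prescribed by the course.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unknown 4-tap FIR filter that the LMS adaptive filter should identify (assumed example).
true_w = np.array([0.5, -0.3, 0.2, 0.1])
n_taps = len(true_w)
x = rng.standard_normal(2000)
# Desired signal: unknown filter's output plus a little measurement noise.
d = np.convolve(x, true_w, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))

w = np.zeros(n_taps)
mu = 0.01  # step size; must be small enough for stable convergence
for n in range(n_taps, len(x)):
    u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
    e = d[n] - w @ u                     # a-priori error
    w = w + mu * e * u                   # LMS weight update

print("estimated taps:", np.round(w, 3))
```

The per-sample update uses only multiplies and adds, which is why LMS is a standard case study for algorithm-to-hardware translation and bitwidth trade-offs.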

Unit 3

Advanced Algorithms – Deep Learning Implementations – Introduction to Neuromorphic Architectures – Sparsity – Irregular Computations.
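To illustrate how sparsity is exploited in hardware-oriented kernels (an assumed sketch, not syllabus material), the snippet below builds the Compressed Sparse Row (CSR) representation of a matrix and performs a matrix-vector product that touches only the nonzero entries, assuming NumPy.

```python
import numpy as np

def dense_to_csr(a):
    """Compress a dense matrix into CSR arrays: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in a:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product: skips all zero entries entirely."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = values[start:end] @ x[col_idx[start:end]]
    return y

a = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 0.0, 4.0]])
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(*dense_to_csr(a), x))  # matches a @ x
```

The indirect accesses through `col_idx` are exactly the irregular memory pattern that makes sparse accelerators harder to design than dense ones.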

Objectives and Outcomes

Course Objectives

  • To introduce new paradigms in computing.
  • To familiarize students with various aspects of, and issues in, the implementation of machine learning systems.
  • To impart background on the application of FPGAs and unconventional computing platforms for machine learning.
  • To provide exposure to state-of-the-art computing tools.

Course Outcomes: At the end of the course, the student should be able to

  • CO1: Understand high-performance machine learning architectures.
  • CO2: Apply computing paradigms to machine intelligence problems.
  • CO3: Suggest solutions and platforms for dataflow-intensive problems.
  • CO4: Evaluate the use of diverse technologies to design efficient applications.

CO-PO Mapping:

CO/PO PO1 PO2 PO3 PSO1 PSO2 PSO3
CO1 2 3 3 2
CO2 2 3 3 2
CO3 2 3 3 2
CO4 2 3 3 2

Skills Acquired: Ability to develop architectures for Machine Learning.

Reference(s)

  1. Shiho Kim and Ganesh Chandra Deka (eds.), Advances in Computers, Volume 122: Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, ScienceDirect, 2021.
  2. Andres Rodriguez, Deep Learning Systems: Algorithms, Compilers, and Processors for Large-Scale Production, Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers, Oct. 2020.
  3. V. Sze, “Designing Hardware for Machine Learning,” IEEE Solid-State Circuits Magazine, vol. 9, no. 4, pp. 46–54, Fall 2017. (paper)
  4. Lei Deng, Guoqi Li, Song Han, Luping Shi, and Yuan Xie, “Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey,” Proceedings of the IEEE, vol. 108, no. 4, pp. 485–532, 2020. (paper)
  5. Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, and Baoxin Li, “Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights,” Proceedings of the IEEE, vol. 109, no. 10, pp. 1706–1752, 2021. (paper)

