A hands-on session on installation, configuration, and network setup in a Hadoop distributed environment was conducted at Amrita Vishwa Vidyapeetham, Mysuru campus, on October 16, 2017. A. Bharanidharan, Assistant Professor, SNS College of Engineering, Coimbatore, was the resource person. Over 100 students from various UG and PG programs attended the open workshop and gained exposure to the Hadoop distributed platform and distributed computing environments.
The session began with an introduction to the Hadoop architecture and the main components of the Hadoop ecosystem, such as the NameNode, the JobTracker, and MapReduce operations. The resource person also discussed the importance of the Hadoop Distributed File System (HDFS) and its use in Big Data processing. During the first session, he gave a detailed explanation of the architecture: HDFS follows a master/slave design, in which an HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates client access to files, together with a number of DataNodes, usually one per node in the cluster, which manage the storage attached to the nodes they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations such as opening, closing, and renaming files and directories, and it also determines the mapping of blocks to DataNodes. The DataNodes serve read and write requests from the file system's clients and perform block creation, deletion, and replication upon instruction from the NameNode.
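The block-splitting and replication scheme described above can be sketched as a small simulation. This is purely illustrative, not Hadoop code: the node names are hypothetical, and real HDFS uses rack-aware replica placement rather than the simple round-robin shown here. The default block size (128 MB) and replication factor (3) match HDFS defaults.

```python
# Illustrative sketch (not Hadoop code): how HDFS conceptually splits a
# file into fixed-size blocks and replicates each block across DataNodes.
BLOCK_SIZE = 128 * 1024 * 1024   # HDFS default block size: 128 MB
REPLICATION = 3                  # HDFS default replication factor

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the number of blocks a file of file_size bytes occupies."""
    return (file_size + block_size - 1) // block_size  # ceiling division

def place_replicas(num_blocks, datanodes, replication=REPLICATION):
    """Map each block to `replication` DataNodes, round-robin.

    Real HDFS placement is rack-aware; this only illustrates the idea
    that the NameNode records a block -> DataNodes mapping.
    """
    placement = {}
    for b in range(num_blocks):
        placement[b] = [datanodes[(b + r) % len(datanodes)]
                        for r in range(replication)]
    return placement

# A 300 MB file occupies 3 blocks (two full 128 MB blocks plus a partial one).
blocks = split_into_blocks(300 * 1024 * 1024)
nodes = ["dn1", "dn2", "dn3", "dn4"]    # hypothetical DataNode names
print(blocks)                            # -> 3
print(place_replicas(blocks, nodes))
```

The NameNode holds only this metadata (the mapping); the block contents themselves live on the DataNodes, which is why clients read and write data directly from DataNodes after consulting the NameNode.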
The second session was a hands-on session focused on installing and configuring HDFS: troubleshooting common query operations in Hadoop, configuring the Hadoop site files, and setting up the master in a single-node system configuration were introduced to the students. The session gave full practical exposure, with hands-on work over a virtualized computing platform.
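For a single-node (pseudo-distributed) setup of the kind covered in the session, the configuration typically amounts to editing two site files. The property names below are the standard ones from the Apache Hadoop documentation; the host, port, and replication value are a common single-node choice, shown here only as an assumed example.

```xml
<!-- core-site.xml: point the default file system at the local NameNode
     (localhost:9000 is an assumed single-node address) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: one DataNode means replication factor 1 -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

With these in place, the usual sequence is to format the NameNode once (`hdfs namenode -format`) and then start the daemons with `start-dfs.sh`, after which the file system can be exercised with `hdfs dfs` commands.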