HDFS is the distributed file system used in Hadoop and serves the purpose of storing very large files on commodity hardware. While working on Hadoop, and on BigData in general, it is very important to understand the basic concepts of the underlying file system, i.e. HDFS in the case of Hadoop. When you are appearing in BigData interviews, it is important to know these concepts. Let's see some of the basic HDFS interview questions –
- What is HDFS?
- How does a write work in HDFS?
- What are the key processes in HDFS?
- What is a NameNode and a DataNode?
- What is Block caching?
- What is HDFS federation?
- What is HDFS high-availability?
- What is the difference between NAS and HDFS?
- What is a block in HDFS? What is the importance of blocks?
- What is the command to change the replication factor of a file in HDFS?
- What is the command to copy a file from the local filesystem to HDFS?
- What is the command to copy a file from HDFS to the local filesystem?
- What is the underlying mechanism involved to write a file in HDFS?
- What is the underlying mechanism involved to read a file in HDFS?
- What is distcp? Where can it be used?
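Several of the questions above ask for specific commands. A quick sketch of those commands is shown below; the file paths and NameNode addresses are hypothetical placeholders, and the commands assume access to a running HDFS cluster.

```shell
# Change the replication factor of a file to 3
# (-w waits until the new replication is achieved)
hdfs dfs -setrep -w 3 /user/alice/data.txt

# Copy a file from the local filesystem to HDFS
hdfs dfs -copyFromLocal localfile.txt /user/alice/localfile.txt

# Copy a file from HDFS back to the local filesystem
hdfs dfs -copyToLocal /user/alice/localfile.txt ./localfile.txt

# distcp: distributed copy between clusters, executed as a MapReduce job
hadoop distcp hdfs://namenode1:8020/src hdfs://namenode2:8020/dest
```

Note that `hdfs dfs -put` and `hdfs dfs -get` behave like `-copyFromLocal` and `-copyToLocal` for the common case, and are the forms you will often see in practice.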
Also read about basic HDFS commands at Hadoop Shell Commands.