
Top 10 Hadoop Shell Commands to manage HDFS

"Basically, our goal is to organize the world's information and to make it universally accessible and useful." - Larry Page

So you already know what Hadoop is, why it is used, and what problems you can solve with it, and now you want to know how to deal with files on HDFS? Don't worry, you are in the right place.

In this article I will present the top 10 basic Hadoop HDFS operations, managed through shell commands, that are useful for working with files on an HDFS cluster. For testing purposes, you can run these commands using one of the VMs from Cloudera, Hortonworks, etc., or on your own pseudo-distributed cluster setup.

Let’s get started.

1. Create a directory in HDFS at given path(s).
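
For example (the /user/hadoop paths in this and the later examples are placeholders; substitute your own):

# Usage:
hadoop fs -mkdir <paths>
# Example: create two directories in one call
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2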

2. List the contents of a directory.
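
Usage and a sample invocation:

# Usage:
hadoop fs -ls <args>
# Example: list the contents of a user's home directory
hadoop fs -ls /user/hadoop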

3. Upload and download a file in HDFS.

Upload:

hadoop fs -put:

Copies a single source file, or multiple source files, from the local file system to the Hadoop Distributed File System.
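
A typical invocation looks like this:

# Usage:
hadoop fs -put <localsrc> ... <dst>
# Example: upload a local file into an HDFS directory
hadoop fs -put localfile.txt /user/hadoop/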

Download:

hadoop fs -get:

Copies/downloads files from HDFS to the local file system.
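
And in the download direction:

# Usage:
hadoop fs -get <src> <localdst>
# Example: download an HDFS file into the current directory
hadoop fs -get /user/hadoop/file.txt ./file.txt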

4. See the contents of a file

Works the same as the Unix cat command:
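
# Usage:
hadoop fs -cat <path>
# Example: print a file stored in HDFS to stdout
hadoop fs -cat /user/hadoop/file.txt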

5. Copy a file from source to destination

This command also accepts multiple sources, in which case the destination must be a directory.
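
Here are the single-source and multi-source forms:

# Usage:
hadoop fs -cp <src> ... <dst>
# Example: copy one file within HDFS
hadoop fs -cp /user/hadoop/file1.txt /user/hadoop/dir1/
# Example: copy two files; the destination must be a directory
hadoop fs -cp /user/hadoop/file1.txt /user/hadoop/file2.txt /user/hadoop/dir1/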

6. Copy a file between the local file system and HDFS

copyFromLocal

Similar to the put command, except that the source is restricted to a local file reference.
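
For example:

# Usage:
hadoop fs -copyFromLocal <localsrc> <dst>
# Example: copy a local file into HDFS
hadoop fs -copyFromLocal localfile.txt /user/hadoop/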

copyToLocal

Similar to the get command, except that the destination is restricted to a local file reference.
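
And in the other direction:

# Usage:
hadoop fs -copyToLocal <src> <localdst>
# Example: copy an HDFS file to the local file system
hadoop fs -copyToLocal /user/hadoop/file.txt ./file.txt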

7. Move a file from source to destination.

Note: moving files across file systems is not permitted.
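
A sample move within HDFS:

# Usage:
hadoop fs -mv <src> <dst>
# Example: move a file into another HDFS directory
hadoop fs -mv /user/hadoop/file1.txt /user/hadoop/dir1/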

8. Remove a file or directory in HDFS.

Removes the files specified as arguments. Deletes a directory only when it is empty.
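
For example:

# Usage:
hadoop fs -rm <path>
# Example: remove a single file
hadoop fs -rm /user/hadoop/file.txt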

The recursive version of delete removes a directory along with everything under it.
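
For instance (the older hadoop fs -rmr shortcut does the same thing but is deprecated in recent Hadoop releases):

# Usage:
hadoop fs -rm -r <path>
# Example: remove a directory and all of its contents
hadoop fs -rm -r /user/hadoop/dir1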

9. Display the last few lines of a file.

Similar to the tail command in Unix.
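
A sample invocation:

# Usage:
hadoop fs -tail <file>
# Example: show the last kilobyte of a file on stdout
hadoop fs -tail /user/hadoop/file.txt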

10. Display the aggregate length of a file.
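
For example:

# Usage:
hadoop fs -du <path>
# Example: show the size of a file (or of each file in a directory)
hadoop fs -du /user/hadoop/file.txt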

 

Please comment on which of these commands you found most useful while dealing with Hadoop/HDFS.

