
Getting Started with Hadoop: Free Online Hadoop Trainings

Hadoop Trainings

Oh Yes! It’s Free!!!

With the rising popularity of, increasing demand for, and lack of experts in Big Data and Hadoop technologies, various paid training courses and certifications are available from enterprise Hadoop providers like Cloudera, Hortonworks, IBM, MapR, etc. But if you don’t want to shell out money and would rather learn at your own comfort and pace, you should definitely take a look at these free online courses and explore the world of Hadoop and Big Data. Happy Hadooping 🙂

  1. Intro to Hadoop and MapReduce by Udacity – This course by Cloudera provides a nice explanation of the core concepts and internal workings of Hadoop components, embedded with quizzes around each concept and some good hands-on exercises. They also provide a VM for training purposes, which can be used to run the examples and to solve the quizzes and exams for the course.

Goals –

  • How Hadoop fits into the world (recognize the problems it solves)
  • Understand the concepts of HDFS and MapReduce (find out how it solves the problems)
  • Write MapReduce programs (see how we solve the problems)
  • Practice solving problems on your own

Prerequisites –

Some basic programming knowledge and a good interest in learning 🙂

2. Introduction to MapReduce Programming by BigDataUniversity –

This is a good course on understanding the basics of Map and Reduce and how MapReduce applications work.

3. Moving Data into Hadoop

4. Introduction to YARN and MapReduce 2 – An excellent webinar covering how YARN can change the way distributed processing works.



Recommended Readings for Hadoop

I am writing this series to mention some of the recommended readings to understand Hadoop, its architecture, the minute details of cluster setup, etc.

Understanding Hadoop Cluster Setup and Network – Brad Hedlund, with his expertise in networks, provides minute details of the cluster setup and the data exchange mechanisms of a typical Hadoop cluster.

MongoDB and Hadoop – Webinar by Mike O’Brien, Software Engineer at MongoDB, on how MongoDB and Hadoop can be used together, using core MapReduce as well as Pig and Hive.

Please post comments if you have come across some great article/webinar link which explains things in great detail with ease.

Top 10 Hadoop Shell Commands to manage HDFS

Basically, our goal is to organize the world’s information and to make it universally accessible and useful. – Larry Page

So you already know what Hadoop is, what it is used for, and what problems you can solve with it, and now you want to know how to deal with files on HDFS? Don’t worry, you are at the right place.

In this article I will present the top 10 basic Hadoop HDFS operations, managed through shell commands, which are useful for working with files on HDFS clusters. For testing purposes, you can invoke these commands using either one of the VMs from Cloudera, Hortonworks, etc., or your own pseudo-distributed cluster setup.

Let’s get started.

1. Create a directory in HDFS at given path(s).
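For example (the paths here are just illustrative):

```shell
# Create a single directory on HDFS
hadoop fs -mkdir /user/hadoop/dir1

# Or create several directories at once
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
```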

2.  List the contents of a directory.
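A sample invocation (sample path):

```shell
# List the files and subdirectories under an HDFS path
hadoop fs -ls /user/hadoop

# List recursively (on older releases, -lsr)
hadoop fs -ls -R /user/hadoop
```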

3. Upload and download a file in HDFS.


hadoop fs -put:

Copies a single source file, or multiple source files, from the local file system to the Hadoop distributed file system.
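For instance (file names are illustrative):

```shell
# Upload a local file into an HDFS directory
hadoop fs -put localfile.txt /user/hadoop/

# With multiple sources, the destination must be a directory
hadoop fs -put file1.txt file2.txt /user/hadoop/dir1
```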


hadoop fs -get:

Copies/downloads files from HDFS to the local file system.
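For example (paths are illustrative):

```shell
# Download an HDFS file into a local directory
hadoop fs -get /user/hadoop/file.txt /tmp/
```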

4. See contents of a file

Same as the Unix cat command:
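For example (sample path):

```shell
# Print the contents of an HDFS file to stdout
hadoop fs -cat /user/hadoop/file.txt
```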

5. Copy a file from source to destination

This command also allows multiple sources, in which case the destination must be a directory.
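A sketch with illustrative paths:

```shell
# Copy a file within HDFS
hadoop fs -cp /user/hadoop/file1.txt /user/hadoop/dir1/

# Multiple sources: the destination must be a directory
hadoop fs -cp /user/hadoop/file1.txt /user/hadoop/file2.txt /user/hadoop/dir1
```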

6. Copy a file between the local file system and HDFS


Similar to the put command, except that the source is restricted to a local file reference.
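For example (file names are illustrative):

```shell
# Copy a local file into HDFS
hadoop fs -copyFromLocal localfile.txt /user/hadoop/
```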


Similar to the get command, except that the destination is restricted to a local file reference.
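For example (paths are illustrative):

```shell
# Copy an HDFS file to the local file system
hadoop fs -copyToLocal /user/hadoop/file.txt /tmp/
```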

7. Move a file from source to destination.

Note: Moving files across filesystems is not permitted.
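For example (sample paths, both within the same HDFS):

```shell
# Move (rename) a file within HDFS
hadoop fs -mv /user/hadoop/file1.txt /user/hadoop/dir1/
```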

8. Remove a file or directory in HDFS.

Removes the files specified as arguments. Deletes a directory only when it is empty.
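For example (sample path):

```shell
# Delete a file from HDFS
hadoop fs -rm /user/hadoop/file1.txt
```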

Recursive version of delete.
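A sample invocation (on older releases the equivalent was `-rmr`):

```shell
# Delete a directory and everything under it
hadoop fs -rm -r /user/hadoop/dir1
```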

9. Display the last few lines of a file.

Similar to the tail command in Unix.
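For example (sample path):

```shell
# Show the last kilobyte of an HDFS file
hadoop fs -tail /user/hadoop/file.txt
```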

10. Display the aggregate length of a file.
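For example (sample path):

```shell
# Show the size, in bytes, of a file or of each entry in a directory
hadoop fs -du /user/hadoop/file.txt
```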


Please comment on which of these commands you found most useful while dealing with Hadoop/HDFS.

