Hadoop: Getting Started with Pig

What is Apache Pig?

Apache Pig is a high-level scripting language used with Apache Hadoop. It enables data analysts to write complex data transformations without knowing Java. Its simple, SQL-like scripting language is called Pig Latin, and it appeals to developers already familiar with scripting languages and SQL. Pig scripts are converted into MapReduce jobs that run on data stored in HDFS (refer to the diagram below).

Through its User Defined Function (UDF) facility, Pig can invoke code written in many languages, such as JRuby, Jython, and Java. You can also embed Pig scripts in other languages. The result is that you can use Pig as a component to build larger and more complex applications that tackle real business problems.

Architecture

Pig Architecture

How Is It Being Used?

  • Rapid prototyping of algorithms for processing large data sets.
  • Data Processing for web search platforms.
  • Ad Hoc queries across large data sets.
  • Web log processing.

Pig Elements

Pig consists of three elements:

  • Pig Latin
    • High-level scripting language
    • Schema is optional
    • Translated to MapReduce jobs
  • Pig Grunt Shell
    • Interactive shell for executing Pig commands.
  • PiggyBank
    • Shared repository for user-defined functions (explained later).

Pig Latin Statements 

Pig Latin statements are the basic constructs you use to process data with Pig. A Pig Latin statement is an operator that takes a relation as input and produces another relation as output (the exceptions are LOAD and STORE, which read data from and write data to the file system).

Pig Latin statements are generally organized as follows:

  • A LOAD statement to read data from the file system.
  • A series of “transformation” statements to process the data.
  • A DUMP statement to view results or a STORE statement to save the results.

Note that a DUMP or STORE statement is required to generate output.

  • In the first sketch below, Pig will validate, but not execute, the LOAD and FOREACH statements.
  • In the second sketch, Pig will validate and then execute the LOAD, FOREACH, and DUMP statements.
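The original snippets are not shown here, so the following are minimal sketches, assuming a tab-delimited 'student' file on HDFS (the file name and schema are illustrative):

    -- Sketch 1: validated but not executed; with no DUMP or STORE, no job runs
    A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
    B = FOREACH A GENERATE name;

    -- Sketch 2: DUMP triggers execution and prints the contents of B
    A = LOAD 'student' USING PigStorage() AS (name:chararray, age:int, gpa:float);
    B = FOREACH A GENERATE name;
    DUMP B;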

Storing Intermediate Results

Pig stores the intermediate data generated between MapReduce jobs in a temporary location on HDFS. This location must already exist on HDFS prior to use. This location can be configured using the pig.temp.dir property.
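For example, a quick sketch of overriding the default location from the command line (the path is illustrative):

    pig -Dpig.temp.dir=/user/hadoop/pig_temp myscript.pig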

Storing Final Results

Use the STORE operator and the load/store functions to write results to the file system (PigStorage is the default store function).
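A minimal sketch, continuing with the relation B from the earlier example (the output path is illustrative):

    -- writes B as comma-delimited text files under the given HDFS directory
    STORE B INTO 'output/student_names' USING PigStorage(',');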

Note: During the testing/debugging phase of your implementation, you can use DUMP to display results to your terminal screen. However, in a production environment you always want to use the STORE operator to save your results.

Debugging Pig Latin

Pig Latin provides operators that can help you debug your Pig Latin statements:

  • Use the DUMP operator to display results to your terminal screen.
  • Use the DESCRIBE operator to review the schema of a relation.
  • Use the EXPLAIN operator to view the logical, physical, or MapReduce execution plans used to compute a relation.
  • Use the ILLUSTRATE operator to view the step-by-step execution of a series of statements.
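A short sketch of a Grunt session, assuming the relations A and B from the earlier example:

    grunt> DESCRIBE B;
    B: {name: chararray}
    grunt> EXPLAIN B;    -- prints the logical, physical, and MapReduce plans
    grunt> ILLUSTRATE B; -- shows sample data flowing through each statement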

What are Pig User Defined Functions (UDFs)?

Pig provides extensive support for user-defined functions (UDFs) as a way to specify custom processing. Functions can be part of almost every operator in Pig, and UDFs are a powerful way to perform complex operations on data. The Piggy Bank is a place for users to share their functions (UDFs).

Example:
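The original example is not shown here; below is a minimal sketch based on the EvalFunc pattern from the Pig documentation (the package, class, and jar names are illustrative):

    package myudfs;

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // A simple UDF that upper-cases its single chararray argument
    public class UPPER extends EvalFunc<String> {
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0) {
                return null;
            }
            try {
                String str = (String) input.get(0);
                return str.toUpperCase();
            } catch (Exception e) {
                throw new IOException("Caught exception processing input row", e);
            }
        }
    }

Once compiled and packaged into a jar, the UDF can be registered and used from Pig Latin:

    REGISTER myudfs.jar;
    B = FOREACH A GENERATE myudfs.UPPER(name);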

I hope you now know some basic Pig concepts!

Happy Learning!!

Top 20 Hadoop and Big Data Books


Hadoop: The Definitive Guide


Hadoop: The Definitive Guide is the ideal guide for anyone who wants to learn about Apache Hadoop and all that can be done with it. It is a good book on the basics of Hadoop (HDFS, MapReduce, and other related technologies), and it provides all the details necessary to start working with Hadoop and programming with it.

“Now you have the opportunity to learn about Hadoop from a master – not only of the technology, but also of common sense and plain talk.” — Doug Cutting, Hadoop Founder, Yahoo!

The latest version, the 4th edition, is available here – Hadoop: The Definitive Guide, 4th Edition.


How-To: Become a Hadoop Certified Developer?


Apache Hadoop is an open-source framework for distributed storage and processing of large data sets on commodity hardware. Hadoop enables businesses to quickly gain insight from massive amounts of structured and unstructured data.

Hadoop and Big Data are the hot trends of the industry these days. Most companies are already implementing them, or have at least started showing interest, in order to remain competitive in the market. Big Data and analytics are certainly great concepts for the current and forthcoming IT generation, as most innovation is driven by the vast amounts of data being generated exponentially.

There are many vendors for Enterprise Hadoop in the industry – Cloudera, Hortonworks (spun off from Yahoo!), MapR, and IBM are some of the front runners among them. Each has its own Hadoop distribution; the distributions differ from one another in features while keeping Hadoop at their core. These vendors provide training on various Hadoop and Big Data technologies and, following the industry trend, are also coming out with certifications around these technologies.

In this article I am going to list all the latest available Hadoop certifications from different vendors in the industry. Whether certifications are helpful to your career or not is altogether a different debate and out of scope for this article. The list may be useful for those who think they have done enough reading and now want to test themselves, or for those who are looking to add value to their portfolios.

Cloudera        


CCAH (Administrator) Exams

Cloudera Certified Administrator for Apache Hadoop (CCAH)

There are currently three versions of this exam:

CCAH CDH 4 Exam
Exam Code: CCA-410
Number of Questions: 60
Time Limit: 90 minutes
Passing Score: 70%
Language: English, Japanese
Price: USD $295

CCAH CDH 5 Exam
Exam Code: CCA-500
Number of Questions: 60
Time Limit: 90 minutes
Passing Score: 70%
Language: English, Japanese (forthcoming)
Price: USD $295

CCAH CDH 5 Upgrade Exam
Exam Code: CCA-505
Number of Questions: 45
Time Limit: 90 minutes
Passing Score: 70%
Language: English, Japanese (forthcoming)
Price: USD $125

CCAH Practice Test

CCAH Study Guide

CCDH (Developer) Exams

Cloudera Certified Developer for Apache Hadoop (CCD-410)

Exam Code: CCD-410
Number of Questions: 50 – 55 live questions
Time Limit: 90 minutes
Passing Score: 70%
Language: English, Japanese
Price: USD $295

Study Guide: available at the Cloudera site.

Practice Tests: available at the Cloudera site.

Hortonworks


For 2.x Certifications

1) Hadoop 2.0 Java Developer Certification

This certification is intended for developers who design, develop and architect Hadoop-based solutions written in the Java programming language.

Time Limit: 90 minutes

Number of Questions: 50

Passing Score: 75%

Price: $150 USD

Practice tests can be taken by registering at the certification site.

2) Hadoop 2.0 Developer Certification

The Certified Apache Hadoop 2.0 Developer certification is intended for developers who design, develop and architect Hadoop-based solutions, consultants who create Hadoop project proposals and Hadoop development instructors.

Time Limit: 90 minutes

Number of Questions: 50

Passing Score: 75%

Price: $150 USD

Practice tests can be taken by registering at the certification site.

3) Hortonworks Certified Apache Hadoop 2.0 Administrator

This certification is intended for administrators who deploy and manage Apache Hadoop 2.0 clusters; the associated course teaches students how to install, configure, maintain, and scale a Hadoop 2.0 environment.

Time Limit: 90 minutes

Number of Questions: 48

Passing Score: 75%

Price: $150 USD

For 1.x Certifications

1) Hadoop 1.0 Developer Certification

Time Limit: 90 minutes

Number of Questions: 53

Passing Score: 75%

Price: $150 USD

2) Hadoop 1.0 Administrator Certification

Time Limit: 60 minutes

Number of Questions: 41

Passing Score: 75%

Price: $150 USD

 



String Interning – What, Why and When?

What is String Interning?

String interning is a method of storing only one copy of each distinct string value; the interned strings must be immutable.

In Java, the String class has a public method, intern(), that returns a canonical representation of the string object. Java’s String class privately maintains a pool of strings, in which string literals are automatically interned.

When the intern() method is invoked on a String object, it looks up the string contained by this String object in the pool. If the string is found there, the string from the pool is returned; otherwise, this String object is added to the pool and a reference to it is returned.

The intern() method makes it possible to compare two String objects with the == operator by relying on the pre-existing pool of string literals, which is undoubtedly faster than the equals() method. The pool of strings in Java is maintained to save space and to allow faster comparisons. Normally, Java programmers are advised to use equals(), not ==, to compare two strings: the == operator compares memory locations, while the equals() method compares the content stored in the two objects.

Why and When to Intern?

Though Java automatically interns all string literals, remember that we only need to intern strings when they are not constants and we want to be able to quickly compare them to other interned strings. The intern() method should be used on strings constructed with new String() in order to compare them with the == operator.

Let’s take a look at the following Java program to understand the intern() behavior.
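The original program is not shown here; the following is a minimal sketch of the behavior described above:

    public class InternDemo {
        public static void main(String[] args) {
            String literal = "hello";                  // a literal, automatically interned
            String constructed = new String("hello");  // a distinct object on the heap

            System.out.println(literal == constructed);           // false: different references
            System.out.println(literal.equals(constructed));      // true: same content
            System.out.println(literal == constructed.intern());  // true: intern() returns the pooled copy
        }
    }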

SOAP Web Services Using Apache CXF: Adding a Custom Object as a Header in Outgoing Requests

What is Apache CXF?

Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming APIs like JAX-WS and JAX-RS. These services can speak a variety of protocols, such as SOAP, XML/HTTP, RESTful HTTP, or CORBA, and work over a variety of transports, such as HTTP and JMS.

How Does CXF Work?

As you can see here, in the description of how CXF service calls are processed, most of the functionality in the Apache CXF runtime is implemented by interceptors. Every endpoint created by the Apache CXF runtime has potential interceptor chains for processing messages. The interceptors in these chains are responsible for transforming messages between the raw data transported across the wire and the Java objects handled by the endpoint’s implementation code.

Interceptors in CXF

When a CXF client invokes a CXF server, there is an outgoing interceptor chain for the client and an incoming chain for the server. When the server sends the response back to the client, there is an outgoing chain for the server and an incoming one for the client. Additionally, in the case of SOAPFaults, a CXF web service will create a separate outbound error handling chain and the client will create an inbound error handling chain.

The interceptors are organized into phases to ensure that processing happens in the proper order. The various phases involved in the interceptor chains are listed in the CXF documentation here.

Adding your own custom interceptor involves extending one of the abstract interceptor classes that CXF provides and specifying the phase in which that interceptor should be invoked.

AbstractPhaseInterceptor class – This abstract class provides implementations for the phase management methods of the PhaseInterceptor interface. The AbstractPhaseInterceptor class also provides a default implementation of the handleFault() method.

Developers need to provide an implementation of the handleMessage() method. They can also provide a different implementation for the handleFault() method. The developer-provided implementations can manipulate the message data using the methods provided by the generic org.apache.cxf.message.Message interface.

For applications that work with SOAP messages, Apache CXF provides an AbstractSoapInterceptor class. Extending this class provides the handleMessage() method and the handleFault() method with access to the message data as an org.apache.cxf.binding.soap.SoapMessage object. SoapMessage objects have methods for retrieving the SOAP headers, the SOAP envelope, and other SOAP metadata from the message.

The piece of code below shows how we can add a custom object as a header to an outgoing request.

Spring Configuration
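The original configuration is not shown here; a minimal sketch, assuming a JAX-WS client bean (the bean id, service class, address, and interceptor class name are illustrative):

    <!-- registers the custom interceptor on the client's outgoing chain -->
    <jaxws:client id="helloClient"
                  serviceClass="com.example.HelloService"
                  address="http://localhost:8080/services/hello">
        <jaxws:outInterceptors>
            <bean class="com.example.interceptor.CustomHeaderInterceptor"/>
        </jaxws:outInterceptors>
    </jaxws:client>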

Interceptor:
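The original class is not shown here; a minimal sketch using AbstractSoapInterceptor, where the header QName and MyHeaderObject (a hypothetical JAXB-annotated class) are illustrative:

    import javax.xml.bind.JAXBException;
    import javax.xml.namespace.QName;

    import org.apache.cxf.binding.soap.SoapMessage;
    import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
    import org.apache.cxf.headers.Header;
    import org.apache.cxf.interceptor.Fault;
    import org.apache.cxf.jaxb.JAXBDataBinding;
    import org.apache.cxf.phase.Phase;

    public class CustomHeaderInterceptor extends AbstractSoapInterceptor {

        public CustomHeaderInterceptor() {
            super(Phase.WRITE); // add the header while the outgoing message is being written
        }

        @Override
        public void handleMessage(SoapMessage message) throws Fault {
            try {
                // MyHeaderObject is a hypothetical JAXB-annotated class
                MyHeaderObject headerObj = new MyHeaderObject();
                headerObj.setClientId("client-123"); // hypothetical property

                QName qname = new QName("http://example.com/headers", "MyHeader");
                message.getHeaders().add(
                        new Header(qname, headerObj, new JAXBDataBinding(MyHeaderObject.class)));
            } catch (JAXBException e) {
                throw new Fault(e);
            }
        }
    }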

 



Getting Started with Hadoop: Free Online Hadoop Trainings


Oh Yes! It’s Free!!!

With the rising popularity of, increasing demand for, and lack of experts in Big Data and Hadoop technologies, various paid training courses and certifications are available from Enterprise Hadoop providers like Cloudera, Hortonworks, IBM, MapR, etc. But if you don’t want to shell out money and would rather learn at your own comfort and pace, you should definitely take a look at these free courses available online and explore the world of Hadoop and Big Data. Happy Hadooping 🙂

  1. Intro to Hadoop and MapReduce by Udacity – This course by Cloudera provides a nice explanation of the core concepts and internal workings of Hadoop components, with quizzes around each concept and some good hands-on exercises. They also provide a VM for training purposes, which can be used to run the examples and to solve the quizzes and exams for the course.

Goals –

  • How Hadoop fits into the world (recognize the problems it solves)
  • Understand the concepts of HDFS and MapReduce (find out how it solves the problems)
  • Write MapReduce programs (see how we solve the problems)
  • Practice solving problems on your own

Prerequisites –

Some basic programming knowledge and a good interest in learning 🙂

2. Introduction to MapReduce Programming by BigDataUniversity –

This is a good course on understanding the basics of Map and Reduce and how MapReduce applications work.

3. Moving Data into Hadoop

4. Introduction to YARN and MapReduce 2 – An excellent webinar covering how YARN changes the way distributed processing works.

 

