Pig was initially developed at Yahoo! to allow people using Apache Hadoop® to focus more on analyzing large data sets and spend less time writing mapper and reducer programs. Like actual pigs, which eat almost anything, the Pig programming language is designed to handle any kind of data—hence the name!
Pig is made up of two components: the first is the language itself, which is called PigLatin (yes, people naming various Hadoop projects do tend to have a sense of humor associated with their naming conventions), and the second is a runtime environment where PigLatin programs are executed. Think of the relationship between a Java Virtual Machine (JVM) and a Java application. In this section, we’ll just refer to the whole entity as Pig.
The programming language
Let’s first look at the programming language itself so you can see how it’s significantly easier than having to write mapper and reducer programs.
1. The first step in a Pig program is to LOAD the data you want to manipulate from HDFS.
2. Then you run the data through a set of transformations (which, under the covers, are translated into a set of mapper and reducer tasks).
3. Finally, you DUMP the data to the screen or you STORE the results in a file somewhere.
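The three steps above can be sketched as a short Pig Latin script (the file path and field names here are hypothetical):

```pig
-- 1. LOAD the data from HDFS
raw = LOAD 'input/sales.txt' USING PigStorage(',') AS (id:int, amount:double);

-- 2. Run it through a transformation
big = FILTER raw BY amount > 100.0;

-- 3. DUMP to the screen (or STORE the results in a file)
DUMP big;
```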
As with all Hadoop components, the objects that Pig works on are stored in HDFS. In order for a Pig program to access this data, the program must first tell Pig what file (or files) it will use, and that’s done through the LOAD ‘data_file’ command (where ‘data_file’ specifies either an HDFS file or directory). If a directory is specified, all the files in that directory are loaded into the program. If the data is stored in a file format that is not natively accessible to Pig, you can optionally add the USING clause to the LOAD statement to specify a user-defined function that can read in and interpret the data.
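For example, loading a directory pulls in every file it contains, and USING lets you plug in a custom reader (the paths below are hypothetical, and MyJsonLoader stands in for a user-defined load function):

```pig
-- Load a whole HDFS directory; every file inside is read
logs = LOAD '/data/logs' USING PigStorage('\t')
       AS (ts:chararray, level:chararray, msg:chararray);

-- Load a format Pig cannot read natively via a user-defined function
-- (com.example.MyJsonLoader is a hypothetical UDF, not a built-in)
events = LOAD '/data/events' USING com.example.MyJsonLoader()
         AS (user:chararray, action:chararray);
```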
The transformation logic is where all the data manipulation happens. Here you can FILTER out rows that are not of interest, JOIN two sets of data files, GROUP data to build aggregations, ORDER results, and much more.
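A sketch of these transformations chained together (relation and field names are hypothetical):

```pig
users  = LOAD 'users.csv'  USING PigStorage(',') AS (uid:int, country:chararray);
orders = LOAD 'orders.csv' USING PigStorage(',') AS (uid:int, total:double);

paid    = FILTER orders BY total > 0.0;            -- FILTER out rows
joined  = JOIN users BY uid, paid BY uid;          -- JOIN two data sets
grouped = GROUP joined BY users::country;          -- GROUP to build aggregations
totals  = FOREACH grouped GENERATE group, SUM(joined.paid::total);
ranked  = ORDER totals BY $1 DESC;                 -- ORDER results
```

Each statement defines a new relation from the previous one; under the covers Pig translates the whole chain into mapper and reducer tasks.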
DUMP and STORE
If you don’t specify the DUMP or STORE command, the results of a Pig program are never generated—none of the processing actually runs. You would typically use the DUMP command, which sends the output to the screen, when you are debugging your Pig programs. When you go into production, you simply change the DUMP call to a STORE call so that any results from running your programs are stored in a file for further processing or analysis. Note that you can use the DUMP command anywhere in your program to dump intermediate result sets to the screen, which is very useful for debugging purposes.
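The development-versus-production pattern looks like this in practice (paths and names are hypothetical):

```pig
raw      = LOAD 'input/sales.txt' USING PigStorage(',') AS (id:int, amount:double);
filtered = FILTER raw BY amount > 100.0;

-- While debugging: inspect the intermediate result on screen
DUMP filtered;

-- In production: swap DUMP for STORE to persist the results to HDFS
STORE filtered INTO 'output/filtered' USING PigStorage(',');
```

Because Pig only executes when it reaches a DUMP or STORE, you can sprinkle temporary DUMP statements through a script while debugging and remove them before deploying.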