Hortonworks Data Platform Certified Developer

Question No : 1 –
Review the following data and Pig code:
What command to define B would produce the output (M,62,95102) when invoking the
DUMP operator on B?
A. B = FILTER A BY (zip == '95102' AND gender == 'M');
B. B = FOREACH A BY (gender == 'M' AND zip == '95102');
C. B = JOIN A BY (gender == 'M' AND zip == '95102');
D. B = GROUP A BY (zip == '95102' AND gender == 'M');
Answer : A
Question No : 2 –
To process input key-value pairs, your mapper needs to load a 512 MB data file into memory.
What is the best way to accomplish this?
A. Serialize the data file, insert in it the JobConf object, and read the data into memory in the configure method of the mapper.
B. Place the data file in the DistributedCache and read the data into memory in the map method of the mapper.
C. Place the data file in the DataCache and read the data into memory in the configure method of the mapper.
D. Place the data file in the DistributedCache and read the data into memory in the configure method of the mapper.
Answer : D
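For illustration, a minimal sketch of this approach (placing the file in the DistributedCache and loading it once in the mapper's configure() method) using the older org.apache.hadoop.mapred API. The cache file name, its tab-delimited layout, and the enrichment logic are assumptions made for this example, not part of the question.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class LookupMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private final Map<String, String> lookup = new HashMap<String, String>();

  // configure() runs once per map task, so the large file is loaded a single time.
  @Override
  public void configure(JobConf job) {
    try {
      Path[] cached = DistributedCache.getLocalCacheFiles(job);
      if (cached != null && cached.length > 0) {
        BufferedReader reader = new BufferedReader(new FileReader(cached[0].toString()));
        String line;
        while ((line = reader.readLine()) != null) {
          String[] parts = line.split("\t", 2);   // assumed tab-delimited layout
          if (parts.length == 2) {
            lookup.put(parts[0], parts[1]);
          }
        }
        reader.close();
      }
    } catch (IOException e) {
      throw new RuntimeException("Failed to load cached lookup file", e);
    }
  }

  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    String enriched = lookup.get(value.toString());   // in-memory lookup per record
    output.collect(value, new Text(enriched == null ? "" : enriched));
  }
}

In the driver, the file would be registered before job submission with something like DistributedCache.addCacheFile(new URI("/data/lookup.dat"), jobConf); the path shown is hypothetical.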
Question No : 3 –
A client application creates an HDFS file named foo.txt with a replication factor of 3. Identify
which best describes the file access rules in HDFS if the file has a single block that is
stored on data nodes A, B and C?
A. The file will be marked as corrupted if data node B fails during the creation of the file.
B. Each data node locks the local file to prohibit concurrent readers and writers of the file.
C. Each data node stores a copy of the file in the local file system with the same name as the HDFS file.
D. The file can be accessed if at least one of the data nodes storing the file is available.
Answer : D
Explanation: HDFS keeps three copies of a block on three different datanodes to protect against true data corruption. HDFS also tries to distribute these three replicas on more than one rack to protect against data availability issues. The fact that HDFS actively monitors any failed datanode(s) and upon failure detection immediately schedules re-replication of blocks (if needed) implies that three copies of data on three different nodes is sufficient to avoid corrupted files.
Note: HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration there are three copies of a data block in HDFS: two copies are stored on datanodes on the same rack and the third copy on a different rack.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, How the HDFS Blocks are replicated?
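As a small illustrative sketch of the point that replication is a per-file setting that can be changed after creation (the path and replication values below are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/joe/foo.txt");   // hypothetical path

    // Create the file with a replication factor of 3 (the usual default).
    FSDataOutputStream out = fs.create(file, (short) 3);
    out.writeUTF("example content");
    out.close();

    // The replication factor can be changed later; the NameNode then schedules
    // re-replication (or deletion of excess replicas) as needed.
    fs.setReplication(file, (short) 2);

    fs.close();
  }
}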
Question No : 4 –
In Hadoop 2.2, which one of the following statements is true about a standby NameNode?
The Standby NameNode:
A. Communicates directly with the active NameNode to maintain the state of the active NameNode.
B. Receives the same block reports as the active NameNode.
C. Runs on the same machine and shares the memory of the active NameNode.
D. Processes all client requests and block reports from the appropriate DataNodes.
Answer : B
Question No : 5 –
Given a directory of files with the following structure: line number, tab character, string:
Example:
1abialkjfjkaoasdfjksdlkjhqweroij
2kadfjhuwqounahagtnbvaswslmnbfgy
3kjfteiomndscxeqalkzhtopedkfsikj
You want to send each line as one record to your Mapper. Which InputFormat should you
use to complete the line: conf.setInputFormat (____.class) ; ?
A. SequenceFileAsTextInputFormat
B. SequenceFileInputFormat
C. KeyValueTextInputFormat
D. BDBInputFormat
Answer : C
Explanation: http://stackoverflow.com/questions/9721754/how-to-parse-customwritable-from-text-in-hadoop
Question No : 6 –
Which one of the following classes would a Pig command use to store data in a table
defined in HCatalog?
A. org.apache.hcatalog.pig.HCatOutputFormat
B. org.apache.hcatalog.pig.HCatStorer
C. No special class is needed for a Pig script to store data in an HCatalog table
D. Pig scripts cannot use an HCatalog table
Answer : B
Question No : 7 –
Examine the following Hive statements:
Assuming the statements above execute successfully, which one of the following
statements is true?
A. Hive reformats File1 into a structure that Hive can access and moves it to /user/joe/x/
B. The file named File1 is moved to /user/joe/x/
C. The contents of File1 are parsed as comma-delimited rows and loaded into /user/joe/x/
D. The contents of File1 are parsed as comma-delimited rows and stored in a database
Answer : B
Question No : 8 –
For each input key-value pair, mappers can emit:
A. As many intermediate key-value pairs as designed. There are no restrictions on the types of those key-value pairs (i.e., they can be heterogeneous).
B. As many intermediate key-value pairs as designed, but they cannot be of the same type as the input key-value pair.
C. One intermediate key-value pair, of a different type.
D. One intermediate key-value pair, but of the same type.
E. As many intermediate key-value pairs as designed, as long as all the keys have the same types and all the values have the same type.
Answer : E
Explanation: Mapper maps input key/value pairs to a set of intermediate key/value pairs. Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs. Reference: Hadoop Map-Reduce Tutorial
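A short sketch of this behavior, assuming a word-count style job (class name is illustrative): the mapper below emits zero or many (Text, IntWritable) pairs for each (LongWritable, Text) input record, so the intermediate types also differ from the input types.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {        // zero or more outputs per input record
      word.set(tokens.nextToken());
      context.write(word, ONE);             // intermediate (Text, IntWritable) pair
    }
  }
}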
Question No : 9 –
How are keys and values presented and passed to the reducers during a standard sort and
shuffle phase of MapReduce?
A. Keys are presented to reducer in sorted order; values for a given key are not sorted.
B. Keys are presented to reducer in sorted order; values for a given key are sorted in ascending order.
C. Keys are presented to a reducer in random order; values for a given key are not sorted.
D. Keys are presented to a reducer in random order; values for a given key are sorted in ascending order.
Answer : A
Explanation: The Reducer has 3 primary phases:
1. Shuffle: the Reducer copies the sorted output from each Mapper using HTTP across the network.
2. Sort: the framework merge-sorts Reducer inputs by key (since different Mappers may have output the same key). The shuffle and sort phases occur simultaneously, i.e. while outputs are being fetched they are merged. Secondary sort: to achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. The keys will be sorted using the entire key, but will be grouped using the grouping comparator to decide which keys and values are sent in the same call to reduce.
3. Reduce: in this phase the reduce(Object, Iterable, Context) method is called for each <key, (collection of values)> in the sorted inputs. The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object). The output of the Reducer is not re-sorted.
Reference: org.apache.hadoop.mapreduce, Class Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
Question No : 10 –
When can a reduce class also serve as a combiner without affecting the output of a
MapReduce program?
A. When the types of the reduce operation's input key and input value match the types of the reducer's output key and output value and when the reduce operation is both commutative and associative.
B. When the signature of the reduce method matches the signature of the combine method.
C. Always. Code can be reused in Java since it is a polymorphic object-oriented programming language.
D. Always. The point of a combiner is to serve as a mini-reducer directly after the map phase to increase performance.
E. Never. Combiners and reducers must be implemented separately because they serve different purposes.
Answer : A
Question No : 11 –
You have the following key-value pairs as output from your Map task:
(the, 1)
(fox, 1)
(faster, 1)
(than, 1)
(the, 1)
(dog, 1)
How many keys will be passed to the Reducer's reduce method?
A. Six
B. Five
C. Four
D. Two
E. One
F. Three
Answer : B
Explanation: The two (the, 1) pairs are grouped under a single key, so five unique keys are passed to the reduce method.
Question No : 12 –
Examine the following Pig commands:
Which one of the following statements is true?
A. The SAMPLE command generates an “unexpected symbol” error
B. Each MapReduce task will terminate after executing for 0.2 minutes
C. The reducers will only output the first 20% of the data passed from the mappers
D. A random sample of approximately 20% of the data will be output
Answer : D
Question No : 13 –
MapReduce v2 (MRv2/YARN) splits which major functions of the JobTracker into separate
daemons? Select two.
A. Health status checks (heartbeats)
B. Resource management
C. Job scheduling/monitoring
D. Job coordination between the ResourceManager and NodeManager
E. Launching tasks
F. Managing file system metadata
G. MapReduce metric reporting
H. Managing tasks
Answer : B,C
Explanation: The fundamental idea of MRv2 is to split up the two major functionalities of the JobTracker, resource management and job scheduling/monitoring, into separate daemons. The idea is to have a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job in the classical sense of MapReduce jobs or a DAG of jobs.
Note: The central goal of YARN is to clearly separate two things that are unfortunately smushed together in current Hadoop, specifically in (mainly) the JobTracker: monitoring the status of the cluster with respect to which nodes have which resources available (under YARN, this will be global), and managing the parallel execution of any specific job (under YARN, this will be done separately for each job).
Reference: Apache Hadoop YARN Concepts & Applications
Question No : 14 –
Which one of the following statements describes the relationship between the
NodeManager and the ApplicationMaster?
A. The ApplicationMaster starts the NodeManager in a Container
B. The NodeManager requests resources from the ApplicationMaster
C. The ApplicationMaster starts the NodeManager outside of a Container
D. The NodeManager creates an instance of the ApplicationMaster
Answer : D
Question No : 15 –
Which HDFS command displays the contents of the file x in the user’s HDFS home
directory?
A. hadoop fs -ls x
B. hdfs fs -get x
C. hadoop fs -cat x
D. hadoop fs -cp x
Answer : C
Question No : 16 –
Which one of the following Hive commands uses an HCatalog table named x?
A. SELECT * FROM x;
B. SELECT x.* FROM org.apache.hcatalog.hive.HCatLoader('x');
C. SELECT * FROM org.apache.hcatalog.hive.HCatLoader(‘x’);
D. Hive commands cannot reference an HCatalog table
Answer : A
Question No : 17 –
What types of algorithms are difficult to express in MapReduce v1 (MRv1)?
A. Algorithms that require applying the same mathematical function to large numbers of individual binary records.
B. Relational operations on large amounts of structured and semi-structured data.
C. Algorithms that require global, shared state.
D. Large-scale graph algorithms that require one-step link traversal.
E. Text analysis algorithms on large collections of unstructured text (e.g, Web crawls).
Answer : C
Explanation: See 3) below. Limitations of MapReduce, where not to use MapReduce: While very powerful and applicable to a wide variety of problems, MapReduce is not the answer to every problem. Here are some problems I found where MapReduce is not suited and some papers that address the limitations of MapReduce.
1. Computation depends on previously computed values. If the computation of a value depends on previously computed values, then MapReduce cannot be used. One good example is the Fibonacci series where each value is the summation of the previous two values, i.e., f(k+2) = f(k+1) + f(k). Also, if the data set is small enough to be computed on a single machine, then it is better to do it as a single reduce(map(data)) operation rather than going through the entire map reduce process.
2. Full-text indexing or ad hoc searching. The index generated in the Map step is one dimensional, and the Reduce step must not generate a large amount of data or there will be a serious performance degradation. For example, CouchDB's MapReduce may not be a good fit for full-text indexing or ad hoc searching. This is a problem better suited for a tool such as Lucene.
3. Algorithms depend on shared global state. Solutions to many interesting problems in text processing do not require global synchronization. As a result, they can be expressed naturally in MapReduce, since map and reduce tasks run independently and in isolation. However, there are many examples of algorithms that depend crucially on the existence of shared global state during processing, making them difficult to implement in MapReduce (since the single opportunity for global synchronization in MapReduce is the barrier between the map and reduce phases of processing).
Reference: Limitations of Mapreduce where not to use Mapreduce
Question No : 18 –
Which one of the following is NOT a valid Oozie action?
A. mapreduce
B. pig
C. hive
D. mrunit
Answer : D
Question No : 19 –
Given the following Pig command:
logevents = LOAD 'input/my.log' AS (date:chararray, level:string, code:int,
message:string);
Which one of the following statements is true?
A. The logevents relation represents the data from the my.log file, using a comma as the parsing delimiter
B. The logevents relation represents the data from the my.log file, using a tab as the parsing delimiter
C. The first field of logevents must be a properly-formatted date string or table return an error
D. The statement is not a valid Pig command
Answer : B
Question No : 20 –
You have written a Mapper which invokes the following five calls to the
OutputCollector.collect method:
output.collect(new Text("Apple"), new Text("Red"));
output.collect(new Text("Banana"), new Text("Yellow"));
output.collect(new Text("Apple"), new Text("Yellow"));
output.collect(new Text("Cherry"), new Text("Red"));
output.collect(new Text("Apple"), new Text("Green"));
How many times will the Reducer's reduce method be invoked?
A. 6
B. 3
C. 1
D.
E. 5
Answer : B
Explanation: reduce() gets called once for each [key, (list of values)] pair. To explain, let's say you called:
out.collect(new Text("Car"), new Text("Subaru"));
out.collect(new Text("Car"), new Text("Honda"));
out.collect(new Text("Car"), new Text("Ford"));
out.collect(new Text("Truck"), new Text("Dodge"));
out.collect(new Text("Truck"), new Text("Chevy"));
Then reduce() would be called twice with the pairs:
reduce(Car, <Subaru, Honda, Ford>)
reduce(Truck, <Dodge, Chevy>)
Reference: Mapper output.collect()?
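Applied to the five collect calls in this question, a reducer such as the following sketch (old mapred API; the class name and counting logic are illustrative) would have its reduce() method invoked three times: once for (Apple, [Red, Yellow, Green]), once for (Banana, [Yellow]), and once for (Cherry, [Red]).

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class ColorCountReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterator<Text> values,
                     OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int count = 0;
    while (values.hasNext()) {   // iterate the values grouped under this key
      values.next();
      count++;
    }
    output.collect(key, new IntWritable(count));   // e.g. (Apple, 3)
  }
}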
Question No : 21 –
You want to ingest log files into HDFS. Which tool would you use?
A. HCatalog
B. Flume
C. Sqoop
D. Ambari
Answer : B
Question No : 22 –
You use the hadoop fs -put command to write a 300 MB file using an HDFS block size of
64 MB. Just after this command has finished writing 200 MB of this file, what would another
user see when trying to access this file?
A. They would see Hadoop throw a ConcurrentFileAccessException when they try to access this file.
B. They would see the current state of the file, up to the last bit written by the command.
C. They would see the current contents of the file through the last completed block.
D. They would see no content until the whole file is written and closed.
Answer : C
Question No : 23 –
You need to create a job that does frequency analysis on input data. You will do this by
writing a Mapper that uses TextInputFormat and splits each value (a line of text from an
input file) into individual characters. For each one of these characters, you will emit the
character as a key and an IntWritable as the value. As this will produce proportionally
more intermediate data than input data, which two resources should you expect to be
bottlenecks?
A. Processor and network I/O
B. Disk I/O and network I/O
C. Processor and RAM
D. Processor and disk I/O
Answer : B
Question No : 24 –
You want to understand more about how users browse your public website, such as which
pages they visit prior to placing an order. You have a farm of 200 web servers hosting your
website. How will you gather this data for your analysis?
A. Ingest the server web logs into HDFS using Flume.
B. Write a MapReduce job, with the web servers for mappers, and the Hadoop cluster nodes for reducers.
C. Import all users’ clicks from your OLTP databases into Hadoop, using Sqoop.
D. Channel these clickstreams into Hadoop using Hadoop Streaming.
E. Sample the weblogs from the web servers, copying them into Hadoop using curl.
Answer : A
Question No : 25 –
Can you use MapReduce to perform a relational join on two large tables sharing a key?
Assume that the two tables are formatted as comma-separated files in HDFS.
A. Yes.
B. Yes, but only if one of the tables fits into memory
C. Yes, so long as both tables fit into memory.
D. No, MapReduce cannot perform relational operations.
E. No, but it can be done with either Pig or Hive.
Answer : A
Explanation: Note: Join algorithms in MapReduce: A) reduce-side join, B) map-side join, C) in-memory join (striped variant, memcached variant). Which join to use? In-memory join > map-side join > reduce-side join. Limitations of each: the in-memory join is limited by memory; the map-side join requires a specific sort order and partitioning; the reduce-side join is general purpose.
Question No : 26 –
In the reducer, the MapReduce API provides you with an iterator over Writable values.
What does calling the next () method return?
A. It returns a reference to a different Writable object each time.
B. It returns a reference to a Writable object from an object pool.
C. It returns a reference to the same Writable object each time, but populated with different data.
D. It returns a reference to a Writable object. The API leaves unspecified whether this is a reused object or a new object.
E. It returns a reference to the same Writable object if the next value is the same as the previous value, or a new Writable object otherwise.
Answer : C
Explanation: Calling Iterator.next() will always return the SAME EXACT instance of IntWritable, with the contents of that instance replaced with the next value. Reference: manupulating iterator in mapreduce
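A sketch of the practical consequence (class and field names are illustrative): values must be copied if they are kept beyond the current loop iteration, because the iterator keeps handing back the same reused instance populated with new contents.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CopyingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    List<IntWritable> kept = new ArrayList<IntWritable>();
    for (IntWritable value : values) {
      // WRONG: kept.add(value) would store the same reused instance repeatedly.
      kept.add(new IntWritable(value.get()));   // copy the current contents instead
    }
    context.write(key, new IntWritable(kept.size()));
  }
}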
Question No : 27 –
Analyze each scenario below and identify which best describes the behavior of the default
partitioner?
A. The default partitioner assigns key-value pairs to reducers based on an internal random number generator.
B. The default partitioner implements a round-robin strategy, shuffling the key-value pairs to each reducer in turn. This ensures an even partition of the key space.
C. The default partitioner computes the hash of the key. Hash values between specific ranges are associated with different buckets, and each bucket is assigned to a specific reducer.
D. The default partitioner computes the hash of the key and takes that value modulo the number of reducers. The result determines the reducer assigned to process the key-value pair.
E. The default partitioner computes the hash of the value and takes the mod of that value with the number of reducers. The result determines the reducer assigned to process the key-value pair.
Answer : D
Explanation: The default partitioner computes a hash value for the key and assigns the partition based on this result. The default Partitioner implementation is called HashPartitioner. It uses the hashCode() method of the key objects modulo the number of partitions total to determine which partition to send a given (key, value) pair to. In Hadoop, the default partitioner is HashPartitioner, which hashes a record's key to determine which partition (and thus which reducer) the record belongs in. The number of partitions is then equal to the number of reduce tasks for the job. Reference: Getting Started With (Customized) Partitioning
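The logic described above can be sketched as a small Partitioner of your own; this mirrors what the stock HashPartitioner does: hash the key, clear the sign bit, and take the result modulo the number of reduce tasks.

import org.apache.hadoop.mapreduce.Partitioner;

public class SimpleHashPartitioner<K, V> extends Partitioner<K, V> {

  @Override
  public int getPartition(K key, V value, int numReduceTasks) {
    // The Integer.MAX_VALUE mask keeps the hash non-negative before the modulo.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}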
Question No : 28 –
Which one of the following statements describes a Pig bag, tuple, and map, respectively?
A. Unordered collection of maps, ordered collection of tuples, ordered set of key/value pairs
B. Unordered collection of tuples, ordered set of fields, set of key value pairs
C. Ordered set of fields, ordered collection of tuples, ordered collection of maps
D. Ordered collection of maps, ordered collection of bags, and unordered set of key/value pairs
Answer : B
Question No : 29 –
You want to run Hadoop jobs on your development workstation for testing before you
submit them to your production cluster. Which mode of operation in Hadoop allows you to
most closely simulate a production cluster while using a single machine?
A. Run all the nodes in your production cluster as virtual machines on your development workstation.
B. Run the hadoop command with the -jt local and the -fs file:/// options.
C. Run the DataNode, TaskTracker, NameNode and JobTracker daemons on a single machine.
D. Run simldooop, the Apache open-source software for simulating Hadoop clusters.
Answer : C
Question No : 30 –
You want to count the number of occurrences for each unique word in the supplied input
data. You've decided to implement this by having your mapper tokenize each word and
emit a literal value 1, and then have your reducer increment a counter for each literal 1 it
receives. After successfully implementing this, it occurs to you that you could optimize this by
specifying a combiner. Will you be able to reuse your existing Reducer as your combiner in
this case and why or why not?
A. Yes, because the sum operation is both associative and commutative and the input and output types to the reduce method match.
B. No, because the sum operation in the reducer is incompatible with the operation of a Combiner.
C. No, because the Reducer and Combiner are separate interfaces.
D. No, because the Combiner is incompatible with a mapper which doesn't use the same data type for both the key and value.
E. Yes, because Java is a polymorphic object-oriented language and thus reducer code can be reused as a combiner.
Answer : A
Explanation: Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What are combiners? When should I use a combiner in my MapReduce Job?
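A self-contained driver sketch of this reuse (class, job, and path names are illustrative): summing counts is commutative and associative, and the reducer's input and output types are both (Text, IntWritable), so the same class is registered as combiner and as reducer.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {

  public static class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);   // emit (word, 1)
        }
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable value : values) {
        sum += value.get();
      }
      context.write(key, new IntWritable(sum));   // same key/value types in and out
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count with combiner");
    job.setJarByClass(WordCountWithCombiner.class);
    job.setMapperClass(WordMapper.class);
    job.setCombinerClass(SumReducer.class);   // reducer class reused as the combiner
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}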
Question No : 31 –
In a MapReduce job with 500 map tasks, how many map task attempts will there be?
A. It depends on the number of reduces in the job.
B. Between 500 and 1000.
C. At most 500.
D. At least 500.
E. Exactly 500.
Answer : D
Explanation: From the Cloudera training course: a task attempt is a particular instance of an attempt to execute a task. There will be at least as many task attempts as there are tasks. If a task attempt fails, another will be started by the JobTracker. Speculative execution can also result in more task attempts than completed tasks.
Question No : 32 –
Table metadata in Hive is:
A. Stored as metadata on the NameNode.
B. Stored along with the data in HDFS.
C. Stored in the Metastore.
D. Stored in ZooKeeper.
Answer : C
Explanation: By default, Hive uses an embedded Derby database to store metadata information. The metastore is the "glue" between Hive and HDFS. It tells Hive where your data files live in HDFS, what type of data they contain, what tables they belong to, etc. The Metastore is an application that runs on an RDBMS and uses an open source ORM layer called DataNucleus to convert object representations into a relational schema and vice versa. They chose this approach as opposed to storing this information in HDFS as they need the Metastore to be very low latency. The DataNucleus layer allows them to plug in many different RDBMS technologies. Note: * By default, Hive stores metadata in an embedded Apache Derby database, and other client/server databases like MySQL can optionally be used. * Features of Hive include: metadata storage in an RDBMS, significantly reducing the time to perform semantic checks during query execution. Reference: Store Hive Metadata into RDBMS
Question No : 33 –
You wrote a map function that throws a runtime exception when it encounters a control
character in input data. The input supplied to your mapper contains twelve such characters
in total, spread across five file splits. The first four file splits each have two control characters
and the last split has four control characters.
Identify the number of failed task attempts you can expect when you run the job with
mapred.max.map.attempts set to 4:
A. You will have forty-eight failed task attempts
B. You will have seventeen failed task attempts
C. You will have five failed task attempts
D. You will have twelve failed task attempts
E. You will have twenty failed task attempts
Answer : E
Explanation: There will be four failed task attempts for each of the five file splits, for a total of twenty.
Question No : 34 –
Which one of the following statements regarding the components of YARN is FALSE?
A. A Container executes a specific task as assigned by the ApplicationMaster
B. The ResourceManager is responsible for scheduling and allocating resources
C. A client application submits a YARN job to the ResourceManager
D. The ResourceManager monitors and restarts any failed Containers
Answer : D
Question No : 35 –
You are developing a MapReduce job for sales reporting. The mapper will process input
keys representing the year (IntWritable) and input values representing product identifiers
(Text).
Identify what determines the data types used by the Mapper for a given job.
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValuesClass methods
B. The data types specified in HADOOP_MAP_DATATYPES environment variable
C. The mapper-specification.xml file submitted with the job determine the mappers input key and value types.
D. The InputFormat used by the job determines the mapper’s input key and value types.
Answer : D
Explanation: The input types fed to the mapper are controlled by the InputFormat used. The default input format, "TextInputFormat," will load data in as (LongWritable, Text) pairs. The long value is the byte offset of the line in the file. The Text object holds the string contents of the line of the file. Note: The data types emitted by the reducer are identified by setOutputKeyClass() and setOutputValueClass(). By default, it is assumed that these are the output types of the mapper as well. If this is not the case, the setMapOutputKeyClass() and setMapOutputValueClass() methods of the JobConf class will override these. Reference: Yahoo! Hadoop Tutorial, THE DRIVER METHOD
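A short sketch of the relationship (class names are illustrative): choosing KeyValueTextInputFormat in the driver means the mapper must be parameterized with (Text, Text) input types, because that is what this InputFormat delivers (splitting each line at the first tab).

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class KeyValueMapper extends Mapper<Text, Text, Text, Text> {

  @Override
  protected void map(Text key, Text value, Context context)
      throws IOException, InterruptedException {
    context.write(key, value);   // identity pass-through for illustration
  }

  // The InputFormat chosen here is what fixes the mapper's input types.
  static void configure(Job job) {
    job.setInputFormatClass(KeyValueTextInputFormat.class);
    job.setMapperClass(KeyValueMapper.class);
  }
}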
Question No : 36 –
What are the TWO main components of the YARN ResourceManager process? Choose 2
answers
A. Job Tracker
B. Task Tracker
C. Scheduler
D. Applications Manager
Answer : C,D
Question No : 37 –
In Hadoop 2.2, which TWO of the following processes work together to provide automatic
failover of the NameNode? Choose 2 answers
A. ZKFailoverController
B. ZooKeeper
C. QuorumManager
D. JournalNode
Answer : A,B
Question No : 38 –
Identify which best defines a SequenceFile?
A. A SequenceFile contains a binary encoding of an arbitrary number of homogeneous Writable objects
B. A SequenceFile contains a binary encoding of an arbitrary number of heterogeneous Writable objects
C. A SequenceFile contains a binary encoding of an arbitrary number of WritableComparable objects, in sorted order.
D. A SequenceFile contains a binary encoding of an arbitrary number of key-value pairs. Each key must be the same type. Each value must be the same type.
Answer : D
Explanation: SequenceFile is a flat file consisting of binary key/value pairs. There are 3 different SequenceFile formats: Uncompressed key/value records. Record compressed key/value records – only ‘values’ are compressed here. Block compressed key/value records – both keys and values are collected in ‘blocks’ separately and compressed. The size of the ‘block’ is configurable. Reference: http://wiki.apache.org/hadoop/SequenceFile
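A minimal write-side sketch (the path and records are hypothetical, using one of the SequenceFile.createWriter overloads): every key is a Text and every value is an IntWritable, matching the answer above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/user/joe/example.seq");   // hypothetical path

    SequenceFile.Writer writer =
        SequenceFile.createWriter(fs, conf, path, Text.class, IntWritable.class);
    try {
      writer.append(new Text("apple"), new IntWritable(3));    // homogeneous keys...
      writer.append(new Text("banana"), new IntWritable(1));   // ...and values
    } finally {
      writer.close();
    }
  }
}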
Question No : 39 –
Which TWO of the following statements are true regarding Hive? Choose 2 answers
A. Useful for data analysts familiar with SQL who need to do ad-hoc queries
B. Offers real-time queries and row level updates
C. Allows you to define a structure for your unstructured Big Data
D. Is a relational database
Answer : A,C
Question No : 40 –
You need to move a file titled weblogs into HDFS. When you try to copy the file, you can't.
You know you have ample space on your DataNodes. Which action should you take to
relieve this situation and store more files in HDFS?
A. Increase the block size on all current files in HDFS.
B. Increase the block size on your remaining files.
C. Decrease the block size on your remaining files.
D. Increase the amount of memory for the NameNode.
E. Increase the number of disks (or size) for the NameNode.
F. Decrease the block size on all current files in HDFS.
Answer : C
Question No : 41 –
Consider the following two relations, A and B.
Which Pig statement combines A by its first field and B by its second field?
A. C = JOIN B BY a1, A by b2;
B. C = JOIN A by a1, B by b2;
C. C = JOIN A a1, B b2;
D. C = JOIN A $0, B $1;
Answer : B
Question No : 42 –
Which best describes how TextInputFormat processes input files and line breaks?
A. Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReader of the split that contains the beginning of the broken line.
B. Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReaders of both splits containing the broken line.
C. The input file is split exactly at the line breaks, so each RecordReader will read a series of complete lines.
D. Input file splits may cross line breaks. A line that crosses file splits is ignored.
E. Input file splits may cross line breaks. A line that crosses file splits is read by the RecordReader of the split that contains the end of the broken line.
Answer : A
Reference: How Map and Reduce operations are actually carried out
Question No : 43 –
You want to perform analysis on a large collection of images. You want to store this data in
HDFS and process it with MapReduce but you also want to give your data analysts and
data scientists the ability to process the data directly from HDFS with an interpreted high-level
programming language like Python. Which format should you use to store this data in
HDFS?
A. SequenceFiles
B. Avro
C. JSON
D. HTML
E. XML
F. CSV
Answer : B
Reference: Hadoop binary files processing introduced by image duplicates finder
Question No : 44 –
What is the disadvantage of using multiple reducers with the default HashPartitioner and
distributing your workload across your cluster?
A. You will not be able to compress the intermediate data.
B. You will no longer be able to take advantage of a Combiner.
C. By using multiple reducers with the default HashPartitioner, output files may not be in globally sorted order.
D. There are no concerns with this approach. It is always advisable to use multiple reducers.
Answer : C
Explanation: Multiple reducers and total ordering: If your sort job runs with multiple reducers (either because mapreduce.job.reduces in mapred-site.xml has been set to a number larger than 1, or because you've used the -r option to specify the number of reducers on the command line), then by default Hadoop will use the HashPartitioner to distribute records across the reducers. Use of the HashPartitioner means that you can't concatenate your output files to create a single sorted output file. To do this you'll need total ordering. Reference: Sorting text files with MapReduce
Question No : 45 –
Your cluster's HDFS block size is 64 MB. You have a directory containing 100 plain text files,
each of which is 100MB in size. The InputFormat for your job is TextInputFormat.
Determine how many Mappers will run?
A. 64
B. 100
C. 200
D. 640
Answer : C
Explanation: Each file would be split into two as the block size (64 MB) is less than the file size (100 MB), so 200 mappers would be running. Note: If you're not compressing the files then Hadoop will process your large files (say 10G) with a number of mappers related to the block size of the file. Say your block size is 64M, then you will have ~160 mappers processing this 10G file (160*64 ~= 10G). Depending on how CPU intensive your mapper logic is, this might be an acceptable block size, but if you find that your mappers are executing in sub-minute times, then you might want to increase the work done by each mapper (by increasing the block size to 128, 256, 512m – the actual size depends on how you intend to process the data). Reference: http://stackoverflow.com/questions/11014493/hadoop-mapreduce-appropriate-input-files-size (first answer, second paragraph)
Question No : 46 –
Which one of the following statements is true about a Hive-managed table?
A. Records can only be added to the table using the Hive INSERT command.
B. When the table is dropped, the underlying folder in HDFS is deleted.
C. Hive dynamically defines the schema of the table based on the FROM clause of a SELECT query.
D. Hive dynamically defines the schema of the table based on the format of the underlying data.
Answer : B
Question No : 47 –
Which describes how a client reads a file from HDFS?
A. The client queries the NameNode for the block location(s). The NameNode returns the block location(s) to the client. The client reads the data directly off the DataNode(s).
B. The client queries all DataNodes in parallel. The DataNode that contains the requested data responds directly to the client. The client reads the data directly off the DataNode.
C. The client contacts the NameNode for the block location(s). The NameNode then queries the DataNodes for block locations. The DataNodes respond to the NameNode, and the NameNode redirects the client to the DataNode that holds the requested data block(s). The client then reads the data directly off the DataNode.
D. The client contacts the NameNode for the block location(s). The NameNode contacts the DataNode that holds the requested data block. Data is transferred from the DataNode to the NameNode, and then from the NameNode to the client.
Answer : A
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, How the Client communicates with HDFS?
Question No : 48 –
Given the following Pig commands:
Which one of the following statements is true?
A. The $1 variable represents the first column of data in ‘my.log’
B. The $1 variable represents the second column of data in ‘my.log’
C. The severe relation is not valid
D. The grouped relation is not valid
Answer : B
Question No : 49 –
Assuming the following Hive query executes successfully:
Which one of the following statements describes the result set?
A. A bigram of the top 80 sentences that contain the substring “you are” in the lines column of the input data A1 table.
B. An 80-value ngram of sentences that contain the words “you” or “are” in the lines column of the inputdata table.
C. A trigram of the top 80 sentences that contain “you are” followed by a null space in the lines column of the inputdata table.
D. A frequency distribution of the top 80 words that follow the subsequence “you are” in the lines column of the inputdata table.
Answer : D
Question No : 50 –
On a cluster running MapReduce v1 (MRv1), a TaskTracker heartbeats into the JobTracker
on your cluster, and alerts the JobTracker it has an open map task slot.
What determines how the JobTracker assigns each map task to a TaskTracker?
A. The amount of RAM installed on the TaskTracker node.
B. The amount of free disk space on the TaskTracker node.
C. The number and speed of CPU cores on the TaskTracker node.
D. The average system load on the TaskTracker node over the past fifteen (15) minutes.
E. The location of the InputSplit to be processed in relation to the location of the node.
Answer : E
Explanation: The TaskTrackers send out heartbeat messages to the JobTracker, usually every few seconds, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, How JobTracker schedules a task?
Question No : 51 –
In a large MapReduce job with m mappers and n reducers, how many distinct copy
operations will there be in the sort/shuffle phase?
A. mXn (i.e., m multiplied by n)
B. n
C. m
D. m+n (i.e., m plus n)
E. m^n (i.e., m to the power of n)
Answer : A
Explanation: A MapReduce job with m mappers and n reducers involves up to m * n distinct copy operations, since each mapper may have intermediate output going to every reducer.
Question No : 52 –
A combiner reduces:
A. The number of values across different keys in the iterator supplied to a single reduce method call.
B. The amount of intermediate data that must be transferred between the mapper and reducer.
C. The number of input files a mapper must process.
D. The number of output files a reducer must produce.
Answer : B
Explanation: Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What are combiners? When should I use a combiner in my MapReduce Job?
Question No : 53 –
Which process describes the lifecycle of a Mapper?
A. The JobTracker calls the TaskTracker's configure() method, then its map() method and finally its close() method.
B. The TaskTracker spawns a new Mapper to process all records in a single input split.
C. The TaskTracker spawns a new Mapper to process each key-value pair.
D. The JobTracker spawns a new Mapper to process all records in a single file.
Answer : B
Explanation: For each map instance that runs, the TaskTracker creates a new instance of your mapper.
Note: The Mapper is responsible for processing key/value pairs obtained from the InputFormat. The mapper may perform a number of extraction and transformation functions on the key/value pair before ultimately outputting none, one or many key/value pairs of the same, or different, key/value type. With the new Hadoop API, mappers extend the org.apache.hadoop.mapreduce.Mapper class. This class defines an 'identity' map function by default: every input key/value pair obtained from the InputFormat is written out. Examining the run() method, we can see the lifecycle of the mapper:
/** Expert users can override this method for more complete control over the execution of the Mapper. */
public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  while (context.nextKeyValue()) {
    map(context.getCurrentKey(), context.getCurrentValue(), context);
  }
  cleanup(context);
}
setup(Context) performs any setup for the mapper; the default implementation is a no-op method. map(Key, Value, Context) performs a map operation on the given key/value pair; the default implementation calls Context.write(Key, Value). cleanup(Context) performs any cleanup for the mapper; the default implementation is a no-op method.
Reference: Hadoop/MapReduce/Mapper
Question No : 54 –
You need to perform statistical analysis in your MapReduce job and would like to call
methods in the Apache Commons Math library, which is distributed as a 1.3 megabyte
Java archive (JAR) file. Which is the best way to make this library available to your
MapReduce job at runtime?
A. Have your system administrator copy the JAR to all nodes in the cluster and set its location in the HADOOP_CLASSPATH environment variable before you submit your job.
B. Have your system administrator place the JAR file on a Web server accessible to all cluster nodes and then set the HTTP_JAR_URL environment variable to its location.
C. When submitting the job on the command line, specify the -libjars option followed by the JAR file path.
D. Package your code and the Apache Commons Math library into a zip file named JobJar.zip
Answer : C
Explanation: The usage of the jar command is: hadoop jar <jar> [mainClass] args... If you want the commons-math3.jar to be available to all the tasks, you can do either of these: 1. copy the JAR file into the $HADOOP_HOME/lib dir, or 2. use the generic option -libjars.
Question No : 55 –
Given the following Hive commands:
Which one of the following statements is true?
A. The file mydata.txt is copied to a subfolder of /apps/hive/warehouse
B. The file mydata.txt is moved to a subfolder of /apps/hive/warehouse
C. The file mydata.txt is copied into Hive’s underlying relational database.
D. The file mydata.txt does not move from its current location in HDFS
Answer : A
Question No : 56 –
Examine the following Hive statements:
Assuming the statements above execute successfully, which one of the following
statements is true?
A. Each reducer generates a file sorted by age
B. The SORT BY command causes only one reducer to be used
C. The output of each reducer is only the age column
D. The output is guaranteed to be a single file with all the data sorted by age
Answer : A
Question No : 57 –
Which one of the following statements is false about HCatalog?
A. Provides a shared schema mechanism
B. Designed to be used by other programs such as Pig, Hive and MapReduce
C. Stores HDFS data in a database for performing SQL-like ad-hoc queries
D. Exists as a subproject of Hive
Answer : C
Question No : 58 –
You write a MapReduce job to process 100 files in HDFS. Your MapReduce algorithm uses
TextInputFormat: the mapper applies a regular expression over input values and emits key-
value pairs with the key consisting of the matching text, and the value containing the
filename and byte offset. Determine the difference between setting the number of reducers
to one and setting the number of reducers to zero.
A. There is no difference in output between the two settings.
B. With zero reducers, no reducer runs and the job throws an exception. With one reducer, instances of matching patterns are stored in a single file on HDFS.
C. With zero reducers, all instances of matching patterns are gathered together in one file on HDFS. With one reducer, instances of matching patterns are stored in multiple files on HDFS.
D. With zero reducers, instances of matching patterns are stored in multiple files on HDFS. With one reducer, all instances of matching patterns are gathered together in one file on HDFS.
Answer : D
Explanation: * It is legal to set the number of reduce-tasks to zero if no reduction is desired. In this case the outputs of the map-tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). The framework does not sort the map-outputs before writing them out to the FileSystem. * Often, you may want to process input data using a map function only. To do this, simply set mapreduce.job.reduces to zero. The MapReduce framework will not create any reducer tasks. Rather, the outputs of the mapper tasks will be the final output of the job. Note: Reduce In this phase the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method is called for each <key, (list of values)> pair in the grouped inputs. The output of the reduce task is typically written to the FileSystem via OutputCollector.collect(WritableComparable, Writable). Applications can use the Reporter to report progress, set application-level status messages and update Counters, or just indicate that they are alive. The output of the Reducer is not sorted.
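In driver terms, the only difference between the two scenarios is the reducer count; a sketch (the helper method names are illustrative):

import org.apache.hadoop.mapreduce.Job;

public class ReducerCountExample {

  static void configureMapOnly(Job job) {
    job.setNumReduceTasks(0);   // mapper output written directly to HDFS,
                                // one output file per map task
  }

  static void configureSingleReducer(Job job) {
    job.setNumReduceTasks(1);   // all matches gathered into one sorted,
                                // single-reducer output file
  }
}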
Question No : 59 –
Review the following data and Pig code.
M,38,95111
F,29,95060
F,45,95192
M,62,95102
F,56,95102
A = LOAD 'data' USING PigStorage(',') as (gender:chararray,
age:int, zip:chararray);
B = FOREACH A GENERATE age;
Which one of the following commands would save the results of B to a folder in HDFS named
myoutput?
A. STORE A INTO 'myoutput' USING PigStorage(',');
B. DUMP B using PigStorage('myoutput');
C. STORE B INTO 'myoutput';
D. DUMP B INTO 'myoutput';
Answer : C
Question No : 60 –
What is the term for the process of moving map outputs to the reducers?
A. Reducing
B. Combining
C. Partitioning
D. Shuffling and sorting
Answer : D
Question No : 61 –
What does Pig provide to the overall Hadoop solution?
A. Legacy language Integration with MapReduce framework
B. Simple scripting language for writing MapReduce programs
C. Database table and storage management services
D. C++ interface to MapReduce and data warehouse infrastructure
Answer : B
Question No : 62 –
A NameNode in Hadoop 2.2 manages ______________.
A. Two namespaces: an active namespace and a backup namespace
B. A single namespace
C. An arbitrary number of namespaces
D. No namespaces
Answer : B
Question No : 63 –
To use a Java user-defined function (UDF) with Pig, what must you do?
A. Define an alias to shorten the function name
B. Pass arguments to the constructor of the UDF's implementation class
C. Register the JAR file containing the UDF
D. Put the JAR file into the user's home folder in HDFS
Answer : C
Question No : 64 –
Consider the following two relations, A and B.
A Pig JOIN statement that combined relations A by its first field and B by its second field
would produce what output?
A. 2 Jim Chris 2 3 Terry 3 4 Brian 4
B. 2 cherry 2 cherry 3 orange 4 peach
C. 2 cherry Jim, Chris 3 orange Terry 4 peach Brian
D. 2 cherry Jim 2 2 cherry Chris 2 3 orange Terry 3 4 peach Brian 4
Answer : D
Question No : 65 –
Given the following Hive command:
Which one of the following statements is true?
A. The files in the mydata folder are copied to a subfolder of /apps/hive/warehouse
B. The files in the mydata folder are moved to a subfolder of /apps/hive/warehouse
C. The files in the mydata folder are copied into Hive’s underlying relational database
D. The files in the mydata folder do not move from their current location in HDFS
Answer : D
Question No : 66 –
Which two of the following statements are true about HDFS? Choose 2 answers
A. An HDFS file that is larger than dfs.block.size is split into blocks
B. Blocks are replicated to multiple datanodes
C. HDFS works best when storing a large number of relatively small files
D. Block sizes for all files must be the same size
Answer : A,B
Question No : 67 –
Given the following Hive command:
INSERT OVERWRITE TABLE mytable SELECT * FROM myothertable;
Which one of the following statements is true?
A. The contents of myothertable are appended to mytable
B. Any existing data in mytable will be overwritten
C. A new table named mytable is created, and the contents of myothertable are copied into mytable
D. The statement is not a valid Hive command
Answer : B
Question No : 68 –
Which one of the following statements describes a Hive user-defined aggregate function?
A. Operates on multiple input rows and creates a single row as output
B. Operates on a single input row and produces a single row as output
C. Operates on a single input row and produces a table as output
D. Operates on multiple input rows and produces a table as output
Answer : A
Question No : 69 –
Identify the tool best suited to import a portion of a relational database every day as files
into HDFS, and generate Java classes to interact with that imported data?
A. Oozie
B. Flume
C. Pig
D. Hue
E. Hive
F. Sqoop
G. fuse-dfs
Answer : F
Explanation: Sqoop (SQL-to-Hadoop) is a straightforward command-line tool with the following capabilities: imports individual tables or entire databases to files in HDFS; generates Java classes to allow you to interact with your imported data; provides the ability to import from SQL databases straight into your Hive data warehouse. Note: Data movement between Hadoop and relational databases: data can be moved between Hadoop and a relational database as a bulk data transfer, or relational tables can be accessed from within a MapReduce map function. Note: * Cloudera's Distribution for Hadoop provides a bulk data transfer tool (i.e., Sqoop) that imports individual tables or entire databases into HDFS files. The tool also generates Java classes that support interaction with the imported data. Sqoop supports all relational databases over JDBC, and Quest Software provides a connector (i.e., OraOop) that has been optimized for access to data residing in Oracle databases. Reference: http://log.medcl.net/item/2011/08/hadoop-and-mapreduce-big-data-analytics-gartner/ (Data Movement between hadoop and relational databases, second paragraph)
Question No : 70 –
Which HDFS command copies an HDFS file named foo to the local filesystem as localFoo?
A. hadoop fs -get foo LocalFoo
B. hadoop -cp foo LocalFoo
C. hadoop fs -ls foo
D. hadoop fs -put foo LocalFoo
Answer : A
Question No : 71 –
What is a SequenceFile?
A. A SequenceFile contains a binary encoding of an arbitrary number of homogeneous writable objects.
B. A SequenceFile contains a binary encoding of an arbitrary number of heterogeneous writable objects.
C. A SequenceFile contains a binary encoding of an arbitrary number of WritableComparable objects, in sorted order.
D. A SequenceFile contains a binary encoding of an arbitrary number of key-value pairs. Each key must be the same type. Each value must be the same type.
Answer : D
Explanation: SequenceFile is a flat file consisting of binary key/value pairs. There are 3 different SequenceFile formats: Uncompressed key/value records. Record compressed key/value records – only ‘values’ are compressed here. Block compressed key/value records – both keys and values are collected in ‘blocks’ separately and compressed. The size of the ‘block’ is configurable. Reference: http://wiki.apache.org/hadoop/SequenceFile
Question No : 72 –
Workflows expressed in Oozie can contain:
A. Sequences of MapReduce and Pig. These sequences can be combined with other actions including forks, decision points, and path joins.
B. Sequences of MapReduce jobs only; no Pig or Hive tasks or jobs. These MapReduce sequences can be combined with forks and path joins.
C. Sequences of MapReduce and Pig jobs. These are limited to linear sequences of actions with exception handlers but no forks.
D. Iterative repetition of MapReduce jobs until a desired answer or state is reached.
Answer : A
Explanation: An Oozie workflow is a collection of actions (i.e. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control dependency DAG (Direct Acyclic Graph), specifying a sequence of actions to execute. This graph is specified in hPDL (an XML Process Definition Language). hPDL is a fairly compact language, using a limited amount of flow control and action nodes. Control nodes define the flow of execution and include the beginning and end of a workflow (start, end and fail nodes) and mechanisms to control the workflow execution path (decision, fork and join nodes). Note: Oozie is a Java web application that runs in a Java servlet container (Tomcat) and uses a database to store: workflow definitions, and currently running workflow instances, including instance states and variables. Reference: Introduction to Oozie
Question No : 73 –
Which one of the following files is required in every Oozie Workflow application?
A. job.properties
B. Config-default.xml
C. Workflow.xml
D. Oozie.xml
Answer : C
Question No : 74 –
You have a directory named jobdata in HDFS that contains four files: _first.txt, second.txt,
.third.txt and #data.txt. How many files will be processed by the
FileInputFormat.setInputPaths () command when it’s given a path object representing this
directory?
A. Four, all files will be processed
B. Three, the pound sign is an invalid character for HDFS file names
C. Two, file names with a leading period or underscore are ignored
D. None, the directory cannot be named jobdata
E. One, no special characters can prefix the name of an input file
Answer : C
Explanation: Files starting with ‘_’ are considered ‘hidden’ like unix files starting with ‘.’. # characters are allowed in HDFS file names.
Question No : 75 –
Which two of the following statements are true about Pig’s approach toward data? Choose
2 answers
A. Accepts only data that has a key/value pair structure
B. Accepts data whether it has metadata or not
C. Accepts only data that is defined by metadata tables stored in a database
D. Accepts tab-delimited text data only
E. Accepts any data: structured or unstructured
Answer : B,E
Question No : 76 –
In a MapReduce job, the reducer receives all values associated with the same key. Which
statement best describes the ordering of these values?
A. The values are in sorted order.
B. The values are arbitrarily ordered, and the ordering may vary from run to run of the same MapReduce job.
C. The values are arbitrarily ordered, but multiple runs of the same MapReduce job will always have the same ordering.
D. Since the values come from mapper outputs, the reducers will receive contiguous sections of sorted values.
Answer : B
Explanation: Note: * Input to the Reducer is the sorted output of the mappers. * The framework calls the application’s Reduce function once for each unique key in the sorted order. * Example: For the given sample input the first map emits: < Hello, 1> < World, 1> < Bye, 1> < World, 1> The second map emits: < Hello, 1> < Hadoop, 1> < Goodbye, 1> < Hadoop, 1>
Question No : 77 –
Identify the utility that allows you to create and run MapReduce jobs with any executable
or script as the mapper and/or the reducer?
A. Oozie
B. Sqoop
C. Flume
D. Hadoop Streaming
E. mapred
Answer : D
Explanation: Hadoop streaming is a utility that comes with the Hadoop distribution. The utility allows you to create and run Map/Reduce jobs with any executable or script as the mapper and/or the reducer. Reference: http://hadoop.apache.org/common/docs/r0.20.1/streaming.html (Hadoop Streaming, second sentence)
Question No : 78 –
MapReduce v2 (MRv2/YARN) is designed to address which two issues?
A. Single point of failure in the NameNode.
B. Resource pressure on the JobTracker.
C. HDFS latency.
D. Ability to run frameworks other than MapReduce, such as MPI.
E. Reduce complexity of the MapReduce APIs.
F. Standardize on a single MapReduce API.
Answer : A,B
Reference: Apache Hadoop YARN – Concepts & Applications
Question No : 79 –
Which project gives you a distributed, scalable data store that allows you random, realtime
read/write access to hundreds of terabytes of data?
A. HBase
B. Hue
C. Pig
D. Hive
E. Oozie
F. Flume
G. Sqoop
Answer : A
Explanation: Use Apache HBase when you need random, realtime read/write access to your Big Data. Note: This project's goal is the hosting of very large tables — billions of rows X millions of columns — atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS. Features: linear and modular scalability; strictly consistent reads and writes; automatic and configurable sharding of tables; automatic failover support between RegionServers; convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables; easy to use Java API for client access; block cache and Bloom Filters for real-time queries; query predicate push down via server-side Filters; Thrift gateway and a REST-ful web service that supports XML, Protobuf, and binary data encoding options; extensible jruby-based (JIRB) shell; support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX. Reference: http://hbase.apache.org/ (When would I use HBase? First sentence)
Question No : 80 –
In a MapReduce job, you want each of your input files processed by a single map task.
How do you configure a MapReduce job so that a single map task processes each input file
regardless of how many blocks the input file occupies?
A. Increase the parameter that controls minimum split size in the job configuration.
B. Write a custom MapRunner that iterates over all key-value pairs in the entire file.
C. Set the number of mappers equal to the number of input files you want to process.
D. Write a custom FileInputFormat and override the method isSplitable to always return false.
Answer : D
Explanation: FileInputFormat is the base class for all file-based InputFormats. This provides a generic implementation of getSplits(JobContext). Subclasses of FileInputFormat can also override the isSplitable(JobContext, Path) method to ensure input-files are not split-up and are processed as a whole by Mappers. Reference: org.apache.hadoop.mapreduce.lib.input, Class FileInputFormat<K,V>
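For reference, a minimal sketch of the approach in option D; the class name NonSplittableTextInputFormat is illustrative:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Whole-file input format: each input file becomes exactly one split,
// so a single map task processes the entire file regardless of block count.
public class NonSplittableTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;   // never split, even for multi-block files
    }
}

The driver would then select it with job.setInputFormatClass(NonSplittableTextInputFormat.class).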
Question No : 81 –
You want to populate an associative array in order to perform a map-side join. You’ve
decided to put this information in a text file, place that file into the DistributedCache and
read it in your Mapper before any records are processed.
Identify which method in the Mapper you should use to implement code for reading the file
and populating the associative array?
A. combine
B. map
C. init
D. configure
Answer : D
Reference: org.apache.hadoop.filecache , Class DistributedCache
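A minimal old-API (org.apache.hadoop.mapred) sketch of this pattern; the class name JoinMapper and the tab-delimited layout of the cached lookup file are assumptions made for illustration:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Old-API mapper that loads a lookup file from the DistributedCache once,
// in configure(), before any calls to map().
public class JoinMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> lookup = new HashMap<>();

    @Override
    public void configure(JobConf job) {
        try {
            Path[] cached = DistributedCache.getLocalCacheFiles(job);
            try (BufferedReader in = new BufferedReader(new FileReader(cached[0].toString()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.split("\t", 2);   // assumes tab-delimited key/value lines
                    if (parts.length == 2) {
                        lookup.put(parts[0], parts[1]);
                    }
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("Failed to load cached lookup file", e);
        }
    }

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        String record = value.toString();
        String joinKey = record.split("\t", 2)[0];   // first field is the join key
        String side = lookup.get(joinKey);           // map-side join against the cached table
        if (side != null) {
            output.collect(new Text(joinKey), new Text(record + "\t" + side));
        }
    }
}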
Question No : 82 –
What data does a Reducer reduce method process?
A. All the data in a single input file.
B. All data produced by a single mapper.
C. All data for a given key, regardless of which mapper(s) produced it.
D. All data for a given value, regardless of which mapper(s) produced it.
Answer : C
Explanation: Reducing lets you aggregate values together. A reducer function receives an iterator of input values from an input list. It then combines these values together, returning a single output value. All values with the same key are presented to a single reduce task. Reference: Yahoo! Hadoop Tutorial, Module 4: MapReduce
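A minimal sketch of a reducer that aggregates every value for a key, regardless of which mapper produced it (the class name SumReducer is illustrative):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// reduce() is called once per key and receives an Iterable over every value
// for that key, collected from all mappers during the shuffle.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}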
Question No : 83 –
Which two of the following are true about this trivial Pig program? (Choose two)
A. The contents of myfile appear on stdout
B. Pig assumes the contents of myfile are comma delimited
C. ABC has a schema associated with it
D. myfile is read from the user’s home directory in HDFS
Answer : A,D
Question No : 84 –
Identify the MapReduce v2 (MRv2 / YARN) daemon responsible for launching application
containers and monitoring application resource usage?
A. ResourceManager
B. NodeManager
C. ApplicationMaster
D. ApplicationMasterService
E. TaskTracker
F. JobTracker
Answer : B
Reference: Apache Hadoop YARN – Concepts & Applications
Question No : 85 –
Determine which best describes when the reduce method is first called in a MapReduce
job?
A. Reducers start copying intermediate key-value pairs from each Mapper as soon as it has completed. The programmer can configure in the job what percentage of the intermediate data should arrive before the reduce method begins.
B. Reducers start copying intermediate key-value pairs from each Mapper as soon as it has completed. The reduce method is called only after all intermediate data has been copied and sorted.
C. Reduce methods and map methods all start at the beginning of a job, in order to provide optimal performance for map-only or reduce-only jobs.
D. Reducers start copying intermediate key-value pairs from each Mapper as soon as it has completed. The reduce method is called as soon as the intermediate key-value pairs start to arrive.
Answer : B
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers , When is the reducers are started in a MapReduce job?
Question No : 86 –
You need to run the same job many times with minor variations. Rather than hardcoding all
job configuration options in your driver code, you’ve decided to have your Driver subclass
org.apache.hadoop.conf.Configured and implement the org.apache.hadoop.util.Tool
interface.
Identify which invocation correctly passes mapred.job.name with a value of Example to
Hadoop?
A. hadoop “mapred.job.name=Example” MyDriver input output
B. hadoop MyDriver mapred.job.name=Example input output
C. hadoop MyDriver -D mapred.job.name=Example input output
D. hadoop setproperty mapred.job.name=Example MyDriver input output
E. hadoop setproperty (“mapred.job.name=Example”) MyDriver input output
Answer : C
Explanation: Configure the property using the -D key=value notation: -D mapred.job.name=’My Job’. You can list the available options by calling the streaming jar with just the -info argument. Reference: Python Hadoop streaming: Setting a job name
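For reference, a minimal sketch of a driver written this way; MyDriver matches the class name used in the question, while the job wiring details are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver that picks up -D key=value options (such as -D mapred.job.name=Example)
// before run() is called.
public class MyDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf());   // getConf() already contains the -D overrides
        job.setJarByClass(MyDriver.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
    }
}

ToolRunner passes the command line through GenericOptionsParser, which strips the -D option and applies it to the Configuration returned by getConf().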
Question No : 87 –
Which Hadoop component is responsible for managing the distributed file system
metadata?
A. NameNode
B. Metanode
C. DataNode
D. NameSpaceManager
Answer : A
Question No : 88 –
You have user profile records in your OLTP database that you want to join with web logs
you have already ingested into the Hadoop file system. How will you obtain these user
records?
A. HDFS command
B. Pig LOAD command
C. Sqoop import
D. Hive LOAD DATA command
E. Ingest with Flume agents
F. Ingest with Hadoop Streaming
Answer : C
Reference: Hadoop and Pig for Large-Scale Web Log Analysis
Question No : 89 –
What does the following command do?
register ‘/piggybank/pig-files.jar’;
A. Invokes the user-defined functions contained in the jar file
B. Assigns a name to a user-defined function or streaming command
C. Transforms Pig user-defined functions into a format that Hive can accept
D. Specifies the location of the JAR file containing the user-defined functions
Answer : D
Question No : 90 –
Which of the following tools was designed to import data from a relational database into
HDFS?
A. HCatalog
B. Sqoop
C. Flume
D. Ambari
Answer : B
Question No : 91 –
You have just executed a MapReduce job. Where is intermediate data written to after being
emitted from the Mapper’s map method?
A. Intermediate data is streamed across the network from Mapper to Reducer and is never written to disk.
B. Into in-memory buffers on the TaskTracker node running the Mapper that spill over and are written into HDFS.
C. Into in-memory buffers that spill over to the local file system of the TaskTracker node running the Mapper.
D. Into in-memory buffers that spill over to the local file system (outside HDFS) of the TaskTracker node running the Reducer
E. Into in-memory buffers on the TaskTracker node running the Reducer that spill over and are written into HDFS.
Answer : C
Explanation: The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node. This is typically a temporary directory location which can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, Where is the Mapper Output (intermediate key-value data) stored?
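The spill behavior described above is governed by node-local configuration. A minimal sketch, assuming the Hadoop 2.x property names mapreduce.task.io.sort.mb (in-memory buffer size) and mapreduce.cluster.local.dir (local spill directories):

import org.apache.hadoop.conf.Configuration;

public class SpillSettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Size of the in-memory buffer that holds map output before it spills
        // to the node's local disk (NOT HDFS). Property name as in Hadoop 2.x.
        conf.setInt("mapreduce.task.io.sort.mb", 256);
        // Local directories (set by the administrator) where spills and merged
        // intermediate files are written on each worker node.
        System.out.println(conf.get("mapreduce.cluster.local.dir", "${hadoop.tmp.dir}/mapred/local"));
    }
}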
Question No : 92 –
Which one of the following statements is true regarding a MapReduce job?
A. The job’s Partitioner shuffles and sorts all (key, value) pairs and sends the output to all reducers
B. The default Hash Partitioner sends key value pairs with the same key to the same Reducer
C. The reduce method is invoked once for each unique value
D. The Mapper must sort its output of (key, value) pairs in descending order based on value
Answer : A
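Option B describes the behavior of the default HashPartitioner. A sketch equivalent to what it does (the class name HashLikePartitioner is illustrative):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Equivalent of the default HashPartitioner: identical keys always hash to the
// same partition, so all values for a key reach the same reducer.
public class HashLikePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}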
Question No : 93 –
Which YARN component is responsible for monitoring the success or failure of a
Container?
A. ResourceManager
B. ApplicationMaster
C. NodeManager
D. JobTracker
Answer : A
Question No : 94 –
You are developing a combiner that takes as input Text keys, IntWritable values, and emits
Text keys, IntWritable values. Which interface should your class implement?
A. Combiner <Text, IntWritable, Text, IntWritable>
B. Mapper <Text, IntWritable, Text, IntWritable>
C. Reducer <Text, Text, IntWritable, IntWritable>
D. Reducer <Text, IntWritable, Text, IntWritable>
E. Combiner <Text, Text, IntWritable, IntWritable>
Answer : D
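In the older org.apache.hadoop.mapred API the combiner implements the Reducer interface with the type parameters shown in option D; in the newer org.apache.hadoop.mapreduce API the same idea is expressed by extending the Reducer class. A minimal sketch in the newer API (the class name SumCombiner is illustrative):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// A combiner is a Reducer whose input types match the map output types and
// whose output types match them as well: <Text, IntWritable> in and out.
public class SumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int partial = 0;
        for (IntWritable v : values) {
            partial += v.get();   // pre-aggregate locally before the shuffle
        }
        context.write(key, new IntWritable(partial));
    }
}
// Driver wiring (job variable assumed): job.setCombinerClass(SumCombiner.class);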
Question No : 95 –
Which HDFS command uploads a local file X into an existing HDFS directory Y?
A. hadoop scp X Y
B. hadoop fs -localPut X Y
C. hadoop fs -put X Y
D. hadoop fs -get X Y
Answer : C
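The same upload can also be done programmatically via the FileSystem API; a minimal sketch in which the local and HDFS paths are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Programmatic equivalent of "hadoop fs -put X Y": copy a local file X into
// an existing HDFS directory Y.
public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        fs.copyFromLocalFile(new Path("/tmp/X"), new Path("/user/hadoop/Y"));
        fs.close();
    }
}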
Question No : 96 –
Which best describes what the map method accepts and emits?
A. It accepts a single key-value pair as input and emits a single key and list of corresponding values as output.
B. It accepts a single key-value pair as input and can emit only one key-value pair as output.
C. It accepts a list of key-value pairs as input and can emit only one key-value pair as output.
D. It accepts a single key-value pair as input and can emit any number of key-value pairs as output, including zero.
Answer : D
Explanation: public class Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> extends Object. Maps input key/value pairs to a set of intermediate key/value pairs. Maps are the individual tasks which transform input records into intermediate records. The transformed intermediate records need not be of the same type as the input records. A given input pair may map to zero or many output pairs. Reference: org.apache.hadoop.mapreduce, Class Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
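A minimal sketch of a map method that emits a variable number of pairs per input pair: zero for an empty line, one per token otherwise (the class name TokenMapper is illustrative):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// One input key-value pair (offset, line) may produce zero, one, or many
// output pairs: one per token on the line, none for an empty line.
public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}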
Question No : 97 –
Assuming default settings, which best describes the order of data provided to a reducer’s
reduce method:
A. The keys given to a reducer aren’t in a predictable order, but the values associated with those keys always are.
B. Both the keys and values passed to a reducer always appear in sorted order.
C. Neither keys nor values are in any predictable order.
D. The keys given to a reducer are in sorted order but the values associated with each key are in no predictable order
Answer : D
Explanation: Reducer has 3 primary phases:
1. Shuffle: The Reducer copies the sorted output from each Mapper using HTTP across the network.
2. Sort: The framework merge-sorts Reducer inputs by keys (since different Mappers may have output the same key). The shuffle and sort phases occur simultaneously, i.e. while outputs are being fetched they are merged. SecondarySort: To achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. The keys will be sorted using the entire key, but will be grouped using the grouping comparator to decide which keys and values are sent in the same call to reduce.
3. Reduce: In this phase the reduce(Object, Iterable, Context) method is called for each <key, (collection of values)> in the sorted inputs. The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object). The output of the Reducer is not re-sorted.
Reference: org.apache.hadoop.mapreduce, Class Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
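A minimal sketch of the grouping-comparator part of the secondary sort described above; the composite-key encoding "naturalKey#secondaryField" inside a Text key is an assumption made for this example:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// The shuffle sorts on the whole composite key, while this grouping comparator
// compares only the natural-key prefix, so one reduce() call sees all values
// for a natural key with the secondary order preserved.
public class NaturalKeyGroupingComparator extends WritableComparator {
    public NaturalKeyGroupingComparator() {
        super(Text.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        String left = a.toString().split("#", 2)[0];
        String right = b.toString().split("#", 2)[0];
        return left.compareTo(right);
    }
}
// Driver wiring (job variable assumed): job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class);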
Question No : 98 –
For each intermediate key, each reducer task can emit:
A. As many final key-value pairs as desired. There are no restrictions on the types of those key-value pairs (i.e., they can be heterogeneous).
B. As many final key-value pairs as desired, but they must have the same type as the intermediate key-value pairs.
C. As many final key-value pairs as desired, as long as all the keys have the same type and all the values have the same type.
D. One final key-value pair per value associated with the key; no restrictions on the type.
E. One final key-value pair per key; no restrictions on the type.
Answer : C
Reference: Hadoop Map-Reduce Tutorial; Yahoo! Hadoop Tutorial, Module 4: MapReduce
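A minimal driver sketch showing where those single key and value types are declared (class and job names are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class OutputTypesExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "output-types");
        // Every final key-value pair the reducers emit must match these declared
        // types: any number of pairs per key, but one key type and one value type.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // ... mapper/reducer/input/output paths omitted for brevity
    }
}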
Question No : 99 –
The Hadoop framework provides a mechanism for coping with machine issues such as
faulty configuration or impending hardware failure. MapReduce detects that one or a
number of machines are performing poorly and starts more copies of a map or reduce task.
All the copies run simultaneously, and the results of whichever task finishes first are used. This is called:
A. Combine
B. IdentityMapper
C. IdentityReducer
D. Default Partitioner
E. Speculative Execution
Answer : E
Explanation: Speculative execution: One problem with the Hadoop system is that by dividing the tasks across many nodes, it is possible for a few slow nodes to rate-limit the rest of the program. For example, if one node has a slow disk controller, then it may be reading its input at only 10% the speed of all the other nodes. So when 99 map tasks are already complete, the system is still waiting for the final map task to check in, which takes much longer than all the other nodes.
By forcing tasks to run in isolation from one another, individual tasks do not know where their inputs come from. Tasks trust the Hadoop platform to just deliver the appropriate input. Therefore, the same input can be processed multiple times in parallel, to exploit differences in machine capabilities. As most of the tasks in a job are coming to a close, the Hadoop platform will schedule redundant copies of the remaining tasks across several nodes which do not have other work to perform. This process is known as speculative execution. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon the tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully, first.
Reference: Apache Hadoop, Module 4: MapReduce
Note:
* Hadoop uses “speculative execution.” The same task may be started on multiple boxes. The first one to finish wins, and the other copies are killed. Failed tasks are tasks that error out.
* There are a few reasons Hadoop can kill tasks on its own: a) the task does not report progress during the timeout (default is 10 minutes); b) the FairScheduler or CapacityScheduler needs the slot for some other pool (FairScheduler) or queue (CapacityScheduler); c) speculative execution means the results of the task are no longer needed because it has completed elsewhere.
Reference: Difference
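Speculative execution can be tuned per job. A minimal sketch, assuming the Hadoop 2.x property names mapreduce.map.speculative and mapreduce.reduce.speculative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpeculationSettings {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Speculative execution is on by default; these properties let a job
        // disable the redundant map or reduce attempts described above.
        conf.setBoolean("mapreduce.map.speculative", false);
        conf.setBoolean("mapreduce.reduce.speculative", false);
        Job job = Job.getInstance(conf, "no-speculation");
        // ... remaining job setup omitted
    }
}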
Question No : 100 –
What does the following WebHDFS command do?
curl -i -L http://host:port/webhdfs/v1/foo/bar?op=OPEN
A. Make a directory /foo/bar
B. Read a file /foo/bar
C. List a directory /foo
D. Delete a directory /foo/bar
Answer : B
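A minimal Java sketch of the same OPEN operation over the WebHDFS REST API; the host name and port 50070 are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Issue an OPEN operation against WebHDFS and print the file contents.
public class WebHdfsOpen {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://namenode-host:50070/webhdfs/v1/foo/bar?op=OPEN");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(true);   // WebHDFS redirects OPEN to a DataNode
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}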
Consider the following two relations, A and B.
What is the output of the following Pig commands?
X = GROUP A BY S1;
DUMP X;
A. Option A
B. Option B
C. Option C
D. Option D
Answer : D
Question No : 103 –
Review the following ‘data’ file and Pig code.
Which one of the following statements is true?
A. The output of the DUMP D command is (M,{(M,62,95102),(M,38,95111)})
B. The output of the DUMP D command is (M,{(38,95111),(62,95102)})
C. The code executes successfully but there is no output because the D relation is empty
D. The code does not execute successfully because D is not a valid relation
Answer : A
Question No : 104 –
Your client application submits a MapReduce job to your Hadoop cluster. Identify the
Hadoop daemon on which the Hadoop framework will look for an available slot to schedule a
MapReduce operation.
A. TaskTracker
B. NameNode
C. DataNode
D. JobTracker
E. Secondary NameNode
Answer : D
Explanation: JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. There is only one JobTracker process running on any Hadoop cluster. The JobTracker runs in its own JVM process. In a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted. The JobTracker in Hadoop performs the following actions (from the Hadoop wiki):
* Client applications submit jobs to the JobTracker.
* The JobTracker talks to the NameNode to determine the location of the data.
* The JobTracker locates TaskTracker nodes with available slots at or near the data.
* The JobTracker submits the work to the chosen TaskTracker nodes.
* The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
* A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
* When the work is completed, the JobTracker updates its status. Client applications can poll the JobTracker for information.
Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop Cluster?
Question No : 105 –
You’ve written a MapReduce job that will process 500 million input records and generate
500 million key-value pairs. The data is not uniformly distributed. Your MapReduce job will
create a significant amount of intermediate data that it needs to transfer between mappers
and reducers, which is a potential bottleneck. A custom implementation of which interface is
most likely to reduce the amount of intermediate data transferred across the network?
A. Partitioner
B. OutputFormat
C. WritableComparable
D. Writable
E. InputFormat
F. Combiner
Answer : F
Explanation: Combiners are used to increase the efficiency of a MapReduce program. They aggregate intermediate map output locally on the individual mapper nodes. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. Reference: 24 Interview Questions & Answers for Hadoop MapReduce developers, What are combiners? When should I use a combiner in my MapReduce Job?
Question No : 106 –
Which one of the following statements is FALSE regarding the communication between
DataNodes and a federation of NameNodes in Hadoop 2.2?
A. Each DataNode receives commands from one designated master NameNode.
B. DataNodes send periodic heartbeats to all the NameNodes.
C. Each DataNode registers with all the NameNodes.
D. DataNodes send periodic block reports to all the NameNodes.
Answer : A
Question No : 107 –
All keys used for intermediate output from mappers must:
A. Implement a splittable compression algorithm.
B. Be a subclass of FileInputFormat.
C. Implement WritableComparable.
D. Override isSplitable.
E. Implement a comparator for speedy sorting.
Answer : C
Explanation: The MapReduce framework operates exclusively on <key, value> pairs, that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types. The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework. Reference: MapReduce Tutorial
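A minimal sketch of a custom intermediate key that satisfies this requirement; the class name YearTemperatureKey and its two fields are made up for the example:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// A custom intermediate key: Writable for serialization across the network,
// Comparable so the framework can sort keys during the shuffle.
public class YearTemperatureKey implements WritableComparable<YearTemperatureKey> {
    private int year;
    private int temperature;

    public YearTemperatureKey() { }                      // required no-arg constructor

    public YearTemperatureKey(int year, int temperature) {
        this.year = year;
        this.temperature = temperature;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(year);
        out.writeInt(temperature);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        year = in.readInt();
        temperature = in.readInt();
    }

    @Override
    public int compareTo(YearTemperatureKey other) {
        int cmp = Integer.compare(year, other.year);
        return cmp != 0 ? cmp : Integer.compare(temperature, other.temperature);
    }

    @Override
    public int hashCode() {                              // keeps the default HashPartitioner consistent
        return year * 163 + temperature;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof YearTemperatureKey)) return false;
        YearTemperatureKey k = (YearTemperatureKey) o;
        return year == k.year && temperature == k.temperature;
    }
}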
Question No : 108 –
Which one of the following statements describes the relationship between the
ResourceManager and the ApplicationMaster?
A. The ApplicationMaster requests resources from the ResourceManager
B. The ApplicationMaster starts a single instance of the ResourceManager
C. The ResourceManager monitors and restarts any failed Containers of the ApplicationMaster
D. The ApplicationMaster starts an instance of the ResourceManager within each Container
Answer : A
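A minimal sketch of that relationship using the YARN client API: the ApplicationMaster registers with the ResourceManager and asks it for containers. The host name, port, and resource sizes are illustrative, and in practice this code runs inside a launched ApplicationMaster container:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch of an ApplicationMaster requesting resources from the ResourceManager.
public class ResourceRequestSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new YarnConfiguration();
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(conf);
        rmClient.start();
        rmClient.registerApplicationMaster("appmaster-host", 0, "");
        // Ask the ResourceManager for one container with 1024 MB and 1 vcore.
        ContainerRequest request = new ContainerRequest(
                Resource.newInstance(1024, 1), null, null, Priority.newInstance(0));
        rmClient.addContainerRequest(request);
        AllocateResponse response = rmClient.allocate(0.1f);
        System.out.println("Allocated containers: " + response.getAllocatedContainers().size());
        rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
        rmClient.stop();
    }
}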