Data Stage Intermediate Quiz

This Data Stage Intermediate Quiz contains a set of 103 DataStage MCQ questions with answers, which will help you clear an intermediate-level quiz.




1) A delete code is generated in the Change Capture stage when the

  1. Record is present only in the before dataset, not in the after dataset
  2. Record is present only in the after dataset, not in the before dataset
  3. Record is present in both datasets
  4. None
Answer : A

2) The Change Capture stage contains two input links, known as

  1. previous and current
  2. history and delta
  3. before and after
  4. None
Answer : C

3) What are the characteristics of containers?

  1. Containers are used to execute jobs.
  2. Containers are a group of stages and links.
  3. Containers are used to stop jobs
  4. Containers allow multiple users to access a single job.
Answer : B

4) Which one is the type of view in Datastage Director?

  1. Job View
  2. Log View
  3. Status View
  4. All of the above
Answer : D

5) What does the error 81021 "Calling subroutine DSR_RECORD ACTION=2" mean, and what is the solution?

  1. It occurs when a job sequence contains many stages (usually more than 10), very often when a network connection is slow; the solution is to redesign the whole job
  2. The error means that the source records are bad and the job needs a restart
  3. It is due to a communication failure between the DataStage client and server. First try to compile using an explicit login to DataStage Designer; if the problem still persists, execute the DS.REINDEX ALL command from the DataStage shell.
  4. Job has failed with the error 81021 due to large volumes of data processing and the solution is to split the source records into multiple threads
Answer : C

6) The Encode stage contains a maximum of how many links?

  1. 3
  2. 10
  3. 1
  4. 0
Answer : C

7) Which environment variable needs to be set to debug the runtime environment?

  1. $OSH_PRINT_DROPPED_COLUMNS
  2. $OSH_PRINT_SCHEMAS
  3. $OSH_SHOW_COMPONENT_CALLS
  4. $OSH_SHOW_STARTUP_STATUS
Answer : B

8) Which environment variable needs to be set for compiling a DataStage job?

  1. APT_LINKER
  2. APT_PM_DBX
  3. APT_LINKEROPT
  4. APT_COMPILER
Answer : D

9) Which environment variable, when set to true, causes a report to be produced showing the operators, processes, and data sets in the job?

  1. APT_DUMP_SCORE
  2. APT_JOB_REPORT
  3. APT_MONITOR_SIZE
  4. APT_RECORD_COUNTS
Answer : A

10) What does setting an environment variable, specified as a job parameter, to PROJDEF do?

  1. Populates the environment variable with the value of PROJDEF.
  2. Explicitly unsets the environment variable.
  3. Uses the value for the environment variable as shown in the DataStage Administrator.
  4. Uses the current setting for the environment variable from the operating system.
Answer : C

11) What is the meaning of a file extender in DataStage server jobs?

  1. File extender means adding a new file to an already existing file in DataStage
  2. File extender means adding only records to an already existing file in DataStage
  3. File extender means adding columns or records to an already existing file in DataStage
  4. File extender means adding only columns to an already existing file in DataStage
Answer : C

12) Select the correct statement

  1. The Remove Duplicates stage has a reject link
  2. The Filter stage has a reject link
  3. Both A and B
  4. None of the above
Answer : B

13) Which stage is used to read a pipe-delimited flat file?

  1. File Lookup stage
  2. Sequential File stage
  3. Complex Flat File stage
  4. Transformer stage
Answer : B

14) Debug menu is available for

  1. Server Jobs
  2. Parallel Jobs
  3. Shared Containers
  4. All of the above
Answer : D

15) Which statement from the following is not true?

  1. Annotations can be used in server and parallel jobs but not in mainframe jobs
  2. Annotations can be used in server and parallel jobs but not in shared containers
  3. Annotations can be used only in server jobs
  4. None of the above
Answer : D

16) Which variable is used to set the directory where buildop code is written?

  1. OSH_BUILDOP_OBJECT
  2. OSH_BUILDOP_XLC_BIN
  3. OSH_BUILDOP_CODE
  4. A&C
Answer : C

17) The degree of parallelism of a DataStage job is defined by

  1. Logical nodes configured in the config file
  2. Number of nodes in the hardware
  3. Job runtime parameters
  4. Target Database
Answer : A

18) Is a direct lookup possible using the DB2 Enterprise stage?

  1. Yes
  2. No
Answer : B

19) A left outer join can be implemented by

  1. Join Stage
  2. Lookup Stage
  3. Join or Oracle stage
  4. None of the above
Answer : C

20) What does deleting a dataset from "DataStage Designer -> Tools -> Data Set Management" do?

  1. Deletes the control file only
  2. Deletes the data file on the server only
  3. Deletes both the control and data files
  4. None of the above
Answer : C

21) DataStage jobs can be exported as .XML files as well as .DSX files

  1. True
  2. False
Answer : A

22) DataStage native operators are developed in

  1. C++
  2. C
  3. BASIC
  4. COBOL
Answer : A

23) Loading Data into Teradata tables is achieved by

  1. ODBC Stage
  2. Teradata API
  3. JAVA API
  4. All of the above
Answer : B

24) Job Version Control can be achieved by

  1. Versioning Tools
  2. Versioning feature available within the Tool
  3. Both
  4. None of the Above
Answer : C

25) Which two tasks will create DataStage projects?

  1. Install the DataStage engine
  2. Copy a project in DataStage Administrator
  3. Add new projects from DataStage Administrator
  4. Both A and C
Answer : D

26) Which three lookup types may be performed in the Lookup stage?

  1. Equality match
  2. Range on stream link
  3. Range on the reference link
  4. All of the above
Answer : D

27) Which Oracle Enterprise stage read property can be set using db options to tune job performance?

  1. memsize
  2. arraysize
  3. partitionsize
  4. transactsize
Answer : B

28) Which two stages allow reject links?

  1. Join stage
  2. Merge stage
  3. Lookup stage
  4. Both B & C
Answer : D

29) You would like to compare two versions of a job that has been copied and renamed. How would you proceed?

  1. Use Advanced Find to locate job duplicates.
  2. Use the Compare against feature
  3. Verify the properties of the job for the version.
  4. Perform a check sum on an export of the job.
Answer : B

30) Which two statements describe both File Sets and Data Sets?

  1. File and Data Sets preserve partitioning.
  2. File and Data Sets are stored in internal format
  3. File and Data Sets contain header file and data files
  4. Both A & C
Answer : D

31) Which stage does not require sorted input?

  1. Join stage
  2. Merge stage
  3. Lookup stage
  4. Remove Duplicates stage
Answer : C

32) Which partitioning method requires specifying a key

  1. DB2
  2. Entire
  3. Modulus
  4. Random
Answer : C

33) What is the descriptor file in the Data Set stage?

  1. It contains the description of the data file
  2. It contains the data file
  3. It contains the address of the data file
  4. All of the above
Answer : C

34) What is the data file in the Data Set stage?

  1. The data file contains the data in native format
  2. The data file contains the configuration data
  3. It contains the address of the dataset
  4. None of the above
Answer : A

35) What is the OConv() function and where is it used?

  1. OConv is used to convert only dates into internal format
  2. OConv is used to convert only dates into user-understandable format
  3. OConv is used to convert from the system-understandable format to a user-understandable format
  4. None of the above
Answer : C

36) What is the IConv() function and where is it used?

  1. IConv is used to convert only dates into internal format
  2. IConv is used to convert input into a system-understandable format
  3. IConv is used to convert only dates into user-understandable format
  4. All of the above
Answer : B
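As a quick illustration of questions 35 and 36, here is a minimal DataStage BASIC sketch of the two functions; the "D2/" and "D4/" date conversion codes are typical examples, and the variable names are ours:

      * Iconv: external form -> internal format (dates become days since 31 DEC 1967)
      InternalDate = Iconv("12/31/1967", "D2/")   ;* returns 0
      * Oconv: internal format -> user-readable external form
      DisplayDate = Oconv(InternalDate, "D4/")    ;* returns "12/31/1967"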

37) What is NLS in DataStage, and how is it used?

  1. NLS stands for "no language support". It is used for excluding other countries' languages, such as French, German, and Spanish.
  2. NLS stands for "national language support". It is used for including other countries' languages, such as French, German, and Spanish (whose scripts are similar to English), in the data that is processed.
  3. NLS stands for "neutral library support". It is used to support the system library functions used for other countries' languages, such as French, German, and Spanish.
  4. None of the above
Answer : B

38) What is the Link Partitioner in a DataStage job?

  1. It is used for partition parallelism
  2. It is used for pipeline parallelism
  3. The Link Partitioner stage is an active stage that takes one input and allows you to distribute partitioned rows to up to 64 output links; it is used in DataStage server jobs
  4. All of the above
Answer : C

39) What is Modulus in a dynamic hashed file?

  1. If the size of the file remains the same, it is called "Modulus"
  2. If the size of the file increases, it is called "Modulus"
  3. If the size of the file decreases, it is called "Modulus"
  4. None of the above
Answer : B

40) What is Splitting in a dynamic hashed file?

  1. If the size of the file remains the same, it is called "Splitting"
  2. If the size of the file increases, it is called "Splitting"
  3. If the size of the file decreases, it is called "Splitting"
  4. None of the above
Answer : C

41) The Link Partitioner enables server jobs to run in parallel but needs an MPP system

  1. TRUE
  2. False
Answer : B

42) "Enable hashed file cache sharing" can be used to

  1. Run multiple instances of a job
  2. Enable multiple processes to access the same hashed file in cache
  3. Save memory resources and speed up execution
  4. Both B & C
Answer : D

43) Comments in a routine can begin with

  1. REM
  2. * (star sign)
  3. ! (exclamation sign)
  4. All of the above
Answer : D
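All three markers from question 43 are valid in DataStage BASIC, as this small sketch (our own example) shows:

      REM A full-line comment introduced by REM
      * A full-line comment introduced by a star
      ! A full-line comment introduced by an exclamation mark
      Total = Price + Tax ;* an inline comment after a statement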

44) Which one is correct from below?

  1. IF X THEN
        A = B; REM The rest of this line is a comment; B = C
     END
  2. IF X THEN
        A = B; REM The rest of this line is a comment
        B = C
     END
  3. IF X THEN
        A = B; * The rest of this line is a comment; B = C
     END
  4. All of the above
Answer : B (an inline comment consumes the rest of the line, so the statement B = C is lost in options 1 and 3)

45) A server job compiles successfully but doesn't run because

  1. The job has not been compiled with a compatible compiler
  2. The job design has cyclic dependencies within a sequence of active stages
  3. The configuration file is missing from the DataStage server
  4. All of the above
Answer : B

46) Which of the below statements is not true when a DataStage job is validated?

  1. Connections are made to the data sources or data warehouse for server jobs
  2. Parallel jobs run in "check only" mode, so data is not affected
  3. Intermediate files are created
Answer : C

47) Parameter sets provide an easier and faster method to add parameters to a job, eliminating the need to add parameters individually to each job.

  1. True
  2. FALSE
Answer : A

48) Which of the following statements is incorrect?

  1. We can take an export of a DataStage job executable in DSX format.
  2. We can take an export of a DataStage job executable in XML format.
  3. We can take an export of a DataStage job executable in PDF format.
  4. All of the above
Answer : C

49) What are Modulus and Splitting in a dynamic hashed file?

  1. Modulus – increasing the file size
  2. Splitting – decreasing the file size
  3. Modulus – decreasing the file size
  4. Splitting – increasing the file size
  5. None
Answer : A

50) The size of the read cache of a hashed file can be set to a value between

  1. 0-256 MB
  2. 0-1024 MB
  3. 0-999 MB
  4. 0 – 2048 MB
Answer : C

51) What are the two types of hashed files?

  1. Static and Dynamic
  2. Generic and Specific
  3. Static and Specific
  4. Generic and Dynamic
Answer : A

52) Which of the below join options is not available in the Join stage?

  1. Self join
  2. Inner join
  3. Left outer join
  4. Full outer join
Answer : A

53) Which one is a false statement about the Join stage?

  1. All the inputs to the Join stage must be sorted by the Join key.
  2. Join stages can have reject links that capture rows without matches
  3. The Join stage supports inner, left outer, and right outer joins.
  4. None of the above
Answer : B

54) How does a Join stage process an Inner join?

  1. It transfers all values from the right data set and transfers values from the left data set and intermediate data sets only where key columns match.
  2. It transfers records from the input data sets whose key columns contain equal values to the output data set.
  3. It transfers all values from the left data set but transfers values from the right data set and intermediate data sets only when key columns match.
  4. It transfers records in which the contents of the key columns are equal from the left and right input data sets to the output data set. It also transfers records whose key columns contain unequal values from both input data sets to the output data set.
Answer : B

55) The number of input links that a Merge stage can accept is

  1. 1
  2. 3
  3. 2
  4. n
Answer : D

56) Which feature does the Merge stage have that is not available in the Join and Lookup stages?

  1. All inputs must be sorted.
  2. Input data may be unsorted.
  3. Several reject links may be specified
  4. No more than one reject link may be specified.
Answer : C

57) A DataStage job contains a parallel Transformer with a single input link and a single output link. The Transformer has a constraint that should produce 1000 records; however, only 900 came out through the output link. What should be done to identify the missing records?

  1. Scan generated osh script for possible errors.
  2. Remove the constraint on the output link.
  3. Turn trace on using DataStage Administrator.
  4. Add a Reject link to the Transformer stage.
Answer : D

58) Which one is a false statement about the parallel Transformer stage?

  1. The Transformer allows you to copy columns.
  2. The Transformer allows you to do lookups.
  3. The Transformer allows you to apply transforms using routines.
  4. The Transformer allows you to do data type conversions.
Answer : B

59) How does hash partitioning in a Join stage affect performance?

  1. Better performance
  2. Lower performance
  3. No change
  4. Can't say
Answer : D

60) The default partitioning method of a derived operator is

  1. Any
  2. Hash
  3. Round Robin
  4. Random
Answer : A

61) Which one is not a keyless partitioning method?

  1. Entire
  2. Modulus
  3. Round Robin
  4. Random
Answer : B

62) Which two partitioning methods are keyless?

  1. Round Robin
  2. Entire
  3. Hash
  4. Both A & B
Answer : D

63) Which one of the below is a passive stage?

  1. Peek stage
  2. Aggregator stage
  3. Sort stage
  4. Transformer stage
Answer : A

64) By default, where does the Peek stage output data?

  1. The job log in Director
  2. The link properties of the Peek stage
  3. The stage properties of the Peek stage
  4. None
Answer : A

65) The Pivot stage in DataStage before 8.5 supports

  1. Only horizontal pivoting, i.e., columns into rows
  2. Only vertical pivoting, i.e., rows into columns
  3. Both A & B
  4. None of the above
Answer : A

66) Player processes:

  A. Are the actual processes associated with stages
  B. Send stderr to the Section Leader
  C. Establish connections to other players for data flow
  D. Clean up upon completion

  1. A, B and C
  2. All
  3. D only
  4. A and B
Answer : B

67) A protected project can be accessed only by a user who has

  1. Super Operator role
  2. Production Manager role
  3. Super Operator & Production Manager Roles
  4. None
Answer : B

68) What happens if RCP (runtime column propagation) is disabled?

  1. DataStage jobs cannot be run
  2. OSH has to perform an import and export every time the job runs, which increases the processing time of the job
  3. The metadata of those stages whose output connects to a shared container input will not be stored
  4. No impact on Datastage jobs and the metadata will be propagated at run time
Answer : B

69) When RCP is disabled:

  A. DataStage Designer will enforce stage input column to output column mappings.
  B. At job compile time, modify operators are inserted on output links in the generated osh.
  C. Modify operators can add …

  1. A only
  2. A and B
  3. None
  4. All
Answer : D

70) When RCP is enabled:

  A. DataStage Designer will not enforce mapping rules.
  B. No modify operator is inserted at compile time.
  C. There is a danger of runtime errors if column names do not match.

  1. A and B
  2. B and C
  3. All
  4. D only
Answer : C

71) What should be specified to manage runtime column propagation?

  1. enabled in DataStage Administrator and at the stage level
  2. attached to a table definition in DataStage Manager
  3. enabled only at the stage level
  4. enabled with environmental parameters set at runtime
Answer : A

72) Which of the below objects would not be part of the DataStage Repository?

  1. Jobs
  2. Table Definitions
  3. Shared containers
  4. Local containers
Answer : D

73) Section Leader processes:

  A. Fork player processes (one per stage)
  B. Establish connections to other players for data flow
  C. Manage up/down communication
  D. Clean up upon completion

  1. A and C are correct
  2. A and B are correct
  3. C and D are correct
  4. None
Answer : A

74) Which of the following is not part of a sequencer job?

  1. Activity stage
  2. Command stage
  3. Routing stage
  4. None of them
Answer : D

75) Which one of the below would not be necessary to build a job sequence that picks up data from a file that will arrive in a directory overnight, launches a job once the file has arrived, and sends an email to the administrator upon successful completion of the job?

  1. Notification Activity
  2. Wait For File Activity
  3. Job Activity
  4. Sequencer
Answer : D

76) What is the difference between a hashed file and a sequential file in a server job?

  1. A hashed file stores data based on a hash algorithm and a key value; a sequential file is just a file with no key column
  2. A hashed file can be used as a reference for a lookup; a sequential file cannot
  3. Searching for a record is faster in a hashed file compared to a sequential file
  4. All of the above
Answer : D
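The speed difference in question 76 comes from keyed access: a hashed file can be read directly by key, as in this minimal DataStage BASIC sketch (the file name and key are hypothetical):

      * Open a hashed file and read one record directly by its key
      OPEN "CustomerHash" TO CustFile ELSE STOP "Cannot open file"
      READ CustRec FROM CustFile, "CUST001" THEN
         PRINT "Found: " : CustRec
      END ELSE
         PRINT "No record with key CUST001"
      END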

77) The Sequential File stage:

  A. Normally executes in sequential mode
  B. Can execute in parallel if reading multiple files (file pattern option)
  C. Can use multiple readers within a node on fixed-width files

  1. A
  2. B
  3. A and B
  4. All
Answer : D

78) Which “Reject Mode” option in the Sequential File stage will write records to a reject link?

  1. Output
  2. Fail
  3. Drop
  4. Continue
Answer : A

79) How can I convert server jobs into parallel jobs?

  1. You can convert your server job into a server shared container; the server shared container can then be used in parallel jobs as a shared container
  2. Using IPC Stage, by keeping this stage in between two passive stages
  3. Using LINK PARTITIONER AND LINK COLLECTOR
  4. Not possible
Answer : D

80) How can the performance of server jobs be improved?

  1. By enabling inter-process row buffering through the Administrator
  2. By adding an IPC stage between two passive stages
  3. A only
  4. Both A and B
Answer : D

81) Identify the usage of in-process and inter-process row buffers from the following

  1. In Process – connected between active stages; improves performance of the jobs by turning job recompilation off and on.
     Inter Process – used when running on an SMP parallel system; enables the job to run using a separate process for each active stage.
  2. In Process – connected between active stages; improves performance of the jobs by turning on in-process row buffering during execution of the job.
     Inter Process – used when running on an SMP parallel system; enables the job to run using the same process for each active stage.
  3. In Process – connected between active stages; improves performance of the jobs by turning in-process row buffering on and recompiling the job.
     Inter Process – used when running on an SMP parallel system; enables the job to run using a separate process for each active stage.
  4. In Process – connected between active stages; improves performance of the jobs by turning job recompilation off and on.
     Inter Process – used when running on an MPP system; enables the job to run using a separate process for each active stage.
Answer : C

82) True or False?

  A. Server jobs are compiled and run on the DataStage server.
  B. Parallel jobs are compiled and run on a DataStage UNIX server and can be run in parallel on SMP and MPP systems.

  1. Both are false
  2. A true and B false
  3. Both are true
  4. B true and A false
Answer : C

83) In MPP/cluster systems:

  A. Each node is a uniprocessor or an SMP.
  B. Each node has its own hardware resources.
  C. Hardware resources are shared across all nodes.

  1. B and C
  2. D only
  3. A, B and D
  4. B only
Answer : C

84) If stable sort is set to true, will it allow duplicate records through to the output links?

  1. True
  2. False
Answer : B

85) Surrogate Key stage properties:

  A. The value of the key is a 16-bit, 32-bit, or 64-bit integer.
  B. The starting number is 0 by default.
  C. The starting number is 1 by default.

  1. A, B and D are correct
  2. D only
  3. A and B only
  4. A only
Answer : A

86) Which statement describes how to add functionality to the Transformer stage?

  1. Edit the C++ code generated by the Transformer stage
  2. Create a new parallel routine in the Routines category that specifies the name, type, path, and return type of an external program
  3. Create a new server routine in the Routines category that specifies the name and category of a function written in DataStage BASIC
  4. Create a new parallel routine in the Routines category that specifies the name, path, type, and return type of a function written and compiled in C++
Answer : D

87) Which of the below execution orders is true in the Transformer stage?

  1. Stage Variable > Function > Routine
  2. Stage Variable > Constraint > Derivation
  3. Constraint > Stage Variable > Equation
  4. Derivation > Function > Stage Variable
Answer : B

88) Which of the below would be a reason to use a Transformer stage instead of a Copy stage?

  1. Drop a column.
  2. Send the input data to multiple output streams.
  3. Select certain output rows based on a condition.
  4. All of the above
Answer : C

89) In the CDC stage, the "delete" option recognizes those

  1. Records that are dropped in the after link
  2. Records that are dropped in the before link
  3. Records that are dropped in both links
  4. A & B
Answer : B

90) Which containers can be saved independently?

  1. Shared containers
  2. Local containers
  3. Global containers
  4. Direct containers
Answer : A

91) During run time, what is the default number of warnings after which a job aborts?

  1. 100
  2. 150
  3. 50
  4. 200
Answer : C

92) A job is built with multiple interconnected active stages. Which environment variable has to be used so that warnings are reported correctly against the active stage that generated them?

  1. APT_CONFIG_FILE
  2. APT_DISABLE_COMBINATION
  3. APT_STARTUP_SCRIPT
  4. APT_THIN_SCORE
Answer : B

93) Which environment variable is used to override the pad character of 0x0 (ASCII null), used by default when InfoSphere® DataStage® extends, or pads, a string field to a fixed length?

  1. APT_STRING_PADCHAR
  2. APT_DELIMITED_READ_SIZE
  3. APT_MAX_DELIMITED_READ_SIZE
  4. APT_IMPORT_PATTERN_USES_CAT
Answer : A

94) Exporting a job with the ISX feature helps to

  1. Export jobs within a project
  2. Export jobs across projects within a server
  3. A & B
  4. Export jobs even with executables, if needed
Answer : A, B & D

95) In a Join stage, the key columns have different naming conventions, so

  1. Use a Transformer stage before the Join stage to rename the key columns to match on both links
  2. Use a Copy stage before the Join stage to rename the key columns to match
  3. There is no need to use any stage; directly connect the link to the Join stage
  4. A & B
Answer : D

96) The job has stage variables defined in the Transformer, and the stage variables are defined in such a way that they depend on each other

  1. The order in which the stage variables are defined impacts the output
  2. The order in which the stage variables are defined does not have any impact on the output
  3. The order of execution within the stage variables is not mandated
  4. None
Answer : A
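To see why order matters in question 96, consider two stage variables where the second references the first (the names and derivations below are illustrative, not from the quiz). Stage variables are evaluated top to bottom for each input row, so reversing the order would make svTotal read a stale svBase:

      svBase  = InLink.Amount * 1.2        ;* evaluated first on every row
      svTotal = svBase + InLink.Surcharge  ;* uses the svBase value computed just above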

97) The Sort stage and the link sort both provide sorting functionality, so

  1. Both types of sort only do sorting and provide no additional functionality
  2. Duplicates can be removed in the link sort
  3. Duplicates can be removed in the Sort stage
  4. The key change column is available in the link sort
Answer : C

98) The Complex Flat File stage can

  1. Read flat files
  2. Read compressed VSAM files
  3. Have a reject link
  4. All
Answer : D

99) In the Lookup stage, the warning "Ignoring duplicate entry at table record" indicates

  1. Duplicates in the primary link
  2. Duplicates in the reference link
  3. Duplicates in both links
  4. A & B
Answer : B

100) Which one streamlines collaboration between business analysts, data modelers, and developers by capturing and defining business requirements in a common, familiar format and then transforming that business logic directly into DataStage ETL jobs?

  1. Designer
  2. Glossary
  3. FastTrack
  4. Director
Answer : C

101) For joining data between two different links with left outer join logic and with rejects

  1. Use the Lookup stage
  2. Use the Join stage
  3. Use the Merge stage
  4. A & C
Answer : D

102) For getting data from a reference link that is huge in size

  1. Use the Lookup stage
  2. Use the Join stage
  3. Use the Merge stage
  4. B & C
Answer : D

103) To import a table definition, use which of the following options?

  1. ODBC Table definition
  2. Plug in Metadata definition
  3. Orchestrate Schema Definition
  4. All
Answer : D