Scenario-Based Ab Initio Interview Questions and Answers for Freshers and Experienced
AB INITIO INTERVIEW QUESTIONS - Ab Initio scenario-based interview questions and answers for freshers and experienced candidates. First of all, do you know what Ab Initio is? Here is a quick reminder of what Ab Initio is, what it does, and what it is used for.
“Ab Initio” is a Latin term meaning “from the beginning.” Ab Initio is a tool used to extract, transform, and load data. It is also used for data analysis, data manipulation, batch processing, and graphical-user-interface-based parallel processing.
Ab Initio Interview Questions And Answers
These real-time Ab Initio interview questions are prepared by expert Ab Initio professionals working in top multinational companies such as IBM, Accenture, TCS, Infosys, Cognizant, CITI, and many more. Read the Ab Initio scenario-based interview questions and answers below and crack your interview.
Q. What is Ab Initio?
“Ab Initio” is a Latin term meaning “from the beginning.” Ab Initio is a tool used to extract, transform, and load data. It is also used for data analysis, data manipulation, batch processing, and graphical-user-interface-based parallel processing.
Q. What is the architecture of Ab Initio?
The architecture of Ab Initio includes:
- GDE (Graphical Development Environment)
- Co-operating System
- Enterprise meta-environment (EME)
- Conduct-IT
Q. What is the role of the Co-operating System in Ab Initio?
The Ab Initio Co-operating System provides features such as:
- Managing and running Ab Initio graphs and controlling the ETL processes
- Providing Ab Initio extensions to the operating system
- Monitoring and debugging ETL processes
- Metadata management and interaction with the EME
Q. What does dependency analysis mean in Ab Initio?
In Ab Initio, dependency analysis is a process through which the EME examines a project in its entirety and traces how data is transferred and transformed, component by component and field by field, within and between graphs.
Q. How is the Ab Initio EME segregated?
The Ab Initio EME is logically divided into two segments:
- Data Integration portion
- User Interface (access to the metadata information)
Q. How can you connect the EME to the Ab Initio server?
There are several ways to connect to the EME, such as:
- Setting AB_AIR_ROOT
- Logging in to the EME web interface: http://serverhost:[serverport]/abinitio
- Connecting to the EME datastore through the GDE
- Using air commands
Q. List the file extensions used in Ab Initio.
The file extensions used in Ab Initio are:
- .mp: Ab Initio graph or graph component
- .mpc: custom component or program
- .mdc: dataset or custom dataset component
- .dml: data manipulation language file or record type definition
- .xfr: transform function file
- .dat: data file (multifile or serial file)
Q. What information does a .dbc file provide to connect to the database?
A .dbc file provides the GDE with the information needed to connect to a database:
- Name and version number of the data-base to which you want to connect
- Name of the computer on which the data-base instance or server to which you want to connect runs, or on which the database remote access software is installed
- Name of the server, database instance or provider to which you want to link
Q. How can you run a graph infinitely in Ab Initio?
To execute a graph infinitely, the graph's end script should call the .ksh file of the graph. Therefore, if the graph name is abc.mp, the end script of the graph should call abc.ksh. This will run the graph infinitely.
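As a minimal sketch of the idea, assuming the deployed script sits in the sandbox run directory referenced by an AI_RUN parameter (an illustrative convention, not taken from the original answer), the end script of abc.mp could simply re-launch the deployed script:
# End script of abc.mp (hypothetical sandbox layout)
# Re-launching the deployed .ksh makes the graph start again each time it finishes.
$AI_RUN/abc.ksh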
Q. What is the difference between a “Look-up file” and a “Look-up” in Ab Initio?
A lookup file refers to one or more serial files (flat files); it is the physical file where the data for the lookup is stored. A lookup, on the other hand, is a component of an Ab Initio graph in which we can store data and retrieve it by using a key parameter.
Q. What are the different types of parallelism used in Ab Initio?
The different types of parallelism used in Ab Initio include:
- Component parallelism: a graph with multiple processes executing simultaneously on separate data uses component parallelism.
- Data parallelism: a graph that works with data divided into segments and operates on each segment separately uses data parallelism.
- Pipeline parallelism: a graph in which multiple components execute simultaneously on the same data uses pipeline parallelism. Each component in the pipeline reads continuously from the upstream components, processes the data, and writes to the downstream components, so connected components can operate in parallel.
Q. What is the Sort component in Ab Initio?
The Sort component in Ab Initio re-orders the data. It has two main parameters, “Key” and “Max-core”.
- Key: determines the collation order of the output.
- Max-core: controls how often the Sort component dumps data from memory to disk.
Q. What do the Dedup and Replicate components do?
- Dedup component: removes duplicate records.
- Replicate component: combines the data records from its inputs into one flow and writes a copy of that flow to each of its output ports.
Q. What is a partition, and what are the different types of partition components in Ab Initio?
In Ab Initio, partitioning is the process of dividing data sets into multiple sets for further processing. The different types of partition components include:
- Partition by Round-robin: distributes data evenly, in block-size chunks, across the output partitions.
- Partition by Range: divides data evenly among nodes, based on a set of partitioning ranges and a key.
- Partition by Percentage: distributes data so that the output is proportional to fractions of 100.
- Partition by Load Balance: performs dynamic load balancing.
- Partition by Expression: divides data according to a DML expression.
- Partition by Key: groups data by a key.
Q. What is a SANDBOX?
A sandbox is a collection of graphs and related files that are saved in a single directory tree and that behave as a group for the purposes of navigation, version control, and migration.
Q. What is de-partitioning in Ab Initio?
De-partitioning is done in order to read data from multiple flows or operations and to re-join data records from different flows. Several de-partition components are available, including Gather, Merge, Interleave, and Concatenate.
Q. List some of the air commands used in Ab Initio.
Air commands used in Ab Initio include:
- air object ls <EME path of the object, e.g. /Projects/edf/...>: lists the objects in a directory inside the project.
- air object rm <EME path of the object, e.g. /Projects/edf/...>: removes an object from the repository.
- air object versions -verbose <EME path of the object, e.g. /Projects/edf/...>: gives the version history of the object.
Other air commands include air object cat, air object modify, air lock show user, etc.; a short usage sketch follows this list.
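For illustration, a hedged sketch of how these commands might be used from the shell; the EME paths and file names below are hypothetical:
air object ls /Projects/edf/dml                              # list the objects under the dml directory
air object versions -verbose /Projects/edf/dml/customer.dml  # show the version history of one object
air object rm /Projects/edf/dml/old_customer.dml             # remove an obsolete object from the repository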
Q. What is the Rollup component?
The Rollup component enables users to group records on certain field values. It is a multistage transform function and consists of three stages: initialize, rollup, and finalize.
Q. What is the syntax for m_dump in Ab Initio?
m_dump is used to view the data in a multifile from the Unix prompt. Its usage includes:
- m_dump a.dml a.dat: prints the data just as it appears in the GDE when viewing the data as formatted text.
- m_dump a.dml a.dat > b.dat: redirects the output to b.dat, which acts as a serial file and can be referred to whenever required.
Q. What is the relation between the EME, the GDE, and the Co-operating System?
EME stands for Enterprise Metadata Environment, GDE for Graphical Development Environment, and the Co-operating System can be regarded as the Ab Initio server. The relationship among them is as follows: the Co-operating System is the Ab Initio server, installed on a particular operating system platform that is called the native OS. The EME is comparable to the repository in Informatica; it holds the metadata, transformations, db config files, and source and target information. The GDE is the end-user environment in which graphs (the equivalent of mappings in Informatica) are developed; the designer uses the GDE to design graphs and saves them to the EME or to a sandbox. The GDE sits on the user side, whereas the EME sits on the server side.
Q. What is the use of the Aggregate component when we have Rollup? As we know, the Rollup component in Ab Initio is used to summarize groups of data records, so where would we use Aggregate?
Aggregate and Rollup can both summarize data, but Rollup is much more convenient to use and is far more explanatory about how a particular summarization is performed. Rollup also provides additional functionality, such as input and output filtering of records. Although the two components perform the same basic action, Rollup exposes intermediate results in main memory, whereas Aggregate does not support intermediate results.
Q. What kinds of layouts does Ab Initio support?
Ab Initio supports serial and parallel layouts, and a graph can use both at the same time. A parallel layout depends on the degree of data parallelism: if the multifile system is 4-way parallel, then a component in the graph can run 4-way parallel, provided its layout is defined to match that degree of parallelism.
Q. How can you run a graph infinitely?
To run a graph infinitely, the end script of the graph should call the graph's own .ksh file. Thus, if the name of the graph is abc.mp, the end script should contain a call to abc.ksh; in this way the graph runs indefinitely.
Q. How do you add default rules in the transformer?
Double-click the transform parameter on the Parameters tab of the component properties; this opens the Transform Editor. In the Transform Editor, click the Edit menu and select Add Default Rules from the drop-down. It shows two options: 1) Match Names and 2) Use Wildcard (.*) Rule.
Q. Do you know what a local lookup is?
If your lookup file is a multifile and is partitioned/sorted on a particular key, then the local lookup function (lookup_local) can be used in place of the ordinary lookup function call. A local lookup searches only the partition that is local to the component, depending on the key.
A lookup file consists of data records that can be held in main memory. This allows the transform function to retrieve the records much faster than retrieving them from disk, and it lets the transform component process the data records of multiple files quickly.
Q. What is the difference between look-up file and look-up, with a relevant example?
Generally, a lookup file represents one or more serial files (flat files). The amount of data is small enough to be held in memory, which allows transform functions to retrieve records much more quickly than they could from disk.
A lookup is a component of an Ab Initio graph in which we can store data and retrieve it by using a key parameter. A lookup file is the physical file where the data for the lookup is stored.
Q. How many components in your most complicated graph?
It depends on the types of components you use. In general, avoid using overly complicated transform functions in a single graph.
Q. What is a lookup?
A lookup is basically a keyed dataset. It can be used to map values according to the data present in a particular file (serial or multifile). The dataset can be static as well as dynamic (for example, when the lookup file is generated in a previous phase and used as a lookup file in the current phase). Sometimes a hash join can be replaced by a Reformat plus a lookup if one of the inputs to the join contains a small number of records with a slim record length. Ab Initio has built-in functions to retrieve values from the lookup using its key, as sketched below.
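For illustration only, here is a minimal sketch of such a call inside a Reformat transform; the lookup file label "Customer_Lkp", its key cust_id, and the field names are hypothetical:
out :: reformat(in) =
begin
  out.cust_id :: in.cust_id;
  /* lookup() returns the matching record from the keyed lookup file;
     .city then selects a single field from that record */
  out.city :: lookup("Customer_Lkp", in.cust_id).city;
end;
When the lookup file is a multifile partitioned on the same key as the incoming flow, lookup_local can be called in the same way to search only the partition local to the component.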
Q. Have you worked with packages?
Multistage transform components use packages by default. In addition, a user can create his or her own set of functions in a transform function and include them in other transform functions.
Q. Have you used the Rollup component? Describe how.
If the user wants to group records on particular field values, then Rollup is the best way to do that. Rollup is a multistage transform function and it contains the following mandatory functions:
- Initialize
- Rollup
- Finalize
You also need to declare a temporary variable if you want to get counts for a particular group.
For each group, Rollup first calls the initialize function once, then calls the rollup function for each record in the group, and finally calls the finalize function once after the last rollup call, as sketched below.
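A minimal, illustrative sketch of an expanded rollup transform that counts the records in each group; the key field cust_id and the output field names are hypothetical:
type temporary_type =
record
  decimal(10) cnt;  /* temporary variable holding the running count */
end;
temp :: initialize(in) =
begin
  temp.cnt :: 0;  /* called once at the start of each group */
end;
temp :: rollup(temp, in) =
begin
  temp.cnt :: temp.cnt + 1;  /* called once for every record in the group */
end;
out :: finalize(temp, in) =
begin
  out.cust_id :: in.cust_id;  /* the group's key value */
  out.cnt :: temp.cnt;        /* produced once after the last rollup call */
end;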
Q. How do you add default rules in the transformer?
Add Default Rules opens the Add Default Rules dialog. Select one of the following: Match Names, which generates a set of rules that copy input fields to output fields with the same name, or Use Wildcard (.*) Rule, which generates a single wildcard rule that copies input fields to output fields with the same name.
1) If it is not already displayed, display the Transform Editor Grid.
2) Click the Business Rules tab if it is not already displayed.
3) Select Edit > Add Default Rules.
In the case of a Reformat, if the destination field names are the same as, or a subset of, the source field names, there is no need to write anything in the reformat xfr, unless you want a real transform beyond reducing the set of fields or splitting the flow into a number of flows; a wildcard rule like the one sketched below is enough.
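For illustration, a hedged sketch of the single wildcard rule that the Use Wildcard (.*) Rule option generates (reformat is the Reformat component's default transform name):
out :: reformat(in) =
begin
  out.* :: in.*;  /* copy every input field to the output field with the same name */
end;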
Q. What is the difference between partitioning with key and round robin?
Partition by Key (hash partition): this partitioning technique is used to partition data when the keys are diverse. If a particular key value occurs in very large volume, there can be a large data skew, but this method is the one most often used for key-based parallel data processing.
Round-robin partitioning is another partitioning technique, used to distribute the data uniformly across the destination data partitions. The skew is zero when the number of records is divisible by the number of partitions. A real-life analogy is how a pack of 52 cards is dealt to 4 players in a round-robin manner.
Q. How do you improve the performance of a graph?
There are many ways the performance of the graph can be improved.
1) Use a limited number of components in a particular phase
2) Use optimum value of max core values for sort and join components
3) Minimize the number of sort components
4) Minimize sorted join component and if possible replace them by in-memory join/hash join
5) Use only required fields in the sort, reformat, join components
6) Use phasing/flow buffers in case of merge, sorted joins
7) If the two inputs are huge then use sorted join, otherwise use hash join with proper driving port
8) For large dataset don’t use broadcast as partitioner
9) Minimize the use of regular expression functions like re_index in the transform functions
10) Avoid repartitioning data unnecessarily
Try to run the graph as long as possible in the MFS (multifile system). For this, the input files should be partitioned and, if possible, the output files should also be partitioned.
Q. How do you truncate a table?
From Ab Initio, run the Run SQL component using the DDL “truncate table <table name>”, or use the Truncate Table component in Ab Initio.
Q. Have you ever encountered an error called “depth not equal”?
When two components are linked together and their layouts do not match, this problem can occur during compilation of the graph. A solution is to use a partitioning component in between where the layout changes.
Q. What function would you use to convert a string into a decimal?
In this case no specific function is required if the sizes of the string and the decimal are the same; a decimal cast with the size in the transform function will suffice. For example, if the source field is defined as string(8) and the destination as decimal(8) (say the field name is field1):
out.field1 :: (decimal(8))in.field1;
If the destination field size is smaller than the input, the string_substring function can be used as follows. Say the destination field is decimal(5):
out.field1 :: (decimal(5))string_lrtrim(string_substring(in.field1, 1, 5)); /* string_lrtrim trims leading and trailing spaces */
Q. What are primary keys and foreign keys?
In an RDBMS, the relationship between two tables is represented as a primary key and foreign key relationship, where the primary key table is the parent table and the foreign key table is the child table. The criterion for relating the two tables is that there should be a matching column.
Q. What is an outer join?
An outer join is used when one wants to select all the records from a port, whether or not they satisfy the join criteria.