
Informatica Scenario based Questions and Solutions (Part-1)

Scenario 1: We have a source table containing 3 columns: Col1, Col2 and Col3. There is only 1 row in the table, as follows:

Col1 Col2 Col3
--------------
a    b    c

There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:

Col
---
a
b
c

Solution: Without using a Normalizer transformation: Create 3 Expression transformations exp_1, exp_2 and exp_3 with 1 port each. Connect Col1 from the Source Qualifier to the port in exp_1, Col2 to the port in exp_2, and Col3 to the port in exp_3. Make 3 instances of the target. Connect the port from exp_1 to target_1, the port from exp_2 to target_2, and the port from exp_3 to target_3.

Scenario 2: There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into one target and all the duplicate rows (only 1 occurrence) into another target.

Solution: Bring all the columns from the Source Qualifier into an Aggregator transformation. Check "group by" on the key column. Create a new output port count_col in the Aggregator transformation and give it the expression COUNT(key_column). Add a Router transformation with 2 groups: Dup and Non-Dup. Set the router condition count_col > 1 in the Dup group and count_col = 1 in the Non-Dup group. Load these 2 groups into different targets.

Scenario 3: There is a source table containing 2 columns Col1 and Col2 with data as follows:

Col1 Col2
---------
a    l
b    p
a    m
a    n
b    q
x    y

Design a mapping to load a target table with the following values from the above source:

Col1 Col2
---------
a    l,m,n
b    p,q
x    y

Solution: Use a Sorter transformation after the Source Qualifier to sort the rows with Col1 as the key. Build an Expression transformation with the following ports (the order of the ports must also be the same):

1. Col1_prev: A variable port. Its expression should contain a variable, e.g. val.
2. Col1: An Input/Output port from the Sorter transformation.
3. Col2: An input port from the Sorter transformation.
4. val: A variable port. Its expression should contain Col1.
5. Concatenated_value: A variable port. Its expression should be DECODE(Col1, Col1_prev, Concatenated_value||','||Col2, Col2), i.e. keep appending Col2 while Col1 is unchanged, and restart from Col2 when a new Col1 value begins.
6. Concatenated_Final: An output port containing the value of Concatenated_value.

After the Expression, build an Aggregator transformation. Bring the ports Col1 and Concatenated_Final into the Aggregator and group by Col1. Do not give any expression; the Aggregator will then effectively return the last row of each group. Connect the ports Col1 and Concatenated_Final from the Aggregator to the target table.

Scenario 4: Design an Informatica mapping to load the first half of the records into one target and the other half into a separate target.

Solution: You will have to assign a row number to each record. To achieve this, either use Oracle's pseudo column ROWNUM in the Source Qualifier query or use the NEXTVAL port of a Sequence Generator. Let's name this column rownumber. From the Source Qualifier, create 2 pipelines:

First pipeline: Carry the first port Col1 from the SQ transformation into an Aggregator transformation. Create a new output port tot_rec and give it the expression COUNT(Col1). Do not group by any port; this gives the total number of records in the source table. Carry tot_rec into an Expression transformation and add another port DUMMY with default value 1.

Second pipeline: From the SQ transformation, carry all the ports (including the additional port rownumber generated by ROWNUM or the Sequence Generator) into an Expression transformation. Add another port DUMMY with default value 1.

Join these 2 pipelines with a Joiner transformation on the common port DUMMY. Carry all the source table ports and the 2 additional ports tot_rec and rownumber into a Router transformation. Add 2 groups in the Router: FIRST_HALF and SECOND_HALF. Give the condition rownumber <= tot_rec/2 in FIRST_HALF and rownumber > tot_rec/2 in SECOND_HALF. Connect the 2 groups to 2 different targets.

Count of Individual Characters in a String

Write a query to display the count of each individual character in a given string. E.g. if the given string is "mesopotamia", the output should be:

m 2
e 1
i 1
s 1
o 2
p 1
t 1
a 2

Solution:

select ind, count(ind)
from (
  select substr(given_string, rownum, 1) ind
  from (select 'mesopotamia' given_string from all_objects)
  where rownum <= length(given_string)
)
group by ind
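The same per-character tally can be sanity-checked outside the database; a minimal Python sketch of what the query computes (illustration only, not part of the original solution):

```python
from collections import Counter

def char_counts(s):
    # Tally each character of the string, like the SUBSTR/GROUP BY query.
    return Counter(s)

print(char_counts("mesopotamia"))
```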

Count of a Particular Character in a String

Write a query to count occurrences of a particular character in a string. E.g. return the number of times 'N' appears in the string 'NITININ'. The result should be 3.

Solution:

select length('NITININ') - length(replace('NITININ','N','')) from dual
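The length-difference trick behind this query works in any language; a quick Python sketch of the same idea:

```python
def count_char(s, ch):
    # Same trick as the SQL: remove the character, then compare lengths.
    return len(s) - len(s.replace(ch, ""))

print(count_char("NITININ", "N"))  # 3
```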

Generate a Pyramid

You have a table pyramid_table with 2 columns, CH and NUM. It contains 1 row: column CH contains a single character and column NUM contains a number. Write a SQL query to generate a pyramid of CH characters of height NUM. E.g. if CH is '*' and NUM is 5, the output should be:

    *
   ***
  *****
 *******
*********

Solution:

SELECT lpad(' ', num-rownum, ' ')
       || substr(lpad(ch, 2*num, ch), 1, 2*rownum-1)
       || lpad(' ', num-rownum, ' ') as pyramid
FROM pyramid_table, all_objects
WHERE rownum <= num
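The LPAD/SUBSTR arithmetic is easier to see in a procedural sketch; a small Python illustration of the same shape (row i has 2*i - 1 characters, centered on the base width):

```python
def pyramid(ch, num):
    # Row i (1-based) has 2*i - 1 characters, centered to the base
    # width of 2*num - 1 -- mirroring the LPAD/SUBSTR trick in the SQL.
    return [(ch * (2 * i - 1)).center(2 * num - 1) for i in range(1, num + 1)]

print("\n".join(pyramid("*", 5)))
```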

LIKE doesn't like UNDERSCORE

Write a query to return those values from an Oracle table where the column myword contains an UNDERSCORE ( _ ) in its value. E.g. if the column myword contains the following values:

myword
----------
FAC_FAC
FACFAC
_FACE
FACE
FACE_

the output should be:

myword
----------
FAC_FAC
_FACE
FACE_

Solution: Most people would write the query as follows:

select myword from mytable where myword like '%_%'

However, this query will not give the expected result: underscore is a single-character wildcard in LIKE, so it returns every value of myword. The correct query is as follows:

select myword from mytable where myword like '%\_%' escape '\'

We have to escape the '_' character to match it literally.
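The intended filter is just a literal substring check; a Python sketch over the sample values above makes the distinction between "wildcard" and "literal" explicit:

```python
words = ["FAC_FAC", "FACFAC", "_FACE", "FACE", "FACE_"]

# LIKE '%_%' treats '_' as "any single character", so it matches every
# word of length >= 1; the intended test is a literal substring check:
with_underscore = [w for w in words if "_" in w]
print(with_underscore)  # ['FAC_FAC', '_FACE', 'FACE_']
```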

Divisions

A table contains certain columns, one of which is Division. The user enters a division name through a prompt. Write a query for the following requirement: if the value entered in the prompt is 'a' or 'b', all records should be displayed; otherwise, only the records of that particular division should be displayed.

Solution: The query would be written as follows:

select * from my_table
where division = &division
   or '^' = decode(&division, 'a', '^', 'b', '^', &division)

Here any special character (like '^' in the above query) can be chosen to return all rows when the division value entered at the prompt is either 'a' or 'b': DECODE yields '^' in those cases, making the second condition true for every row, while for any other value the first condition restricts the result to that division.
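The DECODE-with-sentinel predicate can be sketched in Python to verify the two branches (the row data here is made up for illustration):

```python
def filter_rows(rows, division):
    # Mirrors the SQL: DECODE maps 'a' or 'b' to the sentinel '^',
    # which makes the second condition true for every row.
    sentinel = "^"
    decoded = sentinel if division in ("a", "b") else division
    return [r for r in rows if r["division"] == division or sentinel == decoded]

rows = [{"division": d} for d in ["a", "b", "c", "c"]]
print(filter_rows(rows, "a"))  # all rows
print(filter_rows(rows, "c"))  # only the 'c' rows
```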

Informatica Scenario - 5

Scenario: A source table contains emp_name and salary columns. Develop an Informatica mapping to load all records with the 5th highest salary into the target table.

Solution: The mapping will contain the following transformations after the Source Qualifier transformation:

1. Sorter: It will contain 2 ports, emp_name and salary. The property 'Direction' will be set to 'Descending' on the key 'salary'.
2. Expression transformation: It will have 6 ports as follows:
   a> emp_name: An I/O port connected directly from the previous Sorter transformation.
   b> salary_prev: A variable port. Give any variable name, e.g. val, in its Expression column.
   c> salary: An I/O port connected directly from the previous transformation.
   d> val: A variable port. The Expression column of this port will contain salary.
   e> rank: A variable port. The Expression column will contain decode(salary, salary_prev, rank, rank+1).
   f> rank_o: An output port containing the value of rank.
3. Filter transformation: It will have 2 I/O ports, emp_name and salary, with the filter condition rank_o = 5.

The ports emp_name and salary from the Filter transformation are connected to the target.
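The running-rank logic of the Expression transformation amounts to a dense rank over descending salaries; a Python sketch of the same walk (sample data is made up for illustration):

```python
def fifth_highest(records):
    # Sort by salary descending, then keep a running dense rank that
    # only advances when the salary changes -- the
    # decode(salary, salary_prev, rank, rank+1) logic of the mapping.
    out, rank, prev = [], 0, None
    for name, salary in sorted(records, key=lambda r: r[1], reverse=True):
        if salary != prev:
            rank += 1
            prev = salary
        if rank == 5:
            out.append((name, salary))
    return out

emps = [("e%d" % i, s) for i, s in enumerate([90, 80, 80, 70, 60, 50, 50, 40])]
print(fifth_highest(emps))
```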

Oracle BULK Loader

SQL*Loader (sqlldr) is a bulk loader utility for loading data from external files into an Oracle database. To use this utility, a control file is required, which specifies how the data should be loaded, and a data file is required, which supplies the data to be loaded. A sample control file is as follows:

LOAD DATA
INFILE "datafile"
APPEND INTO TABLE "tablename"
FIELDS TERMINATED BY "separator"
("list of all attribute names to be loaded")

A sample data file can be of the following form:

1,'Gaurav'
2,'ABC'
3,'PQR'
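As a rough sketch, the control file and data file above could be generated programmatically before invoking sqlldr. The file names, table name and column list here are illustrative assumptions, not from the original post:

```python
# Hypothetical control file for a table emp(id, name).
control = """LOAD DATA
INFILE 'emp.dat'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ','
(id, name)
"""

rows = [(1, "Gaurav"), (2, "ABC"), (3, "PQR")]

with open("emp.ctl", "w") as f:
    f.write(control)
with open("emp.dat", "w") as f:
    for rid, name in rows:
        f.write("%d,'%s'\n" % (rid, name))

# Then (not run here): sqlldr userid=scott/tiger control=emp.ctl log=emp.log
```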

Simple things about Lookup Transformation


Q. Define lookup transformation.

A Lookup transformation is used to look up data in a data pool. This data pool may be a flat file, relational table, view or synonym. You can also create a lookup definition from a source qualifier. The Integration Service queries the lookup source based on the lookup ports in the transformation and a lookup condition, and returns the result of the lookup to the target or to another transformation. Lookups are generally used to get a related value, to perform a calculation using that related value, or to update a slowly changing dimension.

When you configure a flat file Lookup transformation for sorted input, the condition columns must be grouped. If the condition columns are not grouped, the Integration Service cannot cache the lookup and fails the session. For optimal caching performance, sort the condition columns. The Integration Service always caches flat file and pipeline lookups. If you configure a Lookup transformation to use a dynamic cache, you can use only the equality operator (=) in the lookup condition.

Q. What are the differences between connected and unconnected lookups?

1. A connected lookup uses a dynamic or static cache, while an unconnected lookup uses only a static cache.
2. A connected lookup can return multiple columns from the same row or insert into the dynamic lookup cache, while an unconnected lookup returns one column from each row.
3. A connected lookup supports user-defined default values, while an unconnected lookup does not.

Q. How can you return multiple ports from an unconnected lookup transformation?

An unconnected lookup transformation returns only 1 port. To return multiple ports, concatenate those ports in the overridden lookup query and return the concatenated port, then separate the columns again in an Expression transformation.

Q. How can you optimize a lookup transformation?

1. If you have privileges to modify the database containing a lookup table, you can improve lookup initialization time by adding an index to the lookup table.
2. You can improve performance by indexing the columns in the lookup ORDER BY.
3. By default, the Integration Service generates an ORDER BY clause for a cached lookup that contains all lookup ports. To increase performance, you can suppress the default ORDER BY clause and enter an override ORDER BY with fewer columns. Place two dashes (--) as a comment notation after the ORDER BY clause to suppress the clause that the Integration Service generates.
4. If you include more than one lookup condition, place the conditions in the following order to optimize lookup performance:
   - Equal to (=)
   - Less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=)
   - Not equal to (!=)
5. Improve session performance by caching small lookup tables.
6. If the lookup table is on the same database as the source table in the mapping and caching is not feasible, join the tables in the source database rather than using a Lookup transformation.
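The concatenate-and-split workaround for unconnected lookups can be sketched in plain Python; the lookup name, column values and the '|' delimiter below are illustrative assumptions:

```python
# The unconnected lookup returns a single concatenated value per key ...
lookup_cache = {
    101: "Gaurav|Sales|Delhi",  # emp_id -> name|dept|city, '|'-delimited
}

def lkp_emp(emp_id):
    # Stands in for the unconnected lookup call, e.g. :LKP.lkp_emp(emp_id).
    return lookup_cache.get(emp_id, "||")

# ... and a downstream Expression transformation splits it back into ports.
name, dept, city = lkp_emp(101).split("|")
print(name, dept, city)
```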

