
How can you recognise whether or not the newly added rows in the source get inserted into the target?

1. In a Type 2 mapping we have three options to recognise the newly added rows: version number, flag value, and effective date range.
2. You can see it in the session log: take some 10 new records, run the workflow, then look into the session log, where you can find the affected, applied, and rejected rows.
3. Maintain a timestamp field in the target and map it to SYSDATE. Query the target based on the timestamp field to check the count.
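For the third approach, a minimal check query might look like the following (the table and column names are placeholders for illustration):

  SELECT COUNT(*)
  FROM   target_table
  WHERE  load_timestamp >= TRUNC(SYSDATE);

Running this after the session shows how many rows were stamped during today's load.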

What is the difference between Informatica 7.0 and 8.0 ?

A. Features of Informatica 8. The architecture of PowerCenter 8 has changed a lot:


1. PC8 is service-oriented for modularity, scalability and flexibility.
2. The Repository Service and Integration Service (as replacement for Rep Server and Informatica Server) can be run on
different computers in a network (so called nodes), even redundantly.
3. Management is centralized, that means services can be started and stopped on nodes via a central web interface.
4. Client Tools access the repository via that centralized machine, resources are distributed dynamically.
5. Running all services on one machine is still possible, of course.
6. It supports unstructured data, including spreadsheets, email, Microsoft Word files, presentations and PDF
documents. It provides high availability and seamless failover, eliminating single points of failure.
7. It has added performance improvements (to bump up system performance, Informatica has added "pushdown
optimization", which moves data transformation processing to the native relational database I/O engine whenever it is
most appropriate).
8. Informatica has now added more tightly integrated data profiling, cleansing, and matching capabilities.
9. Informatica has added a new web based administrative console.
10.Ability to write a Custom Transformation in C++ or Java.
11.Midstream SQL transformation has been added in 8.1.1, not in 8.1.
12.Dynamic configuration of caches and partitioning
13.Java transformation is introduced.
14.User defined functions
15.PowerCenter 8 release has "Append to Target file" feature.
B. 1. Informatica 7.0 is a client-server architecture, whereas 8.0 is a service-oriented architecture.
2. With 7.0 migration is critical, whereas with 8.0 migration is possible and easy.
3. Grid and pushdown optimization are not there in 7.0 but are available in 8.0.
4. With 7.0 we cannot change the lookup cache size, but with 8.0 we can.
5. Encryption and decryption are not possible in 7.0 but are possible with 8.0.

C. The main differences are: 1. Node and domain concepts are available in 8.x; these concepts are not available in 7.x.
2. Pushdown optimization is available in 8.x.
3. In 8.x two new transformations were added: the SQL and Java transformations.

Differences between Normalizer and Normalizer transformation.

The Normalizer transformation is used to generate multiple rows from a single source row, pivoting repeating (multiple-occurring) columns into separate rows.
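As an illustration (the column names here are made up): a single source row

  ACCOUNT, Q1_SALES, Q2_SALES, Q3_SALES, Q4_SALES
  A001,    100,      200,      150,      300

with the sales column set to occur four times in the Normalizer produces four output rows, one per quarter, with the generated column ID (GCID) identifying which occurrence each row came from:

  A001, 1, 100
  A001, 2, 200
  A001, 3, 150
  A001, 4, 300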

What is the target load order? You specify the target load order based on the source qualifiers in a mapping. If you have multiple source
qualifiers connected to multiple targets, you can designate the order in which the Informatica server loads data into the targets.
Explain about Informatica server Architecture?

The Informatica server components are the load manager, the data transformation manager (DTM), and its reader, temporary storage (shared memory) and writer threads.
First the load manager sends a request to the reader; the reader reads the data from the source and places it into the temporary storage area,
the data transformation manager manages the load and sends requests to the writer on a first-in, first-out basis, and the writer takes the
data from the temporary storage area and loads it into the target.

When we start the workflow, the load manager starts the session and the DTM dispatches the work to several threads:
Reader thread:
a subprogram that uses the source table and source connection to read the source data from the source database.
Shared memory:
the data extracted by the reader is stored in shared memory; this is called the staging area.
Writer thread:
collects the data from shared memory and uses the target table and target connection to load the data into the
target database.

8.x has a service-oriented architecture (SOA), whereas 7.x is server-oriented.

How do you handle decimal places while importing a flatfile into informatica?

While importing the flat file definition, just specify the scale for the numeric datatype. In the mapping, the flat file source supports only the
number datatype (no decimal or integer). The Source Qualifier associated with that source will have a decimal datatype for that number
port of the source.

source -> number datatype port -> SQ -> decimal datatype. Integer is not supported, hence decimal takes care of it.

Alternatively, import the field as a string and then use an expression to convert it, so that we can avoid truncation of decimal places in the source itself.
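A minimal sketch of the expression approach, assuming the string port is named in_AMOUNT_STR and two decimal places are wanted (both are illustrative assumptions):

  out_AMOUNT = TO_DECIMAL(in_AMOUNT_STR, 2)

The output port out_AMOUNT would be defined as decimal with the appropriate precision and scale.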

How do we do unit testing in informatica? How do we load data in informatica ?

Unit testing is of two types: 1. Quantitative testing 2. Qualitative testing


Steps:
1. First validate the mapping.
2. Create a session on the mapping and then run the workflow.
Once the session has succeeded, right-click on the session and go to the statistics tab.
There you can see how many source rows were applied, how many rows were loaded into the targets,
and how many rows were rejected. This is called quantitative testing.
If the rows are loaded successfully, then we go for qualitative testing.
Steps:
1. Take the DATM (the document where all business rules are mapped to the corresponding source columns) and
check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the
DATM, go and check the code and rectify it. This is called qualitative testing. This is what a developer does in
unit testing.
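For the quantitative check, simple count and difference queries against source and target are often enough (the table and column names here are hypothetical, and the MINUS check only makes sense for columns that pass through unchanged):

  SELECT COUNT(*) FROM src_orders;
  SELECT COUNT(*) FROM tgt_orders;

  SELECT order_id, order_amount FROM src_orders
  MINUS
  SELECT order_id, order_amount FROM tgt_orders;

A non-empty MINUS result points to rows that were dropped or changed unexpectedly.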
What is the use of incremental aggregation? Explain in brief with an example.
It is a session option. When the Informatica server performs incremental aggregation, it passes new source data through the
mapping and uses historical cache data to perform new aggregation calculations incrementally. We use it for performance.

When using incremental aggregation, you apply captured changes in the source to aggregate calculations in a session. If the source
changes incrementally and you can capture changes, you can configure the session to process those changes. This allows the
Integration Service to update the target incrementally, rather than forcing it to process the entire source and recalculate the same
data each time you run the session.
For example, you might have a session using a source that receives new data every day. You can capture those incremental changes
because you have added a filter condition to the mapping that removes pre-existing data from the flow of data. You then enable
incremental aggregation.

When the session runs with incremental aggregation enabled for the first time on March 1, you use the entire source. This allows the
Integration Service to read and store the necessary aggregate data. On March 2, when you run the session again, you filter out all
the records except those time-stamped March 2. The Integration Service then processes the new data and updates the target
accordingly.
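A minimal sketch of such a filter condition, assuming the source rows carry a TXN_DATE column (an illustrative name) and that only rows stamped on the current run date should pass:

  TXN_DATE >= TRUNC(SESSSTARTTIME)

SESSSTARTTIME is the built-in session start time; everything older is filtered out before the Aggregator, so only the incremental data reaches the aggregate cache.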

Consider using incremental aggregation in the following circumstances:

You can capture new source data. Use incremental aggregation when you can capture new source data each time you run the
session. Use a Stored Procedure or Filter transformation to process new data.

Incremental changes do not significantly change the target. Use incremental aggregation when the changes do not significantly
change the target. If processing the incrementally changed source alters more than half the existing target, the session may not
benefit from using incremental aggregation. In this case, drop the table and recreate the target with complete source data.

Note: Do not use incremental aggregation if the mapping contains percentile or median functions. The Integration Service uses
system memory to process these functions in addition to the cache memory you configure in the session properties. As a result, the
Integration Service does not store incremental aggregation values for percentile and median functions in disk caches.

Whenever a session is created for a mapping Aggregate Transformation, the session option for Incremental Aggregation can be
enabled. When PowerCenter performs incremental aggregation, it passes new source data through the mapping and uses historical
cache data to perform new aggregation calculations incrementally.

What is power center repository?

Standalone repository. A repository that functions individually, unrelated and unconnected to other repositories.
Global repository. (PowerCenter only.) The centralized repository in a domain, a group of connected repositories. Each domain can
contain one global repository. The global repository can contain common objects to be shared throughout the domain through
global shortcuts.
Local repository. (PowerCenter only.) A repository within a domain that is not the global repository. Each local repository in the
domain can connect to the global repository and use objects in its shared folders.

The PowerCenter repository is used to store Informatica's metadata.

Information such as mapping names, locations, target definitions, source definitions, transformations and data flow is stored as metadata in
the repository.

Is sorter an active or passive transformation?What happens if we uncheck the distinct option in sorter.Will it be under active or
passive transformation?

Sorter is an active transformation. If you don't check the distinct option it is considered a passive transformation, because the
distinct option is what eliminates the duplicate records from the table.

How does the Informatica server sort string values in the Rank transformation?

We can run the Informatica server either in Unicode data movement mode or ASCII data movement mode.
Unicode mode: in this mode the Informatica server sorts the data according to the sort order configured for the session.

ASCII mode: in this mode the Informatica server sorts the data in binary order.
What is the difference between stop and abort

Stop: If the session you want to stop is part of a batch, you must stop the batch;

if the batch is part of a nested batch, stop the outermost batch.

Abort:

You can issue the abort command; it is similar to the stop command except that it has a 60-second timeout.

If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.

Here's the difference:

ABORT is equivalent to:


1. kill -9 on Unix (not kill -7, but kill -9)
2. SIGTERM / forced ABEND on the mainframe
3. a forced quit of the application on Windows.

What does this do?


Each session uses SHARED/LOCKED (semaphores) memory blocks. The ABORT function kills JUST THE CODE threads, leaving the
memory LOCKED and SHARED and allocated. The good news: It appears as if AIX Operating system cleans up these lost memory
blocks. The bad news? Most other operating systems DO NOT CLEAR THE MEMORY, leaving the memory "taken" from the system.
The only way to clear this memory is to warm-boot/cold-boot (restart) the informatica SERVER machine, yes, the entire box must be
re-started to get the memory back.

If you find your box running slower and slower over time, or not having enough memory to allocate new sessions, then I suggest that
ABORT not be used.

So then the question is: When I ask for a STOP, it takes forever. How do I get the session to stop fast?

well, first things first. STOP is a REQUEST to stop. It fires a request (equivalent to a control-c in SQL*PLUS) to the source database,
waits for the source database to clean up. The bigger the data in the source query, the more time it takes to "roll-back" the source
query, to maintain transaction consistency in the source database. (ie: join of huge tables, big group by, big order by).

It then cleans up the buffers in memory by releasing the data (without writing to the target) but it WILL run the data all the way
through to the target buffers, never sending it to the target DB. The bigger the session memory allocations, the longer it takes to
clean up.

Then it fires a request to stop against the target DB, and waits for the target to roll-back. The higher the commit point, the more
data the target DB has to "roll-back".

FINALLY, it shuts the session down.

WHAT IF I NEED THE SESSION STOPPED NOW?


Pick up the phone and call the source system DBA, have them KILL the source query IN THE DATABASE. This will send an EOF (end of
file) downstream to Informatica, and Infa will take less time to stop the session.

If you use abort, be aware, you are choosing to "LOSE" memory on the server in which Informatica is running (except AIX).

If you use ABORT and you then re-start the session, chances are, not only have you lost memory - but now you have TWO competing
queries on the source system after the same data, and you've locked out any hope of performance in the source database. You're
competing for resources with a defunct query that's STILL rolling back.

How can you create or import flat file definition in to the warehouse designer?

You cannot create or import a flat file definition into the Warehouse Designer directly. Instead you must analyze the file in the Source
Analyzer, then drag it into the Warehouse Designer.
When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target
definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica server runs
the session, it creates and loads the flat file.

How can you improve session performance in aggregator transformation?

There are three ways to improve session performance for an Aggregator transformation:

A) Tune the cache sizes.
1) Size of data cache = bytes required for variable columns + bytes required for output columns.
2) Size of index cache = size of the ports used in the group-by clause.

B) If you provide sorted data for the group-by ports, aggregation will be faster, so sort the ports used in the Aggregator's group-by
in an upstream Sorter and enable the Sorted Input option.

C) We can use incremental aggregation if the data that has already been aggregated will not change.

How can we use pmcmd command in a workflow or to run a session

By using a Command task: pmcmd startworkflow -f <foldername> <workflowname>
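A fuller invocation, run from the command line rather than a Command task, might look like the following (the service, domain, credential and folder names are placeholders):

  pmcmd startworkflow -sv MyIntegrationService -d MyDomain -u admin -p secret -f MyFolder wf_daily_load

The -sv, -d, -u, -p and -f options identify the Integration Service, domain, user, password and folder; the last argument is the workflow name.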

When we create a target as a flat file and the source as Oracle, how can I specify the first row as column names in the flat file?

When importing a flat file into the Target Designer, a flat file import wizard appears. In it there is an option 'Import field names from
first line'. Just check this option so the Integration Service treats the first row's values as column names.

How do you configure mapping in informatica

You should configure the mapping with the least number of transformations and expressions to do the most amount of work
possible. You should minimize the amount of data moved by deleting unnecessary links between transformations.
For transformations that use data cache (such as Aggregator, Joiner, Rank, and Lookup transformations), limit connected
input/output or output ports. Limiting the number of connected input/output or output ports reduces the amount of data the
transformations store in the data cache.

You can also perform the following tasks to optimize the mapping:

Configure single-pass reading.


Optimize datatype conversions.
Eliminate transformation errors.
Optimize transformations.
Optimize expressions.

What are the output files that the Informatica server creates during a session run?

Informatica server log: the Informatica server (on Unix) creates a log for all status and error messages (default name: pm.server.log). It
also creates an error log for error messages.
These files are created in the Informatica home directory.
Session log file: the Informatica server creates a session log file for each session. It writes information about the session into the log file, such as
the initialization process, creation of SQL commands for the reader and writer threads, errors encountered, and the load summary. The amount of detail in the session log file depends on
the tracing level that you set.

Session detail file: this file contains load statistics for each target in the mapping. Session details include information such as table
name and the number of rows written or rejected. You can view this file by double-clicking on the session in the Monitor window.

Performance detail file: this file contains information known as session performance details, which helps you see where performance
can be improved. To generate this file, select the performance detail option in the session property sheet.

Reject file: this file contains the rows of data that the writer does not write to targets.

Control file: the Informatica server creates a control file and a target file when you run a session that uses the external loader. The control
file contains information about the target flat file, such as the data format and loading instructions for the external loader.
Post-session email: post-session email allows you to automatically communicate information about a session run to designated
recipients. You can create two different
messages: one if the session completes successfully, the other if the session fails.

Indicator file: if you use a flat file as a target, you can configure the Informatica server to create an indicator file. For each target
row, the indicator file contains a number to indicate
whether the row was marked for insert, update, delete or reject.
Output file: if the session writes to a target file, the Informatica server creates the target file based on the file properties entered in the
session property sheet.

Cache files: When the informatica server creates memory cache it also creates cache files.

The Informatica server creates index and data cache files for the following transformations:
Aggregator transformation
Joiner transformation
Rank transformation
Lookup transformation

Can anyone explain error handling in informatica with examples so that it will be easy to explain the same in the interview.

Go to the session log file; there we will find information regarding the session initialization process, errors
encountered, and the load summary. By looking at the errors encountered during the session run, we can resolve them.

There is one file called the bad file, which generally has the format *.bad and contains the records rejected by the Informatica
server. There are two kinds of indicators, one for the rows and one for the columns. The row indicator signifies what
operation was going to take place (i.e. insertion, deletion, update, etc.). The column indicators contain information regarding why the
column has been rejected (such as violation of a not-null constraint, value error, overflow, etc.). If one rectifies the errors in the data
present in the bad file and then reloads the data into the target, the table will contain only valid data.

When do we use a dynamic cache and when do we use a static cache in connected and unconnected Lookup transformations?

We use a dynamic cache only for a connected lookup. We use a dynamic cache to check whether the record already exists in the target
table or not, and depending on that we insert, update or delete the records using an Update Strategy. A static cache is the default cache
for both connected and unconnected lookups. If you select a static cache for the lookup table, it won't update the cache, and the rows in the
cache remain constant. We use a static cache to check results; slowly changing records are updated with the dynamic cache.

Dynamic cache: to cache a target table or flat file source and 1) insert new rows

or 2) update existing rows in the cache,
we use a Lookup transformation with a dynamic cache.
Static cache: we can configure a static (read-only) cache for any lookup source. By default the Integration Service creates a static
cache. It caches the lookup table, and when the lookup condition is true it returns a value.

When do you use an unconnected lookup and a connected lookup?

What is the difference between a dynamic and a static lookup, and why and when do we use these types of lookups (i.e. dynamic and static)?

A static lookup cache is built once at the start of the lookup and is not updated while the session runs; a dynamic lookup cache is updated
as rows pass through, with the Integration Service inserting or updating rows in the cache so that later rows can see them (typically used when the
lookup table is also the target being loaded). Building a lookup cache adds to the session start-up time, but it saves time overall because Informatica
does not need to hit the database for every row that needs a lookup. Depending on how many rows in your mapping need a lookup, you can decide
on this; also remember that the lookup cache takes up space, so select only those columns which are needed.

Unconnected Lookup:
Physically unconnected from other transformations - no data flow
arrows lead to or from an unconnected Lookup.
The lookup is called from the point in the mapping that needs it, so fewer lookups are performed.
The lookup function can be set within any transformation that supports
expressions.

What is source qualifier transformation?

The Source Qualifier transformation is like a wrapper over the database data: it represents the rows from the relational source in tabular form in such a way
that the Informatica server recognises them easily while loading data from source to targets through the other transformations.
There are three more specialised types:
Application Source Qualifier: for ERP sources
XML Source Qualifier: for XML files
MQ Source Qualifier: for IBM MQSeries message sources

How can we partition a session in Informatica?

The Informatica PowerCenter Partitioning option optimizes parallel processing on multiprocessor hardware by providing a thread-
based architecture and built-in data partitioning.
GUI-based tools reduce the development effort necessary to create data partitions and streamline ongoing troubleshooting and
performance tuning tasks, while ensuring data integrity throughout the execution process. As the amount of data within an
organization expands and real-time demand for information grows, the PowerCenter Partitioning option
enables hardware and applications to provide outstanding performance and jointly scale to handle large volumes of data and users.

How to import oracle sequence into Informatica.

Create a stored procedure (or function) that returns the next value of the sequence, and call it from Informatica
with the help of a Stored Procedure transformation.
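A minimal sketch of such a wrapper on the Oracle side, assuming a sequence named emp_seq (both names here are placeholders):

  CREATE OR REPLACE FUNCTION get_next_key RETURN NUMBER IS
    v_key NUMBER;
  BEGIN
    SELECT emp_seq.NEXTVAL INTO v_key FROM dual;
    RETURN v_key;
  END;
  /

The Stored Procedure transformation then calls get_next_key and the returned value is mapped to the target key port.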

What are the real-time problems that generally come up while running a mapping or any transformation? Can anybody explain with
an example?

1) Informatica uses ODBC connections to connect to the databases.


The database passwords (production) are changed periodically
and the same is not updated on the Informatica side.
Your mappings will fail in this case and you will get a database connectivity error.
2) If you are using an Update Strategy transformation in the mapping, in the session properties
you have to select Treat Source Rows As: Data Driven. If we do not select this, the Informatica
server will ignore updates and only insert rows.
3) If we have mappings loading multiple target tables, we have to provide the Target Load Plan
in the sequence we want them to get loaded.
4) Error: "Snapshot too old" is a very common error when using Oracle tables. We get this error
while using very large tables. Ideally we should schedule these loads when the server is not very
busy (meaning when no other loads are running).
5) We might get some poor performance issues while reading from large tables. All the source tables
should be indexed and updated regularly.

How will you create a header and footer in the target using Informatica?

If your focus is on flat files, then one can set it in the file properties while creating a mapping, or at the session level in the session
properties.

What are the two types of processes that Informatica uses to run a session?

Load manager Process: Starts the session, creates the DTM process, and sends post-session email when the session completes.
The DTM process. Creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session
operations.

What are the session parameters?

Session parameters are like mapping parameters: they represent values you might want to change between sessions, such as database
connections or source files.
The Server Manager also allows you to create user-defined session parameters. The following are user-defined session parameters:
Database connections
Source file name: use this parameter when you want to change the name or location of
a session source file between session runs.
Target file name: use this parameter when you want to change the name or location of
a session target file between session runs.
Reject file name: use this parameter when you want to change the name or location of
session reject files between session runs.
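Session parameters are usually supplied through a parameter file. A minimal sketch, with all folder, workflow, session, connection and path names invented for illustration:

  [MyFolder.WF:wf_daily_load.ST:s_m_load_orders]
  $DBConnection_Source=ORA_STAGE
  $InputFile_Orders=/data/in/orders.dat
  $BadFile_Orders=/data/bad/orders.bad

The file is referenced in the session or workflow properties, and the $-prefixed names are used in place of the hard-coded values.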

What are two modes of data movement in Informatica Server?

The data movement mode depends on whether Informatica Server should process single byte or multi-byte character data. This
mode selection can affect the enforcement
of code page relationships and code page validation in the Informatica Client and Server.
a) Unicode - IS allows 2 bytes for each character and uses additional byte for each non-ascii character (such as Japanese characters)
b) ASCII - IS holds all data in a single byte.
The IS data movement mode can be changed in the Informatica Server configuration parameters. This comes into effect once you
restart the Informatica Server.

How can you say that the Union transformation is an active transformation?

As we are combining the results of two SELECT queries using the Union transformation, the number of rows most probably increases, so it is an active transformation.

If you have four lookup tables in the workflow, how do you troubleshoot to improve performance?

There are many ways to improve a mapping which has multiple lookups.

1) We can create an index on the lookup table if we have permissions (staging area).

2) Divide the mapping into two: (a) dedicate one to inserts, i.e. source - target; these are new rows, and only the new rows will
come to the mapping, so the process will be fast; (b) dedicate the second one to updates, i.e. source = target; these are existing rows, and only the
rows which already exist will come into the mapping.

3) We can increase the cache size of the lookup.
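For the first point, a hedged sketch of such an index (the table and column names are made up); indexing the columns used in the lookup condition speeds up both uncached lookups and the ORDER BY issued while building the lookup cache:

  CREATE INDEX idx_cust_lkp ON customer_dim (customer_id);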


What is the difference between Normal load and Bulk load?

Normal load: a normal load writes information to the database log file so that it is helpful if any recovery is needed. When
the source file is a text file and you are loading data to a table, in such cases you should use normal load only, or else the session will fail.

Bulk mode: a bulk load does not write information to the database log file, so if any recovery is needed we can't do anything in
such cases.

Comparatively, bulk load is much faster than normal load.

How to recover sessions in concurrent batches?

If multiple sessions in a concurrent batch fail, you might want to truncate all targets and run the batch again. However, if a session in
a concurrent batch fails and the rest of
the sessions complete successfully, you can recover the session as a standalone session.
To recover a session in a concurrent batch:
1.Copy the failed session using Operations-Copy Session.
2.Drag the copied session outside the batch to be a standalone session.
3.Follow the steps to recover a standalone session.
4.Delete the standalone copy.

What is a batch, and describe the types of batches?

A batch is a group of sessions; different batches are different groups of sessions.

There are two types of batches:


1. Concurrent
2. Sequential

What is Datadriven?

The Informatica server follows the instructions coded into Update Strategy transformations within the session mapping to determine how to
flag records for insert, update, delete or reject. If you do not choose the Data Driven option, the Informatica server ignores all
Update Strategy transformations in the mapping. If the Data Driven option is selected in the session properties, it follows the
instructions in the Update Strategy transformation in the mapping; otherwise it follows the instructions specified in the session. Whenever we
use an Update Strategy transformation, the Integration Service by default sets Treat Source Rows As to 'Data Driven'.
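A typical Update Strategy expression, assuming an upstream lookup on the target returned lkp_CUSTOMER_KEY (a placeholder port name) that is null for new rows:

  IIF(ISNULL(lkp_CUSTOMER_KEY), DD_INSERT, DD_UPDATE)

With Treat Source Rows As set to Data Driven, rows flagged DD_INSERT are inserted and rows flagged DD_UPDATE are updated in the target.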

Can Informatica be used as a Cleansing Tool? If Yes, give example of transformations that can implement a data cleansing routine.

Yes, we can use Informatica for cleansing data. Sometimes we use staging tables for cleansing the data (it depends on performance);
otherwise we can use an Expression transformation to cleanse the data. For example, a field X has some values and some nulls, and is assigned to a
target field where the target field is a not-null column; inside an expression we can assign a space or some constant value to avoid session
failure. If the input data is in one format and the target expects another, we can change the format in the expression.
We can also assign default values in the target to represent a complete set of data in the target.
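A small sketch of such cleansing logic in an Expression transformation, with the port names invented for illustration:

  out_CITY = IIF(ISNULL(in_CITY) OR LTRIM(RTRIM(in_CITY)) = '', 'UNKNOWN', UPPER(LTRIM(RTRIM(in_CITY))))
  out_DOB  = TO_DATE(in_DOB_STR, 'YYYYMMDD')

The first expression substitutes a constant for missing values and standardises case and whitespace; the second converts a string date into the target's date format.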

On one day I load 10 rows into my target, and on the next day I get 10 more rows to be added to my target, out of which 5 are updated
rows. How can I send them to the target? How can I insert and update the records?

We can do this by identifying the granularity of the target table.


We can then use a CRC external procedure to compare the newly generated CRC number with the old one, and if they do not match,
update the row.
How do you create a single Lookup transformation using multiple tables?

Write an override SQL query and adjust the Lookup ports to match that query, i.e. by writing a SQL override and specifying the joins in the SQL
override.
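A minimal sketch of such an override (the tables and columns are hypothetical); the Lookup's ports must line up with the columns in the SELECT list:

  SELECT c.customer_id   AS customer_id,
         c.customer_name AS customer_name,
         a.city          AS city
  FROM   customers c, addresses a
  WHERE  c.customer_id = a.customer_id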
How do you move a mapping from one repository to another?

1. Open the mapping you want to migrate. Go to the File menu, select 'Export Objects' and give a name; an XML file will be
generated. Connect to the repository where you want to migrate, then select File menu - 'Import Objects' and select the XML
file name.

2. Connect to both repositories. Go to the source folder, select the mapping name from the object navigator and
select 'Copy' from the 'Edit' menu. Now go to the target folder and select 'Paste' from the 'Edit' menu. Be sure you open the target
folder.

Where to store informatica rejected data? How to extract the informatica rejected data ?

The rejected rows, for example those due to a unique key constraint violation, are all pushed by the session into $PMBadFileDir (default relative path
<INFA_HOME>/PowerCenter/server/infa_shared/BadFiles), which is configured at the Integration Service level. Every target has a
property called Reject filename which gives the file in which the rejected rows are stored.

Briefly explain the versioning concept in PowerCenter 7.1.

When you create a version of a folder referenced by shortcuts, all shortcuts continue to reference their original object in the original
version. They do not automatically update to the current folder version.

For example, if you have a shortcut to a source definition in the Marketing folder, version 1.0.0, then you create a new folder
version, 1.5.0, the shortcut continues to point to the source definition in version 1.0.0.

Maintaining versions of shared folders can result in shortcuts pointing to different versions of the folder. Though shortcuts to
different versions do not affect the server, they might prove more difficult to maintain. To avoid this, you can recreate shortcuts
pointing to earlier versions, but this solution is not practical for much-used objects. Therefore, when possible, do not version folders
referenced by shortcuts.

To achieve session partitioning, what are the necessary tasks you have to do?

Configure the session to partition source data.


Install the Informatica server on a machine with multiple CPUs.

What do the Expression and Filter transformations do in the Informatica slowly growing target wizard?

EXP (Expression) is used to perform record-level operations and is a passive transformation,


e.g. op_col1 = ip_col1 * 10 + ip_col2:
for every record the same operation is performed on the values of the two input fields ip_col1 and ip_col2, and the result passes through the output field
op_col1.
FIL (Filter) is used to filter out records based on a condition (the way we write a condition in a WHERE clause, we can simply put the
condition in the Filter transformation). Records not matching the condition are DROPPED (not rejected) from the mapping flow and
there is no way to capture dropped rows (unlike rejected rows from an Update Strategy, which can be captured in the reject file if the forward rejected rows option is
not ticked), so FIL is an active transformation.
FIL-Filter Transformation
EXP-Expression Transformation
UPD-Update Strategy Transformation
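As a hedged sketch of the pattern these transformations implement (not necessarily the exact expressions the wizard generates): a Lookup on the target returns lkp_CUSTOMER_KEY (an invented port name), the Expression derives any needed output columns, and the Filter passes only the rows that are not yet in the target with a condition like:

  ISNULL(lkp_CUSTOMER_KEY)

so only new source rows flow on to the target for insertion.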
What are the types of metadata stored in the repository?

The following are the types of metadata stored in the repository:


Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Shortcuts
Source definitions
Target definitions
Transformations.

At the most, how many transformations can be used in a mapping?

In a mapping we can use any number of transformations, depending on the project and the transformations required for it.

How do I import VSAM files from source to target? Do I need a special plugin?

As far as I know, by using the PowerExchange tool you can convert the VSAM file to Oracle tables and then do the mapping as usual to the target table.

How do you use the unconnected lookup, i.e. from where does the input have to be taken and where is the output linked?
What condition is to be given?

The unconnected lookup is used just like a function call. In an expression (an output or variable port, or any place where an expression is
accepted, such as the condition in an Update Strategy), call the unconnected lookup with something like :LKP.lkp_abc(input_port), where lkp_abc
is the name of the unconnected lookup. Pass the input value just like we pass parameters to
functions, and it will return the output after looking it up.
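A small illustration of a conditional call from an Expression transformation, with all names invented:

  out_CUST_NAME = IIF(ISNULL(in_CUST_NAME), :LKP.lkp_get_customer_name(in_CUST_ID), in_CUST_NAME)

The lookup is executed only for the rows where the IIF condition requires it, which is one of the main reasons to prefer an unconnected lookup.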

What is meant by the direct and indirect loading options in sessions?

Direct loading is used when the source file itself contains the data, whereas indirect loading is used when the source file is a file list pointing to multiple source files.
With direct loading we can perform the recovery process, but with indirect loading we cannot.

What is IQD file?

IQD file stands for Impromptu Query Definition. This file is mainly used with the Cognos Impromptu tool: after creating an IMR (report) we
save the IMR as an IQD file, which is used while creating a cube in PowerPlay Transformer. In the data source type we select Impromptu
Query Definition.

What is the procedure to load the fact table? Give it in detail.

Based on the requirements of your fact table, choose the sources and data and transform them based on
your business needs. For the fact table, you need a primary key, so use a Sequence Generator
transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys
from the source tables.
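A hedged sketch of the source-side logic for such a load, with every table and column name invented; in the mapping the dimension keys would typically come from Lookup transformations and the fact's own surrogate key from the Sequence Generator:

  SELECT s.order_id,
         dc.customer_key,
         dp.product_key,
         s.quantity,
         s.amount
  FROM   stg_sales s, dim_customer dc, dim_product dp
  WHERE  s.customer_id = dc.customer_id
  AND    s.product_id  = dp.product_id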

What is the status code?

The status code provides error handling for the Informatica server during the session. The stored procedure issues a status code that
notifies whether or not the stored procedure completed successfully. This value cannot be seen by the user; it is only used by the Informatica
server to determine whether to continue running the session or to stop.
Why did you use stored procedure in your ETL Application?

Usage of stored procedures has the following advantages:


1. Check the status of the target database.
2. Drop and recreate indexes.
3. Determine if enough space exists in the database.
4. Perform a specialized calculation.
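For the second point, a minimal PL/SQL sketch (the index, table and procedure names are placeholders), typically called as pre-load and post-load stored procedures respectively:

  CREATE OR REPLACE PROCEDURE drop_sales_idx IS
  BEGIN
    EXECUTE IMMEDIATE 'DROP INDEX idx_fact_sales_cust';
  END;
  /

  CREATE OR REPLACE PROCEDURE build_sales_idx IS
  BEGIN
    EXECUTE IMMEDIATE 'CREATE INDEX idx_fact_sales_cust ON fact_sales (customer_key)';
  END;
  /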

Can you generate reports in Informatica?

Yes. By using the Metadata Reporter we can generate reports in Informatica.

What are variable ports? List two situations when they can be used.

We can use variable ports to store values from previous records (which is not otherwise possible in Informatica) and to hold intermediate results of a calculation so the same expression need not be repeated across several output ports.
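A small sketch of the previous-record pattern in an Expression transformation (port names invented); ports are evaluated top to bottom, so the order below matters:

  v_PREV_ID  (variable port) = v_CURR_ID
  v_CURR_ID  (variable port) = in_CUSTOMER_ID
  out_PREV_ID (output port)  = v_PREV_ID

For each row, v_PREV_ID is assigned before v_CURR_ID is refreshed, so it still holds the previous row's customer id.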

How do you look up data on multiple tables?

If the two tables are relational, then you can use the SQL lookup override option to join the two tables in the lookup
properties. You cannot join a flat file and a relational table. E.g. the default lookup query is: select the lookup table
column names from the lookup table. You can extend this query by adding the column names of the second table with the table
qualifier, and a where clause for the join. If you want to use your own order by, then put -- at the end of the order by.
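A hedged sketch of such an override with a custom ORDER BY (names invented); the trailing -- comments out the ORDER BY that the Integration Service appends to the override:

  SELECT c.customer_name AS customer_name,
         a.city          AS city,
         c.customer_id   AS customer_id
  FROM   customers c, addresses a
  WHERE  c.customer_id = a.customer_id
  ORDER BY c.customer_id --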

What transformation can you use in place of a lookup?

1. Can you explain one critical mapping? 2. Performance issue: which one is better, a connected lookup
transformation or an unconnected one?

It depends on your data and the type of operation you are doing. If you need to calculate a value for all the rows, or for most of the
rows coming out of the source, then go for a connected lookup. If not, go for an unconnected
lookup, especially in conditional cases. For example, we have to get a value for the field 'customer' from the order table or from the
customer_data table, on the basis of the following rule: if customer_name is null then customer = customer_data.customer_id,
otherwise customer = order.customer_name. In this case we will go for an unconnected lookup.
