Get a related value - e.g., get the Employee Name from the Employee table based on the Employee ID.
Perform a calculation.
Update slowly changing dimension tables - we can use an unconnected Lookup transformation to
determine whether the records already exist in the target or not.
January 19, 2006 01:12:33 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Nice question. If we don't have a lookup, our data warehouse will have more unwanted duplicates.
Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym.
Import a lookup definition from any relational database to which both the Informatica Client and Server
can connect. You can use multiple Lookup transformations in a mapping.
Cheers
Sithu
=======================================
Lookup transformations are used to search for data in relational tables/flat files that are not used in
the mapping.
Types of Lookup:
1. Connected Lookup
2. UnConnected Lookup
=======================================
The main use of a lookup is to get a related value, either from relational sources or flat files.
=======================================
1) We use Lookup transformations that query the largest amounts of data to
improve overall performance. By doing that we can reduce the number of lookups
on the same table.
2) We can use the Lookup SQL Override option to add a WHERE clause to the default
SQL statement if it is not defined, and to suppress the default ORDER BY statement and enter an
override ORDER BY with fewer columns.
Indexing the lookup table:
For cached lookups we index the lookup table on the columns in the
lookup ORDER BY statement.
For uncached lookups we index the lookup table on the columns in the
lookup WHERE condition.
3) In some cases we use a Lookup instead of a Joiner, as a lookup is faster than a
joiner in some cases, such as when the lookup contains the master data only.
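As an illustration, a lookup SQL override combining these tips might look like the following sketch. The table and column names are invented for the example; the trailing comment marker to suppress the generated ORDER BY is a commonly used trick, so verify it against your version's documentation.
-- Hypothetical override for a lookup on a customer dimension:
-- filter the rows that get cached, and keep the ORDER BY limited
-- to the column used in the lookup condition (CUST_ID here). The
-- trailing -- comments out the ORDER BY the server appends.
SELECT CUST_ID, CUST_NAME
FROM DIM_CUSTOMER
WHERE ACTIVE_FLAG = 'Y'
ORDER BY CUST_ID --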
=======================================
A Lookup transformation is like a set of reference data for the target table. For example, suppose you are
travelling by an auto rickshaw. In the morning you notice the auto driver showing you a card and
saying that from today onwards there is a hike in petrol, so you have to pay more. The card he is
showing is a set of reference data for his customers. The lookup transformation works in the same way.
These are of 2 types:
a) Connected Lookup
b) Unconnected Lookup
A connected lookup is connected in a single pipeline from a source to a target, whereas an unconnected
lookup is isolated within the mapping and is called with the help of an Expression transformation.
=======================================
Lookup transformations are used to:
Get a related value
Update slowly changing dimensions
Calculate expressions
=======================================
2.Informatica - While importing the relational source definition
from database, what are the metadata of source U import?
QUESTION #2 Source name
Database location
Column names
Datatypes
Key constraints
=======================================
Source name, data types, key constraints, database location.
=======================================
Relational sources are tables, views, and synonyms. Source name, database location, column name, datatype,
key constraints. For synonyms you will have to manually create the constraints.
=======================================
3.Informatica - How many ways you can update a relational
source definition and what r they?
QUESTION #3 Two ways
1. Edit the definition
2. Reimport the definition
January 30, 2006 04:59:06 #1
gazulas Member Since: January 2006 Contribution: 17
RE: How many ways you can update a relational source d...
=======================================
We can do it in 2 ways, as given above.
=======================================
4.Informatica - Where should U place the flat file to import the flat file definition?
December 13, 2005 08:42:59 #1
rishi
RE: Where should U place the flat file to import the f...
=======================================
There is no such restriction on where to place the source file. From a performance point of view it is better to place the
file in the server's local src folder. If you need the path, please check the server properties available in the Workflow
Manager.
It doesn't mean we should not place it in any other folder; if we place it in the server src folder, by default that source
directory will be selected at the time of session creation.
=======================================
The file must be in a directory local to the client machine.
=======================================
Basically the flat file should be stored in the src folder in the Informatica server folder.
Logically it should pick up the file from any location, but it gives an error of invalid identifier or is
not able to read the first row.
So it's better to keep the file in the src folder, which is already created when Informatica is installed.
=======================================
We can place the source file anywhere on the network, but it will consume more time to fetch data from the source
file. If the source file is present in the server's srcfile folder, it will fetch data from the source up to 25 times
faster.
=======================================
5.Informatica - To provide support for Mainframes source data,
which files r used as source definitions?
QUESTION #5 COBOL files
October 07, 2005 11:49:42 #1
Shaks Krishnamurthy
=======================================
COBOL Copy-book files
=======================================
The mainframe files are used as VSAM files in Informatica by using the Normalizer transformation.
=======================================
6.Informatica - Which transformation do u need while using
the cobol sources as source definitions?
QUESTION #6 Normalizer transformation, which is used to normalize the data,
since COBOL sources often consist of denormalized data.
Normalizer transformation
Cheers,
Sithu
=======================================
7.Informatica - How can U create or import flat file definition
in to the warehouse designer?
August 22, 2005 03:23:12 #1
Praveen
=======================================
U can create a flat file definition in the Warehouse Designer. In the Warehouse Designer u can create a new
target: select the type as flat file. Save it, and u can enter various columns for that created target by
editing its properties. Once the target is created, save it. U can import it from the Mapping Designer.
=======================================
Yes, you can import a flat file directly into the Warehouse Designer. This way it will import the field
definitions directly.
=======================================
1) Manually create the flat file target definition in the Warehouse Designer.
2) Create a target definition from a source definition. This is done by dropping a source definition in the
Warehouse Designer.
3) Import the flat file definition using the flat file wizard. (The file must be local to the client machine.)
=======================================
While creating flat files manually, we drag and drop the structure from the SQ if the structure we need is the
same as the source; for this we need to check in the source and then drag and drop it into the flat file. If
not, all the columns in the source will be changed to primary keys.
=======================================
8.Informatica - What is the mapplet?
QUESTION #8 A mapplet is a set of transformations that you build in the Mapplet
Designer and U can use in multiple mappings.
December 08, 2005 23:38:47 #1
phani
=======================================
For example: suppose we have several fact tables that require a series of dimension keys. Then we can create a
mapplet which contains a series of Lookup transformations to find each dimension key, and use it in each
fact table mapping instead of creating the same lookup logic in each mapping.
=======================================
A set of transformations where the logic can be reusable.
=======================================
A mapplet should have a Mapplet Input transformation which receives input values, and an Output
transformation which passes the final modified data back to the mapping.
When the mapplet is displayed within the mapping, only the input & output ports are displayed, so that the
internal logic is hidden from the end user's point of view.
=======================================
9.Informatica - What is a transformation?
November 23, 2005 16:06:23 #1
sir
=======================================
A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or
target) with or without modifying the data.
=======================================
It is a process of converting given input to desired output.
=======================================
A set of operations.
Cheers
Sithu
=======================================
A transformation is a repository object for converting a given input to the desired output. It can generate,
modify, and pass data.
=======================================
A transformation is a repository object that generates, modifies, or passes data.
The Designer provides a set of transformations that perform specific functions.
For example, an Aggregator transformation performs calculations on groups of data.
=======================================
10.Informatica - What r the designer tools for creating
transformations?
QUESTION #10 Mapping Designer
Transformation Developer
Mapplet Designer
February 21, 2007 05:29:40 #1
MANOJ KUMAR PANIGRAHI
=======================================
Mapping designer
Mapplet designer
=======================================
Mapping Designer
Mapplet Designer
Transformation Developer - for reusable transformations
=======================================
11.Informatica - What r the active and passive transformations?
QUESTION #11 An active transformation can change the number of rows that
pass through it. A passive transformation does not change the number of rows
that pass through it.
January 24, 2006 03:32:14 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Transformations can be active or passive. An active transformation can change the number of rows that
pass through it, such as a Filter transformation that removes rows that do not meet the filter condition.
A passive transformation does not change the number of rows that pass through it, such as an
Expression transformation that performs a calculation on data and passes all rows through the
transformation.
Cheers
Sithu
=======================================
Active transformation: a transformation which changes the number of rows when data is flowing from
source to target.
Passive transformation: a transformation which does not change the number of rows when the data is
flowing from source to target.
=======================================
12.Informatica - What r the connected or unconnected
transformations?
QUESTION #12 An unconnected transformation is not connected to other
transformations in the mapping. A connected transformation is connected to
other transformations in the mapping.
August 22, 2005 03:26:32 #1
Praveen
=======================================
An unconnected transformation can't be connected to another transformation, but it can be called inside
another transformation.
=======================================
A connected transformation is a part of your data flow in the pipeline, while an unconnected transformation
is not.
Use unconnected transformations when you want to call the same transformation many times in a single mapping.
=======================================
In addition to the first answer: an unconnected transformation is not directly connected, and can be called
from as many other transformations as needed. If you are using a transformation several times, use an
unconnected one; you get better performance.
=======================================
Connected transformation:
A transformation which participates in the mapping data flow. A connected
transformation can receive multiple inputs and provide multiple outputs.
Unconnected:
Thanks
Rekha
=======================================
13.Informatica - How many ways u create ports?
QUESTION #13 Two ways
1. Drag the port from another transformation.
2. Click the Add button on the Ports tab.
September 28, 2006 06:31:21 #1
srinivas.vadlakonda
=======================================
Two ways
1. Drag the port from another transformation.
2. Click the Add button on the Ports tab.
=======================================
We can also copy and paste the ports in the Ports tab.
=======================================
14.Informatica - What r the reusable transformations?
QUESTION #14 Reusable transformations can be used in multiple mappings.
When u need to incorporate this transformation into a mapping, U add an instance
of it to the mapping. Later, if U change the definition of the transformation, all
instances of it inherit the changes. Since the instance of a reusable transformation
is a pointer to that transformation, U can change the transformation in the
Transformation Developer, and its instances automatically reflect these changes. This
feature can save U a great deal of work.
Cheers
Sithu
=======================================
1) By creating a normal transformation and making it reusable by checking the checkbox in the
properties of the Edit Transformation dialog.
2) By using the Transformation Developer: whatever transformation is developed there is reusable, and it
can be used in the Mapping Designer, where we can further change its properties as per our requirement.
=======================================
1. A reusable transformation can be used in multiple mappings.
2. The Designer stores each reusable transformation as metadata, separate from
any mappings that use the transformation.
3. Every reusable transformation falls within a category of transformations available in the Designer.
4. One can only create an External Procedure transformation as a reusable transformation.
=======================================
15.Informatica - What r the methods for creating reusable
transformations?
QUESTION #15 Two methods:
1. Design it in the Transformation Developer.
2. Promote a standard transformation from the Mapping Designer. After U add a
transformation to the mapping, U can promote it to the status of reusable
transformation.
Once U promote a standard transformation to reusable status, U can demote it to
a standard transformation at any time.
If u change the properties of a reusable transformation in a mapping, U can revert
to the original reusable transformation properties by clicking the Revert
button.
=======================================
16.Informatica - What r the unsupported repository objects for a
mapplet?
QUESTION #16 COBOL source definitions
Joiner transformations
Normalizer transformations
Non-reusable Sequence Generator transformations
Pre- or post-session stored procedures
Target definitions
PowerMart 3.5-style LOOKUP functions
XML source definitions
IBM MQ source definitions
January 19, 2006 04:23:12 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide
source data.
- Target definitions. Definitions of database objects or files that contain the target data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
- Mappings. A set of source and target definitions along with transformations containing business
logic that you build into the transformation. These are the instructions that the Informatica Server uses
to transform and move data.
- Reusable transformations. Transformations that you can use in multiple mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how and when the
Informatica Server moves data. A workflow is a set of instructions that describes how and when to run
tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in
a workflow. Each session corresponds to a single mapping.
Cheers
Sithu
=======================================
Hi
You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
Shivaji Thaneru
=======================================
Normalizer, XML Source Qualifier, and COBOL sources cannot be used.
=======================================
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
- PowerMart 3.5-style LOOKUP functions
- Non-reusable Sequence Generator transformations
=======================================
17.Informatica - What r the mapping parameters and mapping
variables?
QUESTION #17 A mapping parameter represents a constant value that U can define
before running a session. A mapping parameter retains the same value throughout
the entire session.
When u use a mapping parameter, U declare and use the parameter in a mapping
or mapplet. Then define the value of the parameter in a parameter file for the session.
Unlike a mapping parameter, a mapping variable represents a value that can
change throughout the session. The Informatica server saves the value of a mapping
variable to the repository at the end of a session run and uses that value the next time
U run the session.
September 12, 2005 12:30:13 #1
Praveen Vasudev
=======================================
Start value = Current value (when the session starts the execution of the underlying mapping).
Start value <> Current value (while the session is in progress and the variable value changes on one
or more occasions).
The current value at the end of the session is nothing but the start value for the subsequent run of the
same session.
=======================================
You can use mapping parameters and variables in the SQL query, user-defined join, and source filter of a
Source Qualifier transformation. You can also use the system variable $$$SessStartTime.
The Informatica Server first generates an SQL query and scans the query to replace each mapping
parameter or variable with its start value. Then it executes the query on the source database.
Cheers
Sithu
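For instance, a source filter using a mapping parameter might look like the sketch below. $$LAST_RUN_DATE is an invented user-defined parameter, $$$SessStartTime is the system variable mentioned above, and the filter text gets appended to the WHERE clause of the generated query.
-- Hypothetical Source Qualifier source filter: the server
-- substitutes each parameter's start value from the parameter
-- file before running the query on the source database.
ORDERS.ORDER_DATE > TO_DATE('$$LAST_RUN_DATE', 'YYYY-MM-DD')
AND ORDERS.ORDER_DATE <= '$$$SessStartTime'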
=======================================
A mapping variable represents a value that can change during the mapping run.
=======================================
18.Informatica - Can U use the mapping parameters or variables
created in one mapping into another mapping?
QUESTION #18 NO.
We can use mapping parameters or variables in any transformation of the same
mapping or mapplet in which U have created the mapping parameters or variables.
Submitted by : Ray
NO. You might want to use a workflow parameter/variable if you want it to be visible with other
mappings/sessions
Hi
The following sentences are extracted from the Informatica help as-is. Do they support the above two answers?
After you create a parameter you can use it in the Expression Editor of any transformation in a mapping
or mapplet. You can also use it in Source Qualifier transformations and reusable transformations.
Shivaji Thaneru
=======================================
I differ on this; we can use global variables in sessions as well as in mappings. This provision is
provided in Informatica 7.1.x versions; I have used it. Please check this in properties.
Regards
-Vaibhav
=======================================
Hi
Thanks Shivaji, but the statement does not completely answer the question.
A mapping parameter can be used in a reusable transformation,
but does it mean u can use the mapping parameter wherever the instances of the reusable
transformation are used?
=======================================
The scope of a mapping variable is the mapping in which it is defined. A variable Var1 defined in
mapping Map1 can only be used in Map1. You cannot use it in another mapping, say Map2.
=======================================
19.Informatica - Can u use the mapping parameters or variables
created in one mapping into any other reusable transformation?
QUESTION #19 Yes, because a reusable transformation is not contained within any
mapplet or mapping.
February 02, 2007 17:06:04 #1
mahesh4346 Member Since: January 2007 Contribution: 6
=======================================
But when one can't use mapping parameters and variables of one mapping in another mapping, then how
can they be used in a reusable transformation, when reusable transformations themselves can be used
among multiple mappings? So I think one can't use mapping parameters and variables in reusable
transformations. Please correct me if I am wrong.
=======================================
Hi, you can use the mapping parameters or variables in a reusable transformation. When you use the
transformation in a mapping, then during execution of the session it validates whether the mapping parameter
used in the transformation is defined within this mapping or not. If not, the session fails.
=======================================
20.Informatica - How can U improve session performance in
aggregator transformation?
QUESTION #20 Use sorted input.
September 12, 2005 12:34:09 #1
Praveen Vasudev
=======================================
Do not forget to check the option on the Aggregator that tells it the input is sorted on
the same keys as the group by.
=======================================
Hi
You can use the following guidelines to optimize the performance of an Aggregator transformation:
Sorted input reduces the amount of data cached during the session and improves session performance.
Use this option with a Sorter transformation to pass sorted data to the Aggregator transformation.
Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator
transformation stores in the data cache.
If you use a Filter transformation in the mapping, place the transformation before the Aggregator
transformation to reduce unnecessary aggregation.
Shivaji T
=======================================
Following are the 3 ways with which we can improve the session performance:
a) Use sorted input.
b) Limit the number of connected input/output or output ports.
c) Filter before aggregating (if you are using any filter condition).
=======================================
By using incremental aggregation also we can improve performance, because it passes only the new data to the
mapping and uses historical data to perform the aggregation.
=======================================
To improve session performance in the Aggregator transformation, enable the session option Incremental
Aggregation.
=======================================
-Use sorted input to decrease the use of aggregate caches.
-Limit connected input/output or output ports.
Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator
transformation stores in the data cache.
-Filter the data before aggregating it.
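As a sketch of the sorted-input tip: if the Aggregator groups by DEPTNO, the relational source can deliver rows already ordered on that key, e.g. via a Source Qualifier query like the following (table and column names are illustrative only).
-- Feeding an Aggregator that groups by DEPTNO with pre-sorted
-- rows: the ORDER BY matches the group-by port, so the session
-- can use Sorted Input and cache only one group at a time
-- instead of the whole input.
SELECT EMP.DEPTNO, EMP.SAL
FROM EMP
ORDER BY EMP.DEPTNO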
=======================================
21.Informatica - What is aggregate cache in aggregator
transformation?
QUESTION #21 The Aggregator stores data in the aggregate cache until it
completes the aggregate calculations. When u run a session that uses an Aggregator
transformation, the Informatica server creates index and data caches in memory
to process the transformation. If the Informatica server requires more space, it
stores overflow values in cache files.
=======================================
When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index
and data caches in memory to process the transformation. If the Informatica Server requires more space,
it stores overflow values in cache files.
Cheers
Sithu
=======================================
Aggregate cache contains data values while aggregate calculations are being performed. Aggregate
cache is made up of index cache and data cache. Index cache contains group values and data cache
consists of row values.
=======================================
When the server runs a session with an Aggregator transformation, it stores data in memory until it completes
the aggregation.
When u partition a source, the server creates one memory cache and one disk cache for each partition. It
routes the data from one partition to another based on the group key values of the transformation.
=======================================
22.Informatica - What r the differences between joiner
transformation and source qualifier transformation?
QUESTION #22 U can join heterogeneous data sources in a Joiner transformation,
which we can not achieve in a Source Qualifier transformation.
U need matching keys to join two relational sources in a Source Qualifier
transformation, whereas u don't need matching keys to join two sources in a Joiner.
Two relational sources should come from the same data source in a Source Qualifier; u
can join relational sources which r coming from different sources in a Joiner.
=======================================
Hi
The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows
from within a mapping, the Source Qualifier transformation filters rows when read from a source. The
main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter
transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows
used throughout the mapping, it provides better performance.
However, the Source Qualifier transformation only lets you filter rows from relational sources, while the
Filter transformation filters rows from any type of source. Also note that since it runs in the database,
you must make sure that the filter condition in the Source Qualifier transformation only uses standard
SQL (see the sketch below).
Shivaji Thaneru
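To illustrate the point about the filter running in the database: a Source Qualifier source filter such as DEPTNO = 10 (invented for the example) ends up appended to the generated query, roughly as follows.
-- Rough shape of the query the server generates when the
-- Source Qualifier has the source filter DEPTNO = 10: rows
-- are discarded in the database, before entering the mapping.
SELECT EMP.EMPNO, EMP.ENAME, EMP.DEPTNO
FROM EMP
WHERE EMP.DEPTNO = 10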
=======================================
Hi, as per my knowledge you need matching keys to join two relational sources both in a Source Qualifier
and in a Joiner transformation. But the difference is that in a Source Qualifier both the keys must have a
primary key - foreign key relation, whereas in a Joiner transformation that is not needed.
=======================================
A Source Qualifier is used for reading the data from the database, whereas a Joiner transformation is used for
joining two data tables.
A Source Qualifier can also be used to join two tables, but the condition is that both tables should be
from a relational database and should have a primary key with the same data structure.
Using a Joiner we can join data from two heterogeneous sources, like two flat files, or one table from a
relational source and one flat file.
=======================================
23.Informatica - In which conditions we can not use joiner
transformation (limitations of joiner transformation)?
QUESTION #23 Both pipelines begin with the same original data source.
Both input pipelines originate from the same Source Qualifier transformation.
Both input pipelines originate from the same Normalizer transformation.
Both input pipelines originate from the same Joiner transformation.
Either input pipeline contains an Update Strategy transformation.
Either input pipeline contains a connected or unconnected Sequence Generator
transformation.
January 25, 2006 12:18:35 #1
Surendra
=======================================
Now we can use a joiner even if the data is coming from the same source.
SK
=======================================
You cannot use a Joiner transformation in the following situations (according to Infa 7.1):
Can you please let me know the correct and clear answer for the limitations of the Joiner transformation?
swapna
=======================================
You cannot use a Joiner transformation in the following situation (according to Infa 7.1): when you
connect a Sequence Generator transformation directly before the Joiner transformation.
Utsav
=======================================
Yes, the Joiner only supports an equality condition.
The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2
from the example above contain a row with a null value, the PowerCenter Server does not consider them
a match and does not join the two rows. To join rows with null values, you can replace the null input with
default values and then join on the default values (see the SQL sketch below).
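As a plain-SQL sketch of that null-replacement idea (the tables, columns, and the -1 default are made up; in a mapping you would do the substitution in an Expression transformation feeding the Joiner):
-- Replacing NULL keys with a default (-1) on both inputs so
-- rows with missing ids can still be matched by the equality
-- join; without this, NULL = NULL never matches.
SELECT A.EMP_ID1, B.EMP_ID2
FROM SIDE_A A
JOIN SIDE_B B
  ON COALESCE(A.EMP_ID1, -1) = COALESCE(B.EMP_ID2, -1)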
=======================================
We cannot use a Joiner transformation in the following two conditions:
1. When our data comes through an Update Strategy transformation; in other words, after an Update Strategy
we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner transformation.
=======================================
24.Informatica - what r the settings that u use to configure the
joiner transformation?
- Master and detail source
- Type of join
- Condition of the join
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (Default)
- Master Outer
- Detail Outer
- Full Outer
Cheers,
Sithu
=======================================
1) CASE SENSITIVE STRING COMPARISON: to join the strings on a case-sensitive
basis.
10) SORTED INPUT: a check box which has to be checked if the
input to the Joiner is sorted.
=======================================
25.Informatica - What r the join types in joiner transformation?
QUESTION #25 Normal (Default)
Master outer
Detail outer
Full outer
September 12, 2005 12:38:39 #1
Praveen Vasudev
=======================================
Normal (Default) -- only matching rows from both master and detail
Master outer -- all detail rows and only matching rows from master
Detail outer -- all master rows and only matching rows from detail
Full outer -- all rows from both master and detail ( matching or non matching)
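For readers who think in SQL, the four join types correspond roughly to the joins below (MASTER and DETAIL are placeholder tables; the Joiner itself performs the join inside the Informatica Server, not in the database):
-- Normal:       DETAIL D JOIN            MASTER M ON D.ID = M.ID
-- Master Outer: DETAIL D LEFT OUTER JOIN MASTER M ON D.ID = M.ID
-- Detail Outer: MASTER M LEFT OUTER JOIN DETAIL D ON D.ID = M.ID
-- Full Outer:   DETAIL D FULL OUTER JOIN MASTER M ON D.ID = M.ID
SELECT D.ID, M.ID
FROM DETAIL D
LEFT OUTER JOIN MASTER M ON D.ID = M.ID   -- Master Outer: keeps all detail rows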
=======================================
Follow this:
1. In the Mapping Designer choose Transformation-Create. Select the Joiner transformation. Enter
a name, click OK.
The naming convention for Joiner transformations is JNR_TransformationName. Enter a description for
the transformation. This description appears in the Repository Manager, making it easier for you or
others to understand or remember what the transformation does.
The Designer creates the Joiner transformation. Keep in mind that you cannot use a Sequence Generator
or Update Strategy transformation as a source to a Joiner transformation.
2. Drag all the desired input/output ports from the first source into the Joiner transformation.
The Designer creates input/output ports for the source fields in the Joiner as detail fields by default.
You can edit this property later.
3. Select and drag all the desired input/output ports from the second source into the Joiner
transformation.
The Designer configures the second set of source fields as master fields by default.
4. Double-click the title bar of the Joiner transformation to open the Edit Transformations dialog
box.
5. Click any box in the M column to switch the master/detail relationship for the sources. Change
the master/detail relationship if necessary by selecting the master source in the M column.
Tip: Designating the source with fewer unique records as master increases performance during a join.
Certain ports are likely to contain NULL values, since the fields in one of the sources may be empty.
You can specify a default value if the target database does not handle NULLs.
6. Click the Add button to add a condition. You can add multiple conditions. The master and detail
ports must have matching datatypes. The Joiner transformation only supports equivalent (=) joins.
7. Select the Properties tab and enter any additional settings for the transformation.
8. Click OK.
Cheers
Sithu
=======================================
26.Informatica - What r the joiner caches?
QUESTION #26 When a Joiner transformation occurs in a session, the
Informatica Server reads all the records from the master source and builds index
and data caches based on the master rows.
After building the caches, the Joiner transformation reads records from the detail
source and performs joins.
=======================================
For version 7.x and above:
When the PowerCenter Server processes a Joiner transformation, it reads rows from both sources
concurrently and builds the index and data cache based on the master rows. The PowerCenter Server
then performs the join based on the detail source data and the cache data. To improve performance for
an unsorted Joiner transformation, use the source with fewer rows as the master source. To improve
performance for a sorted Joiner transformation, use the source with fewer duplicate key values as the
master.
=======================================
27.Informatica - What is the look up transformation?
QUESTION #27 Use a Lookup transformation in u'r mapping to look up data in a
relational table, view, or synonym.
The Informatica server queries the lookup table based on the lookup ports in the
transformation. It compares the Lookup transformation port values to lookup
table column values based on the lookup condition.
December 09, 2005 00:06:38 #1
phani
=======================================
Using it we can access data from a relational table which is not a source in the mapping.
For example: suppose the source contains only Empno, but we want Empname also in the mapping. Then
instead of adding another table which contains Empname as a source, we can look up the table and get the
Empname in the target.
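In SQL terms, the effect of that connected lookup is roughly the left join below (SRC_EMP and EMP_NAMES are invented names for the source and the lookup table):
-- Rough SQL equivalent of looking up Empname by Empno: every
-- source row keeps flowing even when no match is found, in
-- which case the lookup returns NULL (or a default value).
SELECT S.EMPNO, N.EMPNAME
FROM SRC_EMP S
LEFT OUTER JOIN EMP_NAMES N
  ON N.EMPNO = S.EMPNO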
=======================================
In DecisionStream, a lookup is a simple single-level reference structure with no parent/child
relationships. Use a lookup when you have a set of reference members that you do not need to organize
hierarchically. HTH
=======================================
Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym.
Import a lookup definition from any relational database to which both the Informatica Client and Server
can connect. You can use multiple Lookup transformations in a mapping.
Cheers
Sithu
=======================================
A Lookup transformation in a mapping is used to look up data in a flat file or a relational table, view, or
synonym. You can import a lookup definition from any flat file or relational database to which both the
PowerCenter Client and Server can connect. You can use multiple Lookup transformations in a
mapping.
Cheers
Sridhar
=======================================
28.Informatica - Why use the lookup transformation ?
QUESTION #28 To perform the following tasks.
Get a related value. For example, if your source table includes employee ID, but
you want to include the employee name in your target table to make your
summary data easier to read.
Perform a calculation. Many normalized tables include values used in a
calculation, such as gross sales per invoice or sales tax, but not the calculated
value (such as net sales).
Update slowly changing dimension tables. You can use a Lookup transformation
to determine whether records already exist in the target.
August 21, 2006 22:26:47 #1
samba
=======================================
A lookup is nothing but a lookup on a table, view, synonym, or flat file.
By using a lookup we can get a related value with a join condition and perform calculations.
Lookups are of two types:
1) Connected
2) Unconnected
A connected lookup is within the pipeline only, but an unconnected lookup is not connected to the pipeline.
cheers
samba
=======================================
Hey, with regard to lookups: is there a dynamic lookup and a static lookup? If so, how do you set them? And is
there a combination of dynamic connected lookups and static unconnected lookups?
=======================================
A lookup has two types, connected and unconnected. Usually we use a lookup so as to get a related
value from a table. It has an input port, output port, lookup port, and return port, where the lookup port
looks up the corresponding column for the value and the return port returns the value. We usually use it when
there are no columns in common.
=======================================
For maintaining the slowly changing dimensions.
=======================================
Hi
To configure, just double click on the Lookup transformation and go to the Properties tab.
If u don't select this (dynamic lookup) option, then the lookup is merely a normal lookup.
Thanks.
=======================================
29.Informatica - What r the types of lookup?
QUESTION #29 Connected and unconnected
November 08, 2005 18:44:53 #1
swati
=======================================
i) Connected
ii) Unconnected
iii) Cached
iv) Uncached
=======================================
1. Connected lookup
2. Unconnected lookup
Cache types:
1. Persistent cache
2. Re-cache from database
3. Static cache
4. Dynamic cache
5. Shared cache
Cheers
Sithu
=======================================
Hello boss/madam,
I don't understand why people are specifying the cache types; I want to know whether nowadays caches are
also taken into this category of lookup.
If yes, do specify in the answer list.
Thank you.
=======================================
30.Informatica - Differences between connected and unconnected
lookup?
QUESTION #30
Connected lookup:
- Receives input values directly from the pipeline.
- U can use a dynamic or static cache.
- Cache includes all lookup columns used in the mapping.
- Supports user-defined default values.
Unconnected lookup:
- Receives input values from the result of a :LKP expression in another transformation.
- U can use a static cache only.
- Cache includes all lookup output ports in the lookup condition and the lookup/return port.
- Does not support user-defined default values.
February 03, 2006 03:25:15 #1
Prasanna
=======================================
In addition:
A connected lookup can return/pass multiple rows/groups of data, whereas an unconnected lookup can return only
one port.
=======================================
In addition to this: In Connected lookup if the condition is not satisfied it returns '0'. In UnConnected
lookup if the condition is not satisfied it returns 'NULL'.
=======================================
Hi
Connected lookup:
- Receives input values directly from the pipeline.
- You can use a dynamic or static cache.
- Cache includes all lookup columns used in the mapping (that is, lookup source columns included in
the lookup condition and lookup source columns linked as output ports to other transformations).
- Can return multiple columns from the same row, or insert into the dynamic lookup cache.
Unconnected lookup:
- Receives input values from the result of a :LKP expression in another transformation.
- You can use a static cache.
- Cache includes all lookup/output ports in the lookup condition and the lookup/return port.
- Designate one return port (R); returns one column from each row.
Shivaji Thaneru
=======================================
31.Informatica - What is meant by lookup caches?
QUESTION #31 The Informatica server builds a cache in memory when it
processes the first row of data in a cached Lookup transformation. It allocates
memory for the cache based on the amount u configure in the transformation or
session properties. The Informatica server stores condition values in the index
cache and output values in the data cache.
September 28, 2006 06:34:33 #1
srinivas vadlakonda
=======================================
A lookup cache is the temporary memory that is created by the Informatica server to hold the lookup data
and to perform the lookup conditions.
=======================================
A lookup cache is a temporary memory area which is created by the Informatica server and which
stores the lookup data based on certain conditions. The caches are of four types: 1) Persistent, 2)
Dynamic, 3) Static, and 4) Shared.
=======================================
32.Informatica - What r the types of lookup caches?
QUESTION #32 Persistent cache: U can save the lookup cache files and reuse
them the next time the Informatica server processes a Lookup transformation
configured to use the cache.
Static cache: U can configure a static or read-only cache for any lookup table. By
default the Informatica server creates a static cache. It caches the lookup table and
lookup values in the cache for each row that comes into the transformation. When
the lookup condition is true, the Informatica server does not update the cache
while it processes the Lookup transformation.
Dynamic cache: If u want to cache the target table and insert new rows into the
cache and the target, u can create a Lookup transformation to use a dynamic cache.
The Informatica server dynamically inserts data into the target table.
Shared cache: U can share the lookup cache between multiple transformations. U can
share an unnamed cache between transformations in the same mapping.
December 13, 2005 06:02:36 #1
Sithu
=======================================
Cache
1. Static cache
2. Dynamic cache
3. Persistent cache
Sithu
=======================================
Caches are of three types, namely: dynamic cache, static cache, and persistent cache.
Cheers
Sithu
=======================================
Dynamic cache
Persistent cache
Re-cache
Shared cache
=======================================
Hi, could anyone get me information on where you would use these caches for lookups and how you set
them?
Thanks
infoseeker
=======================================
There are 4 types of lookup cache -
Persistent, Re-cache, Static & Dynamic.
Bye
Stephen
=======================================
Types of Caches are :
1) Dynamic Cache
2) Static Cache
3) Persistent Cache
4) Shared Cache
5) Unshared Cache
=======================================
There are five types of caches, such as:
static cache
dynamic cache
persistent cache
shared cache, etc...
=======================================
33.Informatica - Difference between static cache and dynamic
cache
QUESTION #33
Static cache:
- U can not insert or update the cache.
- The Informatica server returns a value from the lookup table or cache when the condition is true. When
the condition is not true, the Informatica server returns the default value for connected transformations
and NULL for unconnected transformations.
Dynamic cache:
- U can insert rows into the cache as u pass them to the target.
- The Informatica server inserts rows into the cache when the condition is false. This indicates that the
row is not in the cache or target table. U can pass these rows to the target table.
Submitted by : vp
Let's say for example your lookup table is your target table. So when you create the lookup selecting the
dynamic cache, what it does is look up values, and if there is no match it inserts the row in both
the target and the lookup cache (hence the term dynamic cache: it builds up as you go along); or if there
is a match it updates the row in the target. On the other hand, static caches don't get updated when
you do a lookup.
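In SQL terms, the dynamic-cache behaviour described above is roughly a MERGE. This is an analogy only (the PowerCenter Server works against its in-memory cache and row flags, not via this statement), and the table names are made up.
-- Rough SQL analogy of a dynamic lookup cache on the target:
-- no match found -> insert the row (into cache and target);
-- match found    -> update the existing target row.
MERGE INTO DIM_CUSTOMER T
USING STAGE_CUSTOMER S
  ON (T.CUST_ID = S.CUST_ID)
WHEN MATCHED THEN
  UPDATE SET T.CUST_NAME = S.CUST_NAME
WHEN NOT MATCHED THEN
  INSERT (CUST_ID, CUST_NAME) VALUES (S.CUST_ID, S.CUST_NAME)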
=======================================
34.Informatica - What is the Normalizer transformation?
January 19, 2006 01:08:06 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to
organize the data according to your own needs. A Normalizer transformation can appear anywhere in a
data flow when you normalize a relational source. Use a Normalizer transformation instead of the
Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL
source into the Mapping Designer workspace, the Normalizer transformation automatically appears,
creating input and output ports for every column in the source.
Cheers
Sithu
=======================================
35.Informatica - How the informatica server sorts the string
values in Rank transformation?
QUESTION #35 When the Informatica server runs in the ASCII data movement
mode, it sorts session data using a binary sort order. If U configure the session to use
a binary sort order, the Informatica server calculates the binary value of each
string and returns the specified number of rows with the highest binary values for
the string.
December 09, 2005 00:25:27 #1
phani
=======================================
When the Informatica Server runs in UNICODE data movement mode, it uses the sort order configured
in the session properties.
=======================================
36.Informatica - What is the Rank index in Rank transformation?
QUESTION #36 The Designer automatically creates a RANKINDEX port for
each Rank transformation. The Informatica Server uses the Rank Index port to
store the ranking position for each record in a group. For example, if you create a
Rank transformation that ranks the top 5 salespersons for each quarter, the rank
index numbers the salespeople from 1 to 5:
January 12, 2006 04:41:57 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
The port based on which you want to generate the rank is known as the rank port; the generated values are
known as the rank index.
Cheers
Sithu
=======================================
37.Informatica - What is the Router transformation?
QUESTION #37 A Router transformation is similar to a Filter transformation
because both transformations allow you to use a condition to test data.
However, a Filter transformation tests data for one condition and drops the rows
of data that do not meet the condition. A Router transformation tests data for
one or more conditions and gives you the option to route rows of data that do
not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple conditions, use a
Router Transformation in a mapping instead of creating multiple Filter
transformations to perform the same task.
January 19, 2006 04:46:42 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
A Router transformation is similar to a Filter transformation because both transformations allow you
to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of
data that do not meet the condition. However a Router transformation tests data for one or more
conditions and gives you the option to route rows of data that do not meet any of the conditions to a
default output group.
Cheers
Sithu
=======================================
Note: I think the definition and purpose of the Router transformation given by sithusithu and sithu is not clear
and not fully correct, as they have mentioned
<A Router transformation tests data for one or more conditions>
Sorry sithu and sithusithu,
but what I want to clarify is that in a Filter transformation also we can give many conditions together, e.g.
empno=1234 and sal>25000 (2 conditions).
Uses of the Router transformation:
1. Similar to a Filter transformation, to separate the source data according to the conditions applied.
2. When we want to load data into different target tables from the same source, but with a different condition
as per each target table's requirement.
E.g. from the emp table we want to load data into three (3) different target tables: T1 (where deptno=10), T2
(where deptno=20), and T3 (where deptno=30).
For this, if we use Filter transformations we need three (3) Filter transformations,
so instead of using three (3) Filter transformations we will use only one (1) Router transformation; see the
SQL sketch below.
Advantages:
1. Better performance, because with the Router transformation in the mapping the Informatica server processes the
input data only once, instead of three times as with Filter transformations.
2. Less complexity, because we use only one Router transformation instead of multiple Filter
transformations.
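In SQL terms, the three user-defined groups in that example route rows as if the following three queries ran, except that the Router evaluates all the conditions in a single pass over the source (the queries are illustrative only):
-- What each router group delivers to its target:
SELECT * FROM EMP WHERE DEPTNO = 10   -- group 1 -> T1
SELECT * FROM EMP WHERE DEPTNO = 20   -- group 2 -> T2
SELECT * FROM EMP WHERE DEPTNO = 30   -- group 3 -> T3
-- rows matching none of these go to the default group;
-- with Filter transformations, EMP would be read three times.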
=======================================
38.Informatica - What r the types of groups in Router
transformation?
QUESTION #38 Input group and output groups.
The Designer copies property information from the input ports of the input group
to create a set of output ports for each output group.
Two types of output groups:
User-defined groups
Default group
U can not modify or delete default groups.
December 09, 2005 00:35:44 #1
phani
=======================================
The input group contains the data which is coming from the source. We can create as many user-defined
groups as required, one for each condition we want to specify. The default group contains all the rows of data
that don't satisfy the condition of any group.
=======================================
- Input
- Output
Input group:
The Designer copies property information from the input ports of the input group to create a set of
output ports for each output group.
Output groups:
- User-defined groups
- Default group
Cheers
Sithu
=======================================
39.Informatica - Why we use stored procedure transformation?
QUESTION #39 For populating and maintaining data bases.
=======================================
A Stored Procedure transformation is an important tool for populating and maintaining databases.
Database administrators create stored procedures to automate time-consuming tasks that are too
complicated for standard SQL statements.
Cheers
Sithu
=======================================
You might use stored procedures to do the following tasks:
- Check the status of a target database before loading data into it.
- Determine if enough space exists in a database.
- Perform a specialized calculation.
- Drop and recreate indexes.
Shivaji Thaneru
=======================================
We use a Stored Procedure transformation to execute a stored procedure, which in turn might do the tasks listed above.
=======================================
40.Informatica - What is source qualifier transformation?
QUESTION #40 When U add a relational or a flat file source definition to a
mapping, U need to connect it to a Source Qualifier transformation. The Source
Qualifier transformation represents the records that the Informatica server reads
when it runs a session.
=======================================
When you add a relational or a flat file source definition to a mapping, you need to connect it to a
Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server
reads when it executes a session.
- Join data originating from the same source database. You can join two or more tables with
primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the
Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the
Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an
ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server
adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read
source data. For example, you might use a custom query to perform aggregate calculations or execute a
stored procedure.
Cheers
Sithu
=======================================
When you add a relational or a flat file source definition to a mapping, you need to connect it to a
Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server
reads when it executes a session.
Cheers
Sithu
=======================================
A Source Qualifier is also like a table: it acts as an intermediator between the source and target metadata. It
also generates the SQL used when creating the mapping between the source and target metadata.
Thanks
Rama Rao
=======================================
Def: the transformation which converts the source (relational or flat) datatypes to Informatica datatypes.
So it works as an intermediator between the source and the Informatica server.
=======================================
The Source Qualifier transformation is the beginning of the pipeline of any mapping; the main
purpose of this transformation is to read the data from the relational or flat file source and pass
the data read into the mapping, so that the data can be passed to the other transformations.
=======================================
A Source Qualifier is a transformation that comes with every source definition if the source is a relational database.
The Source Qualifier fires a SELECT statement on the source db.
With every source definition u will get a Source Qualifier; without a Source Qualifier u r mapping will be
invalid and u cannot define the pipeline to the other instance.
If the source is COBOL, then for that source definition u will get a Normalizer transformation, not a
Source Qualifier.
=======================================
41.Informatica - What r the tasks that source qualifier performs?
QUESTION #41 Join data originating from the same source database.
Filter records when the Informatica Server reads source data.
Specify an outer join rather than the default inner join.
Specify sorted records.
Select only distinct values from the source.
Create a custom query to issue a special SELECT statement for the Informatica
Server to read source data.
No best answer available. Please pick the good answer available or submit your answer.
January 24, 2006 03:42:08 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
- Join data originating from the same source database. You can join two or more tables with
primary key-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the
Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the
Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an
ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server
adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read
source data. For example, you might use a custom query to perform aggregate calculations or execute a
stored procedure.
Cheers
Sithu
=======================================
42.Informatica - What is the target load order?
QUESTION #42 You specify the target load order based on the source qualifiers in a
mapping. If you have multiple source qualifiers connected to multiple targets, you can
designate the order in which the Informatica Server loads data into the targets.
No best answer available. Please pick the good answer available or submit your answer.
March 01, 2006 14:27:34 #1
saritha
=======================================
A target load order group is the collection of source qualifiers, transformations, and targets linked
together in a mapping.
=======================================
43.Informatica - What is the default join that source qualifier
provides?
QUESTION #43 Inner equi join.
No best answer available. Please pick the good answer available or submit your answer.
January 24, 2006 03:40:28 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (Default)
- Master Outer
- Detail Outer
- Full Outer
Cheers
Sithu
=======================================
Equijoin on a key common to the sources drawn by the SQ.
=======================================
44.Informatica - What r the basic needs to join two sources in a
source qualifier?
QUESTION #44 The two sources should have a primary key-foreign key relationship.
The two sources should have matching datatypes.
No best answer available. Please pick the good answer available or submit your answer.
December 14, 2005 10:32:44 #1
rishi
=======================================
Both tables should have a common field with the same datatype.
It is not necessary that they follow a primary key-foreign key relationship, but if any such
relationship exists it will help from a performance point of view.
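For example, if ORDERS and CUSTOMERS are linked to one Source Qualifier through such a key
relationship, the default join the Source Qualifier generates would be roughly the following
(hypothetical table and column names):

SELECT orders.order_id, orders.amount, customers.customer_name
FROM orders, customers
WHERE orders.customer_id = customers.customer_id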
=======================================
Also, if you are using a lookup in your mapping and the lookup table is small, then try to join that
lookup in the Source Qualifier to improve performance.
Regards
SK
=======================================
Both the sources must be from the same database.
=======================================
45.Informatica - what is update strategy transformation ?
QUESTION #45 This transformation is used to maintain either full history data or just
the most recent changes in the target table.
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 04:33:23 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
The model you choose constitutes your update strategy: how to handle changes to existing rows. In
PowerCenter and PowerMart you set your update strategy at two different levels:
- Within a session. When you configure a session, you can instruct the Informatica Server to
either treat all rows in the same way (for example, treat all rows as inserts) or use instructions
coded into the session mapping to flag rows for different database operations.
- Within a mapping. Within a mapping, you use the Update Strategy transformation to flag rows
for insert, delete, update, or reject.
Cheers
Sithu
=======================================
The Update Strategy transformation is used for flagging records for insert,
update, delete, and reject.
Insert
Update
Delete
Update as Insert
Thanks
Rekha
=======================================
46.Informatica - What is the default source option for update
strategy transformation?
QUESTION #46 Data driven.
No best answer available. Please pick the good answer available or submit your answer.
March 28, 2006 05:03:53 #1
Gyaneshwar
=======================================
DATA DRIVEN
=======================================
47.Informatica - What is Datadriven?
QUESTION #47 The Informatica Server follows instructions coded into Update
Strategy transformations within the session mapping to determine how to flag
records for insert, update, delete, or reject. If you do not choose the Data Driven
option setting, the Informatica Server ignores all Update Strategy transformations in
the mapping.
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 04:36:22 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
The Informatica Server follows instructions coded into Update Strategy transformations within the
session mapping to determine how to flag rows for insert delete update or reject.
If the mapping for the session contains an Update Strategy transformation this field is marked Data
Driven by default.
Cheers
Sithu
=======================================
When the Data Driven option is selected in the session properties, the server will honor the update
strategy flags (DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping and not
the options selected in the session properties.
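A typical Update Strategy expression, sketched with hypothetical port names (lkp_customer_key and
lkp_last_updated would come from a lookup on the target), might look like this:

IIF(ISNULL(lkp_customer_key), DD_INSERT,
    IIF(src_last_updated > lkp_last_updated, DD_UPDATE, DD_REJECT))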
=======================================
48.Informatica - What r the options in the target session of update
strategy transformation?
QUESTION #48 Insert
Delete
Update
Update as update
Update as insert
Update else insert
Truncate table
No best answer available. Please pick the good answer available or submit your answer.
February 03, 2006 03:46:07 #1
Prasanna
=======================================
Update as Insert:
This option specifies that all update records from the source are to be flagged as inserts in the target.
In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert:
This option enables Informatica to flag the records for update if they already exist, or for insert if
they are new records from the source.
=======================================
49.Informatica - What r the types of mapping wizards that r to be
provided in Informatica?
QUESTION #49 The Designer provides two mapping wizards to help you create
mappings quickly and easily. Both wizards are designed to create mappings for
loading and maintaining star schemas, a series of dimensions related to a central
fact table.
Getting Started Wizard. Creates mappings to load static fact and dimension
tables, as well as slowly growing dimension tables.
Slowly Changing Dimensions Wizard. Creates mappings to load slowly
changing dimension tables based on the amount of historical dimension data you
want to keep and the method you choose to handle historical dimension data.
No best answer available. Please pick the good answer available or submit your answer.
January 09, 2006 02:43:25 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Type 1
Type 2 (full history):
- Version
- Flag
- Date
Type 3
=======================================
Informatica Designer:
Mapping -> Wizards --> 1) Getting Started --> Simple Pass Through mapping
--> Slowly Growing Target
=======================================
50.Informatica - What r the types of mapping in Getting Started
Wizard?
QUESTION #50 Simple Pass Through mapping:
Loads a static fact or dimension table by inserting all rows. Use this mapping
when you want to drop all existing data from your table before loading new
data.
No best answer available. Please pick the good answer available or submit your answer.
January 09, 2006 02:46:25 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
1. Simple Pass Through
2. Slowly Growing Target
Cheers
Sithu
=======================================
51.Informatica - What r the mappings that we use for slowly
changing dimension table?
QUESTION #51 Type1: Rows containing changes to existing dimensions are
updated in the target by overwriting the existing dimension. In the Type 1
Dimension mapping, all rows contain current dimension data.
Use the Type 1 Dimension mapping to update a slowly changing dimension table
when you do not need to keep any previous versions of dimensions in the table.
Type 2: The Type 2 Dimension Data mapping inserts both new and changed
dimensions into the target. Changes are tracked in the target table by versioning
the primary key and creating a version number for each dimension in the table.
Use the Type 2 Dimension/Version Data mapping to update a slowly changing
dimension table when you want to keep a full history of dimension data in the
table. Version numbers and versioned primary keys track the order of changes to
each dimension.
Type 3: The Type 3 Dimension mapping filters source rows based on user-defined
comparisons and inserts only those found to be new dimensions to the target.
Rows containing changes to existing dimensions are updated in the target. When
updating an existing dimension, the Informatica Server saves existing data in
different columns of the same row and replaces the existing data with the
updates
No best answer available. Please pick the good answer available or submit your answer.
=======================================
SCD:
Source to SQ - 1 mapping
SQ to LKP - 2 mapping
Cheers
Prasath
=======================================
52.Informatica - What r the different types of Type2 dimension
mapping?
QUESTION #52 Type 2 Dimension/Version Data mapping: In this mapping the
updated dimension in the source gets inserted into the target along with a new
version number, as does any newly added dimension.
Type 2 Dimension/Flag Current mapping: This mapping is also used for slowly
changing dimensions. In addition, it creates a flag value for each changed or new
dimension.
The flag indicates whether the dimension is new or newly updated. Recent dimensions
are saved with the current flag value 1, and updated dimensions are saved with the
value 0.
Type 2 Dimension/Effective Date Range mapping: This is also one flavour of Type 2
mapping used for slowly changing dimensions. This mapping also inserts both new
and changed dimensions into the target, and changes are tracked by the effective
date range for each version of each dimension.
No best answer available. Please pick the good answer available or submit your answer.
January 04, 2006 05:31:39 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Type 2:
1. Version number
2. Flag
3. Date
Cheers
Sithu
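As a rough illustration of the effective-date-range flavour, the current version of a dimension row
could be fetched with a query like this (hypothetical table and column names):

SELECT customer_key, customer_name
FROM customer_dim
WHERE customer_id = 1001
AND eff_end_date IS NULL    -- the open-ended row is the current version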
=======================================
No best answer available. Please pick the good answer available or submit your answer.
December 14, 2005 10:43:31 #1
rishi
=======================================
If it is a Type 2 dimension the above answer is fine, but if you want to get the information about all
the insert and update statements, you need to use the session log file, configured to verbose mode.
You will get the complete set of data showing which records were inserted and which were not.
=======================================
Just use the Debugger to see how the data moves from source to target; it will show how many new
rows get inserted and how many are updated.
=======================================
54.Informatica - What r two types of processes that informatica
runs the session?
QUESTION #54 Load manager Process: Starts the session, creates the DTM
process, and sends post-session email when the session completes.
The DTM process. Creates threads to initialize the session, read, write, and
transform data, and handle pre- and post-session operations.
No best answer available. Please pick the good answer available or submit your answer.
September 17, 2007 08:17:02 #1
rasmi Member Since: June 2007 Contribution: 20
=======================================
When the workflow starts to run, the Load Manager process starts first; it then creates the DTM process.
=======================================
55.Informatica - Can u generate reports in Informatica?
QUESTION #55 Yes. By using Metadata reporter we can generate reports in
informatica.
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 05:05:46 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Informatica is an ETL tool; you cannot build business reports with it, but you can generate metadata
reports, which are not intended for business analysis.
Cheers
Sithu
=======================================
56.Informatica - Define mapping and sessions?
QUESTION #56 Mapping: It is a set of source and target definitions linked by
transformation objects that define the rules for transformation.
Session : It is a set of instructions that describe how and when to move data
from source to targets.
No best answer available. Please pick the good answer available or submit your answer.
December 04, 2006 15:07:09 #1
Pavani
=======================================
Mapping:
A set of source and target definitions linked by different transformations that define the rules for data
transformation.
Session:
A set of instructions that tells the Informatica Server how and when to move the data; the server
identifies which mapping to run with the help of the session.
-Pavani.
=======================================
57.Informatica - Which tool U use to create and manage sessions
and batches and to monitor and stop the informatica s
QUESTION #57 Informatica server manager.
No best answer available. Please pick the good answer available or submit your answer.
May 16, 2006 12:55:46 #1
Leninformatica
=======================================
Informatica Workflow Manager and Informatica Workflow Monitor
=======================================
58.Informatica - Why we use partitioning the session in
informatica?
QUESTION #58 Partitioning improves session performance by reducing the
time taken to read the source and load the data into the target.
No best answer available. Please pick the good answer available or submit your answer.
September 30, 2005 00:26:04 #1
khadarbasha Member Since: September 2005 Contribution: 2
=======================================
Performance can be improved by processing data in parallel in a single session by creating multiple
partitions of the pipeline.
The Informatica Server can achieve high performance by partitioning the pipeline and performing the
extract, transformation, and load for each partition in parallel.
=======================================
59.Informatica - How the informatica server increases the session
performance through partitioning the source?
QUESTION #59 For relational sources the Informatica Server creates multiple
connections, one for each partition of a single source, and extracts a separate range of
data for each connection. The Informatica Server reads multiple partitions of a single
source concurrently. Similarly, for loading, the Informatica Server creates multiple
connections to the target and loads partitions of data concurrently.
For XML and file sources, the Informatica Server reads multiple files concurrently. For
loading the data, the Informatica Server creates a separate file for each partition (of a
pipeline).
No best answer available. Please pick the good answer available or submit your answer.
=======================================
60.Informatica - What r the tasks that Load Manager process will
do?
QUESTION #60 Manages session and batch scheduling: When you start the
Informatica Server, the Load Manager launches and queries the repository for a list
of sessions configured to run on the Informatica Server. When you configure the
session, the Load Manager maintains a list of sessions and session start times.
When you start a session, the Load Manager fetches the session information from the
repository to perform validations and verifications prior to starting the DTM
process.
Locking and reading the session: When the Informatica Server starts a session, the
Load Manager locks the session in the repository. Locking prevents you from starting the
session again and again.
Verifies permissions and privileges: When the session starts, the Load Manager checks
whether or not the user has the privileges to run the session.
Creating log files: The Load Manager creates a log file containing the status of the session.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
61.Informatica - What r the different threads in DTM process?
QUESTION #61 Master thread: Creates and manages all other threads.
Mapping thread: One mapping thread is created for each session; it fetches
session and mapping information.
Pre- and post-session threads: These are created to perform pre- and post-session
operations.
Reader thread: One thread is created for each partition of a source; it reads
data from the source.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
62.Informatica - Can u copy the session to a different folder or
repository?
QUESTION #62 Yes. By using the Copy Session Wizard you can copy a session into a
different folder or repository. But that target folder or repository should contain the
mapping of that session.
If the target folder or repository does not have the mapping of the session being copied,
you have to copy that mapping first before you copy the session.
No best answer available. Please pick the good answer available or submit your answer.
February 03, 2006 03:56:14 #1
Prasanna
=======================================
In addition, you can copy the workflow from the Repository Manager. This will automatically copy the
mapping, associated sources, targets, and session to the target folder.
=======================================
63.Informatica - What is batch and describe about types of
batches?
QUESTION #63 Grouping of sessions is known as a batch. Batches are of two types:
Sequential: runs sessions one after the other.
Concurrent: runs sessions at the same time.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
64.Informatica - Can u copy the batches?
QUESTION #64 NO
No best answer available. Please pick the good answer available or submit your answer.
December 16, 2007 14:26:30 #1
dl_mstr Member Since: November 2007 Contribution: 26
=======================================
Yes, I think workflows can be copied from one folder/repository to another.
=======================================
It should definitely be yes.
Without that:
1. the migration of workflows from dev to test and to production would not make any sense;
2. for similar logic we would have to repeat the same cumbersome job.
There might be some limitations while copying the batches; for example, we might not be able to copy
the overridden properties of the workflow.
=======================================
There is a slight correction to the above answer:
we might not be able to copy the overridden (written as "overwritten" above) properties.
=======================================
65.Informatica - When the informatica server marks that a batch is
failed?
QUESTION #65 If one of the sessions is configured to "run if previous completes" and
that previous session fails.
No best answer available. Please pick the good answer available or submit your answer.
April 01, 2008 06:05:46 #1
Vani_AT Member Since: December 2007 Contribution: 16
=======================================
A batch fails when the sessions in the workflow are checked with the property
"Fail parent if this task fails"
and any of the sessions in the sequential batch fails.
=======================================
66.Informatica - What r the different options used to configure the
sequential batches?
QUESTION #66 Two options:
Run the session only if the previous session completes successfully.
Always run the session.
No best answer available. Please pick the good answer available or submit your answer.
August 16, 2007 13:17:03 #1
rasmi Member Since: June 2007 Contribution: 20
=======================================
Hi
Where do we have to specify these options?
=======================================
You have to specify those options in the Workflow Designer. You can double-click the link which
connects two sessions and there define the condition PrevTaskStatus = SUCCEEDED; only then does
the next session run. You can also edit the session and check 'Fail parent if this task fails', which means
it will mark the workflow as failed. If the workflow is failed, it won't run the remaining sessions.
=======================================
I would like to make a small correction to the above answer.
Even if a session fails with the above property set,
all the following sessions of the workflow still run and succeed, depending on the validity/correctness
of the individual sessions.
The only difference this property makes is that it marks the workflow as failed.
=======================================
67.Informatica - In a sequential batch can u run the session if
previous session fails?
QUESTION #67 Yes. By setting the option "Always runs the session".
No best answer available. Please pick the good answer available or submit your answer.
June 26, 2008 01:03:25 #1
prade Member Since: May 2008 Contribution: 6
RE: In a sequential batch can u run the session if previous session fails?
=======================================
Yes you can.
No best answer available. Please pick the good answer available or submit your answer.
February 15, 2006 13:13:47 #1
sangroover
=======================================
Logically Yes
=======================================
Logically yes; we can create worklets and call the batch.
=======================================
Logically yes.
I have not worked with worklets, but as we can start a single session within a workflow, similarly we
should be able to start a worklet within a workflow.
=======================================
69.Informatica - Can u start a session inside a batch individually?
QUESTION #69 We can start a required session individually only in the case of a
sequential batch; in the case of a concurrent batch we cannot do so.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
Yes, we can do this in any case; sequential or concurrent doesn't matter.
There is no absolutely concurrent workflow: every workflow starts with a "Start" task, and hence a
workflow is a hybrid.
=======================================
Yes we can do
=======================================
70.Informatica - How can u stop a batch?
QUESTION #70 By using server manager or pmcmd.
No best answer available. Please pick the good answer available or submit your answer.
May 28, 2007 04:38:06 #1
VEMBURAJ.P
=======================================
By using the Menu command or pmcmd.
=======================================
In workflow monitor
1. click on the workflow name
2. click on stop
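From the command line, a pmcmd call along the following lines stops a running workflow (the service,
domain, credential, folder, and workflow names here are placeholders, and the exact flags vary by
PowerCenter version):

pmcmd stopworkflow -sv MyIntService -d MyDomain -u admin -p admin_pwd -f MyFolder wf_daily_load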
=======================================
71.Informatica - What r the session parameters?
QUESTION #71
Session parameters are like mapping parameters; they represent values you might want to
change between sessions, such as database connections or source files.
No best answer available. Please pick the good answer available or submit your answer.
April 01, 2008 06:40:17 #1
Vani_AT Member Since: December 2007 Contribution: 16
=======================================
In addition to this, we provide the lookup file name.
The values for these variables are provided in the parameter file, and the parameters start with a $$
(double dollar symbol). There is a predefined format for specifying a session parameter: for example, if
we want to use a parameter for a source file, then it must be prefixed with $$SrcFile_<any optional string>.
=======================================
A small correction: session parameters start with a single dollar symbol; the double dollar symbol is
used for mapping parameters.
=======================================
72.Informatica - What is parameter file?
QUESTION #72 A parameter file defines the values for parameters and
variables used in a session. A parameter file is a text file created with an editor
such as WordPad or Notepad.
You can define the following values in a parameter file:
Mapping parameters
Mapping variables
Session parameters
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 01:27:04 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
When you start a workflow you can optionally enter the directory and name of a parameter file. The
Informatica Server runs the workflow using the parameters in the file you specify.
For UNIX shell users enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
For Windows command prompt users, the parameter file name cannot have beginning or trailing spaces.
If the name includes spaces, enclose the file name in double quotes:
-paramfile "$PMRootDir\my file.txt"
Note: When you write a pmcmd command that includes a parameter file located on another machine
use the backslash (\) with the dollar sign ($). This ensures that the machine where the variable is defined
expands the server variable.
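A small sample parameter file might look like this (the folder, workflow, session, connection, and
parameter names are hypothetical):

[MyFolder.WF:wf_daily_load.ST:s_m_load_customers]
$DBConnection_Source=Oracle_Src
$InputFile_Customers=/data/in/customers.dat
$$LastExtractDate=01/01/2006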
Cheers
Sithu
=======================================
73.Informatica - What is difference between partitioning of
relational target and partitioning of file targets?
No best answer available. Please pick the good answer available or submit your answer.
June 13, 2006 19:10:59 #1
UmaBojja
=======================================
1.Database partitioning
2.RoundRobin
3.Pass-through
4.Hash-Key partitioning
All of these are applicable for relational targets; for flat file targets, only database partitioning is not
applicable.
Informatica supports n-way partitioning. You can just specify the name of the target file and create the
partitions; the rest will be taken care of by the Informatica session.
=======================================
74.Informatica - Performance tuning in Informatica?
QUESTION #74 The goal of performance tuning is optimize session performance
so sessions run during the available load window for the Informatica Server.
Increase the session performance by following.
Flat files: If your flat files are stored on a machine other than the Informatica Server,
move those files to the machine on which the Informatica Server runs.
Relational data sources: Minimize the connections to sources, targets, and the
Informatica Server to improve session performance. Moving the target database onto
the server system may improve session performance.
Staging areas: If you use staging areas, you force the Informatica Server to perform
multiple data passes. Removing staging areas may improve session performance.
You can run multiple Informatica Servers against the same repository.
Distributing the session load across multiple Informatica Servers may improve
session performance.
Running the Informatica Server in ASCII data movement mode improves session
performance, because ASCII data movement mode stores a character value in one
byte; Unicode mode takes 2 bytes to store a character.
If a session joins multiple source tables in one Source Qualifier, optimizing the
query may improve performance. Also, single table SELECT statements with an
ORDER BY or GROUP BY clause may benefit from optimization such as adding
indexes.
We can improve session performance by configuring the network packet size,
which controls how much data can cross the network at one time. To do this, go to
the Server Manager and choose Server Configure Database Connections.
If your target has key constraints and indexes, they slow the loading of data. To
improve session performance in this case, drop the constraints and indexes before
you run the session and rebuild them after the session completes.
Running parallel sessions by using concurrent batches will also reduce the time
of loading the data, so concurrent batches may also increase session performance.
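For the constraint-and-index point above, the pre- and post-session SQL could be as simple as the
following sketch (the index and table names are hypothetical):

-- pre-session SQL: drop the index before the load
DROP INDEX idx_tgt_customer_id;
-- post-session SQL: rebuild it after the load completes
CREATE INDEX idx_tgt_customer_id ON tgt_customers (customer_id);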
No best answer available. Please pick the good answer available or submit your answer.
January 05, 2007 10:20:40 #1
Infoseek Member Since: January 2007 Contribution: 4
=======================================
Thanks for the above answer. I would like to know how we can partition a big flat data file (around
1 GB), and what options to set for the same, in PowerCenter v7.x.
=======================================
75.Informatica - What is difference between maplet and reusable
transformation?
QUESTION #75 A mapplet consists of a set of transformations and is reusable. A
reusable transformation is a single transformation that can be reused.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
A mapplet is a group of reusable transformations. The main purpose of using a mapplet is to hide the
logic from the end user's point of view. It works like a function in the C language; we can use it any
number of times. It is a reusable object.
A reusable transformation is a single transformation.
=======================================
76.Informatica - Define informatica repository?
QUESTION #76 The Informatica repository is a relational database that stores
information, or metadata, used by the Informatica Server and Client tools.
Metadata can include information such as mappings describing how to
transform source data, sessions indicating when you want the Informatica
Server to perform the transformations, and connect strings for sources and
targets.
No best answer available. Please pick the good answer available or submit your answer.
=======================================
Informatica repository: The Informatica repository is at the center of the Informatica suite. You create
a set of metadata tables within the repository database that the Informatica applications and tools
access. The Informatica Client and Server access the repository to save and retrieve metadata.
Cheers
Sithu
=======================================
77.Informatica - What r the types of metadata that stores in
repository?
QUESTION #77
Following are the types of metadata stored in the repository:
Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Shortcuts
Source definitions
Target definitions
Transformations
Transformations
No best answer available. Please pick the good answer available or submit your answer.
=======================================
- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide
source data.
- Target definitions. Definitions of database objects or files that contain the target data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
- Mappings. A set of source and target definitions along with transformations containing business
logic that you build into the transformation. These are the instructions that the Informatica Server uses
to transform and move data.
- Reusable transformations. Transformations that you can use in multiple mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how and when the
Informatica Server moves data. A workflow is a set of instructions that describes how and when to run
tasks related to extracting, transforming, and loading data. A session is a type of task that you can put
in a workflow. Each session corresponds to a single mapping.
Cheers
Sithu
=======================================
78.Informatica - What is power center repository?
QUESTION #78 The PowerCenter repository allows you to share metadata
across repositories to create a data mart domain. In a data mart domain, you
can create a single global repository to store metadata used across an enterprise,
and a number of local repositories to share the global metadata as needed.
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 01:44:05 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
- Standalone repository. A repository that functions individually, unrelated and unconnected to other
repositories.
- Global repository. (PowerCenter only.) The centralized repository in a domain, a group of connected
repositories. Each domain can contain one global repository. The global repository can contain
common objects to be shared throughout the domain through global shortcuts.
- Local repository. (PowerCenter only.) A repository within a domain that is not the global repository.
Each local repository in the domain can connect to the global repository and use objects in its shared
folders.
Cheers
Sithu
=======================================
79.Informatica - How can u work with remote database in
informatica?did u work directly by using remote connections?
QUESTION #79 To work with a remote data source you need to connect to it with
remote connections. But it is not preferable to work with that remote source directly
through remote connections; instead, bring that source onto the local machine where
the Informatica Server resides. If you work directly with a remote source, session
performance decreases because less data can pass across the network in a given time.
No best answer available. Please pick the good answer available or submit your answer.
January 27, 2006 02:18:13 #1
sithusithu Member Since: December 2005 Contribution: 161
=======================================
Configure FTP
Connection details
IP address
User authentication
Cheers
Sithu
=======================================
80.Informatica - What is tracing level and what r the types of
tracing level?
QUESTION #80 The tracing level represents the amount of information that the
Informatica Server writes in a log file.
Types of tracing level
Normal
Verbose
Verbose init
Verbose data
No best answer available. Please pick the good answer available or submit your answer.
April 16, 2007 03:19:08 #1
Minnu
=======================================
Its
1) Terse
2) Normal
3) Verbose Init
4) Verbose data
=======================================
81.Informatica - If a session fails after loading of 10,000 records in
to the target.How can u load the records from
QUESTION #81 As explained above, the Informatica Server has 3 methods of
recovering sessions. Use "perform recovery" to load the records from where
the session failed.
No best answer available. Please pick the good answer available or submit your answer.
April 08, 2007 11:17:03 #1
ggk.krishna Member Since: February 2007 Contribution: 12
=======================================
Hi
We can restart the session using the session recovery option in the Workflow Manager and Workflow
Monitor; the loading then starts from the 10,001st row.
If you define the target load type as "Bulk", session recovery is not possible.
=======================================
82.Informatica - If i done any modifications for my table in back
end does it reflect in informatca warehouse or mapi
QUESTION #82 No. Informatica is not at all concerned with the back-end database. It
displays all the information that is stored in the repository. If you want back-end
changes to be reflected on the Informatica screens, you have to re-import the source or
target definitions.
No best answer available. Please pick the good answer available or submit your answer.
August 04, 2006 05:35:19 #1
vidyanand
=======================================
Yes, it will be reflected once you refresh the mapping again.
=======================================
It does matter if you have a SQL override - say, in the SQ or in a Lookup where you override the
default SQL. Then if you make a change to the underlying table in the database that makes the override
SQL incorrect for the modified table, the session will fail.
If you change a table - say, rename a column that is in the SQL override statement - then the session
will fail. But if you added a column to the underlying table after the last column, then the SQL
statement in the override will still be valid. If you change the size of columns, the SQL will still be
valid, although you may get truncation of data if the database column has a larger size (more
characters) than the SQ or a subsequent transformation.
=======================================
83.Informatica - After draging the ports of three sources(sql server,
oracle,informix) to a single source qualifier, c
QUESTION #83 No. Unless and until you join those three ports in the source qualifier,
you cannot map them directly.
No best answer available. Please pick the good answer available or submit your answer.
December 14, 2005 10:37:10 #1
rishi
=======================================
If you drag three heterogeneous sources and populate the target without any join, you are creating a
Cartesian product. If you don't use a join, then not only different sources but even homogeneous
sources will show the same error.
If you do not want to join at the Source Qualifier level, you can add the joins separately.
=======================================
Yes, it is possible...
=======================================
I don't think dragging three heterogeneous sources into a single source qualifier is valid.
Whenever we drag multiple sources into the same source qualifier:
1. There must be a joining key between the tables.
2. The SQL needs to be executed in the database to join the three tables.
To use a single source qualifier for multiple sources, the data source for all the sources should be the
same. For a heterogeneous join, a Joiner transformation has to be used.
So the first part of the question itself is not possible.
=======================================
Sources from heterogeneous databases cannot be pulled into a single source qualifier. They can only
be joined using a Joiner, and then written to the target.
=======================================
84.Informatica - What is Data cleansing..?
QUESTION #84 The process of finding and removing or correcting data that is
incorrect, out-of-date, redundant, incomplete, or formatted incorrectly.
No best answer available. Please pick the good answer available or submit your answer.
April 27, 2005 11:12:05 #1
neetha
=======================================
Data cleansing is a two step process including DETECTION and then CORRECTION of errors in a
data set.
=======================================
This is nothing but polishing the data. For example, one subsystem stores Gender as M and F,
while another may store it as MALE and FEMALE; we need to polish (clean) this data before it is
added to the data warehouse. Another typical example is addresses: the customer addresses maintained
by the various subsystems can differ, so we might need an address-cleansing tool to get the customers'
addresses into a clean and consistent form.
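In an Informatica Expression transformation, the gender standardization above could be sketched with
DECODE (the port name GENDER_IN is hypothetical):

DECODE(UPPER(GENDER_IN),
       'M', 'MALE',
       'F', 'FEMALE',
       'MALE', 'MALE',
       'FEMALE', 'FEMALE',
       'UNKNOWN')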
=======================================
Data cleansing means removing inconsistent data and transferring the data in the correct way and
correct manner.
=======================================
85.Informatica - how can we partition a session in Informatica?
QUESTION #85
No best answer available. Please pick the good answer available or submit your
answer.
July 08, 2005 18:12:42 #1
Kevin B
Fields: date key, full date, day of week, day, month, quarter, fiscal year.
=======================================
In a relational data model, for normalization purposes, the year lookup, quarter lookup, month lookup,
and week lookup tables are not merged into a single table. In dimensional data modeling (star schema)
these tables would be merged into a single table called the TIME DIMENSION, for performance and
for slicing data.
This dimension helps to find the sales done on a daily, weekly, monthly, and yearly basis. We can do
trend analysis by comparing this year's sales with the previous year's, or this week's sales with the
previous week's.
=======================================
A TIME DIMENSION is a table that contains detailed information about the time at which a particular
transaction or event took place.
87.Informatica - Diff between informatica repository server &
informatica server
QUESTION #87
No best answer available. Please pick the good answer available or submit your
answer.
August 11, 2005 02:05:13 #1
Nagi R Anumandla
The Informatica Server connects to source data and target data using native
ODBC drivers;
it also connects to the repository for running sessions and retrieving metadata information.
source ------> informatica server ---------> target
                       REPOSITORY
=======================================
89.Informatica - Discuss the advantages & Disadvantages of star
& snowflake schema?
QUESTION #89
No best answer available. Please pick the good answer available or submit your
answer.
August 25, 2005 02:24:19 #1
prasad nallapati
RE: Discuss the advantages & Disadvantages of star & snowflake schema?
In a STAR schema there is no relation between any two dimension tables whereas in a SNOWFLAKE
schema there is a possible relation between the dimension tables.
=======================================
A star schema consists of a single fact table surrounded by dimension tables. In a snowflake schema
the dimension tables are connected to some sub-dimension tables.
A star schema is used for report generation; a snowflake schema is used for cubes.
The advantage of a snowflake schema is that the normalized tables are easier to maintain; it also saves
storage space.
The disadvantage of a snowflake schema is that it reduces the effectiveness of navigation across the
tables due to the large number of joins between them.
=======================================
It depends upon the clients and which schema they are following, whether snowflake or star.
=======================================
Snowflakes are an addition to the Kimball dimensional system to enable that system to handle
hierarchical data. When Kimball proposed the dimensional data warehouse, it was not at first
recognized that hierarchical data could not be stored.
Commonly every attempt is made not to use snowflakes by flattening hierarchies, but when this is
not possible or practical, the snowflake design solves the problem.
Snowflake tables are often called "outliers" by data modelers because they must share a key with a
dimension that directly connects to a fact table.
SCD2 can have "outliers", but these are very difficult to instantiate.
=======================================
90.Informatica - What are the main advantages and purpose of
using the Normalizer Transformation in Informatica?
QUESTION #90
No best answer available. Please pick the good answer available or submit your
answer.
August 25, 2005 02:27:10 #1
prasad nallapati
RE: What are the main advantages and purpose of using the Normalizer Transformation in Informatica?
=======================================
Hi. By using the Normalizer transformation we can convert rows into columns and columns into rows,
and we can also generate multiple rows from one row.
=======================================
vamshidhar
Normally it is used to convert columns to rows, but for converting rows to columns we need an
Aggregator, an Expression, and a little coding effort. Denormalization is not possible with a
Normalizer transformation.
=======================================
91.Informatica - How to read rejected data or bad data from bad
file and reload it to target?
QUESTION #91
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2005 23:16:28 #1
ravi kumar guturi
RE: How to read rejected data or bad data from bad fil...
Correct the rejected data and send it to the target relational tables using the reject loader utility. Find
the rejected data by using the column indicator and row indicator in the bad file.
=======================================
Design a trap to a file or table by using a Filter transformation or a Router transformation; a Router
works well for this.
=======================================
92.Informatica - How do you transfer the data from the data
warehouse to a flat file?
QUESTION #92
No best answer available. Please pick the good answer available or submit your
answer.
November 09, 2005 17:02:07 #1
paul luthra
22 transformations: Expression, Joiner, Aggregator, Router, Stored Procedure, etc. You can find them
on the Informatica transformation toolbar.
=======================================
In a mapping we can use any number of transformations, depending on the project and the particular
transformations required.
=======================================
There is no such limitation on the number of transformations, but from a performance point of view,
using too many transformations will reduce the session performance.
My suggestion: if many more transformations are needed in a mapping, it is better to go for a stored
procedure.
=======================================
Design with the least number of transformations that can do the most jobs.
=======================================
94.Informatica - What is the difference between Normal load and
Bulk load?
QUESTION #94
No best answer available. Please pick the good answer available or submit your
answer.
September 09, 2005 09:01:19 #1
suresh
RE: What is the difference between Normal load and Bulk load?
Bulk load is faster than normal load. In the case of bulk load, the Informatica Server bypasses the
database log file, so we cannot roll back the transactions. Bulk load is also called direct loading.
=======================================
Normal load: A normal load writes information to the database log file so that it can help if any
recovery is needed. When the source file is a text file and we are loading data to a table, in such cases
we should use normal load only, or else the session will fail.
Bulk mode: A bulk load does not write information to the database log file, so if any recovery is
needed we can't do anything in such cases.
=======================================
Also remember that not all databases support bulk loading, and bulk loading fails a session if your
mapping has primary keys.
=======================================
95.Informatica - what is a junk dimension
QUESTION #95
No best answer available. Please pick the good answer available or submit your
answer.
October 17, 2005 06:22:53 #1
prasad Nallapati
=======================================
A junk dimension is used for constraining queries based on text and flag values.
Sometimes a few attributes are discarded from the major dimensions; we then keep all those
discarded attributes in one place, and that is called a junk dimension.
=======================================
Junk dimensions are particularly useful in a snowflake schema, and are one of the reasons why a
snowflake is sometimes preferred over the star schema.
There are dimensions that are frequently updated. So, from the base set of the dimensions that already
exist, we pull out the dimensions that are frequently updated and put them into a separate table.
This dimension table is called the junk dimension.
=======================================
96.Informatica - can we lookup a table from a source qualifier
transformation-unconnected lookup
QUESTION #96
No best answer available. Please pick the good answer available or submit your
answer.
November 22, 2005 01:29:07 #1
nandam Member Since: November 2005 Contribution: 1
1) Unless you assign the output of the source qualifier to another transformation or to a target, there is
no way it will include the field in the query.
=======================================
No, it's not possible; the source qualifier doesn't have any variable fields to utilize in an expression.
=======================================
97.Informatica - how to get the first 100 rows from the flat file
into the target?
QUESTION #97
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2005 01:07:33 #1
ravi
RE: how to get the first 100 rows from the flat file i...
=======================================
Double-click on the link and type $$source success rows (a parameter in the session variables) = 100.
=======================================
I'd copy the first 100 records to a new file and load that.
Just add this Unix command in session properties --> Components --> Pre-session Command
head -100 <source file path> > <new file name>
Mention new file name and path in the Session --> Source properties.
=======================================
Use a Sequence Generator with its properties set to reset, and then use a Filter with the condition
NEXTVAL <= 100.
=======================================
98.Informatica - can we modify the data in flat file?
QUESTION #98
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2005 23:19:25 #1
ravikumar guturi
Just open the text file with Notepad and change whatever you want (but the datatype should remain
the same).
Cheers
sithu
=======================================
Yes, by opening the text file and editing it.
=======================================
You can also generate or modify a flat file with a program, meaning not manually.
=======================================
Let's not discuss manually modifying the data of a flat file.
Let's assume that the target is a flat file. I want to update the data in the flat file target based on the
input source rows. Like we use the update strategy / target properties for updates in the case of
relational targets, do we have any options in the session or mapping to perform a similar task for a flat
file target?
I have heard about the append option in INFA 8.x. This may be helpful for incremental loads in the
flat file.
But this is not a workaround for updating the rows.
Please post your views.
=======================================
You can modify the flat file using shell scripting in unix ( awk grep sed ).
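For instance, a one-line sed command can rewrite values in a flat file target (the file name and the
pattern are hypothetical):

sed 's/^M,/MALE,/' customers_out.dat > customers_fixed.dat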
=======================================
99.Informatica - difference between summary filter and details
filter?
QUESTION #99
No best answer available. Please pick the good answer available or submit your
answer.
December 01, 2005 09:52:34 #1
renuka
Hi
Summary filter --- can be applied to a group of records that share common values.
Detail filter --- can be applied to each and every record in a database.
=======================================
100.Informatica - what are the difference between view and
materialized view?
QUESTION #100
No best answer available. Please pick the good answer available or submit your
answer.
Materialized views are schema objects that can be used to summarize, precompute, replicate, and
distribute data, e.g. to construct a data warehouse.
A materialized view provides indirect access to table data by storing the results of a query in a separate
schema object. Unlike an ordinary view, which does not take up any storage space or contain any data,
a materialized view occupies storage and holds the results of the query.
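A minimal Oracle sketch of the idea (the table and column names are hypothetical):

CREATE MATERIALIZED VIEW mv_daily_sales
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;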
=======================================
A view is a tailored representation of data; it accesses data from existing tables, has a logical
structure, and does not occupy space.
A materialized view stores precalculated data; it has a physical structure and occupies space.
=======================================
If you update or insert through a view, the corresponding base table is affected, but those changes will
not affect a materialized view until it is refreshed.
=======================================
Materialized views store copies of data or aggregations. Materialized views can be used to replicate
all or part of a single table, or to replicate the result of a query against multiple tables; refreshes of the
replicated data can be done automatically by the database at set time intervals.
=======================================
A view always reflects changes made to its master table after the view is created, since it stores no
data of its own. A materialized view does not immediately carry changes made to its master table after
the materialized view is created; it must be refreshed to pick them up.
=======================================
View: the SELECT query is stored in the DB. Whenever you select from the view, the stored query is
executed; effectively you are calling the stored query. When you want to use a query repeatedly, or for
complex queries, we store the queries in the DB using a view.
A materialized view stores the data as well, like a table, so storage parameters are required.
=======================================
A view is just a stored query and has no physical part. Once a view is instantiated, performance can be
quite good, until it is aged out of the cache. A materialized view has a physical table associated with it,
so it doesn't have to resolve the query each time it is queried. Depending on how large the result set is
and how complex the query, a materialized view should perform better.
=======================================
On a materialized view we can't perform DML operations, but the reverse is true in the case of a
simple view.
=======================================
In the case of a materialized view we can perform DML, but the reverse is not true in the case of a
simple view.
=======================================
101.Informatica - What is the difference between summary filter
and detail filter
QUESTION #101
No best answer available. Please pick the good answer available or submit your
answer.
November 23, 2005 15:07:50 #1
sir
=======================================
At the time of software integration the bottom-up approach is good, but at implementation time the top-down approach is good.
=======================================
top down
Bottom up
Cheers
Sithu
=======================================
In the top-down approach, we first build the data warehouse and then the data marts; this needs more cross-functional skills, is a time-taking process, and is also costly.
In the bottom-up approach, we first build the data marts and then the data warehouse; the data mart that is built first remains as a proof of concept for the others. This takes less time compared to the above and costs less.
=======================================
Nothing is wrong with any of these approaches. It all depends on your business requirements and what is already in place at your company. Lots of folks have a hybrid approach. For more info read Kimball vs Inmon.
=======================================
103.Informatica - Discuss which is better among incremental load,
Normal Load and Bulk load
QUESTION #103
No best answer available. Please pick the good answer available or submit your
answer.
October 20, 2005 03:06:53 #1
ravi guturi
=======================================
It depends on the requirement. Otherwise incremental load can be better, as it takes only the data that is not already present in the target.
=======================================
If the database supports the bulk load option from Informatica, then using BULK LOAD for initial loading of the tables is recommended.
Depending upon the requirement, we should choose between normal and incremental loading strategies.
=======================================
Normal Loading is Better
=======================================
Rajesh:Normal load is Better
=======================================
If supported by the database, bulk load can do the loading faster than normal load. (The incremental load concept is different; don't merge it with bulk load and normal load.)
=======================================
104.Informatica - What is the difference between connected and
unconnected stored procedures.
QUESTION #104
No best answer available. Please pick the good answer available or submit your
answer.
September 25, 2005 20:02:14 #1
sangeetha
Unconnected:
The unconnected Stored Procedure transformation is not connected directly to the flow of the mapping.
It either runs before or after the session or is called by an expression in another transformation in the
mapping.
connected:
The flow of data through a mapping in connected mode also passes through the Stored Procedure
transformation. All data entering the transformation through the input ports affects the stored procedure.
You should use a connected Stored Procedure transformation when you need data from an input port
sent as an input parameter to the stored procedure or the results of a stored procedure sent as an output
parameter to another transformation.
=======================================
Run a stored procedure before or after your session: Unconnected
Run a stored procedure once during your mapping, such as pre- or post-session: Unconnected
Run a stored procedure every time a row passes through the Stored Procedure transformation: Connected or Unconnected
Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value: Unconnected
Pass parameters to the stored procedure and receive a single output parameter: Connected or Unconnected
Pass parameters to the stored procedure and receive multiple output parameters: Connected or Unconnected
Run nested stored procedures: Unconnected
Note: To get multiple output parameters from an unconnected Stored Procedure transformation you must create variables for each output parameter. For details see Calling a Stored Procedure From an Expression.
Cheers
Sithu
=======================================
105.Informatica - Differences between Informatica 6.2 and
Informatica 7.0
QUESTION #105
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2005 01:17:06 #1
ravi
3. Grid - servers working on different operating systems can coexist on the same server
7. Version controlling
8. Data profiling
=======================================
1) Data profiling.
Thanks in advance.
-Azhar
=======================================
106.Informatica - whats the diff between Informatica powercenter
server, repositoryserver and repository?
QUESTION #106 The PowerCenter server contains the scheduled runs at which time
data should load from source to target.
The repository contains all the definitions of the mappings done in the Designer.
No best answer available. Please pick the good answer available or submit your answer.
November 08, 2005 01:59:02 #1
Gokulnath_J Member Since: November 2005 Contribution: 3
=======================================
The repository is a database in which all Informatica components are stored in the form of tables. The repository server controls the repository and maintains data integrity and consistency across the repository when multiple users use Informatica. The PowerCenter Server/Infa server is responsible for the execution of the components (sessions) stored in the repository.
=======================================
hi
The repository is nothing but a set of tables created in a DB; it stores all the metadata of the Infa objects. The repository server is the one which communicates with the repository, i.e. the DB; all the metadata is retrieved from the DB through the repository server, and all the client tools communicate with the DB through it. The Infa server is the one which is responsible for running the WF, tasks etc.; the Infa server also communicates with the DB through the repository server.
=======================================
Power center server - the PowerCenter server does the extraction from the source and loads it into the target.
Repository server - it takes care of the connection between the PowerCenter client and the repository.
Repository - it is the place where all the metadata information is stored. The repository server and the PowerCenter server access the repository for managing the data.
=======================================
107.Informatica - how to create the staging area in your database
QUESTION #107 The client has a database; through that database you get all the sources.
No best answer available. Please pick the good answer available or submit your answer.
November 02, 2005 11:56:42 #1
Chandran
=======================================
If you have defined all the staging tables as targets, use the option Targets --> Generate SQL in the Warehouse Designer.
=======================================
A staging area in a DW is used as a temporary space to hold all the records from the source system. So, more or less, it should be an exact replica of the source systems, except for the load strategy, where we use truncate and reload options.
So create it using the same layout as in your source tables, or using the Generate SQL option in the Warehouse Designer tab.
=======================================
Creating the staging tables/area is the work of the data modeler/DBA, just like CREATE TABLE <tablename>...; the tables will be created, and they will have some name to identify them as staging, like dwc_tmp_asset_eval.
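For illustration, one common way to build a staging table with the same layout as a source table (the source table name here is illustrative; dwc_tmp_asset_eval follows the naming mentioned above):
CREATE TABLE dwc_tmp_asset_eval AS
  SELECT * FROM asset_eval WHERE 1 = 0;  -- copies the layout, loads no rows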
=======================================
108.Informatica - what does the expression n filter
transformations do in Informatica Slowly growing target wizard?
QUESTION #108
No best answer available. Please pick the good answer available or submit your
answer.
November 02, 2005 23:10:06 #1
sivapreddy Member Since: November 2005 Contribution: 1
The Filter transformation filters out the rows that are not flagged and passes the flagged rows to the Update Strategy transformation.
=======================================
The Expression transformation finds whether the primary key exists or not and calculates the new-row flag.
Cheers
Sithu
=======================================
You can use the Expression transformation to calculate values in a single row before you write to the target. For example, you might need to adjust employee salaries, concatenate first and last names, or convert strings to numbers.
=======================================
109.Informatica - Briefly explian the Versioning Concept in
Power Center 7.1.
QUESTION #109
No best answer available. Please pick the good answer available or submit your
answer.
November 29, 2005 11:29:11 #1
Manoj Kumar Panigrahi
Hi manoj
thanks
sri
=======================================
When you create a version of a folder referenced by shortcuts all shortcuts continue to reference their
original object in the original version. They do not automatically update to the current folder version.
For example, if you have a shortcut to a source definition in the Marketing folder, version 1.0.0, and then you create a new folder version 1.5.0, the shortcut continues to point to the source definition in version 1.0.0.
Maintaining versions of shared folders can result in shortcuts pointing to different versions of the
folder. Though shortcuts to different versions do not affect the server they might prove more difficult to
maintain. To avoid this you can recreate shortcuts pointing to earlier versions but this solution is not
practical for much-used objects. Therefore when possible do not version folders referenced by shortcuts.
Cheers
Sithu
=======================================
110.Informatica - How to join two tables without using the Joiner
Transformation.
QUESTION #110
No best answer available. Please pick the good answer available or submit your
answer.
December 01, 2005 07:49:58 #1
It is possible to join two or more tables by using the Source Qualifier, provided the tables have a relationship.
When you drag and drop the tables, you will get a source qualifier for each table. Delete all the source qualifiers and add a common source qualifier for all. Right-click on the source qualifier and you will find EDIT; click on it, then click on the Properties tab, where you will find SQL Query, in which you can write your SQL.
=======================================
The Joiner transformation requires two input transformations from two separate pipelines. An input
transformation is any transformation connected to the input ports of the current transformation.
Cheers
Sithu
=======================================
The Joiner transformation is used to join n (n>1) tables from the same or different databases, but the Source Qualifier transformation can be used to join only tables from the same database.
=======================================
Simple: in the session properties there is a user-defined join option; by using this we can join without a Joiner.
=======================================
Use the Source Qualifier transformation to join tables on the SAME database. Under its Properties tab you can specify the user-defined join. Any select statement you can run on a database, you can also do in the Source Qualifier.
Note: you can only join 2 tables with Joiner Transformation but you can join two tables from different
databases.
Cheers
Ray Anthony
=======================================
hi
You can join two RDBMS sources of the same database using a SQ by specifying user-defined joins.
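As a sketch, the SQL override in the SQ might look like this (EMP/DEPT are the usual Oracle sample tables, used here purely for illustration):
SELECT EMP.EMPNO, EMP.ENAME, DEPT.DNAME
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO
Alternatively, leave the default query alone and put just the join condition (EMP.DEPTNO = DEPT.DEPTNO) in the User Defined Join property.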
=======================================
111.Informatica - Identifying bottlenecks in various components
of Informatica and resolving them.
QUESTION #111
No best answer available. Please pick the good answer available or submit your
answer.
December 20, 2005 08:13:47 #1
kalyan
hai
The best way to find out bottlenecks is writing the output to a flat file and seeing where the bottleneck is.
kalyan
=======================================
112.Informatica - How do you decide whether you need to do
aggregations at database level or at Informatica level?
QUESTION #112
No best answer available. Please pick the good answer available or submit your
answer.
December 05, 2005 04:45:35 #1
Rishi
It depends upon our requirement only. If you have a good processing database you can create the aggregation table or view at the database level; otherwise it is better to use Informatica. Here I'm explaining why we need to use Informatica.
Whatever it may be, Informatica is a third-party tool, so it will take more time to process aggregation compared to the database. But in Informatica there is an option called incremental aggregation which will help you update the current values with current values + new values; there is no need to process the entire set of values again and again, provided nobody has deleted the cache files. If that happened, the total aggregation needs to be executed in Informatica again.
=======================================
hi
See, Informatica is basically an integration tool; it all depends on the source you have and your requirement. If you have an EMS queue, a flat file, or any source other than an RDBMS, you need Informatica to do any kind of aggregate functions. If your source is an RDBMS, you are not only doing the aggregation using Informatica, right? There will be business logic behind it, and you need to do some other things like looking up against some table or joining the aggregate result with the actual source, etc.
If you are asking whether to do it at the mapping level or at the DB level, then it is always better to do the aggregation at the DB level, by using a SQL override in the SQ, if aggregation is the only main purpose of your mapping; it definitely improves the performance.
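For instance, a sketch of aggregating in the SQ override rather than in an Aggregator (emp/deptno are illustrative names):
SELECT deptno, SUM(sal) AS total_sal
FROM emp
GROUP BY deptno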
=======================================
113.Informatica - Source table has 1000 rows. In session
configuration --- target Load option-- \
QUESTION #113
No best answer available. Please pick the good answer available or submit your
answer.
April 10, 2007 23:51:51 #1
Surendra Kumar
select sal from emp where rownum < 3;
=======================================
SELECT sal
FROM (SELECT sal FROM my_table ORDER BY sal DESC)
WHERE ROWNUM < 4;
=======================================
since this is informatica.. you might as well use the Rank transformation. check out the help file on how
to use it.
Cheers
Ray Anthony
=======================================
select max(sal) from emp;
=======================================
The following is the query to find out the top three salaries:
select * from emp e where 3 > (select count(*) from emp where sal > e.sal);
=======================================
You can write the query as follows:
SQL> select * from (select ename, sal from emp order by sal desc) where rownum < 3;
=======================================
115.Informatica - which objects are required by the debugger to
create a valid debug session?
QUESTION #115
No best answer available. Please pick the good answer available or submit your
answer.
December 05, 2005 03:15:32 #1
Rishi
Source, target, lookups, and expressions should be available, and a minimum of one breakpoint should be set for the debugger to debug your session.
=======================================
hi
We can create a valid debug session even without a single breakpoint. But we have to give valid database connection details for the sources, targets, and lookups used in the mapping, and it should contain valid mapplets (if there are any in the mapping).
=======================================
The Informatica server must be running.
=======================================
116.Informatica - What is the limit to the number of sources and
targets you can have in a mapping
QUESTION #116
No best answer available. Please pick the good answer available or submit your
answer.
December 05, 2005 03:21:24 #1
Rishi
As per my knowledge there is no such restriction on the number of sources or targets inside a mapping.
The question is: if you make N tables participate at a time in processing, what is the position of your database? From an organization point of view it is never encouraged to use N tables at a time, since it reduces database and Informatica server performance.
=======================================
The restriction is only on the database side: how many concurrent threads are you allowed to run on the DB server?
=======================================
117.Informatica - What is difference between IIF and DECODE
function
QUESTION #117
No best answer available. Please pick the good answer available or submit your
answer.
December 16, 2005 10:27:07 #1
VJ
You can use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3, BONUS))), 0 )
You can use DECODE instead of IIF in many cases. DECODE may improve readability. The following
shows how you can use DECODE instead of IIF :
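A sketch of the equivalent expression, using Informatica's DECODE( TRUE, ... ) pattern with the same illustrative ports as above (0 is the default value returned when no condition matches):
DECODE( TRUE,
SALES > 0 AND SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS,
0 )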
=======================================
You can use DECODE when conditioning columns as well, while we can't use IIF there (but you can use CASE); and by using DECODE, retrieving data is quicker.
=======================================
DECODE can be used in a SQL SELECT statement, whereas IIF cannot be used there.
=======================================
118.Informatica - What are variable ports and list two situations
when they can be used?
QUESTION #118
No best answer available. Please pick the good answer available or submit your
answer.
December 19, 2005 20:41:34 #1
Rajesh
You can also use them, for example, with price, quantity, and total: keeping total as a variable port, we can make a sum on the total amount by giving SUM(total_amt).
=======================================
For example, if you are trying to calculate a bonus from the emp table:
BONUS = SAL * 0.2
TOTALSAL = SAL + COMM + BONUS
=======================================
Variable Ports usually carry intermediate data (values) and can be used in Expression transformation.
=======================================
119.Informatica - How does the server recognise the source and
target databases?
QUESTION #119
No best answer available. Please pick the good answer available or submit your
answer.
January 01, 2006 00:53:24 #1
reddeppa
=======================================
During the execution of a workflow all the rejected rows will be stored in bad files (under the directory where your Informatica server is installed, e.g. C:\Program Files\Informatica PowerCenter 7.1\Server). These bad files can be imported as a flat file source, and then through a direct mapping we can load them in the desired format.
=======================================
121.Informatica - How to lookup the data on multiple tabels.
QUESTION #121
No best answer available. Please pick the good answer available or submit your
answer.
Hi
I have two source or target tables and I want to look up both of them. How can I? It is possible with a SQL override.
=======================================
If you want to look up data on multiple tables at a time, you can do one thing: join the tables you want, then look up that joined table. Informatica provides lookup on joined tables; hats off to Informatica.
=======================================
Hi
When you create a lookup transformation, INFA asks for a table name, and you can choose source, target, import, or skip.
So click skip, and then use the SQL override property in the Properties tab to join the two tables for the lookup.
=======================================
Join the two sources by using a Joiner transformation and then apply a lookup on the resulting table.
=======================================
hi
If the two tables are relational, then you can use the SQL lookup override option to join the two tables in the lookup properties. (You cannot join a flat file and a relational table.)
E.g.: the lookup default query will be SELECT <lookup table column names> FROM lookup_table. You can now continue this query: add the column names of the 2nd table with the qualifier, and a WHERE clause. If you want to use an ORDER BY, then put -- at the end of the ORDER BY.
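A sketch of such a lookup SQL override (EMP/DEPT and the ports are illustrative; the trailing -- comments out the ORDER BY that the server appends to the default query):
SELECT EMP.EMPNO AS EMPNO, EMP.SAL AS SAL, DEPT.DNAME AS DNAME
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO
ORDER BY EMP.EMPNO --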
=======================================
122.Informatica - What is the procedure to load the fact table.Give
in detail?
QUESTION #122
No best answer available. Please pick the good answer available or submit your
answer.
January 19, 2006 14:26:22 #1
Guest
Based on the requirements for your fact table, choose the sources and data and transform them based on your business needs. For the fact table you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys from the source tables.
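In plain SQL terms the same idea looks roughly like this (all table, sequence, and column names are illustrative):
INSERT INTO sales_fact (sales_key, cust_key, prod_key, sale_amt)
SELECT sales_seq.NEXTVAL, c.cust_key, p.prod_key, s.sale_amt
FROM src_sales s
JOIN cust_dim c ON c.cust_id = s.cust_id
JOIN prod_dim p ON p.prod_id = s.prod_id;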
=======================================
We use the two wizards (i.e., the Getting Started wizard and the Slowly Changing Dimension wizard) to load the fact and dimension tables. By using these two wizards we can create different types of mappings according to the business requirements and load into the star schema (fact and dimension tables).
=======================================
First the dimension tables need to be loaded; then, according to the specifications, the fact tables should be loaded. Don't think that fact tables are different in the case of loading; it is a general mapping, as we do for other tables. The specifications play the important role in loading the fact.
=======================================
hi
Usually source records are looked up against the records in the dimension table. DIM tables are called lookup or reference tables; all the possible values are stored in the DIM table, e.g. for product, all the existing prod_ids will be in the DIM table. When data from the source is looked up against the DIM table, the corresponding keys are sent to the fact table. This is not a fixed rule to be followed; it may vary as per your requirements and the methods you follow. Sometimes only the existence check will be done and the prod_id itself will be sent to the fact.
=======================================
123.Informatica - What is the use of incremental aggregation?
Explain me in brief with an example.
QUESTION #123
No best answer available. Please pick the good answer available or submit your
answer.
January 29, 2006 11:58:51 #1
It's a session option. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform the new aggregation calculations incrementally. We use it for performance.
=======================================
Incremental aggregation is in the session properties. Say I have 500 records in my source and again I get 300 records. If you are not using incremental aggregation, whatever calculation was done on the 500 records will be done again on 500 + 300 records. If you are using incremental aggregation, the calculation will be done only on the new records (300), due to which performance will increase.
=======================================
124.Informatica - How to delete duplicate rows in flat files source
is any option in informatica
QUESTION #124 Submitted by: gazulas
use a sorter transformation , in that u will have a "distinct" option make use of
it .
=======================================
Instead, we can use a 'select distinct' query in the Source Qualifier of the source flat file. Correct me if I am wrong.
=======================================
You cannot write a SQL override for a flat file source.
=======================================
125.Informatica - how to use mapping parameters and what is
their use
QUESTION #125
No best answer available. Please pick the good answer available or submit your
answer.
January 29, 2006 11:47:14 #1
gazulas Member Since: January 2006 Contribution: 17
In the Designer you will find the mapping parameters and variables options; you can assign a value to them in the Designer. Coming to their uses: suppose you are doing incremental extractions daily and your source system contains a day column, so every day you have to go to that mapping and change the day so that the particular data will be extracted. If we do that it will be like a layman's work. There comes the concept of mapping parameters and variables: once you assign a value to a mapping variable, it will change between sessions.
=======================================
Mapping parameters and variables make the use of mappings more flexible, and also avoid the creation of multiple mappings; they help in adding incremental data. Mapping parameters and variables have to be created in the Mapping Designer by choosing the menu option Mappings ----> Parameters and Variables, then entering the name for the variable or parameter (it has to be preceded by $$) and choosing the type (parameter/variable) and datatype. Once defined, the variable/parameter can be used in any expression, for example in the SQ transformation in the source filter properties tab: just enter the filter condition. Finally, create a parameter file to assign the value for the variable/parameter and configure it in the session properties; however, this final step is optional. If the parameter is not present in the file, the initial value defined for it is used.
We can use it, but the update flag will not remain; however, we can use a passive transformation.
=======================================
I guess no update strategy can be placed just before the target, as per my knowledge.
=======================================
You can use an Aggregator after an Update Strategy. The problem will be that once you perform the update strategy, say you have flagged some rows to be deleted and you have performed an aggregator transformation over all the rows, say using the SUM function, then the deleted rows will be subtracted in this aggregator transformation.
=======================================
127.Informatica - why dimenstion tables are denormalized in
nature ?
QUESTION #127
No best answer available. Please pick the good answer available or submit your
answer.
January 31, 2006 04:05:43 #1
Rahman
Because in data warehousing historical data should be maintained. To maintain historical data means: suppose one employee's details, like where he previously worked and where he is working now, should all be maintained in one table; if you maintain a primary key it won't allow duplicate records with the same employee id. So to maintain historical data we go for the data warehousing concept of surrogate keys (using an Oracle sequence for the critical column); by using surrogate keys we can achieve the historical data.
Since all the dimensions are maintaining historical data, they are denormalized because of duplicate entries; not exactly a duplicate record with the same employee number, but another record for it is maintained in the table.
=======================================
Dear Rahman, thanks for your response. First of all I want to tell one thing to all users who are using this site: please give answers only if you are confident about them; refer to the manual once again if you are not. If we give wrong answers, a lot of people who don't know the answer will take it as the correct answer and may fail in the interview. The site must be helpful to others; please keep that in mind.
I discussed this with my project manager. What he told me is: the attributes in a dimension table are used over and over again in queries. For efficient query performance it is best if the query picks up an attribute from the dimension table and goes directly to the fact table, not through intermediate tables. If we normalized the dimension table we would create such intermediate tables, and that would not be efficient.
=======================================
Yes, what your manager told is correct. Apart from this, we maintain hierarchies in these tables. Maintaining hierarchy is pretty important in the DWH environment. For example, suppose there is a child table and then a parent table. If child and parent are kept in different tables, one has to join or query both these tables every time to get the parent-child relation; if we have both child and parent in the same table we can always refer to them immediately. This may be one case.
Similarly, if we have a hierarchy something like this: county > city > state > territory > division > region > nation — if we had different tables for all of these, it would be a waste of database space and we would also need to query all these tables every time. That's why we maintain hierarchies in dimension tables, and based on the business we decide whether to maintain them in the same table or in different tables.
=======================================
hello everyone
I don't know the answer to this question, but I have to ask: how can we say that dimension tables are denormalized, when in a snowflake schema we normalize all the dimension tables?
=======================================
I am a beginner to DW, but as far as I know, fact tables are denormalized and dimension tables are normalized. If I am wrong, please correct me.
=======================================
De-normalization is basically the concept of keeping all the dimension hierarchies in a single dimension table. This causes a smaller number of joins while retrieving data from dimensions and hence faster data retrieval. This is why dimensions in OLAP systems are de-normalized.
=======================================
128.Informatica - In a sequential Batch how can we stop single
session?
QUESTION #128 Submitted by: prasadns26
hi,
we can stop it using the PMCMD command, or in the Monitor right-click on that particular session and select stop; this will stop the current session and the sessions next to it.
=======================================
129.Informatica - How do you handle decimal places while
importing a flatfile into informatica?
QUESTION #129
No best answer available. Please pick the good answer available or submit your
answer.
February 11, 2006 20:44:03 #1
rajendar
While getting the data from a flat file in Informatica (i.e., import data from flat file), it will ask for the precision; just enter that.
=======================================
While importing the flat file, the flat file wizard helps in configuring the properties of the file: select the numeric column and just enter the precision value and the scale. Precision includes the scale; for example if the number is 98888.654, enter the precision as 8 and the scale as 3, and the width as 10 for a fixed-width flat file.
=======================================
You can handle that by simply using the Source Analyzer window: go to the ports of that flat file representation and change the precision and scale.
=======================================
hi
While importing the flat file definition, just specify the scale for a numeric data type. In the mapping, the flat file source supports only the number datatype (no decimal and integer). The SQ associated with that source will have a decimal datatype for that number port of the source.
Source -> number datatype port -> SQ -> decimal datatype. Integer is not supported; hence decimal takes care of it.
=======================================
130.Informatica - If you have four lookup tables in the workflow.
How do you troubleshoot to improve performance?
QUESTION #130
No best answer available. Please pick the good answer available or submit your
answer.
February 10, 2006 15:51:01 #1
swapna
There are many ways to improve a mapping which has multiple lookups.
1) We can create an index for the lookup table if we have permissions (staging area).
2) Divide the lookup mapping into two: (a) dedicate one to inserts (source - target for new rows; only the new rows will come into the mapping and the process will be fast); (b) dedicate the second one to updates (source - target for existing rows; only the rows which already exist will come into the mapping).
=======================================
131.Informatica - How do I import VSAM files from source to
target. Do I need a special plugin
QUESTION #131
No best answer available. Please pick the good answer available or submit your
answer.
February 13, 2006 08:52:18 #1
swati
Hi
In the Mapping Designer we have a direct option to import files from VSAM. Navigation: Sources > Import from file > file from COBOL.
Thanks
=======================================
Yes you will need PowerExchange. With the product you can read from and write to VSAM. I have
used it to read VSAM from one mainframe platform and write to a different platform. Have worked on
KSDS and ESDS file types. You will need PowerExchange client on your platform and a
PowerExchange listener on each of the mainframe platform(s) that you wish to work on.
=======================================
PowerExchange does not need to copy your VSAM file to Oracle unless you want to do that. It can do a
direct read/write to VSAM.
=======================================
132.Informatica - Differences between Normalizer and
Normalizer transformation.
QUESTION #132
No best answer available. Please pick the good answer available or submit your
answer.
March 08, 2006 06:03:58 #1
ravi kumar guturi
It changes rows into columns and columns into rows.
=======================================
133.Informatica - What is IQD file?
QUESTION #133
No best answer available. Please pick the good answer available or submit your
answer.
simply man
Sampling: just sample the data, by sending a subset of the data from source to target.
=======================================
135.Informatica - How to import oracle sequence into Informatica.
QUESTION #135
No best answer available. Please pick the good answer available or submit your
answer.
Hi Sunil
I got a problem with this... Can you just tell me a procedure to generate sequence numbers in SQL, like: if I give n number of employees it should generate the sequence, and I want to use the values in Informatica, using a stored procedure, and load them into the target.
thanks
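A minimal Oracle sketch of that idea (the sequence and function names are illustrative); such a function could then be called from a Stored Procedure transformation:
CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1;
CREATE OR REPLACE FUNCTION next_emp_id RETURN NUMBER IS
  v_id NUMBER;
BEGIN
  SELECT emp_seq.NEXTVAL INTO v_id FROM dual;  -- draw the next value
  RETURN v_id;
END;
/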
=======================================
136.Informatica - With out using Updatestretagy and sessons
options, how we can do the update our target table?
QUESTION #136
No best answer available. Please pick the good answer available or submit your
answer.
February 14, 2006 23:48:35 #1
Saritha
Insert
Update as Update
Update as Insert
Update else Insert
like that (the row-handling options at the session level).
=======================================
By default all the rows in the session are set with the insert flag; you can change it in the session general properties: Treat source rows as: Update.
So all the incoming rows will be set with the update flag; now you can update the rows in the target table.
=======================================
hi
Update override in the target properties is used basically for updating the target table based on a non-key column, e.g. update by ENAME, which is not a key column in the EMP table. But if you use an Update Strategy or the session-level properties, the target necessarily should have a PK.
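A sketch of such a target update override (table and port names are illustrative; :TU refers to the ports of the target transformation):
UPDATE EMP_TGT
SET SAL = :TU.SAL, COMM = :TU.COMM
WHERE ENAME = :TU.ENAME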
=======================================
We can connect two relational tables in one SQ transformation; no errors will occur.
Regards
R.Karthikeyan
=======================================
1. Both tables should have a primary key/foreign key relationship.
2. Both tables should be available in the same schema or same database.
=======================================
138.Informatica - what are partition points?
QUESTION #138 Submitted by: saritha
Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages.
=======================================
139.Informatica - what are cost based and rule based approaches
and the difference
QUESTION #139
No best answer available. Please pick the good answer available or submit your
answer.
March 02, 2006 17:18:19 #1
Gayathri
Cost-based and rule-based approaches are optimization techniques used in relation to databases, where we need to optimize a SQL query.
Basically, Oracle provides two types of optimizers (indeed 3, but we use only these two techniques, because the third has some disadvantages).
Whenever you process any SQL query in Oracle, what the Oracle engine internally does is read the query and decide which will be the best possible way to execute it. In this process Oracle follows these optimization techniques:
1. Cost-Based Optimizer (CBO): if a SQL query can be executed in two different ways (like it may have path 1 and path 2 for the same query), then the CBO calculates the cost of each path, analyses which path has the lower cost of execution, and then executes that path, so that it can optimize the query execution.
2. Rule-Based Optimizer (RBO): this basically follows the rules which are needed for executing a query. So, depending on the number of rules which are to be applied, the optimizer runs the query.
Use:
If the table you are trying to query has already been analyzed, then Oracle will go with the CBO.
For the first time, if the table is not analyzed, Oracle will go with a full table scan.
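For illustration, classic Oracle syntax for steering between the two (emp is an illustrative table):
-- Gather statistics so the cost-based optimizer can be used:
ANALYZE TABLE emp COMPUTE STATISTICS;
-- Force the rule-based optimizer for a single query via a hint:
SELECT /*+ RULE */ * FROM emp;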
=======================================
140.Informatica - what is mystery dimention?
QUESTION #140
No best answer available. Please pick the good answer available or submit your
answer.
March 05, 2006 23:55:59 #1
Reddy
=======================================
141.Informatica - what is difference b/w Informatica 7.1 and
Abinitio
QUESTION #141
No best answer available. Please pick the good answer available or submit your
answer.
In Ab Initio there is the concept of the Co>Operating System, which runs the mapping (graph) in a parallel fashion.
=======================================
142.Informatica - What is Micro Strategy? Why is it used for? Can
any one explain in detail about it?
QUESTION #142
No best answer available. Please pick the good answer available or submit your
answer.
Yeah, sure: just right-click on the particular session and go to the recovery option.
=======================================
144.Informatica - what is the gap analysis?
QUESTION #144
No best answer available. Please pick the good answer available or submit your
answer.
April 11, 2007 10:55:26 #1
The source that does not meet the requirements specified in the BRD, using the source given in the SSSD, is treated as gap analysis; in one word, the difference between the two is called gap analysis.
=======================================
145.Informatica - what is the difference between stop and abort
QUESTION #145
No best answer available. Please pick the good answer available or submit your
answer.
March 02, 2006 15:17:45 #1
Sirisha
Stop: if the session you want to stop is part of a batch, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: you can issue the abort command; it is similar to the stop command except it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.
=======================================
Stop: in this case the data query from the source databases is stopped immediately, but whatever data has already been loaded into the buffer continues through the transformations and loading. Abort: same as Stop, but in this case the maximum time allowed for the buffered data is 60 seconds.
=======================================
146.Informatica - can we run a group of sessions without using
workflow manager
QUESTION #146
No best answer available. Please pick the good answer available or submit your
answer.
March 05, 2006 23:48:38 #1
Reddy
Yes, it's possible: using the pmcmd command, without using the Workflow Manager, you can run the group of sessions.
=======================================
147.Informatica - what is meant by complex mapping,
QUESTION #147
No best answer available. Please pick the good answer available or submit your
answer.
March 13, 2006 02:46:29 #1
satyam_un Member Since: March 2006 Contribution: 5
Complex mapping means it involves more logic and more business rules.
For example, in my bank project there are many customers who, after taking loans, relocated to another place; at that time I found it difficult to maintain both previous and current addresses.
=======================================
148.Informatica - explain use of update strategy transformation
QUESTION #148
No best answer available. Please pick the good answer available or submit your
answer.
March 22, 2006 00:00:52 #1
satyambabu
...whereas with a mapping variable the value changes, and the Informatica server saves the value in the repository and uses it the next time you run the session.
=======================================
If we need to change certain attributes of a mapping after every time the session is run it will be very
difficult to edit the mapping and then change the attribute. So we use mapping parameters and variables
and define the values in a parameter file. Then we could edit the parameter file to change the attribute
values. This makes the process simple.
Mapping parameter values remain constant. If we need to change the parameter value then we need to
edit the parameter file .
But value of mapping variables can be changed by using variable function. If we need to increment the
attribute value by 1 after every session run then we can use mapping variables .
In a mapping parameter we need to manually edit the attribute value in the parameter file after every
session run.
=======================================
How can you edit the parameter file? Once you setup a mapping variable how can you define them in a
parameter file?
=======================================
150.Informatica - what is worklet and what use of worklet and in
which situation we can use it
QUESTION #150 Submitted by: SSekar
1) Timer 2) Decision 3) Command 4) Event Wait 5) Event Raise 6) Email etc......
=======================================
A worklet is a set of tasks. If a certain set of tasks has to be reused in many workflows, then we use worklets. To execute a worklet, it has to be placed inside a workflow.
=======================================
A worklet is a reusable workflow. It might contain more than one task in it. We can use these worklets in other workflows.
=======================================
Besides the reusability of a worklet, as mentioned above, we can also use a worklet to group related sessions together in a very big workflow. Suppose we have to extract a file and then load a fact table in the workflow; we can use one worklet to load/update the dimensions.
=======================================
151.Informatica - How do you configure mapping in informatica
QUESTION #151
No best answer available. Please pick the good answer available or submit your
answer.
March 17, 2006 05:34:39 #1
suresh
For transformations that use a data cache (such as Aggregator, Joiner, Rank, and Lookup transformations), limit connected input/output or output ports. Limiting the number of connected input/output or output ports reduces the amount of data the transformations store in the data cache.
You can also perform the following tasks to optimize the mapping:
1. Configure single-pass reading.
2. Optimize datatype conversions.
3. Eliminate transformation errors.
4. Optimize transformations.
5. Optimize expressions.
You should configure the mapping with the least number of transformations and expressions to do the most amount of work possible. You should minimize the amount of data moved by deleting unnecessary links between transformations.
=======================================
152.Informatica - Can i use a session Bulk loading option that
time can i make a recovery to the session?
QUESTION #152 Submitted by: SSekar
If the session is configured to use bulk mode, it will not write recovery information to the recovery tables, so bulk loading will not perform the recovery as required.
No — why? Because with bulk load the database does not write redo log entries, whereas with normal load redo logs are created, and recovery depends on them.
=======================================
153.Informatica - what is difference between COM & DCOM?
QUESTION #153
No best answer available. Please pick the good answer available or submit your
answer.
October 05, 2006 01:23:48 #1
balaji
Hai
COM enables software components to communicate within a single machine, whereas DCOM is the protocol which enables software components on different machines to communicate with each other through a network.
=======================================
154.Informatica - what are the enhancements made to Informatica
7.1.1 version when compared to 6.2.2 version?
QUESTION #154
No best answer available. Please pick the good answer available or submit your
answer.
April 04, 2006 01:07:29 #1
sn3508 Member Since: April 2006 Contribution: 20
In 7+ versions:
- there is a propagate option, i.e. if we change the datatype of a field, all the linked columns will reflect that change
=======================================
2. Lookup on flat files
5. Version controlling
6. Data profiling
8. LDAP authentication
=======================================
155.Informatica - how do you create a mapping using multiple
lookup transformation?
QUESTION #155
No best answer available. Please pick the good answer available or submit your
answer.
March 30, 2006 16:26:57 #1
Sri
=======================================
Domain in Informatica means a central Global Repository (GDR) along with the Local Repositories (LDR) registered to this GDR. This is possible only in PowerCenter, not PowerMart.
=======================================
In database parlance, you can define a domain as the set of all possible permissible values for any attribute. For example, you can say the domain for the attribute Credit Card No# consists of all possible valid 16-digit numbers.
Thanks.
=======================================
157.Informatica - what is the hierarchies in DWH
QUESTION #157
No best answer available. Please pick the good answer available or submit your
answer.
May 01, 2006 16:37:54 #1
kalyan
Rank:
1
2 <-- 2nd position
2 <-- 3rd position
4
5
The same rank is assigned to the same totals/numbers, and the rank then follows the position. Golf usually ranks this way, so this is often called golf ranking.
Dense Rank:
1
2 <-- 2nd position
2 <-- 3rd position
3
4
The same ranks are assigned to the same totals/numbers/names; the next rank follows the serial number.
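In Oracle SQL the two can be compared side by side (emp/sal are the usual sample names, used here for illustration):
SELECT ename, sal,
       RANK() OVER (ORDER BY sal DESC) AS rank_no,
       DENSE_RANK() OVER (ORDER BY sal DESC) AS dense_rank_no
FROM emp;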
=======================================
159.Informatica - can anyone explain about incremental
aggregation with an example?
QUESTION #159
No best answer available. Please pick the good answer available or submit your
answer.
April 09, 2006 21:15:24 #1
When you use aggregator transformation to aggregate it creates index and data caches to store the data 1.
Of group By columns 2. Of aggreagte columns
the incremental aggreagtion is used when we have historical data in place which will be used in
aggregation incremental aggregation uses the cache which contains the historical data and for each
group by column value already present in cache it add the data value to its corresponding data cache
value and outputs the row in case of a incoming value having no match in index cache the new values
for group by and output ports are inserted into the cache .
=======================================
Incremental aggregation is specially used to tune the performance of the Aggregator. It captures the change each time (incrementally) you run the transformation, and then performs the aggregation function on the changed rows and not on the entire set of rows. This improves performance because you are not reading the entire source each time you run the session.
=======================================
160.Informatica - What is meant by Junk Attribute in Informatica?
QUESTION #160
No best answer available. Please pick the good answer available or submit your
answer.
April 17, 2006 10:10:23 #1
raghavendra
Hi
In the requirement collection phase all the attributes that are likely to be used in any dimension will be gathered. While creating a dimension we use all the related attributes of that dimension from the gathered list. At the last, a dimension will be created with all the leftover attributes, which is usually called the JUNK dimension, and its attributes are called JUNK attributes.
=======================================
161.Informatica - Informatica Live Interview Questions
QUESTION #161 here are some of the interview questions i could not answer,
any body can help giving answers for others also.
thanks in advance.
No best answer available. Please pick the good answer available or submit your answer.
April 17, 2006 07:27:07 #1
binoy_pa Member Since: April 2006 Contribution: 5
=======================================
Conformed dimension: one dimension that is shared by two fact tables.
Factless means a fact table without measures; it only contains foreign keys. There are two types of factless tables: one is event tracking and the other is a coverage table.
Metadata is data about data; everything is stored here, for example mappings, sessions, privileges and other data. In Informatica we can see the metadata in the repository.
The system catalog that we use in Cognos also contains data tables, privileges, predefined filters etc.; using this catalog we generate reports.
Grouped cross tab is a type of report in Cognos where we have to assign 3 measures for getting the result.
=======================================
Hi Bin
I doubt your answer about the grouped cross tab, where you said 3 measures are to be specified, which I feel is wrong.
I think that a grouped cross tab has only one measure, but the side and row headers are grouped, like
India China
=======================================
A cursor which is not declared against a fixed table in the declaration section, but in the executable section, where we can give the table name dynamically, so that the cursor can fetch the data from that table.
=======================================
A grouped cross tab is a single report which contains a number of crosstab reports based on the grouped items, like:
INDIA
M1 M2
Banglore 542 542
Hyderabad 255 458
Chennai 45 254
USA
M1 M2
LA 578 5876
Chicago 4785 546
Washington DC 548 556
PAKISTAN
M1 M2
Lahore 457 875
Karachi 458 687
Islamabad 7894 64
Thanks
=======================================
Hi
The purpose of a DWH is to provide users data through which they can make their critical business decisions.
A DSS database is nothing but a DWH. OLAP tools obviously use data from a DWH, which is transformed to generate reports. These reports are used by the users/analysts to extract strategic information which helps in decision making.
=======================================
The default index type for a DWH is a bitmap (non-unique) index.
=======================================
162.Informatica - how do we remove the staging area
QUESTION #162
No best answer available. Please pick the good answer available or submit your
answer.
June 08, 2006 12:39:29 #1
Hanu
If you don't want any staging area, don't create a staging DB; load the data directly into the target.
Hanu.
=======================================
hi
This question is logically not correct. A staging area is just a set of intermediate tables. You can create or maintain these tables in the same database as your DWH or in a different DB. These tables are used to store data from the source, which will be cleaned, transformed and put through some business logic. Once the source data is done with the above process, data from STG will be populated to the final fact table through a simple one-to-one mapping.
=======================================
Hi
No best answer available. Please pick the good answer available or submit your answer.
May 01, 2006 16:32:38 #1
kalyan
=======================================
It displays the updated information about the session in the monitor window. The monitor window
displays the status of each session when you poll the Informatica server.
=======================================
164.Informatica - What is Transaction?
QUESTION #164
No best answer available. Please pick the good answer available or submit your
answer.
April 14, 2006 09:08:18 #1
vishali
=======================================
A transaction is nothing but changing from one window to another window during a process.
=======================================
Transaction is a set of rows bound by commit or rollback.
=======================================
165.Informatica - what happens if you try to create a shortcut to a
non-shared folder?
QUESTION #165
No best answer available. Please pick the good answer available or submit your
answer.
April 14, 2006 14:59:17 #1
sunil_reddy Member Since: April 2006 Contribution: 2
The Joiner transformation compares each row of the master source against the detail source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.
=======================================
167.Informatica - Where is the cache stored in informatica?
QUESTION #167
No best answer available. Please pick the good answer available or submit your
answer.
April 15, 2006 02:14:01 #1
Hi
Cache is stored in the Informatica server memory, and the overflowed data is stored on disk in file format; these files are automatically deleted after the successful completion of the session run. If you want to retain that data, you have to use a persistent cache.
=======================================
168.Informatica - can batches be copied/stopped from server
manager?
QUESTION #168
No best answer available. Please pick the good answer available or submit your
answer.
May 08, 2006 05:24:58 #1
MOOTATI RAGHAVENDROA REDDY
It is an active transformation which is used to identify the top and bottom values based on numerics; by default it creates a RANKINDEX port to calculate the rank.
=======================================
170.Informatica - Can Informatica load heterogeneous targets
from heterogeneous sources?
QUESTION #170
No best answer available. Please pick the good answer available or submit your
answer.
April 26, 2006 01:22:19 #1
Anant
Yes, it can. For example, flat file and relational sources are joined in the mapping, and later flat file and relational targets are loaded.
=======================================
Yes, Informatica can load the data from heterogeneous sources to heterogeneous targets.
=======================================
171.Informatica - how do you load the time dimension.
QUESTION #171
No best answer available. Please pick the good answer available or submit your
answer.
April 25, 2006 08:32:33 #1
Appadu Dora P
The time dimension will generally be loaded manually, by using PL/SQL, shell scripts, Pro*C, etc.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills up till 2015; you can modify the code to suit the fields in your table.
A condensed, runnable sketch of such a procedure (the table and column names are illustrative; the original posting derived many more fields — week, quarter, Julian day, half-year, weekend flag, and so on — in the same style):
CREATE OR REPLACE PROCEDURE Insert_W_DAY_D_PR IS
  loaddate DATE := TO_DATE('01/01/2000', 'mm/dd/yyyy');
BEGIN
  LOOP
    INSERT INTO w_day_d
      (day_dt, cal_month, cal_quarter, cal_year, day_of_week,
       month_name, weekend_flag)
    VALUES
      (TRUNC(loaddate),
       TO_NUMBER(TO_CHAR(loaddate, 'MM')),      -- calendar month
       TO_NUMBER(TO_CHAR(loaddate, 'Q')),       -- calendar quarter
       TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),    -- calendar year
       TO_NUMBER(TO_CHAR(loaddate, 'D')),       -- day of week
       TO_CHAR(loaddate, 'Month'),              -- month name
       DECODE(TO_CHAR(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'));
    loaddate := loaddate + 1;
    IF loaddate > TO_DATE('12/31/2015', 'mm/dd/yyyy') THEN
      EXIT;
    END IF;
  END LOOP;
  COMMIT;
END Insert_W_DAY_D_PR;
=======================================
172.Informatica - what is hash table informatica?
QUESTION #172
No best answer available. Please pick the good answer available or submit your
answer.
May 03, 2006 15:13:30 #1
uma bojja
Hash partitioning is a type of partitioning supported by Informatica in which hash user keys are
specified.
=======================================
I do not know the exact answer, Uma, but I am telling it as per my knowledge: a hash table is used to
extract the data through the Java Virtual Machine. If you know more about this please send it to me.
=======================================
Hash partitions are somewhat similar to database partitions. This allows the user to partition, by range,
the data that is fetched from the source.
--Kr
=======================================
Use hash partitioning when you want the Informatica Server to distribute rows to the partitions by group.
=======================================
In hash partitioning the Informatica Server uses a hash function to group rows of data among partitions.
The Informatica Server groups the data based on a partition key.Use hash partitioning when you want
the Informatica Server to distribute rows to the partitions by group. For example you need to sort items
by item ID but you do not know how many items have a particular ID number.
cheers karthik
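To see the grouping idea in SQL terms, a small sketch using Oracle's ORA_HASH function (the ITEMS table
and the choice of 4 buckets are assumptions for illustration):

    SELECT item_id,
           MOD(ORA_HASH(item_id), 4) AS partition_no  -- the same item_id always hashes to the same bucket
    FROM   items;

This is the property the server exploits: rows belonging to one group land in one partition even when you
do not know in advance how many rows each group has.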
=======================================
173.Informatica - What is meant by EDW?
QUESTION #173
No best answer available. Please pick the good answer available or submit your
answer.
May 04, 2006 10:06:21 #1
Uma Bojja Member Since: May 2006 Contribution: 7
EDW
~~~~~
It is a big data warehouse, OR centralized data warehousing, OR the old style of warehouse.
Its a single enterprise data warehouse (EDW) with no associated data marts or operational data store
(ODS) systems.
=======================================
EDW is Enterprise Datawarehouse, which means that it is a centralised DW for the whole organization.
This is the Inmon approach, which relies on having a single centralised warehouse, whereas the Kimball
approach says to have separate data marts for each vertical/department.
Its advantage is a single source of data for all the users across the organization.
Its drawback is the time it takes to develop, and also the management effort required to build a
centralised database.
Thanks
Yugandhar
=======================================
174.Informatica - how to load the data from people soft hrm to
people soft erm using informatica?
QUESTION #174
No best answer available. Please pick the good answer available or submit your
answer.
May 08, 2006 14:00:35 #1
Uma Bojja Member Since: May 2006 Contribution: 7
2.Import the source and target from people soft using ODBC connections
3.Define connection under Application Connection Browser for the people soft source/target in
workflow manager .
select the proper connection (people soft with oracle sybase db2 and informix)
=======================================
175.Informatica - what are the measure objects
QUESTION #175
No best answer available. Please pick the good answer available or submit your
answer.
May 15, 2006 00:50:56 #1
karthikeyan
Aggregate calculations like SUM, AVG, MAX, and MIN: these are the measure objects.
=======================================
176.Informatica - what is the diff b/w STOP & ABORT in
INFORMATICA sess level ?
QUESTION #176
No best answer available. Please pick the good answer available or submit your
answer.
Abort: We can't restart the session. We should truncate everything in the pipeline and after that start the session again.
=======================================
Stop: After issuing a stop, the PowerCenter Server processes all the records it has already read from the
source qualifier and writes them to the target.
Abort: It works in the same way as stop, but there is a timeout period of 60 seconds.
=======================================
177.Informatica - what is surrogatekey ? In ur project in which
situation u has used ? explain with example ?
QUESTION #177
No best answer available. Please pick the good answer available or submit your
answer.
May 22, 2006 09:14:55 #1
afzal
Partition types:
default (pass-through) partition
hash partition
=======================================
Hi,
In Informatica we can tune performance at 5 different levels: source level, target level, mapping
level, session level, and network level.
So to tune performance at the session level we go for partitioning, and again we have 4 types of
partitioning.
Round robin cannot be applied at the source level; it can be used at some transformation level.
=======================================
Hi Nimmi, please explain complete partitioning to me. I need a clear picture: which transformations it will
restrict, how it will restrict them, and where we have to specify it.
thanks
madhu.
=======================================
Keep an aggregator between the source qualifier and the target and group by the key field(s); it will
eliminate the duplicate records.
=======================================
Hi, before loading to the target use an aggregator transformation and make use of the group by function
to eliminate the duplicates on columns. Nanda
=======================================
Use Sorter Transformation. When you configure the Sorter Transformation to treat output rows as
distinct it configures all ports as part of the sort key. It therefore discards duplicate rows compared
during the sort operation
=======================================
Use the sorter transformation's Select Distinct option; duplicate rows will be eliminated.
=======================================
If you want to delete the duplicate rows in flat files then we go for a rank transformation or an Oracle
external procedure transformation: select all ports for group by and select one field for the rank; the
duplicates are then easily removed.
=======================================
Hi
using Sorter Transformation we can eliminate the Duplicate Rows from Flat file
Thanks
N.Sai
=======================================
To eliminate duplicates in flat files we have the distinct property in the sorter transformation. If we
enable that property it will automatically remove the duplicate rows in flat files.
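For comparison, the same distinct logic expressed as SQL, which is what a Select Distinct override in the
source qualifier effectively issues (EMP_SRC and its columns are hypothetical):

    SELECT DISTINCT empno, ename, deptno
    FROM   emp_src;   -- duplicates collapse before the rows enter the pipeline

For flat files there is no SQL override, which is why the sorter's distinct option is used instead.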
=======================================
180.Informatica - How to Generate the Metadata Reports in
Informatica?
QUESTION #180
No best answer available. Please pick the good answer available or submit your
answer.
June 01, 2006 07:27:14 #1
balanagdara Member Since: April 2006 Contribution: 4
Hi Venkatesan
You can generate PowerCenter Metadata Reporter from a browser on any workstation.
Bala Dara
=======================================
Hi
You can generate PowerCenter Metadata Reporter from a browser on any workstation even a
workstation that does not have PowerCenter tools installed.
Bala Dara
=======================================
Hey Bala, can you be more specific about how to generate the metadata report in Informatica?
=======================================
Yes, we can generate reports using the Metadata Reporter. It is a web-based application used only for
creating metadata reports in Informatica.
Using the Metadata Reporter we can connect to the repository and get the metadata without knowledge of
SQL or other technical skills.
kumar
=======================================
181.Informatica - Can u tell me how to go for SCD's and its types.
Where do we use them mostly
QUESTION #181
No best answer available. Please pick the good answer available or submit your
answer.
June 08, 2006 08:53:46 #1
priyamayee Member Since: June 2006 Contribution: 3
Disadvantages: Type 3 will not be able to keep all history where an attribute is changed more than
once. For example, if Christina later moves to Texas on December 15, 2003, the California information
will be lost. Usage: Type 3 is rarely used in actual practice. When to use Type 3: a Type III slowly
changing dimension should only be used when it is necessary for the data warehouse to track historical
changes, and when such changes will only occur a finite number of times.
=======================================
182.Informatica - How to export mappings to the production
environment?
QUESTION #182
No best answer available. Please pick the good answer available or submit your
answer.
June 13, 2006 19:15:18 #1
UmaBojja
In the Designer, go to the main menu and one can see the export/import options.
Import the exported mapping into the production repository with the replace option.
Thanks
Uma
=======================================
183.Informatica - how u will create header and footer in target
using informatica?
QUESTION #183
No best answer available. Please pick the good answer available or submit your
answer.
June 13, 2006 19:05:25 #1
UmaBojja
If your focus is on flat files, then one can set it in the file properties while creating a mapping, or at
the session level in the session properties.
Thanks
Uma
=======================================
hi uma
thanks for the answer i want the complete explanation regarding to this question how to create header
and footer in target?
=======================================
You can always create a header and a trailer in the target file using an aggregator transformation.
One output will be your header and the other will be your trailer coming from the aggregator.
Concatenate the header and the main file in a post-session command using a shell script.
=======================================
184.Informatica - what is the difference between constraint based
load ordering and target load plan
QUESTION #184
No best answer available. Please pick the good answer available or submit your
answer.
June 16, 2006 14:16:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
example:
Table 1 --- Master
Table 2 --- Detail
If the data in table1 is dependent on the data in table2 then table2 should be loaded first. In such cases,
to control the load order of the tables we need some conditional loading, which is nothing but constraint
based loading.
In Informatica this feature is implemented by just one check box at the session level.
In Informatica this feature is implemented by just one check box at the session level.
Thanks
Uma
=======================================
Target load order comes in the Designer properties. Click the Mappings tab in the Designer and then Target
Load Plan. It will show all the target load groups in the particular mapping; you specify the order there
and the server will load the targets accordingly.
Whereas constraint based loading is a session property. Here the multiple targets must be generated from
one source qualifier, and the target tables must possess primary/foreign key relationships, so that the
server loads according to the key relations irrespective of the target load order plan.
=======================================
If you have only one source it s loading into multiple target means you have to use Constraint based
loading. But the target tables should have key relationships between them.
If you have multiple source qualifiers it has to be loaded into multiple target you have to use Target
load order.
=======================================
Constraint based loading: if your mapping contains a single pipeline (flow) with more than one target
(and the target tables have a master-child relationship) you need to use constraint based loading at the
session level, as sketched below.
Target load plan: if your mapping contains multiple pipelines (flows), you specify the execution order one
by one (for example, pipeline 1 needs to execute first, then pipeline 2, then pipeline 3); this is purely
based on pipeline dependency.
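Constraint based loading relies on the primary/foreign key relationship being defined between the targets;
in database terms that relationship looks like this sketch (ORDER_MASTER and ORDER_DETAIL are hypothetical
tables):

    ALTER TABLE order_detail
      ADD CONSTRAINT fk_order_detail
      FOREIGN KEY (order_id) REFERENCES order_master (order_id);  -- detail rows load after their master rows

With this key relationship imported into the target definitions, the server knows to load ORDER_MASTER
before ORDER_DETAIL.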
=======================================
185.Informatica - How do we analyse the data at database level?
QUESTION #185
No best answer available. Please pick the good answer available or submit your
answer.
June 16, 2006 14:20:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
If you want to view the data in the source/target, you can preview the data, but with some limitations.
Thanks
Uma
=======================================
186.Informatica - why sorter transformation is an active
transformation?
QUESTION #186
No best answer available. Please pick the good answer available or submit your
answer.
June 16, 2006 15:02:41 #1
Kiran Kumar Cholleti
It allows you to sort data either in ascending or descending order according to a specified field. It can
also be configured for case-sensitive sorting and to specify whether the output rows should be distinct;
if distinct is chosen, it will not return all the rows.
=======================================
Active transformation: the number of records, and their row IDs, that pass through the transformation will differ.
=======================================
This is a type of active transformation which is responsible for sorting the data either in ascending
order or descending order according to the specified key. The port on which the sorting takes place is
called the sort key port.
=======================================
If any transformation has the distinct option then it will be an active one, because an active
transformation is nothing but a transformation which changes the number of output records. Distinct
always filters the duplicate rows, which in turn decreases the number of output records compared to input
records.
One more thing: an active transformation can also behave like a passive one.
=======================================
187.Informatica - how is the union transformation active
transformation?
QUESTION #187
No best answer available. Please pick the good answer available or submit your
answer.
June 18, 2006 09:53:25 #1
zafar
Active Transformation is one which can change the number of rows i.e input rows and output rows
might not match. Number of rows coming out of Union transformation might not match the incoming
rows.
Zafar
=======================================
Active Transformation: the transformation that change the no. of rows in the Target.
Source (100 rows) ---> Active Transformation ---> Target (< or > 100 rows)
Passive Transformation: the transformation that does not change the no. of rows in the Target.
Source (100 rows) ---> Passive Transformation ---> Target (100 rows)
Union Transformation: in a Union transformation we may combine the data from two (or more) sources.
Assume Table-1 contains 10 rows and Table-2 contains 20 rows. If we combine the rows of Table-1
and Table-2 we will get a total of 30 rows in the target. So it is definitely an Active Transformation.
=======================================
Thank you very much Sai Venkatesh for your answer, but in that case the lookup transformation should be an
active transformation, yet it is a passive transformation.
=======================================
In an active transformation the number of records passing through the transformation, and their row IDs,
will be different; it depends on the row ID also.
=======================================
This is a type of passive transformation which is responsible for merging the data coming from
different sources. The union transformation functions very similarly to the UNION ALL statement in Oracle.
=======================================
Hi Saivenkatesh ur answer is very nice thanking you.
=======================================
Ya, since the Union transformation may lead to a change in the number of incoming rows, it is definitely an
active type.
In the other case, a lookup can in no way change the number of rows that are passing through it. The
transformation just looks at the referenced table; the number of records is increased or decreased by the
transformations that follow the lookup transformation.
=======================================
Ya Surely Lookup is a passive one
=======================================
Hi, you are saying the source rows are also 10+20 = 30 and it passes all 30 rows to the target; according
to the active definition the rows passed to the next transformation should be different, but it passes all
30 rows.
=======================================
The option comes in the properties tab of transformations. By default it remains Normal. It can be
Verbose Initialisation
Verbose Data
Normal
or Terse.
=======================================
189.Informatica - How can we join 3 database like Flat File,
Oracle, Db2 in Informatrica..Thanks in advance
QUESTION #189
No best answer available. Please pick the good answer available or submit your
answer.
June 24, 2006 18:28:27 #1
sandeep
hi
=======================================
You have to use two joiner transformations. The first one will join two tables, and the second one will
join the third table with the result of the first joiner.
=======================================
190.Informatica - Is a fact table normalized or de-normalized?
QUESTION #190 Submitted by: lakshmi
Hi,
Well!! A fact table is always a DENORMALISED table. It consists of data from the dimension tables
(their primary keys), and the fact table has foreign keys and measures.
Thanks!!
Regards
=======================================
The main funda of a DW is de-normalizing the data for faster access by the reporting tool... so if you're
building a DW, 90% of the time it has to be de-normalized, and of course the fact table has to be de-normalized...
=======================================
The fact table is always DE-NORMALIZED. Somebody answered it as normalized. See, if you don't know
the answers please don't post them; just don't make a lottery by posting wrong answers.
=======================================
Hi,
I read the above comments and I am confused. Then we should ask Kimball, you know. Here is the comment.
Ref: http://www.kimballgroup.com/html/commentarysub2.html
=======================================
Hi
Dimension table can be normalized or de-normalized. But fact table is always normalized
=======================================
Hi all
Dimension table may be normalized or denormalized according to your schema but Fact table always
will be denormalized.
Regards
Umesh
BSIL(Mumbai)
=======================================
hi
http://72.14.253.104/search?q cache:lkFjt6EmsxMJ:www.kimballgroup.com/html/commentarysub2.
I am highlighting what Kimball says here: Dimensional models combine normalized and denormalized table
structures. The dimension tables of descriptive information are highly denormalized, with detailed and
hierarchical roll-up attributes in the same table. Meanwhile the fact tables with performance metrics are
typically normalized. While we advise against a fully normalized design with snowflaked dimension
attributes in separate tables (creating blizzard-like conditions for the business user), a single
denormalized big wide table containing both metrics and descriptions in the same table is also ill-advised.
Regards
lakshmi
=======================================
191.Informatica - What is the difference between PowerCenter 7
and PowerCenter 8?
QUESTION #191
No best answer available. Please pick the good answer available or submit your
answer.
August 03, 2006 11:31:21 #1
satish
1) We can look up flat files in Informatica 7.x, but we can't look up flat files in Informatica 6.x.
2) The External Stored Procedure Transformation is not available in Informatica 7.x, but this
transformation was included in Informatica 6.x.
=======================================
Also Union Transformation is not there in 6.x where as its there in 7.x
Pradeep
=======================================
- Also, custom transformation is not available in 6.x
- The main difference is the version control available in 7.x
- Session level error handling is available in 7.x
- XML enhancements for data integration in 7.x
=======================================
New in 7.x:
union transformation
custom transformation
functions like SOUNDEX and METAPHONE
=======================================
193.Informatica - How to move the mapping from one database to
another?
QUESTION #193
No best answer available. Please pick the good answer available or submit your
answer.
July 06, 2006 21:12:49 #1
martin
Do you mean migration between repositories? There are 2 ways of doing this.
1. Open the mapping you want to migrate. Go to File Menu - Select 'Export Objects' and give a name -
an XML file will be generated. Connect to the repository where you want to migrate and then select File
Menu - 'Import Objects' and select the XML file name.
2. Connect to both the repositories. Go to the source folder and select mapping name from the object
navigator and select 'copy' from 'Edit' menu. Now go to the target folder and select 'Paste' from 'Edit'
menu. Be sure you open the target folder.
=======================================
u can also do it this way. connect to both the repositories open the respective folders. keep the
destination repository as active. from the navigator panel just drag and drop the mapping to the work
area. it will ask whether to copy the mapping say YES. its done.
=======================================
If we go by the direct meaning of your question... there is no need for a new mapping for a new database.
You just need to change the connections in the Workflow Manager to run the mapping on another
database.
=======================================
194.Informatica - How do we do complex mapping by using
flatfiles / relational database?
QUESTION #194
No best answer available. Please pick the good answer available or submit your
answer.
September 28, 2006 05:56:56 #1
srinivas
If we are using more business rules or more transformations then it is called a complex mapping. If we
have flat files then we use the flat file as a source, or we can take relational sources, depending on
availability.
=======================================
195.Informatica - How to define Informatica server?
QUESTION #195
No best answer available. Please pick the good answer available or submit your
answer.
July 06, 2006 02:45:14 #1
deeprekha
=======================================
197.Informatica - how can we store previous session logs
QUESTION #197
No best answer available. Please pick the good answer available or submit your
answer.
July 12, 2006 02:54:55 #1
Hareesh
Just run the session in timestamp mode; then the session log will automatically not overwrite the current
session log.
Hareesh
=======================================
Hi
We can do it this way also: using $PMSessionLogCount (specify the number of runs of the session logs to
save).
=======================================
Hi
Go to the session --> right click --> select Edit Task, then go to --> Config Object, then set the property.
=======================================
198.Informatica - how can we use pmcmd command in a
workflow or to run a session
QUESTION #198
No best answer available. Please pick the good answer available or submit your
answer.
July 14, 2006 02:31:34 #1
abc
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the
last extraction. With CDC data extraction takes place at the same time the insert update or delete
operations occur in the source tables and the change data is stored inside the database in change tables.
The change data thus captured is then made available to the target systems in a controlled manner.
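As for the pmcmd usage the question asks about, a minimal command-line sketch (the server address, user,
password, folder, and workflow names below are placeholders, and the exact flags vary between PowerCenter
versions):

    pmcmd startworkflow -s infa_host:4001 -u Administrator -p mypassword -f MyFolder wf_daily_load

Such a call is typically placed in a shell script so the workflow can be started from an external scheduler.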
=======================================
200.Informatica - What is QTP in Data Warehousing?
QUESTION #200
No best answer available. Please pick the good answer available or submit your
answer.
November 15, 2006 00:26:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
Well,
if the sources are databases then we can go for a SQL override in the source qualifier by changing the
default SQL query, I mean selecting the check box called 'Select Distinct';
and
if the sources are heterogeneous, I mean from different file systems, then we can use a SORTER
transformation and in the transformation properties select the check box called 'Distinct'; the same as in
the source qualifier, we can get distinct values.
=======================================
202.Informatica - what transformation you can use inplace of
lookup?
QUESTION #202
No best answer available. Please pick the good answer available or submit your
answer.
July 17, 2006 05:52:56 #1
venkat
Hi
So if you can be a bit more particular about the scenario that you are talking about, it will be easy to interpret.
=======================================
Hi
In lookups we can use either the first or the last value. Suppose the lookup has more than one matching
record and we need all the matching records; in that situation we can use a master or detail outer join
instead of the lookup (according to the logic).
=======================================
You can use joiner in place of lookup
=======================================
You can join the table which you wanted to use in the lookup in the source qualifier, using a SQL override,
to avoid using a lookup transformation.
=======================================
203.Informatica - Why and where we are using factless fact table?
QUESTION #203
No best answer available. Please pick the good answer available or submit your
answer.
July 18, 2006 05:08:53 #1
kumar
Hi,
I am not sure, but you can confirm with other people.
EX: temperature in a fact table will be noted as Moderate/Low/High. These types of things are called
non-additive measures.
Cheers
Kumar.
=======================================
Factless fact tables are fact tables with no facts or measures (numerical data). They contain only the
foreign keys of the corresponding dimensions.
=======================================
Such fact tables are required to avoid snowflaking of levels within a dimension and to define them as a
separate cube connected to the main cube.
=======================================
If a transaction can occur without any measure, then it is a factless fact table or coverage table.
=======================================
A fact table will contain metrics and FKs corresponding to the dimension tables, but a factless fact table
will contain only the FKs of the corresponding dimensions, without any metrics. regards rma
=======================================
204.Informatica - tell me one complecated mapping
QUESTION #204
No best answer available. Please pick the good answer available or submit your
answer.
September 14, 2006 02:44:20 #1
srinivas
If we are using more business rules or more transformations then it is a complex mapping, like SCD type
2 (version number, effective date range, flag current).
=======================================
Mapping is nothing but the flow of data from source to target; we are giving instructions to the
PowerCenter server to move data from sources to targets according to our business rules. If there are more
business rules in our mapping then it is a complex mapping. regards rma
=======================================
205.Informatica - how do we do unit testing in informatica?how
do we load data in informatica ?
QUESTION #205
No best answer available. Please pick the good answer available or submit your
answer.
July 22, 2006 04:25:39 #1
Praveen kumar
Testing:
1. Quantitative testing
2. Qualitative testing
Steps:
Once the session has succeeded, right click on the session and go to the statistics tab.
There you can see how many source rows were applied, how many rows were loaded into the targets, and how
many rows were rejected. This is called quantitative testing.
If the rows are successfully loaded then we go for qualitative testing.
Steps:
1. Take the DATM (DATM means where all business rules are mentioned against the corresponding source
columns) and check whether the data is loaded according to the DATM into the target table. If any data is
not loaded according to the DATM then go and check the code and rectify it.
=======================================
206.Informatica - how do we load data by using period
dimension?
QUESTION #206
No best answer available. Please pick the good answer available or submit your
answer.
September 23, 2006 07:46:32 #1
hi
thanks
madhu
=======================================
207.Informatica - How many types of facts and what are they?
QUESTION #207
No best answer available. Please pick the good answer available or submit your
answer.
July 21, 2006 14:54:14 #1
Bala
I know of these: additive facts, semi-additive facts, non-additive facts, accumulating facts, factless
facts, periodic fact tables, and transaction fact tables.
Thanks
Bala
=======================================
There are
Factless Facts:Facts without any measures.
=======================================
hi
additive
semi additive
non additive
thanks
madhu
=======================================
Hi,
Semi-additive fact: a fact which can be summed across some dimensions but not others.
Non-additive fact: a fact which cannot be summed across any of the dimensions.
=======================================
1. Regular Fact - With numeric values
2.Factless Fact - Without numeric values
=======================================
208.Informatica - How can you say that union Transormation is
Active transformation.
QUESTION #208
No best answer available. Please pick the good answer available or submit your
answer.
July 22, 2006 07:26:42 #1
kirankumarvema
hi
We can merge records from multiple source qualifier queries in a union transformation at the same time;
it's not like an expression transformation (row by row). So we can say it is active.
=======================================
Hi
Union Transformation is a active transformation because it changes the number of rows through the
pipeline.
It normally has multiple input groups, compared to other transformations. Before the union transformation
was implemented (i.e. before 7.0) the funda about the number of rows was right, but now it is not the
exact benchmark to determine an active transformation.
Thanks
Uma
=======================================
Hi all
Some people are saying that a union T/R eliminates duplicates, but that is wrong. As of now it won't
eliminate duplicates.
The union T/R is active and also passive depends upon the property "is active" which is present at union
T/R properties tab. this Specifies whether this transformation is an active or a passive transformation.
When you enable this option the transformation can generate 0 1 or more output rows for each input
row. Otherwise it can generate only 0 or 1 output row for each input row.
But this property is disabled in Informatica 7.1.1; I think it may be enabled in a future release.
regards
kiran
=======================================
209.Informatica - how many types of dimensions are available in
informatica?
QUESTION #209
No best answer available. Please pick the good answer available or submit your
answer.
1. star schema
2. snowflake schema
3. galaxy schema
=======================================
1.stand-alone
2 local.
3.global
=======================================
They are:
1. General dimensions
2. Conformed dimensions
3. Junk dimensions
=======================================
The Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables:
- Type 1 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and
overwriting existing dimensions. Use this mapping when you do not want a history of previous dimension
data.
- Type 2 Dimension/Version Data mapping. Loads a slowly changing dimension table by inserting new and
changed dimensions, using a version number and incremented primary key to track changes. Use this mapping
when you want to keep a full history of dimension data and to track the progression of changes.
- Type 2 Dimension/Flag Current mapping. Loads a slowly changing dimension table by inserting new and
changed dimensions, using a flag to mark current dimension data and an incremented primary key to track
changes. Use this mapping when you want to keep a full history of dimension data, tracking the progression
of changes while flagging only the current dimension.
- Type 2 Dimension/Effective Date Range mapping. Loads a slowly changing dimension table by inserting new
and changed dimensions, using a date range to define current dimension data. Use this mapping when you
want to keep a full history of dimension data, tracking changes with an exact effective date range.
- Type 3 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and
updating values in existing dimensions. Use this mapping when you want to keep the current and previous
dimension values in your dimension table.
=======================================
I want each and every one of you who is answering: please don't make fun out of this.
Someone gave the answer 'no'; someone gave the answer 'star schema, snowflake schema, etc.' How can a
schema come under a type of dimension?
ANSWER:
Type 1 SCD: if you want to load an updated row of a previously existing row, the previous data will be
replaced, so we lose historical data.
Type 2 SCD: here we add a new row for the updated data, so we have both current and past records, which
agrees with the data warehousing concept of maintaining historical data.
CONFORMED DIMENSION: the dimension which gives the same meaning across different star schemas is called a
conformed dimension.
ex: the Time dimension; wherever it is used it gives the same meaning.
=======================================
What are those three dimensions that are available in Informatica? Here we get multiple answers; could
anyone tell me the exact ones?
Thanks
hari krishna
=======================================
Casual dimension, conformed dimension, degenerate dimension, junk dimension, ragged dimension, dirty dimension.
=======================================
210.Informatica - How can you improve the performance of
Aggregate transformation?
QUESTION #210
No best answer available. Please pick the good answer available or submit your
answer.
=======================================
by using sorted input
=======================================
Hi,
3. Give only the input/output you need in the transformation, i.e. reduce the number of input and output ports.
=======================================
In the Aggregator transformation, select the Sorted Input check box in the properties tab and write a SQL
query in the source qualifier; this improves the performance, as sketched below.
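The kind of source qualifier override meant here, as a sketch (EMP_SRC is a hypothetical table and DEPTNO
is assumed to be the Aggregator's group-by port):

    SELECT empno, sal, deptno
    FROM   emp_src
    ORDER  BY deptno;   -- pre-sorting on the group-by port lets the Aggregator use Sorted Input

With the rows arriving pre-grouped, the Aggregator does not have to cache the whole input before producing
output.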
=======================================
211.Informatica - Why did you use stored procedure in your ETL
Application?
QUESTION #211
No best answer available. Please pick the good answer available or submit your
answer.
August 11, 2006 13:16:09 #1
sudha
hi
=======================================
to execute database procedures
=======================================
Hi
Using stored procedures plays an important role. Suppose you are using an Oracle database where you are
doing some ETL changes; you may use Informatica. In that case every row of the table has to pass through
Informatica and undergo the specified ETL changes mentioned in the transformations. If you use a stored
procedure, i.e. an Oracle PL/SQL package, it will run on the Oracle database (which is the database where
we need to make the changes) and it will be faster compared to Informatica, because it is running on the
Oracle database itself. Some things which we can't do using tools we can do using packages. Some jobs may
take hours to run; in order to save time and database usage we can go for stored procedures.
=======================================
212.Informatica - why did u use update stategy in your
application?
QUESTION #212
No best answer available. Please pick the good answer available or submit your
answer.
August 08, 2006 12:39:03 #1
angeletteeye
The basic thing one should understand about this is that it is an essential transformation for performing
DML operations on already-populated targets (i.e. targets which contain some records before this mapping
loads data).
When records come to this transformation, depending on our requirement we can decide whether to insert,
update, delete, or reject the rows flowing in the mapping.
For example, take an input row: if it is already there in the target (we find this out with a lookup
transformation) update it, otherwise insert it.
We can also specify some conditions based on which we can derive which update strategy we have to use.
DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT are called decode options, which perform the respective DML
operations; their numeric equivalents 0, 1, 2, and 3 stand for insertion, updation, deletion, and rejection.
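The insert-or-update decision described above is the same idea a SQL MERGE statement expresses; a sketch
with hypothetical DEPT_DIM target and STG_DEPT staging tables:

    MERGE INTO dept_dim t
    USING stg_dept s
    ON (t.dept_id = s.dept_id)               -- plays the role of the lookup condition
    WHEN MATCHED THEN
      UPDATE SET t.dept_name = s.dept_name   -- the DD_UPDATE path
    WHEN NOT MATCHED THEN
      INSERT (dept_id, dept_name)
      VALUES (s.dept_id, s.dept_name);       -- the DD_INSERT path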
=======================================
thanks
madhu
=======================================
213.Informatica - How do you create single lookup transformation
using multiple tables?
QUESTION #213
No best answer available. Please pick the good answer available or submit your
answer.
August 10, 2006 16:46:28 #1
Srinivas
We can just create a view using the two tables and then take that view as the lookup table, as sketched below.
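A sketch of that idea, assuming hypothetical EMP and DEPT tables:

    CREATE OR REPLACE VIEW emp_dept_lkp AS
    SELECT e.empno, e.ename, d.dname
    FROM   emp e
    JOIN   dept d ON d.deptno = e.deptno;   -- import this view as the lookup table in the Designer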
=======================================
If you want single lookup values to be used in multiple target tables, this can be done!
For this we can use an unconnected lookup and collect the values from the source table into any target
table, depending upon the business rule.
=======================================
214.Informatica - In update strategy target table or flat file which
gives more performance ? why?
QUESTION #214
No best answer available. Please pick the good answer available or submit your
answer.
August 10, 2006 04:10:37 #1
prasad.yandapalli
stored procedure
Flat file pros: loading, sorting, and merging operations will be faster, as there is no index concept and
the data will be in ASCII mode.
=======================================
215.Informatica - How to load time dimension?
QUESTION #215
No best answer available. Please pick the good answer available or submit your
answer.
August 15, 2006 04:08:18 #1
Mahesh
You can load the time dimension manually by writing scripts in PL/SQL to load the time dimension table
with values for a period.
Ex: I have business data for 5 years, from 2000 to 2004; then load all the dates starting from 1-1-2000 to
31-12-2004, which is around 1825 records. This you can do quickly by writing scripts.
Bhargav
=======================================
hi
thanks
madhu
=======================================
For loading data into the other dimensions we have respective tables in the OLTP systems.
But for the time dimension we have only one base in the OLTP database, and based on that we have to load
the time dimension. We can load the time dimension using ETL procedures which call a procedure or function
created in the database. If there are many columns in the time dimension we may have to create it manually
by using an Excel sheet.
=======================================
Create a procedure to load data into the time dimension; the procedure needs to run only once to populate
all the data. (The full procedure excerpt appears under question 171 above.)
=======================================
216.Informatica - what is the architecture of any Data
warehousing project? what is the flow?
QUESTION #216
No best answer available. Please pick the good answer available or submit your
answer.
August 21, 2006 06:52:22 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
1) The basic step of data warehousing starts with data modelling, i.e. creation of dimensions and facts.
2) A data warehouse starts with the collection of data from source systems such as OLTP, CRM, ERPs, etc.
3) The cleansing and transformation process is done with an ETL (Extraction, Transformation, Loading) tool.
4) By the end of the ETL process the target databases (dimensions, facts) are ready with data which
satisfies the business rules.
5) Finally, with the use of reporting tools (OLAP) we can get the information which is used for decision
support.
=======================================
thanks
=======================================
nice answer i have more doubts and can u give me ur mail id
=======================================
217.Informatica - What are the questions asked in PDM Round
(Final Hr round)
QUESTION #217
No best answer available. Please pick the good answer available or submit your
answer.
November 14, 2006 02:08:00 #1
srinuv_11 Member Since: October 2006 Contribution: 23
Hi friend
mail me at satya.neerumalla@tcs.com
=======================================
A materialized view provides indirect access to table data by storing the results of a query in a separate
schema object, unlike an ordinary view, which does not take up any storage space or contain data.
Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute
data, e.g. to construct a data warehouse, as in the sketch below.
The definition of a materialized view is very near to the concept of cubes, where we keep summarized data;
but cubes occupy space.
Coming to the data mart, that is a completely different concept. The data warehouse contains an overall
view of the organization, but a data mart is specific to a subject area like Finance etc.
We can combine the different data marts of a company to form a data warehouse, or we can split a
data warehouse into different data marts.
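In Oracle, for example, such a materialized view could be sketched as follows (the SALES table and its
columns are assumptions for illustration):

    CREATE MATERIALIZED VIEW sales_mv
    BUILD IMMEDIATE                  -- populate the MV right away
    REFRESH COMPLETE ON DEMAND       -- re-summarize only when explicitly refreshed
    AS
    SELECT prod_id, SUM(amount) AS total_amount  -- the precomputed summary that occupies storage
    FROM   sales
    GROUP  BY prod_id;

Queries against SALES_MV read the stored summary instead of re-aggregating the base table.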
=======================================
Hi,
An ordinary view won't contain data; whatever we ask for, it will fire the query on the table and give us
the data, e.g. SUM(sal).
Materialized views are an indirect connection to the data and are stored in a separate schema. A
materialized view is just like a cube in a DW: it will have data. Whatever summarised information we ask
for, it computes, stores, and returns it.
=======================================
219.Informatica - In workflow can we send multiple email ?
QUESTION #219
No best answer available. Please pick the good answer available or submit your
answer.
August 28, 2006 15:13:20 #1
prudhvi
yes
=======================================
Yes only on the UNIX version of Workflow and not Windows based version.
=======================================
220.Informatica - How do we load from PL/SQL script into
Informatica mapping?
QUESTION #220
No best answer available. Please pick the good answer available or submit your
answer.
August 28, 2006 09:43:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
When we run the session containing this (Stored Procedure) transformation, the PL/SQL procedure will get executed.
=======================================
Hi,
For database procedures we have the Stored Procedure transformation; we can use that one.
thanks
madhu
=======================================
You can actually create a view and import it as source in mapping ....
=======================================
221.Informatica - can any one tell why we are populating time
dimension only with scripts not with mapping?
QUESTION #221
No best answer available. Please pick the good answer available or submit your
answer.
September 23, 2006 07:07:31 #1
calltomadhu Member Since: September 2006 Contribution: 34
Hi,
Because the time dimension is a rapidly changing dimension: if you use a mapping it is a very big job, and
on top of that a very big problem performance-wise.
thanks
madhu
=======================================
How can the time dimension be a rapidly changing dimension? The time dimension is one table where you load
date- and time-related information so that its key can be used in the facts; this way you don't have to
use the entire date in the fact and can rather use the time key. There are a number of advantages in
performance and simplicity of design with this strategy.
You use a script to load the time dimension because you load it one time; as I said earlier, all it
contains are dates starting from one point in time, say 01/01/1800, to some date in the future, say 01/01/3001.
=======================================
222.Informatica - What about rapidly changing dimensions?Can u
analyze with an example?
QUESTION #222
No best answer available. Please pick the good answer available or submit your
answer.
September 04, 2006 09:05:10 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
Hi,
Rapidly changing dimensions are those in which values change continuously, causing a lot of difficulty in
maintaining them.
I am giving one of the best real-world examples, which I found on some website while browsing; go through
it, I am sure you will like it.
...so a better option would be to shift those attributes into a fact table as facts, which solves the problem.
=======================================
Hi, if you don't mind, please tell me how to create rapidly changing dimensions. And one more question:
please tell me the use of the custom transformation. Thanking you. Bye.
=======================================
The best example is ATM transactions (banks). The data changes continuously and concurrently every second,
so it is very difficult to capture this dimension.
=======================================
Hi,
The question itself says that the data is quite frequently changing; changing means it can be modified or
added to.
=======================================
223.Informatica - What are Data driven Sessions?
QUESTION #223 The Informatica server follows instructions coded into Update
Strategy transformations within the session mapping to determine how to flag
records for insert, update, delete, or reject. If you do not choose the data driven
option setting, the Informatica server ignores all Update Strategy
transformations in the mapping.
No best answer available. Please pick the good answer available or submit your answer.
September 07, 2006 07:38:33 #1
fazal
=======================================
Once you load the data in your DW you can update the new data with the following options in your
session properties:
1. update, 2. insert, 3. delete, and data driven; all these options are present in your session
properties. Now if you select the data driven option, Informatica takes the logic to update, delete, or
reject data from your Designer Update Strategy transformation. It will look something like this:
IIF( JOB = 'MANAGER', DD_DELETE, DD_INSERT )
This expression marks jobs with an ID of manager for deletion and all other items for insertion.
=======================================
224.Informatica - what are the transformations that restrict the
partitioning of sessions?
QUESTION #224 *Advanced External procedure transformation and External
procedure transformation:
This Transformation contains a check box on the properties tab to allow
partitioning.
*Aggregator Transformation:
If you use sorted ports you cannot partition the associated source
*Joiner Transformation:
you can not partition the master source for a joiner transformation
*Normalizer Transformation
*XML targets.
No best answer available. Please pick the good answer available or submit your answer.
September 07, 2006 14:32:12 #1
Manasa
=======================================
1) Source definition
2) Sequence Generator
3) Unconnected transformation
=======================================
225.Informatica - Wht is incremental loading?Wht is versioning in
7.1?
QUESTION #225
No best answer available. Please pick the good answer available or submit your
answer.
Incremental loading in a DWH means loading only the changed and new records, i.e. not loading the as-is
records which already exist.
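A sketch of the source-side filter incremental loading boils down to, assuming a hypothetical SRC_ORDERS
table with a LAST_UPDATE_TS audit column and a bind variable holding the previous run's extract time:

    SELECT *
    FROM   src_orders
    WHERE  last_update_ts > :last_extract_ts;  -- pick up only rows added or changed since the previous load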
Versioning in Informatica 7.1 is like a configuration management system where you have every version of
the mapping you ever worked upon. Whenever you have checked in and created a lock, no one else can work on
the same mapping version. This is very helpful in an environment where you have several users working on a
single feature.
=======================================
Hi
The Type 2 Dimension/Version Data mapping filters source rows based on user-defined comparisons
and inserts both new and changed dimensions into the target. Changes are tracked in the target table by
versioning the primary key and creating a version number for each dimension in the table. In the Type 2
Dimension/Version Data target the current version of a dimension has the highest version number and
the highest incremented primary key of the dimension.
Use the Type 2 Dimension/Version Data mapping to update a slowly changing dimension table when
you want to keep a full history of dimension data in the table. Version numbers and versioned primary
keys track the order of changes to each dimension.
Shivaji Thaneru
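In SQL terms, the versioning described above amounts to something like this sketch (CUST_DIM, STG_CUST,
and the sequence are hypothetical names):

    -- Insert each changed customer as a new row carrying the next version number.
    INSERT INTO cust_dim (cust_key, cust_id, city, version)
    SELECT cust_dim_seq.NEXTVAL,               -- new incremented primary key
           s.cust_id,
           s.city,
           NVL(v.max_ver, 0) + 1               -- the highest version number marks the current row
    FROM   stg_cust s
    LEFT JOIN (SELECT cust_id, MAX(version) AS max_ver
               FROM   cust_dim
               GROUP  BY cust_id) v
           ON v.cust_id = s.cust_id;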
=======================================
Hi,
Please see the incremental loading answer under CDC. Now I will tell you about versioning.
Simply put, it is like source control in a programming language. If anybody doesn't know about this
concept, read below.
Say I developed some software and stored it in one area. After that, some enhancement happens: I download
the code, make the modification, and keep it in the same area but with some other file name. If another
developer wants it, he simply downloads it, modifies or adds to it, and again keeps the source code in the
same place, with another file name. Like this, the history is maintained. If we find there is a bug in a
previous version, we simply revert the changes by downloading that source.
thanks
madhu
=======================================
226.Informatica - What is ODS ?what data loaded from it ? What
is DW architecture?
QUESTION #226
No best answer available. Please pick the good answer available or submit your
answer.
September 11, 2006 10:58:25 #1
A Agarwal
ODS -- Operational Data Store, normally in 3NF form; data is stored with the least redundancy.
OLTP System --> ODS --> DWH (denormalised star or snowflake, varying case to case)
=======================================
Assume that I have a 24/7 company whose peak hours are 9 to 9, and around 40000 records per day are added
or modified. Now at 9 o'clock I take a backup and leave. From 9 p.m. to 9 a.m., instead of storing the
data on the same server, I store it separately. Assume that 10000 records are added in this time; then the
next morning when I am loading the data there is no need to take 40000+10000 records, which is very slow
performance-wise; I can directly take just the 10000 records. This concept is what we call an ODS:
Operational Data Store.
ODS --> Staging Area --> WH
madhu
=======================================
ODS is an Integrated view of Operational sources(OLTP).
=======================================
Thanks; from your answer I can come to one conclusion: that the ODS is used to store the current data.
I can assume that by default it will add the 10000 records of current data to the 40000 records and give
the result, 50000.
Thanks...
=======================================
227.Informatica - what are the type costing functions in
informatica
QUESTION #227
No best answer available. Please pick the good answer available or submit your
answer.
September 22, 2006 10:02:20 #1
The question is not clear; can you repeat the question with a full description? There are no specific
costing functions in Informatica.
thanks
madhu.
=======================================
228.Informatica - what is the repository agent?
QUESTION #228
No best answer available. Please pick the good answer available or submit your
answer.
September 12, 2006 11:07:42 #1
Shivat Member Since: September 2006 Contribution: 9
Hi
The Repository Agent is a multi-threaded process that fetches inserts and updates metadata in the
repository database tables. The Repository Agent uses object locking to ensure the consistency of
metadata in the repository.
ShivajiThaneru
=======================================
Hi
The Repository Server uses a process called Repository agent to access the tables from Repository
database.The Repository sever uses multiple repository agent processes to manage multiple repositories
on different machines on the network using native drivers.
=======================================
Hi
The name itself says it: an agent is a mediator between the repository server and the repository database
tables. Simply put, the repository agent is what speaks with the repository.
thanks
madhu
=======================================
229.Informatica - what is the basic language of informatica?
QUESTION #229
No best answer available. Please pick the good answer available or submit your
answer.
September 15, 2006 14:38:58 #1
vick
Hi
Madhu D.
=======================================
The basic language of Informatica is SQL*Plus; only then will it understand the database language.
=======================================
230.Informatica - What is CDC?
QUESTION #230
No best answer available. Please pick the good answer available or submit your
answer.
September 18, 2006 08:42:07 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the
last extraction. With CDC data extraction takes place at the same time the insert update or delete
operations occur in the source tables and the change data is stored inside the database in change tables.
The change data thus captured is then made available to the target systems in a controlled manner.
satya.neerumalla@tcs.com
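One common way such change tables are populated inside the database is a trigger; a hedged sketch assuming
a hypothetical ORDERS source table (all names are assumptions for illustration):

    -- Change table recording which row changed, how, and when.
    CREATE TABLE orders_chg (
      order_id NUMBER,
      chg_type CHAR(1),    -- 'I'nsert / 'U'pdate / 'D'elete
      chg_ts   DATE
    );

    CREATE OR REPLACE TRIGGER trg_orders_cdc
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
      INSERT INTO orders_chg
      VALUES (NVL(:NEW.order_id, :OLD.order_id),  -- :OLD is what is available on delete
              CASE WHEN INSERTING THEN 'I'
                   WHEN UPDATING  THEN 'U'
                   ELSE 'D' END,
              SYSDATE);
    END;
    /

The extraction process then reads ORDERS_CHG instead of scanning the whole ORDERS table.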
=======================================
CDC = Changed Data Capture. The name itself says it: if any data is changed, it is about how to get those
changed values. For this we have type 1, type 2, and type 3 CDC; depending upon our requirement we can
follow one of them.
thanks
madhu
=======================================
Whenever any source data is changed we need to capture it in the target system also; this can be done in
basically 3 ways.
Complete changes can be captured as different records and stored in the target table (Type 2).
=======================================
231.Informatica - what r the mapping specifications? how
versionzing of repository objects?
QUESTION #231
No best answer available. Please pick the good answer available or submit your
answer.
September 19, 2006 02:45:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
Mapping Specification
1.Mapping Name
3.Source System
Initial Rows
Short Description
Refresh Frequency
Preprocessing
Post Processing
Error Strategy
Reload Strategy
Unique Source Fields
4.Target System
Rows/Load
5.Sources
Tables
Files
6.Targets
Target Table Target Column Data-type Source Table Source Column Data-type
1.Checkout: when some user is modifying an object(source target mapping) he can checkout it. That is
he can lock it. So that until he release no body can access it.
2.Checkin:
When you want to commit an object u use this checkin feature.
=======================================
Please let me know the answer for this question.
=======================================
Hi,
A mapping is nothing but a flow of work: where the data is coming from and where the data is going. For
this we need the mapping name, source table, target table, and session.
thanks
madhu
=======================================
232.Informatica - what is bottleneck in informatica?
QUESTION #232
No best answer available. Please pick the good answer available or submit your
answer.
September 26, 2006 09:46:17 #1
opbang Member Since: March 2006 Contribution: 46
Bottleneck in Informatica:
A bottleneck in ETL processing is the point at which the performance of the ETL process is slower. When
the ETL process is in progress, the first thing to do is log in to the Workflow Monitor and observe the
performance statistics, i.e. observe the rows processed per second. In SSIS and DataStage, when you run
the job you can see at every level how many rows per second are processed by the server.
Mostly bottlenecks occur at the source qualifier (during fetching of data from the source), joiner,
aggregator, lookup cache building, and session.
=======================================
233.Informatica - What is the difference between Local and
Global repository?
QUESTION #233
No best answer available. Please pick the good answer available or submit your
answer.
September 26, 2006 09:55:03 #1
opbang Member Since: March 2006 Contribution: 46
- Global repository. The global repository is the hub of the domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These objects may include operational or application source definitions, reusable transformations, mapplets, and mappings.
- Local repositories. A local repository is any repository within a domain that is not the global repository. Use local repositories for development. From a local repository you can create shortcuts to objects in shared folders in the global repository. These objects typically include source definitions, common dimensions and lookups, and enterprise standard transformations. You can also create ...
=======================================
234.Informatica - Explain in detail about Key Range & Round
Robin partition with an example.
QUESTION #234
No best answer available. Please pick the good answer available or submit your
answer.
October 11, 2006 02:03:38 #1
srinivas vadlakonda
RE: Explain in detail about Key Range & Round Robin pa...
Key range: the Informatica server distributes the rows of data based on the set of ports that you specify as the partition key.
Round robin: the Informatica server distributes an equal number of rows to each partition.
=======================================
235.Informatica - COMMITS: What is the use of Source-based
commits ? PLease tell with an example ?
QUESTION #235
No best answer available. Please pick the good answer available or submit your
answer.
September 29, 2006 02:20:42 #1
srinivas vadlakonda
If the selected commit type is target: once the cache is holding some 10,000 records, the server will commit them; here the server is least bothered about the number of source records processed.
If the selected commit type is source: once 10,000 source records have been queried, the server will immediately commit; here the server is least bothered about how many records were inserted into the target.
=======================================
236.Informatica - What is Bulk & Normal load? Where do we use Bulk
and where Normal?
QUESTION #236
No best answer available. Please pick the good answer available or submit your
answer.
October 01, 2006 10:10:23 #1
Vamsi Krishna.K
RE: What Bulk & Normal load? Where we use Bulk and whe...
Hello,
when we try to load data in bulk mode there will be no entry in the database log files, so it will be tough to recover data if the session fails at some point; whereas in normal mode every record is entered in the database log file and in the Informatica repository, so if the session fails it is easy for us to restart from the last committed point.
We use bulk mode to load data into databases; it won't work with text files as targets, whereas normal mode works fine with all types of targets.
=======================================
In case of bulk load, one DML statement is created and executed for a group of records; but in case of normal load, a DML statement is created and executed for every record.
=======================================
Bulk mode is used for Oracle/SQL Server/Sybase. This mode improves performance by not writing to the database log; as a result, recovery is unavailable when using this mode. Further, this mode doesn't work when an Update Strategy transformation is used, and there shouldn't be any indexes or constraints on the table. Of course, one can use the pre-session and post-session SQLs to drop and rebuild indexes/constraints.
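A minimal sketch of that pre-/post-session SQL idea, assuming a hypothetical target table tgt_sales with an index idx_tgt_sales_cust:
-- Pre-session SQL: drop the index so the bulk load is not slowed down
DROP INDEX idx_tgt_sales_cust;
-- Post-session SQL: rebuild it once the load completes
CREATE INDEX idx_tgt_sales_cust ON tgt_sales (customer_id);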
=======================================
237.Informatica - Which transformation has the most complexity
Lookup or Joiner?
QUESTION #237
No best answer available. Please pick the good answer available or submit your
answer.
October 11, 2006 01:51:17 #1
srinivas vadlakonda
The Lookup transformation checks a condition like a Joiner, but it has more features, such as ...
=======================================
... lookup, but it will reduce the complexity of the solution and improve the performance of the workflows.
=======================================
238.Informatica - Where we use Star Schema & where Snowflake?
QUESTION #238
No best answer available. Please pick the good answer available or submit your
answer.
October 05, 2006 01:20:33 #1
Raj
It depends on client requirements. Initially we implement a high-level design, and the client decides whether they want normalized (snowflake schema) or de-normalized (star schema) data for their analysis; so we implement whatever their requirements are.
=======================================
239.Informatica - Can we create duplicate rows in star schema?
QUESTION #239
No best answer available. Please pick the good answer available or submit your
answer.
October 25, 2006 09:02:00 #1
Ashwani
Hello
By default the Informatica server generates a SQL query for every action. If that query is not able to perform the exact task, we can modify it or generate a new one with new conditions and new constraints. SQL override is available in:
1. source qualifier
2. lookup
3. target
=======================================
242.Informatica - Can you update the Target table?
QUESTION #242
No best answer available. Please pick the good answer available or submit your
answer.
October 01, 2006 09:55:26 #1
Vamsi Krishna.K
hello,
we may have to update the target table: if you are loading Type-1 or Type-2 dimension data into the target, you surely have to.
Thanks
vamsi.
=======================================
243.Informatica - At what frequency do you load the data?
QUESTION #243
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2006 01:39:58 #1
opbang Member Since: March 2006 Contribution: 46
Loading frequency depends on the requirements of the business users. It could be daily (during midnight) or weekly. Depending on the frequency, the ETL process should take care of how the updated transaction data will replace data in the fact tables.
Other factors: how fast the OLTP is updated, the data volume, and the available time window for extracting data.
=======================================
244.Informatica - What is a Materialized view? Diff. Between
Materialized view and view
QUESTION #244
No best answer available. Please pick the good answer available or submit your
answer.
October 11, 2006 01:45:10 #1
srinivas.vadlakonda
Materialized views are used in data warehousing to precompute and store aggregated data, such as the sum of sales; they are used to increase the speed of queries against large databases.
A view is nothing but a stored query which meets our criteria; it will not occupy space.
=======================================
A view doesn't occupy any storage space in the tablespace, but a materialized view does occupy space.
=======================================
A materialized view stores the result set of a query, but a normal view does not. We can refresh the materialized view when any changes are made to the master table. A normal view is only for viewing the records. We can perform DML operations and direct-path insert operations on a materialized view.
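A minimal Oracle sketch of such an aggregate materialized view, assuming a hypothetical sales table:
CREATE MATERIALIZED VIEW mv_sales_summary
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT product_id, SUM(amount) AS total_sales
  FROM sales
 GROUP BY product_id;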
=======================================
245.Informatica - Is it possible to refresh the Materialized view?
QUESTION #245
No best answer available. Please pick the good answer available or submit your
answer.
October 05, 2006 05:37:16 #1
lingesh
Yes, we can refresh a materialized view. While creating materialized views we can give options such as REFRESH FAST, and we can mention a time interval so that it refreshes automatically and fetches new data, i.e., updated and inserted rows. For an active data warehouse, I mean in order to have a real-time data warehouse, we can use materialized views.
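A manual refresh can also be triggered with Oracle's DBMS_MVIEW package (mv_sales_summary is the hypothetical view from the previous sketch):
-- 'C' = complete refresh; 'F' = fast (incremental) refresh, which
-- additionally requires a materialized view log on the master table
EXEC DBMS_MVIEW.REFRESH('MV_SALES_SUMMARY', 'C');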
=======================================
246.Informatica - What are the common errors that you face daily?
QUESTION #246
No best answer available. Please pick the good answer available or submit your
answer.
October 30, 2006 04:44:35 #1
shiva
RE: What are the common errors that you face daily?
Mostly we get an Oracle fatal error, when the server is not able to connect to the Oracle server.
=======================================
247.Informatica - What is Shortcut? What is use of it?
QUESTION #247
No best answer available. Please pick the good answer available or submit your
answer.
October 03, 2006 04:17:18 #1
Vamsi Krishna K.
Factless fact tables are fact tables which do not have any measures.
For example: you want to store the attendance information of students. This table will tell you, date-wise, whether the student attended the class or not. But there are no measures, because things like fees paid are not daily.
=======================================
249.Informatica - while Running a Session, what are the two files
it will create?
QUESTION #249
No best answer available. Please pick the good answer available or submit your
answer.
October 04, 2006 01:20:21 #1
opbang Member Since: March 2006 Contribution: 46
RE: while Running a Session, what are the two files it...
Loading data from flat files to a database is faster. Say you are receiving data from a remote location: at the remote location the required data can be converted into a flat file, and the same can be used at the target location for loading. This minimizes the bandwidth requirement and gives faster transmission.
=======================================
Hi, flat files have some advantages which a normal table does not have. The first one is explained by the first post. Secondly, a flat file can deal with case-sensitivity issues in the data easily, where a normal table has errors. Kapil Goyal
=======================================
251.Informatica - Architectural diff b/w informatica 7.1 and 5.1?
QUESTION #251
No best answer available. Please pick the good answer available or submit your
answer.
October 12, 2006 14:57:59 #1
zskhan Member Since: June 2006 Contribution: 3
1. v7 has a Repository Server and pmserver; v5 had pmserver only. pmserver does not talk to the repository database directly: it talks to the Repository Server, which in turn talks to the database.
=======================================
252.Informatica - what are the types of data flows in workflow
manager
QUESTION #252
No best answer available. Please pick the good answer available or submit your
answer.
hi
The target load plan is the process through which you can decide the order of the target loads.
Let's say you have three Source Qualifiers and three instances of targets. By default Informatica will load the data in the first target, but using the target load plan you can change this sequence by selecting which target you want to be loaded first.
=======================================
1. in mapping
2. in session
=======================================
There are two types of target load; they are: 1. bulk 2. normal
=======================================
254.Informatica - Can we use lookup instead of join? reason
QUESTION #254
No best answer available. Please pick the good answer available or submit your
answer.
October 11, 2006 01:28:11 #1
srinivas vadlakonda
RE: In a flat file sql override will work r not? what ...
Nope.
=======================================
In a flat file, SQL override will not work; we have different properties to set for a flat file. If you are talking about the flat file as a source, it can have any extension, like .dat, .doc, etc. If it is a target file it will have the extension .out, which can be altered in the target properties.
Regards
Rajesh
=======================================
Using flat files, SQL override will not work. The extensions of flat file sources are .txt, .doc, .dat, and the output or target flat file extension is .out.
=======================================
257.Informatica - what is cost effective transformation b/w lookup
and joiner
QUESTION #257
No best answer available. Please pick the good answer available or submit your
answer.
October 11, 2006 11:14:23 #1
Myk
During the logical design phase you defined a model for your data warehouse consisting of entities, attributes, and relationships. The entities are linked together using relationships.
During the physical design process you translate the expected schemas into actual database structures. At this time you have to map:
- Entities to tables
- Relationships to foreign key constraints
- Attributes to columns
- Primary unique identifiers to primary key constraints
- Unique identifiers to unique key constraints
=======================================
259.Informatica - what is a surrogate key? how many surrogate keys
are used in your dimensions?
QUESTION #259
No best answer available. Please pick the good answer available or submit your
answer.
October 12, 2006 01:10:52 #1
srinivas vadlakonda
The DWH does not depend upon the primary key; a surrogate key is used to identify the internal records. Each dimension should have at least one surrogate key.
=======================================
A surrogate key, or warehouse key, acts like a composite primary key. If the target doesn't have a unique key, the surrogate key helps to address a particular row. It is like a primary key in the target.
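A minimal sketch of generating surrogate keys at the database level, assuming a hypothetical dim_customer dimension:
CREATE SEQUENCE dim_customer_seq START WITH 1 INCREMENT BY 1;
-- each inserted row gets the next surrogate value as its warehouse key
INSERT INTO dim_customer (customer_key, customer_id, customer_name)
VALUES (dim_customer_seq.NEXTVAL, 'C1001', 'ACME Corp');
(In Informatica itself the same role is usually played by a Sequence Generator transformation.)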
=======================================
260.Informatica - what are the advantages and disadvantages of a
star schema and snowflake schema? Thanks in advance.
QUESTION #260
No best answer available. Please pick the good answer available or submit your
answer.
October 16, 2006 02:14:54 #1
srinivas.vadlakonda
Schemas are of two types: star and snowflake. In a star schema the fact table is in normalized format and the dimension tables are in denormalized format.
In a snowflake schema both the fact and dimension tables are in normalized format.
If you take a snowflake schema it requires more dimension tables and more foreign keys, so it will reduce query performance, but it also reduces redundancy.
=======================================
In a star schema: 2) fewer tables, 3) less database space ...
In a snowflake schema: ...
=======================================
261.Informatica - why do we use a reusable sequence generator
transformation only in a mapplet?
QUESTION #261
No best answer available. Please pick the good answer available or submit your
answer.
November 16, 2006 06:05:48 #1
Sarada
A reusable sequence generator is preferred when we want the same sequence (that is, the next value of the sequence) to be used in more than one mapping (maybe because this next value loads the same field of the same table in different mappings, and to maintain continuity it is required).
Suppose there are two mappings using a reusable sequence generator and we run the two mappings one by one: if the last value of the sequence generator for the mapping-1 run is 999, then the sequence generator value for the second mapping will start from 1000.
=======================================
Hi
The question is: are you sure it is going to start with the number 1000 after 999? By default a reusable SEQUENCE has a Number of Cached Values set to 1000; that's why it takes 1000. Even if you only have 595 records for the first session, the second session will automatically start with 1000 as the sequence number, because the cache value is set to it.
Is it possible to change Number of Cached Values to always be 1, instead of changing it each time the session/sessions are run?
Thanks
Philip
=======================================
Hi
The solution provided is correct. I would like to add more information into it.
A mapplet is basically for reusing a mapping as such; if a non-reusable sequence generator were used in a mapplet, the sequence of numbers it generates would mismatch and create problems. Thus it is made a must.
=======================================
262.Informatica - in which particular situation do we use the
unconnected lookup transformation?
QUESTION #262 Submitted by: sridhar39
hi,
We can use the unconnected lookup transformation when we need to return only one port; at that time I will use the unconnected lookup transformation instead of a connected one. We can also use a connected lookup to return one port, but an unconnected lookup transformation is not connected to the other transformations and is not a part of the data flow; that is why performance will increase.
=======================================
The major advantage of unconnected lookup is its reusability. We can call an unconnected lookup
multiple times in the mapping unlike connected lookup.
=======================================
We can use the unconnected lookup transformation when we need to return the output from a single
port .
If we want the output from a multiple ports at that time we have to use connected lookup
Transformation.
=======================================
Use of a connected or unconnected lookup is completely based on the logic which we need. However, I just wanted to make clear that we can get multiple values from an unconnected lookup also: just concatenate all the values which you want, get the result from the return port of the unconnected lookup, and then split it further in an Expression transformation.
However, using an unconnected lookup takes more time, as it breaks the flow and goes to the unconnected lookup to fetch the results.
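A sketch of that concatenation trick at the SQL level of the lookup (table and column names are hypothetical); the single return port carries the combined string, which the Expression transformation then splits:
SELECT emp_id,
       ename || '~' || TO_CHAR(sal) || '~' || TO_CHAR(deptno) AS ret_value
  FROM employees;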
=======================================
hi
Both unconnected and connected lookups provide a single output. If it is the case that we can use either unconnected or connected, I prefer unconnected. Why? Because an unconnected lookup does not participate in the data flow, so the Informatica server creates a separate cache for it and processing takes place in parallel; so performance increases.
=======================================
263.Informatica - in which particular situation do we use a dynamic
lookup?
QUESTION #263
No best answer available. Please pick the good answer available or submit your
answer.
October 17, 2006 01:06:34 #1
srinivas vadlakonda
There is no one specific situation for using a dynamic lookup, but if you use a dynamic cache it can improve performance and can also eliminate transformations like a sequence generator.
=======================================
If the number of records is in the hundreds, one doesn't see much difference between a static cache and a dynamic cache. If there are thousands of records, a dynamic cache kills time, because it updates the cache for each insert or update it makes.
=======================================
264.Informatica - is there any relationship between Java &
Informatica?
QUESTION #264
No best answer available. Please pick the good answer available or submit your
answer.
October 18, 2006 07:04:27 #1
phanimv Member Since: July 2006 Contribution: 41
1. Delimited width
2. Fixed width
=======================================
266.Informatica - what is the event-based scheduling?
QUESTION #266
No best answer available. Please pick the good answer available or submit your
answer.
October 25, 2006 15:42:32 #1
sn3508 Member Since: April 2006 Contribution: 20
In time-based scheduling, jobs run at the specified time. In some situations we have to run a job based on some event: for example, only if a file arrives, whatever the time, should the job run. In such cases event-based scheduling is used.
=======================================
Event-based scheduling uses an indicator file. When you don't know when the source data will arrive, you use a shell command, script, or batch file to send an indicator file to the local directory of the Informatica server; the server waits for the indicator file before running the session.
=======================================
267.Informatica - what is the new lookup port in look-up
transformation?
QUESTION #267
No best answer available. Please pick the good answer available or submit your
answer.
October 25, 2006 15:37:38 #1
sn3508 Member Since: April 2006 Contribution: 20
It seems you are talking about NewLookupRow. When you configure your lookup with a dynamic lookup cache, by default it generates a NewLookupRow port, which tells Informatica whether the row it got is an existing or a new row: if it is a new row it passes the data, or else it discards it.
=======================================
This port is added by the PowerCenter Client (Designer) to a Lookup transformation whenever a dynamic cache is used. This port indicates to the Informatica server, through a numeric value (0, 1, 2), whether it inserts into or updates the dynamic cache.
=======================================
The other new port in the lookup transformation is the Associated port.
Cheers
Bobby
=======================================
268.Informatica - what is dynamic insert?
QUESTION #268
No best answer available. Please pick the good answer available or submit your
answer.
November 15, 2006 00:22:38 #1
srinuv_11 Member Since: October 2006 Contribution: 23
=======================================
269.Informatica - how did you handle errors?(ETL-Row-Errors)
QUESTION #269
No best answer available. Please pick the good answer available or submit your
answer.
Hi friend
1. row-based errors
=======================================
270.Informatica - How do you set up a schedule for data loading
from scratch?
QUESTION #270
No best answer available. Please pick the good answer available or submit your
answer.
December 12, 2006 17:08:30 #1
hanug Member Since: June 2006 Contribution: 24
Whether you are loading data from scratch (for the first time) or doing subsequent loads, there are no changes to the scheduling. What changes is how to pick up the delta data.
Hanu.
=======================================
271.Informatica - How do you select duplicate rows using
informatica?
QUESTION #271
No best answer available. Please pick the good answer available or submit your
answer.
October 20, 2006 08:52:55 #1
Sharmila
=======================================
Hi
You can write a SQL override in the Source Qualifier to eliminate duplicates; for that we can use the DISTINCT keyword.
For example: consider a table dept(dept_no, dept_name) having duplicate records in it. If you want to get only the duplicate records, then write a query like the one below in the Source Qualifier SQL override.
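A standard sketch of that query, grouping on the columns that define a duplicate (names from the dept example above):
SELECT dept_no, dept_name
  FROM dept
 GROUP BY dept_no, dept_name
HAVING COUNT(*) > 1;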
=======================================
I think we can't select duplicates with a Rank transformation; if it is possible, please explain how to do it.
=======================================
We can get the duplicate records by using the rank transformation.
=======================================
We can also use a Sorter transformation and select the Distinct option check box.
=======================================
272.Informatica - How to load data to target where the source and
targets are XML'S?
QUESTION #272
No best answer available. Please pick the good answer available or submit your
answer.
October 25, 2006 15:23:31 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: How to load data to target where the source and ta...
- If you don't have the structures, create the source or target structure of the XML file by going to the Sources or Targets menu, selecting 'Import XML', and following the steps.
- Follow the regular steps you would do to create an ordinary mapping/session.
- In the session you have to mention the location and name of the source/target.
- Once the session succeeds, the XML file is generated in the specified location.
=======================================
273.Informatica - What is TOAD and for what purpose is it
used?
QUESTION #273
No best answer available. Please pick the good answer available or submit your
answer.
October 24, 2006 04:54:29 #1
srinivas vadlakonda
Toad's SQL Editor provides an easy and efficient way to write and test scripts and queries, and its powerful data grids provide an easy way to view and edit Oracle data. With Toad you can:
- View the Oracle data dictionary
- Create, browse, or alter objects
- Graphically build, execute, and tune queries
- Edit PL/SQL and profile stored procedures
- Manage your common DB tasks from one central window
- Find and fix database problems with constraints, triggers, extents, indexes, and grants
- Create code from shortcuts and templates
- Create custom code templates
- Control code access and development (with or without a third-party version control product) using Toad's cooperative source control feature.
=======================================
274.Informatica - What is target load order ?
QUESTION #274
No best answer available. Please pick the good answer available or submit your
answer.
October 24, 2006 04:47:53 #1
ram gopal
In a mapping, if there is more than one target table then we need to specify the order in which the target tables should be loaded. Example:
1. Customer
2. Audit table
First the Customer table should be populated, then the Audit table; for that we use the target load order.
Hope you understood.
=======================================
The target load plan specifies the order in which the data is extracted from each source qualifier.
=======================================
275.Informatica - How to extract 10 records out of 100 records in a
flat file
QUESTION #275
No best answer available. Please pick the good answer available or submit your
answer.
October 31, 2006 16:52:24 #1
sridhar
4. Query the external table to access records as you would a normal table (see the sketch below).
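Only step 4 of this answer survived; as a rough sketch, assuming an Oracle external table ext_customers has already been defined over the flat file, the first 10 records can be extracted with:
SELECT * FROM ext_customers WHERE ROWNUM <= 10;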
=======================================
276.Informatica - How many types of TASKS do we have in
Workflow Manager? What are they?
QUESTION #276
No best answer available. Please pick the good answer available or submit your
answer.
=======================================
1) Session - runs mappings.
2) Command - runs OS commands/scripts.
3) Email (self-explanatory).
4) Event-Wait and 5) Event-Raise - raise user-defined or pre-defined events and wait for the event to be raised.
6) Assignment - assigns values to workflow variables.
7) Control.
8) Decision (self-explanatory).
9) Timer (self-explanatory).
10) Worklet - runs worklets.
=======================================
The following tasks are available in Workflow Manager: Assignment, Control, Command, Decision, E-mail, Session, Event-Wait, Event-Raise, and Timer. Tasks developed in the Task Developer are reusable tasks, and tasks which are developed inside a workflow or worklet are non-reusable. Among these tasks only Session, Command, and E-mail are reusable; the remaining tasks are non-reusable. Regards, rma
=======================================
1. Session
=======================================
We have Session Command and Email tasks
Cheers
Thana
=======================================
277.Informatica - What is user defined Transformation?
QUESTION #277
No best answer available. Please pick the good answer available or submit
your answer.
n k rajkumar
=======================================
278.Informatica - what is the difference between connected stored
procedure and unconnected stored procedure?
QUESTION #278
No best answer available. Please pick the good answer available or submit your
answer.
November 15, 2006 00:11:33 #1
srinuv_11 Member Since: October 2006 Contribution: 23
hi....
this is ramesh..... (if anyone feels there is some conceptual problem with my solution, please let me know)
1. additive
2. semi-additive
3. non-additive
Additive means: when any measure is queried from the fact table, the result relates to all the dimension tables which are linked to the fact.
Semi-additive: when any measure is queried from the fact table, the result relates to only some of the dimension tables.
Non-additive: when any measure is queried from the fact table, it doesn't relate to any of the dimensions, and the result comes directly from the measures of the same fact table. Ex: to calculate the total percentage of loan, we just take the value from the fact measure (loan) and divide it by 100; we get it without any dimension.
=======================================
Hi, can you give one example of additive and semi-additive facts? It will be better for understanding.
Akhi
=======================================
The average monthly balance of your bank account is a semi-additive fact.
=======================================
A measurable value on which simple addition can be performed is called fully additive; such measures do not need to be combined with two or more dimensions for their meaning.
Ex:
Product-wise total sales
Branch-wise total sales
A measurable value on which simple addition can't be performed is called semi-additive; such measures need two or more dimensions combined for their meaning.
Ex: customer-wise total sales amount ---------> has no meaning
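A hedged SQL illustration of the distinction (table and column names are hypothetical):
-- Additive: SUM(sales_amount) is meaningful across every dimension
SELECT product_id, SUM(sales_amount) AS total_sales
  FROM sales_fact
 GROUP BY product_id;
-- Semi-additive: a month-end balance sums across accounts but not across
-- months, so the time dimension is aggregated with AVG (or a last value)
SELECT account_id, AVG(month_end_balance) AS avg_monthly_balance
  FROM balance_fact
 GROUP BY account_id;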
=======================================
280.Informatica - what is DTM process?
QUESTION #280
No best answer available. Please pick the good answer available or submit your
answer.
November 06, 2006 04:40:39 #1
prasanna alur
DTM means Data Transformation Manager. In Informatica this is the main background process; it runs after completion of the load manager. In this process the Informatica server looks up the source and target connections in the repository; if they are correct, the Informatica server fetches the data from the source and loads it into the target.
=======================================
DTM means Data Transformation Manager. It is one of the components in the Informatica architecture; it collects the data and loads the data.
=======================================
Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes.
The DTM process: creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session operations.
=======================================
281.Informatica - what is Powermart and Power Center?
QUESTION #281
No best answer available. Please pick the good answer available or submit your
answer.
November 06, 2006 04:34:18 #1
prasanna alur
=======================================
Hi
Power Center supports global and local repositories and also supports ERP packages, but PowerMart supports local repositories only and doesn't support ERP packages.
Power Center is normally used for enterprise data warehouses, whereas PowerMart is used for low/mid-range data warehouses.
Siva Prasad.
=======================================
Power Center supports the partitioning process, whereas PowerMart does only simple pass-through.
=======================================
282.Informatica - what are the differences between informatica6.1
and informatica7.1
QUESTION #282
No best answer available. Please pick the good answer available or submit your
answer.
November 06, 2006 16:09:38 #1
calltomadhu Member Since: September 2006 Contribution: 34
The load manager is the process that takes care of the loading process from source to target.
=======================================
In Informatica 7.1:
3. data profiling & versioning
thanks & regards
sivaprasad
=======================================
283.Informatica - hi, how do we validate all the mappings in the
repository at once
QUESTION #283
No best answer available. Please pick the good answer available or submit your
answer.
November 07, 2006 00:03:46 #1
srinuv_11 Member Since: October 2006 Contribution: 23
Hi
You cannot validate all the mappings in one go, but you can validate all the mappings in a folder in one go, and continue the process for all the folders.
To do this, log on to the Repository Manager. Open the folder, then the Mappings sub-folder, then select all or some of the mappings (by pressing the Shift or Control key; Ctrl+A does not work), then right-click and choose Validate.
=======================================
=======================================
We still don't have such a facility in Informatica.
=======================================
Yes, we can validate all mappings using the Repository Manager.
=======================================
284.Informatica - how to work with pmcmd on the Windows platform
QUESTION #284
No best answer available. Please pick the good answer available or submit your
answer.
November 08, 2006 15:31:22 #1
calltomadhu Member Since: September 2006 Contribution: 34
hi
Establish a link between the session and pmcmd: if the session executes successfully, the pmcmd command executes.
thanks
madhu
=======================================
Hi Friend, can you please tell me where the pmcmd option is in Workflow Manager? Thanks, Swati.
=======================================
Hi Swathi
Only on the command line can we execute pmcmd and pmrep commands. In Workflow Manager we execute the session task directly.
=======================================
C:\Program Files\Informatica PowerCenter 7.1.3\Server\bin\pmcmd.exe
=======================================
C:-->Program Files-->Informatica PowerCenter 7.1.3-->Server-->bin-->pmcmd.exe
=======================================
285.Informatica - interview question: why do you use a reusable
sequence generator transformation in mapplets?
QUESTION #285
No best answer available. Please pick the good answer available or submit your
answer.
November 14, 2006 02:03:43 #1
No, it is not reversible...
When you open a transformation in edit mode there is a check box named REUSABLE; if you tick it, it gives you a message saying that making it reusable is not reversible.
=======================================
No. Once we have declared a transformation as reusable, we cannot revert it.
=======================================
If you change the properties of a reusable transformation in a mapping you can revert to the original
reusable transformation properties by clicking the Revert button.
=======================================
No, we CANNOT revert back a reusable transformation. There is a Revert button that can only revert the last changes made in the transformation.
=======================================
YES... we can.
1) Drag the reusable transformation from the Repository Navigator into the Mapping Designer, holding the left mouse button.
2) Then press the Ctrl key before releasing the left button of the mouse.
4) Enjoy.... :-)
Thanks
Santu
=======================================
The last answer, though correct in a way, is not completely correct: by using the Ctrl key we are making a copy of the original transformation, not changing the original transformation into a non-reusable one.
=======================================
I think if the transformation is created in the Mapping Designer and made reusable, then we can revert that back with the "Revert Back" option.
But if we create a transformation in a mapplet, we can make it a non-reusable one.
=======================================
290.Informatica - How do we delete staging area in our project?
QUESTION #290
No best answer available. Please pick the good answer available or submit your
answer.
November 15, 2006 03:59:08 #1
narayana
If your database is Oracle then we can apply CDC (change data capture) and load only the data which has changed since the previous data load.
=======================================
If the staging area stores only incremental data (i.e., changed or new data with respect to the previous load), then you can truncate the staging area.
But if you maintain historical information in the staging area, then you cannot truncate it.
=======================================
If we use Type 2 slowly changing dimensions, we can delete the staging area, because SCD Type 2 stores the previous data with a version number and a timestamp.
=======================================
291.Informatica - what is a referential integrity error? how will you
rectify it?
QUESTION #291
No best answer available. Please pick the good answer available or submit your
answer.
November 27, 2006 06:48:12 #1
Sravan
You have set the session for constraint-based loading, but the PowerCenter Server is unable to determine dependencies between target tables, possibly due to errors such as circular key relationships.
=======================================
292.Informatica - what is a constraint-based error? how will you
clarify it?
QUESTION #292
No best answer available. Please pick the good answer available or submit your
answer.
November 13, 2006 00:07:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
1. when data from a single source needs to be loaded into multiple targets
=======================================
293.Informatica - why exactly the dynamic lookup? please can
anybody clarify it?
QUESTION #293
No best answer available. Please pick the good answer available or submit
your answer.
RE: why exactly the dynamic lookup?plz can any bady ca...
hii...
Why dynamic lookup: suppose you are looking up a table that is changing frequently, i.e., you want to look up the recent data; then you have to go for a dynamic lookup. Ex: online transaction data (ATM).
=======================================
A dynamic lookup is generally used with a connected lookup transformation: when the data has changed, it updates or inserts the row, or leaves it without change.
=======================================
A dynamic lookup cache is used in the case of connected lookups only. It is also called a read-and-write cache: when a new record is inserted into the target table, the cache is also updated with the new record; it is saved in the cache for faster lookup of data from the target. Generally we use this in the case of slowly changing dimensions.
Pallavi
=======================================
294.Informatica - How many mappings you have done in your
project(in a banking)?
QUESTION #294
No best answer available. Please pick the good answer available or submit your
answer.
November 13, 2006 00:01:56 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: how can we delete duplicate rows from flat files ?...
In the mapping, read the flat file through a Source Definition and Source Qualifier. Apply a Sorter transformation and in the Properties tab select Distinct; the output will give sorted, distinct data, hence you get rid of the duplicates.
You can also use an Aggregator transformation and group by the PK; it gives the same result.
=======================================
Use a Sorter transformation and check the Distinct option; it will remove the duplicates.
=======================================
297.Informatica - 1. can you look up a flat file? how? 2. what is a test
load?
QUESTION #297
No best answer available. Please pick the good answer available or submit your
answer.
December 03, 2006 22:03:36 #1
sravan kumar
By using a Lookup transformation we can look up a flat file. When you create the Lookup transformation it shows you a message; follow that.
A test load is nothing but checking whether the data is moving correctly to the target or not.
=======================================
Test load is a property we can set at the session level, with which Informatica performs all pre- and post-session tasks but does not save target data (for an RDBMS target table it writes the data to check the constraints but rolls it back). If the target is a flat file, it does not write anything to the file. We can specify the number of source rows with which to test-load the mapping. This is another way of debugging the mapping without loading the target.
=======================================
298.Informatica - what is auxiliary mapping ?
QUESTION #298
No best answer available. Please pick the good answer available or submit your
answer.
December 26, 2006 03:27:18 #1
Kuldeep Kumar Verma
An auxiliary mapping is used to reflect a change in one table whenever there is a change in another table.
Example:
In Siebel we have the S_SRV_REQ and S_EVT_ACT tables. Let's say that we have an image table defined for S_SRV_REQ, from which our mappings read data. Now if there is any change in S_EVT_ACT, it won't be captured in S_SRV_REQ if our mappings are using the image table for S_SRV_REQ. To overcome this we define a mapping between S_SRV_REQ and S_EVT_ACT such that if there is any change in the second, it is reflected as an update in the first table.
=======================================
299.Informatica - what is authenticator ?
QUESTION #299
No best answer available. Please pick the good answer available or submit your
answer.
December 18, 2006 02:38:20 #1
Reddappa C. Reddy
answer.
December 03, 2006 22:04:48 #1
sravan kumar
HI:
In both cases you have to aggregate, get the keys of the dimension tables, and load into the fact.
In case of increments you use a date value to pick up only the delta data while loading into the fact.
Hanu.
=======================================
Fact tables always maintain the history records and mostly consist of keys and measures. So after all the dimension tables are populated, the fact tables can be loaded.
The load is always going to be an incremental load, except for the first time, which is a history load.
=======================================
303.Informatica - How to load the time dimension using
Informatica ?
QUESTION #303
No best answer available. Please pick the good answer available or submit your
answer.
December 22, 2006 00:32:54 #1
srinivas
=======================================
304.Informatica - What is the process of loading the time
dimension?
QUESTION #304
No best answer available. Please pick the good answer available or submit your
answer.
December 29, 2006 08:20:34 #1
manisha.sinha Member Since: December 2006 Contribution: 30
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data; for example, the code below fills the table up to 2015. You can modify the code to suit the fields in your table.
to_char(loaddate, 'YYYYMM'),
to_char(loaddate, 'YYYY') || ' Half' ||
  decode(to_char(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
to_char(loaddate, 'YYYY / MM'),
to_char(loaddate, 'YYYY') || ' Q ' || trunc(to_number(to_char(loaddate, 'Q'))),
to_char(loaddate, 'YYYY') || ' Week' || trunc(to_number(to_char(loaddate, 'WW'))),
to_char(loaddate, 'YYYY'));
If loaddate > to_date('12/31/2015', 'mm/dd/yyyy') Then
Exit;
End If;
End Loop;
commit;
end Insert_W_DAY_D_PR;
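Since the beginning of the procedure did not survive above, here is a minimal self-contained sketch of the same idea, assuming a simplified w_day_d table (the column names are illustrative, not the original ones):
CREATE OR REPLACE PROCEDURE Insert_W_DAY_D_PR AS
  loaddate DATE := TO_DATE('01/01/2000', 'mm/dd/yyyy');
BEGIN
  LOOP
    -- one row per calendar day, with a few derived period codes
    INSERT INTO w_day_d (day_date, month_code, quarter_code, year_code)
    VALUES (loaddate,
            TO_CHAR(loaddate, 'YYYYMM'),
            TO_CHAR(loaddate, 'YYYY') || ' Q ' || TO_CHAR(loaddate, 'Q'),
            TO_CHAR(loaddate, 'YYYY'));
    loaddate := loaddate + 1;
    IF loaddate > TO_DATE('12/31/2015', 'mm/dd/yyyy') THEN
      EXIT;
    END IF;
  END LOOP;
  COMMIT;
END Insert_W_DAY_D_PR;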
=======================================
305.Informatica - how can we remove/optimize source bottlenecks
using "query hints"
QUESTION #305
No best answer available. Please pick the good answer available or submit your
answer.
January 08, 2007 16:29:36 #1
creativehuang Member Since: January 2007 Contribution: 5
Use hints right after the SELECT keyword. Hints are powerful, so be careful with them.
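A sketch of an Oracle optimizer hint (the index name emp_deptno_idx is hypothetical):
SELECT /*+ INDEX(e emp_deptno_idx) */ e.empno, e.ename
  FROM emp e
 WHERE e.deptno = 10;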
=======================================
306.Informatica - how can we eliminate source bottleneck using
query hint
QUESTION #306
No best answer available. Please pick the good answer available or submit your
answer.
March 12, 2007 06:28:50 #1
sreedhark26 Member Since: January 2007 Contribution: 25
You can identify source bottlenecks by executing the read query directly against the source database.
Copy the read query directly from the session log. Execute the query against the source database with a
query tool such as isql. On Windows you can load the result of the query in a file. On UNIX systems
you can load the result of the query in /dev/null.
Measure the query execution time and the time it takes for the query to return the first row. If there is a
long delay between the two time measurements you can use an optimizer hint to eliminate the source
bottleneck.
=======================================
307.Informatica - where from we get the source data or how we
access the source data
QUESTION #307
No best answer available. Please pick the good answer available or submit your
answer.
January 03, 2007 00:58:49 #1
phani
Hi
We get source data in the form of Excel files, flat files, etc. By using the Source Analyzer we can access the source data.
=======================================
hi, source data exists in OLTP systems in any form (flat file, relational database, XML definitions). You can access the source data with any of the source qualifier transformations and the Normalizer transformation.
=======================================
308.Informatica - What are all the new features of informatica 8.1?
QUESTION #308
No best answer available. Please pick the good answer available or submit your
answer.
February 03, 2007 01:57:27 #1
Sonjoy
=======================================
309.Informatica - Explain the pipeline partition with real time
example?
QUESTION #309
No best answer available. Please pick the good answer available or submit your
answer.
January 11, 2007 14:56:16 #1
saibabu Member Since: January 2007 Contribution: 14
ftp targetaddress
=======================================
Hi, you can transfer a file from one server to another. In Unix there is a utility, XCOMTCP, which transfers a file from one server to another, but there are a lot of constraints on this. You need to mention the target server name and the directory where you need to send it; the server directory should have write permission. Check the details in UNIX by typing MAN XCOMTCP, which should guide you, I guess.
=======================================
311.Informatica - What's the difference between source and target
object definitions in Informatica?
QUESTION #311
No best answer available. Please pick the good answer available or submit your
answer.
January 05, 2007 09:06:50 #1
Sravan Kumar
The source system is the system which provides the business data.
=======================================
The source definition is the structure of the source database existing in the OLTP system; using the source definition you can extract the transactional data from OLTP systems. The target definition is the structure given by the DBAs to populate the data from the source definition, according to business rules, for the purpose of making effective decisions for the enterprise.
=======================================
Hi, what Saibabu wrote is correct. Source definition means defining the structure of the source from which we have to extract the data to transform and then load to the target. Target definition means defining the structure of the target (relational table or flat file).
=======================================
312.Informatica - how many types of sessions are there in
informatica.please explain them.
QUESTION #312
No best answer available. Please pick the good answer available or submit your
answer.
January 08, 2007 15:56:28 #1
creativehuang Member Since: January 2007 Contribution: 5
=======================================
Total 10 SESSIONS
=======================================
A session is a type of workflow task: a set of instructions that describe how to move data from sources to targets using a mapping.
1. Sequential: when data moves one after another from source to target, it is sequential.
2. Concurrent: when the whole data moves simultaneously from source to target, it is concurrent.
=======================================
Hi Sreedhark26, your answer is wrong; don't misguide the members. What you wrote is the types of tasks used by the Workflow Manager. There are two types of sessions: 1. non-reusable session 2. reusable session.
=======================================
313.Informatica - What is meant by source is changing
incrementally? explain with example
QUESTION #313
No best answer available. Please pick the good answer available or submit your
answer.
January 09, 2007 07:30:59 #1
Sravan Kumar
Source is changing incrementally means the data in the source keeps on changing, and you capture those changes with timestamps, key ranges, triggers, or SCDs; you capture these changes to load the data incrementally. If we cannot capture this source data incrementally, the data loading process will be very difficult. For example, we have a source where the data is changing. On 09/01/07 we loaded all the data into the target. On 10/01/07 the source is updated with some new rows. It is very difficult to load all the rows again into the target, so what we have to do is capture the data which is not loaded yet and load only the changed rows into the target.
=======================================
I have a good example to explain this. Think about HR data for a very big company; the data keeps on changing every minute. Now we have built a downstream system to capture a chunk of data for a specific purpose, say new hires. Every record in the source has a timestamp; when we load data today we check the records which were updated/inserted today and load only them (this avoids reprocessing all the data). We used the incremental refresh method to process such data. In fact most of the sources in OLTP are incrementally/constantly changing.
=======================================
314.Informatica - what is the difference between SCD and
INCREMENTAL Aggregation?
QUESTION #314
No best answer available. Please pick the good answer available or submit your
answer.
January 09, 2007 23:19:44 #1
srinivas
Hi
2.VERSION NO MAPPING
=======================================
hi, SCD means 'slowly changing dimensions'. Since a dimension table maintains master data, the column values occasionally change; so dimension tables are called SCD tables and the fields in the SCD tables are called slowly changing dimensions. In order to maintain those changes we follow three types of methods:
1. SCD Type 1: maintains only current data.
2. SCD Type 2: maintains the whole history of the dimensions. Here there are three methods to identify which record is the current one: 1> flag current data, 2> version number mapping, 3> effective date range.
3. SCD Type 3: maintains current data and one-time historical data.
INCREMENTAL AGGREGATION: some requirements (daily, weekly, every 15 days, quarterly...) need to aggregate the values of certain columns. Here you have to do the same job every time (according to the requirement) and add the aggregate value to the previous aggregate value (previous run value) of those columns. This process is called INCREMENTAL AGGREGATION.
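A rough sketch of what an SCD Type 2 change can look like at the SQL level (a hypothetical dim_customer table with version/flag columns; not a prescribed implementation):
-- expire the current version of the changed row
UPDATE dim_customer
   SET current_flag = 'N',
       effective_end_date = SYSDATE
 WHERE customer_id = 'C1001'
   AND current_flag = 'Y';
-- insert the new version with a fresh surrogate key
INSERT INTO dim_customer
       (customer_key, customer_id, customer_name,
        version_no, current_flag, effective_start_date)
VALUES (dim_customer_seq.NEXTVAL, 'C1001', 'ACME Corporation',
        2, 'Y', SYSDATE);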
=======================================
Stored procedures are hard to maintain and debug; maintenance is simpler in Informatica, and it's a user-friendly tool.
Given the logic, it's easier to create a mapping in Informatica than to write a stored procedure.
Thanks
Gayathri
=======================================
A stored procedure call is made through an ODBC connection over a network (sometimes the Informatica server resides on the same box as the DB); since there is overhead in making the call, it is inherently slower.
=======================================
317.Informatica - How is a LOOKUP passive?
QUESTION #317
No best answer available. Please pick the good answer available or submit your
answer.
January 19, 2007 15:54:06 #1
monicageller Member Since: January 2007 Contribution: 3
Hi
An unconnected lookup is used for updating slowly changing dimensions, so it is used to determine whether the rows are already in the target or not; but it doesn't change the number of rows, so it is passive.
Connected lookup transformations are used to get a related value based on some value, or to perform a calculation. In either case it will either add columns or not, but it doesn't change the row count, so it is passive.
In the lookup SQL override property we can add a WHERE clause to the default SQL statement, but it doesn't change the number of rows passing through the transformation; it just reduces the number of rows included in the cache.
cheers
Monica.
=======================================
The fact that a failed lookup does not drop the row makes it passive.
=======================================
A lookup is used to get a related value and to perform calculations; a lookup is used to search for a value in a relational table.
=======================================
318.Informatica - How to partition the Session?(Interview
question of CTS)
QUESTION #318
No best answer available. Please pick the good answer available or submit your
answer.
January 23, 2007 15:53:48 #1
kirangvr Member Since: January 2007 Contribution: 5
When you create or edit a session you can change the partitioning information for each pipeline in a
mapping. If the mapping contains multiple pipelines you can specify multiple partitions in some
pipelines and single partitions in others. You update partitioning information using the Partitions view
on the Mapping tab in the session properties.
You can configure the following information in the Partitions view on the Mapping tab:
- Add and delete partition points.
- Enter a description for each partition.
- Specify the partition type at each partition point.
- Add a partition key and key ranges for certain partition types.
=======================================
By default when we create the session workflow creates pass-through partition points at Source
Qualifier transformations and target instances.
=======================================
319.Informatica - which one is better performance wise joiner or
lookup
QUESTION #319
No best answer available. Please pick the good answer available or submit your
answer.
January 25, 2007 13:41:43 #1
anu
If recovery is not possible, truncate the target table and load again.
The recovery option is set on the Informatica server. I have just forgotten where recovery is done; I will tell you next time, or you can search for it.
=======================================
If the recovery option is not set and 10,000 records are already committed, then delete those records from the target table using audit fields like update_date.
=======================================
for example:
1) Removing transformation errors.
2) Filtering the records at the earliest.
3) Using sorted data before an aggregator.
4) Using fewer of those transformations which use a cache.
5) Using an external loader like SQL*Loader etc. to load the data faster.
6) Fewer conversions, like numeric to char and char to numeric, etc.
7) Writing an override instead of using a filter, etc.
8) Increasing the network packet size.
9) Keeping all the source systems on the server machine to make it run faster, etc. - pal.
=======================================
Hi friends, performance tuning means techniques for improving the performance: 1. Identify the bottlenecks (issues that reduce performance). 2. Fix the bottlenecks. The hierarchy we have to follow in performance tuning is: a) Target b) Source c) Mapping d) Session e) System. If anything is wrong in this please tell me, because I am still in the learning stage.
=======================================
A bottleneck means a drawback or problem that reduces performance.
=======================================
Target-based commit intervals and source-based commit intervals.
=======================================
323.Informatica - How to use incremental aggregation in real time?
QUESTION #323
No best answer available. Please pick the good answer available or submit your
answer.
February 21, 2007 22:48:09 #1
Hanu Ch Rao
hegde
A master outer join means the matched rows of both sources plus the unmatched rows from the detail source (unmatched master rows are discarded).
If any doubt, reply to me; I will give another general example.
=======================================
A master outer join keeps all rows of data from the detail source and the matching rows from the master
source. It discards the unmatched rows from the master source.
=======================================
I thought the Joiner transformation was only for joining heterogeneous sources, e.g., when one source is from a flat file and another one is relational. I have a question: how can a Joiner transformation override a Lookup, when both transformations perform different actions?
Cheers
Thana
=======================================
A lookup is nothing but an outer join, so a Lookup transformation can easily be replaced by a Joiner in any tool.
=======================================
325.Informatica - What is a Data Warehouse key?
QUESTION #325
No best answer available. Please pick the good answer available or submit your
answer.
February 21, 2007 23:26:44 #1
Ravichandra
Every data warehouse key should be a surrogate key, because the data warehouse DBA must have the flexibility to respond to changing descriptions and abnormal conditions in the raw data.
=======================================
The surrogate key is known as the data warehouse key. A surrogate key is used as a unique key in the warehouse, where duplicate keys may exist due to Type 2 changes.
=======================================
326.Informatica - What is inline view?
QUESTION #326
No best answer available. Please pick the good answer available or submit your
answer.
February 21, 2007 22:40:51 #1
Hanu Ch Rao
A common use for in-line views in Oracle SQL is to simplify complex queries by removing join operations and condensing several separate queries into a single query.
=======================================
Used in Oracle to simplify queries. Ex (a top-N query using an inline view):
select * from (select ename, sal from emp order by sal desc) where rownum < 3
=======================================
327.Informatica - What is the exact difference between joiner and
lookup transformation
QUESTION #327
No best answer available. Please pick the good answer available or submit your
answer.
February 20, 2007 12:21:06 #1
pal
For a lookup to work, the table need not exist in the mapping; but for a joiner to work, the table has to exist in the mapping.
pal.
=======================================
A lookup may be unconnected, while a joiner may not.
=======================================
A lookup need not participate in the data flow of the mapping.
A lookup can also do a non-equi join (a joiner does only equi-joins).
- Informatica Workflow Manager
- Using cron in Unix
- Using the OpCon scheduler
=======================================
329.Informatica - How do you handle two sessions in Informatica
QUESTION #329 Can anyone tell me the option between the two sessions: if the
previous session executes successfully, then run the next session...
No best answer available. Please pick the good answer available or submit your answer.
February 28, 2007 02:09:40 #1
Divya Ramanathan
=======================================
You can handle 2 sessions by using a link condition ($PrevSession.Status = SUCCEEDED, where $PrevSession is the name of the first session)
or you can have a Decision task between them. Since only one session depends on the other, I feel a
link condition is enough.
=======================================
Where exactly do we need to use this link condition ($PrevSession.Status = SUCCEEDED)?
=======================================
You can drag and drop more than one session into a workflow.
The sessions can be linked in two different ways:
sequential linking
concurrent linking
With sequential linking you can run whichever session you require, or the workflow runs all the sessions
sequentially.
With concurrent linking you cannot run just the session you want.
=======================================
330.Informatica - How do you change change column to row in
Informatica
QUESTION #330
No best answer available. Please pick the good answer available or submit your
answer.
March 02, 2007 11:38:19 #1
Hanu Ch Rao
First you can ask: what type of data do you want to change from columns to rows in Informatica?
Hi
Assume if you put the surrogate key in target (Dept table) like p_key and
then
=======================================
Select * from (
  Select Acct.*, Rank() Over (partition by ch_key_id order by version desc) as rnk
  from Acct
) where rnk = 1
=======================================
select business_key, max(version) from tablename group by business_key
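The GROUP BY form returns only the key and the latest version number; as a sketch (same hypothetical tablename), joining back to the table retrieves the complete latest rows:
-- keep only the rows whose version equals the per-key maximum
SELECT t.*
FROM tablename t,
     (SELECT business_key, MAX(version) AS max_version FROM tablename GROUP BY business_key) m
WHERE t.business_key = m.business_key
AND t.version = m.max_version;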
=======================================
332.Informatica - Explain about the Informatica server process: how does
it work in relation to mapping variables?
QUESTION #332
No best answer available. Please pick the good answer available or submit your
answer.
March 09, 2007 09:52:01 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
The PowerCenter Server holds two different values for a mapping variable during a session run:
- Start value of a mapping variable
- Current value of a mapping variable
Start Value
The start value is the value of the variable at the start of the session. The start value could be a value
defined in the parameter file for the variable, a value saved in the repository from the previous run of the
session, a user-defined initial value for the variable, or the default value based on the variable datatype.
The PowerCenter Server looks for the start value in the following order:
1. Value in the parameter file
2. Value saved in the repository
3. Initial value
4. Default value
Current Value
The current value is the value of the variable as the session progresses. When a session starts, the current
value of a variable is the same as the start value. As the session progresses, the PowerCenter Server
calculates the current value using a variable function that you set for the variable. Unlike the start value
of a mapping variable, the current value can change as the PowerCenter Server evaluates it for
each row that passes through the mapping.
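A common use of this mechanism is incremental extraction. A rough sketch, where $$LastExtractDate is a hypothetical mapping variable referenced in the Source Qualifier's source filter (the current value saved to the repository at the end of one run becomes the start value of the next):
-- hypothetical source filter; the server expands $$LastExtractDate before running the query
src_orders.updated_ts > TO_DATE('$$LastExtractDate', 'MM/DD/YYYY HH24:MI:SS')
Inside the mapping, a variable function such as SETMAXVARIABLE($$LastExtractDate, updated_ts) advances the current value as each row passes through.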
=======================================
First the Load Manager starts the session; it performs verifications and validations about variables and
manages post-session tasks such as mail.
Then it creates the DTM process.
This DTM in turn creates a master thread, which creates the remaining threads.
The master thread creates:
read thread
write thread
transformation thread
pre- and post-session threads etc...
Finally the DTM hands over to the Load Manager after writing into the target.
=======================================
333.Informatica - What are the types of loading in Informatica?
QUESTION #333
No best answer available. Please pick the good answer available or submit your
answer.
March 02, 2007 11:29:17 #1
Hanu Ch Rao
Normal means it loads record by record and writes a log entry for each, so it takes more time.
Bulk load means it loads a number of records at a time to the target; it bypasses the database log and
ignores the tracing level, so it takes less time to load data to the target.
Ok...
=======================================
2 type of loading
1. Normal
2. Bulk
=======================================
Hanu
Thanks in advance.
=======================================
Loadings are of 3 types.
RE: What are the steps involved in to get source from ...
Thanks in advance..
=======================================
Go to the Source Analyzer and import the source definitions.
Go to the Warehouse Designer and import the target definitions (or create them, then use Generate SQL).
Go to the Mapping Designer.
Drag and drop the source and target definitions.
Link the ports properly.
Save to the repository.
=======================================
335.Informatica - What is use of event waiter?
QUESTION #335
No best answer available. Please pick the good answer available or submit your
answer.
March 05, 2007 12:38:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
=======================================
Event wait is of two types:
1> Predefined event: this type of event wait waits for an indicator file to trigger it.
2> User-defined event: this type of event wait waits for an event-raise to trigger it.
thanks, ravinder
=======================================
The Event Wait task is a file watcher:
whenever a trigger file is touched/created, this task kicks off the rest of the sessions
in the batch.
=======================================
Event wait: holds the workflow until it gets another instruction or until the delay mentioned by the user has elapsed.
Cheers
Sithu
sithusithu@hotmail.com
=======================================
336.Informatica - which transformation can perform the non equi
join?
QUESTION #336
No best answer available. Please pick the good answer available or submit your
answer.
March 12, 2007 01:47:43 #1
kasireddy
Lookup transformation.
=======================================
Note that here the lookup does not support outer joins,
but it does support non-equijoins.
=======================================
You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
=======================================
When you add transformations to a mapplet, keep the following restrictions in mind:
o If you use a Sequence Generator transformation, you must use a reusable Sequence
Generator transformation.
o If you use a Stored Procedure transformation, you must configure the Stored Procedure
Type to be Normal.
o You cannot include PowerMart 3.5-style LOOKUP functions in a mapplet.
o You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Other mapplets
=======================================
Joiner transformations
Normalizer transformations
COBOL sources
XML Source Qualifier transformations
XML sources
Target definitions
Pre- and post-session stored procedures
Other mapplets
=======================================
We cannot use the above objects/transformations in a mapplet.
=======================================
338.Informatica - How to do aggregation without using
AGGREGATOR Transformation?
QUESTION #338
No best answer available. Please pick the good answer available or submit your
answer.
March 09, 2007 09:37:56 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
=======================================
339.Informatica - how do you test mapping and what is associate
port?
QUESTION #339
No best answer available. Please pick the good answer available or submit your
answer.
March 26, 2007 11:00:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
You can test a mapping in the Mapping Designer by using the Debugger. In the Debugger you can test the first instance and
the next instance. The associated port is an output port.
sreedhar
=======================================
You can also test a mapping by specifying the number of test load rows in the session properties.
=======================================
340.Informatica - Why can't we use normalizer transformation in
mapplet?
QUESTION #340
No best answer available. Please pick the good answer available or submit your
answer.
March 26, 2007 10:56:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
The Normalizer transformation is used for COBOL sources; when you import a COBOL source, the Source Analyzer
automatically displays a Normalizer transformation. You cannot use it in a mapplet.
sreedhar
=======================================
341.Informatica - Update Strategy Transformation
QUESTION #341 By using the Update Strategy transformation we maintain
historical data using Type 2 & Type 3. Of these two, which is better to use,
and why?
No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 09:12:45 #1
sai
=======================================
Using Type 2 you can maintain the complete historical data along with the current data.
Using Type 3 you can maintain only one level of historical data and the current data.
Based upon the requirement we have to choose the most suitable.
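As a rough SQL sketch of the difference (the dim_product table, its columns and values are hypothetical):
-- Type 2: close out the current row and insert a new version row (full history)
UPDATE dim_product
SET end_date = SYSDATE, current_flag = 'N'
WHERE product_id = 101 AND current_flag = 'Y';
INSERT INTO dim_product (product_key, product_id, price, eff_date, end_date, current_flag)
VALUES (dim_product_seq.NEXTVAL, 101, 12.99, SYSDATE, NULL, 'Y');
-- Type 3: overwrite in place, keeping only the previous value in an extra column
UPDATE dim_product
SET prev_price = price, price = 12.99
WHERE product_id = 101;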
=======================================
342.Informatica - Source Qualifier in Informatica
QUESTION #342 What is the technical reason for having the Source Qualifier in
Informatica? Can a mapping be implemented without it? (Please don't mention
the functionality of the SQ, but the main reason why a mapping can't do without
it...)
No best answer available. Please pick the good answer available or submit your answer.
March 27, 2007 06:32:28 #1
chowdary
=======================================
In Informatica,
the Source Qualifier reads data from the sources.
=======================================
The SQ reads data from the sources when the Informatica server runs the session; data cannot be passed to any other
transformation without the SQ transformation.
It has other qualities also: it can be used as a filter.
It can be used as a sorter and also to select distinct values.
It can be used as a joiner if the data is coming from the same source.
=======================================
In Informatica the Source Qualifier acts as a staging area, and it creates its own (Informatica native) data types,
which are related to the source data types.
=======================================
343.Informatica - what is the difference between mapplet and
reusable Transformation?
QUESTION #343
No best answer available. Please pick the good answer available or submit your
answer.
March 30, 2007 02:38:57 #1
veera_kk Member Since: March 2007 Contribution: 3
Hi
1. In a LOOKUP SQL override, if you add or subtract ports from the SELECT statement, the
session fails.
2. In a LOOKUP, if you override the ORDER BY statement, the session fails if the ORDER BY
statement does not contain the condition ports in the same order they appear in the lookup
condition.
=======================================
You can use a SQL override in a lookup if you have:
1. More than one lookup table.
2. A WHERE condition to reduce the records in the cache.
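A minimal sketch of such an override (the CUST table, its columns, and the flag are hypothetical). Per the restrictions above, the SELECT list keeps the lookup ports and the ORDER BY lists the condition port; the trailing -- is commonly used to comment out the ORDER BY that the server itself appends:
SELECT CUST.CUSTOMER_NAME AS CUSTOMER_NAME,
       CUST.CUSTOMER_ID AS CUSTOMER_ID
FROM CUST
WHERE CUST.ACTIVE_FLAG = 'Y'
ORDER BY CUST.CUSTOMER_ID --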
=======================================
If you write a query in the Source Qualifier (overriding it using the SQL editor) and press Validate, you can
recognise whether the query written is right or wrong.
But in a lookup override, if the query is wrong and you press the Validate button, you cannot recognise it;
only when you run the session will you get an error message, and the session fails.
=======================================
345.Informatica - How can you call trigger in stored procedure
transformation
QUESTION #345
No best answer available. Please pick the good answer available or submit your
answer.
May 07, 2007 22:54:43 #1
hanug Member Since: June 2006 Contribution: 24
Hi:
A trigger cannot be called from a stored procedure. A trigger executes implicitly when you perform a
DML operation on the table or view (INSTEAD OF triggers in the case of views).
You can find the difference between a trigger and a stored procedure anywhere in the docs.
Hanu.
=======================================
346.Informatica - How to assign a work flow to multiple servers?
QUESTION #346 I have multiple servers; I want to assign a workflow to
multiple servers.
No best answer available. Please pick the good answer available or submit your answer.
October 19, 2007 15:08:36 #1
krishna
=======================================
The Informatica server uses the Load Manager process to run the workflow;
the Load Manager assigns the workflow process to the multiple servers.
=======================================
347.Informatica - what types of Errors occur when you run a
session, can you describe them with real time example
QUESTION #347
No best answer available. Please pick the good answer available or submit your
answer.
January 24, 2008 19:12:31 #1
RE: what types of Errors occur when you run a session, can you describe them with real time
example
RE: how do you add and delete header , footer records ...
1) Within the Informatica session we can sequence the data such that the header flows in first and the
footer flows in last. This only holds true when the header and footer have the same format as the
detail record.
2) As soon as the session that generates the detail record file finishes, we can call a Unix script or Unix command
through a Command task, which will concatenate the header file, detail file and footer file and generate the
required file.
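A minimal sketch of such a command (the file names are hypothetical), run from the Command task once the detail file exists:
cat header.dat detail.dat footer.dat > final_output.dat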
=======================================
349.Informatica - Can we update target table without using
update strategy transformation? why?
QUESTION #349
No best answer available. Please pick the good answer available or submit your
answer.
Use slowly changing dimension wizard and make necessary changes as per your logic.
Hanu.
=======================================
351.Informatica - What are the general reasons of session failure
with Look Up having Dynamic Cache?
QUESTION #351
No best answer available. Please pick the good answer available or submit your
answer.
April 24, 2007 15:52:27 #1
shanthi1 Member Since: March 2007 Contribution: 6
By using the Source Qualifier transformation we can filter out records only from relational sources, but by
using the Filter transformation we can filter out records from any source.
=======================================
By using the Source Qualifier we can filter out records only from relational sources. But by using the Filter
transformation we can filter out records from any source.
In the Filter transformation we can use any expression that evaluates to TRUE or FALSE to prepare the filter
condition. The same cannot be done using the Source Qualifier.
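As a hedged illustration of the difference (column names hypothetical): a Source Qualifier source filter is a SQL fragment evaluated by the database, while a Filter transformation condition is an Informatica expression evaluated row by row:
-- Source Qualifier source filter (appended to the generated WHERE clause):
EMP.SAL > 1000
-- Filter transformation condition (Informatica expression syntax):
IIF(ISNULL(COMM), FALSE, SAL + COMM > 1000)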
=======================================
A Source Qualifier transformation is the starting point in the mapping, where the incoming
source data is extracted after connecting to the source database.
A Filter transformation is placed in the mapping pipeline in order to pass on only the
records that satisfy specific conditions.
Of course, the same purpose can be served by the Source Qualifier transformation if it is extracting
data from a relational source; whereas if the data is extracted from a flat file, then we cannot
do it using the Source Qualifier.
=======================================
353.Informatica - How do you recover a session or folder if you
accidentally dropped them?
QUESTION #353
No best answer available. Please pick the good answer available or submit your
answer.
May 07, 2007 22:58:05 #1
hanug Member Since: June 2006 Contribution: 24
You can find your backup and restore from the backup. If you don't have a backup, you have lost everything; you can't get it
back.
That's why we should always take a backup of the objects that we create.
Hanu.
=======================================
354.Informatica - How do you automatically execute a batch or
session?
QUESTION #354
No best answer available. Please pick the good answer available or submit your
answer.
May 07, 2007 22:51:18 #1
hanug Member Since: June 2006 Contribution: 24
Hanu.
=======================================
355.Informatica - What is the best way to modify a mapping if the
target table name is changed?
QUESTION #355
No best answer available. Please pick the good answer available or submit your
answer.
June 20, 2007 03:12:31 #1
yuva010
RE: How can you join two tables without using joiner a...
If one of the tables contains a single record, you don't need to use a SQL override or a Joiner to join records.
Let it perform a Cartesian product; this way you don't need either (SQL override / Joiner).
Hanu.
=======================================
You can join two tables within the same database by using a lookup query override.
=======================================
If the sources are homogeneous we use the Source Qualifier;
if the sources have the same structure we can use a Union transformation.
=======================================
359.Informatica - What does Check-In and Check-Out option refer
to in the mapping designer?
QUESTION #359
No best answer available. Please pick the good answer available or submit your
answer.
May 15, 2007 11:21:21 #1
ramgan_tryst Member Since: May 2007 Contribution: 2
The joiner cache is used in the Joiner t/r to improve performance. While using the joiner cache, the Informatica
server first reads the data from the master source and builds the index and data caches on the master rows. After
building the cache, the Joiner t/r reads records from the detail source and performs the joins.
=======================================
361.Informatica - Where do the records go which do not
satisfy the condition in the filter transformation?
QUESTION #361
No best answer available. Please pick the good answer available or submit your
answer.
July 06, 2007 05:14:44 #1
pkonakalla Member Since: May 2007 Contribution: 2
RE: where does the records goes which does not satisfy...
It goes to the default group. If you connect the default group to an output, PowerCenter processes the
data; otherwise it doesn't process the default group.
=======================================
The rows which do not satisfy the filter condition are discarded. They do not appear in the session
log file or reject files.
=======================================
There is no default group in the Filter transformation. The records which do not satisfy the filter condition
are discarded and not written to the reject file or session log file.
=======================================
362.Informatica - How can we access MAINFRAME tables in
INFORMATICA as a source ?
QUESTION #362 ex: Suppose a table EMP is on a MAINFRAME; then how can we
access this table as a SOURCE TABLE in Informatica?
No best answer available. Please pick the good answer available or submit your answer.
May 26, 2007 14:32:55 #1
vishnukirank Member Since: May 2007 Contribution: 1
=======================================
Use Informatica PowerConnect to connect to external systems like mainframes and import the source
tables.
=======================================
Use the Normalizer transformation to take in the mainframe (COBOL) sources.
=======================================
363.Informatica - How to run a workflow without using the GUI, i.e.,
Workflow Manager, Workflow Monitor and pmcmd?
QUESTION #363
No best answer available. Please pick the good answer available or submit your
answer.
August 03, 2007 03:56:21 #1
balaetl Member Since: November 2005 Contribution: 3
Unless the job is scheduled you cannot manually run a workflow without using a GUI.
=======================================
364.Informatica - How to implement de-normalization concept in
Informatica Mappings?
QUESTION #364
No best answer available. Please pick the good answer available or submit your
answer.
January 24, 2008 19:04:25 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: What are the Data Cleansing Tools used in the DWH?...
RE: What is Repository size, What is its min and max ...
Select Col_1, Col_2, Col..n from source group by Col_1, Col_2, Col..n having count(*)>1
Best Regards
Samir Desai.
=======================================
If you take e.g. an EMP table having duplicate records,
the query is:
SELECT * FROM EMP WHERE EMPNO IN (SELECT EMPNO FROM EMP GROUP BY EMPNO
HAVING COUNT(*) > 1);
Sanjeeva Reddy
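The query above only finds the duplicates; to remove them while keeping one row per EMPNO, a common Oracle sketch uses ROWID:
-- keeps the row with the smallest ROWID for each EMPNO
DELETE FROM emp
WHERE rowid NOT IN (SELECT MIN(rowid) FROM emp GROUP BY empno);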
=======================================
370.Informatica - What are the Commit & Commit Intervals?
QUESTION #370
No best answer available. Please pick the good answer available or submit your
answer.
July 31, 2007 13:05:53 #1
rasmi Member Since: June 2007 Contribution: 20
Rekha
=======================================
hi,
the server uses the OPB_SRVR_RECOVERY table and notes the rowid of the last row committed, then resumes
from the next rowid. For session recovery to take place, at least one commit must have been performed.
=======================================
372.Informatica - Explain pmcmd?
QUESTION #372
No best answer available. Please pick the good answer available or submit your
answer.
July 31, 2007 11:54:03 #1
rasmi Member Since: June 2007 Contribution: 20
=======================================
pmcmd means the PowerMart command-line program, which is used to perform tasks from the command prompt
rather than from the Informatica GUI.
=======================================
It is a command-line program. It performs tasks such as starting and stopping workflows and tasks, scheduling workflows, and pinging the service.
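A sketch of a typical invocation (the service, domain, user, folder and workflow names are hypothetical, and exact flags vary by PowerCenter version):
pmcmd startworkflow -sv IS_dev -d Domain_dev -u admin -p adminpwd -f DW_Folder wf_load_customers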
=======================================
373.Informatica - What does the first column of bad file (rejected
rows) indicate? Explain
QUESTION #373
No best answer available. Please pick the good answer available or submit your
answer.
RE: What does the first column of bad file (rejected r...
Row indicator: the row indicator tells the writer what to do with the row of data:
0 - Insert (target/writer)
1 - Update (target/writer)
2 - Delete (target/writer)
3 - Reject (writer)
If the row indicator is 3, the writer rejects the row because the update strategy expression marked it as
reject.
=======================================
374.Informatica - What is the size of data mart?
QUESTION #374
No best answer available. Please pick the good answer available or submit your
answer.
August 22, 2007 05:24:31 #1
hpadala
A data mart is a part of the data warehouse. For example, in an organization we can manage employee
personal information as one data mart and project information as another data mart; one data warehouse
may have any number of data marts. The size of the data mart depends on your business needs; it varies
business to business. Sometimes an OLTP database may act as a data mart for the warehouse.
Cheers
Thana
=======================================
375.Informatica - What is meant by named cache? In what
situation can we use it?
QUESTION #375
No best answer available. Please pick the good answer available or submit your
answer.
August 24, 2007 13:54:43 #1
sn3508 Member Since: April 2006 Contribution: 20
By default there is no name for the cache in a Lookup transformation, and every time you run the session
the cache is rebuilt. If you give it a name, it is called a persistent cache. In this case, the first time
you run the session the cache is built, and the same cache is used for any number of runs. This means
the cache doesn't have any changes reflected to it even if the lookup source is changed. You can rebuild
it by deleting the cache.
=======================================
376.Informatica - How do you define a factless fact table in
Informatica?
QUESTION #376
No best answer available. Please pick the good answer available or submit your
answer.
RE: what are the main issues while working with flat files as source and as targets ?
We need to specify the correct path in the session and mention whether that file is 'direct' or 'indirect'. Keep
that file in the exact path which you have specified in the session.
-regards
rasmi
=======================================
1. We cannot use SQL override; we have to use transformations for all our requirements.
2. Testing the flat files is a very tedious job.
3. The file format (source/target definition) should match exactly with the format of the data file. Most of
the time erroneous results come when the data file layout is not in sync with the actual file:
(i) Your data file may be fixed-width but the definition is delimited ----> truncated data.
(ii) Your data file as well as the definition is delimited, but you specify a wrong delimiter: (a) a delimiter
other than the one present in the actual file, or (b) a delimiter that comes as a character in some field of the file ---
> wrong data again.
(iii) Not specifying the NULL character properly may result in wrong data.
(iv) There are other settings/attributes while creating the file definition about which one should be very careful.
4. If you miss the link to any column of the target, then all the data will be placed in wrong fields; that
missed column won't exist in the target data file.
Please keep adding to this list; there are tremendous challenges which can be overcome by being a bit
careful.
=======================================
378.Informatica - When do you use Normal Loading and the Bulk
Loading, Tell the difference?
QUESTION #378
No best answer available. Please pick the good answer available or submit your
answer.
September 19, 2007 11:35:05 #1
rama krishna
RE: When do you use Normal Loading and the Bulk Loadin...
If we use SQL*Loader connections then it is better to go for bulk loading, and if we use ODBC
connections for the source and target definitions then it is better to go for normal loading.
If we use bulk loading then the session performance will be increased:
with bulk loading the data BYPASSES the database logs, so
performance automatically increases.
=======================================
Normal load: it loads the records one by one and the server writes a log entry for each record, so it takes more
time to load the data.
Bulk load: it loads a number of records at a time; it does not write any log files or tracing levels, so it
takes less time.
=======================================
You would use normal loading when the target table is indexed, and you would use bulk loading when
the target table is not indexed. Running a bulk load into an indexed table will cause the session to fail.
=======================================
379.Informatica - What is key factor in BRD?
QUESTION #379
No best answer available. Please pick the good answer available or submit your
answer.
January 24, 2008 19:05:37 #1
kasisarath Member Since: January 2008 Contribution: 6
We do. We have to update or insert a row in the target depending upon the data from the sources. So,
in order to split the rows either to update or insert into the target table, we use a Lookup transformation
with reference to the target table and compare it with the source table.
=======================================
382.Informatica - Which SDLC suits best for the datawarehousing
project.
QUESTION #382
No best answer available. Please pick the good answer available or submit your
answer.
September 12, 2007 17:28:25 #1
anjanroy Member Since: September 2005 Contribution: 4
Datawarehousing projects are different from the traditional OLTP project. First of all they are ongoing.
A datawarehouse project is never "complete". Here most of the time the business users would say -
"give us the data and then we will tell you what do we want". So here a traditional waterfall model is
not the optimal SDLC approach.
The best approach here is of a phased iteration - where you implement and deliver projects in small
manageable chunks (90 days ~ 1 qtr) and keep maturing your data warehouse.
=======================================
383.Informatica - What is the difference between source
definition database and source qualifier?
QUESTION #383
No best answer available. Please pick the good answer available or submit your
answer.
September 24, 2007 13:26:52 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: What is the logic will you implement to load data ...
We can do this by using the mapping wizards, which are of basically two types:
the Getting Started wizard is used when there is no need to track changes to the previous data;
the SCD (slowly changing dimension) wizard can hold the historical data.
=======================================
385.Informatica - What is a Shortcut and What is the difference
between a Shortcut and a Reusable Transformation?
QUESTION #385
No best answer available. Please pick the good answer available or submit your
answer.
January 29, 2008 23:32:01 #1
Sant_parkash Member Since: October 2007 Contribution: 22
RE: What is a Shortcut and What is the difference between a Shortcut and a Reusable
Transformation?
I have 10000 records in a flat file, of which 100 records are duplicate
records.
We want to eliminate those records; which is the best method to
follow?
Regards
Mahesh Reddy
Submitted by : vivek1708
Or
by using the sorter and distinct option, load the unique rows in a temp table
followed by a truncate on the original table
and moving data back to it from the temp table.
hope it helps.
Thanks
kumar
=======================================
Use an Aggregator, grouping on the primary keys.
=======================================
388.Informatica - what is the difference between reusable
transformation and mapplets?
QUESTION #388
No best answer available. Please pick the good answer available or submit your
answer.
November 20, 2007 03:19:32 #1
ramesh raju
No best answer available. Please pick the good answer available or submit your answer.
November 17, 2007 06:59:15 #1
Thananjayan Member Since: November 2007 Contribution: 15
=======================================
The Lookup transformation compares the source with the specified target table and forwards to the
next transformation only the matched records. If there is no match, it returns NULL.
=======================================
390.Informatica - How do you maintain Historical data and how
to retrieve the historical data?
QUESTION #390
No best answer available. Please pick the good answer available or submit your
answer.
October 23, 2007 12:02:59 #1
ravi
=======================================
You can maintain the historical data by designing the mapping using the slowly changing dimension types.
If you need to insert new and update old data, it is best to go for the Update Strategy.
If you need to maintain the history of the data (for example, the cost of a product changes frequently
but you would like to maintain all the rate history), go for SCD Type 2.
The design changes as per your requirement. If you make your question clearer, I can provide
more information.
Cheers
Thana
=======================================
391.Informatica - What is the difference between cbl (constraint-based loading) and target-based commit?
RE: What is the difference between cbl (constraint-based loading) and target-based commit? When do we
use cbl?
=======================================
392.Informatica - Repository deletion
QUESTION #392 What happens when a repository is deleted?
If it is deleted for some time, how can we delete it permanently?
Where is it stored (address of the file)?
No best answer available. Please pick the good answer available or submit your answer.
November 28, 2007 22:36:19 #1
chandrarekha Member Since: May 2007 Contribution: 14
=======================================
I too want to know the answer to this question.
I tried to delete the repository; it was deleted, but when I then tried to create a repository with the
same name as the one I deleted earlier, it showed that the repository still exists in the target.
We have to delete the repository in the target database.
=======================================
The repository is stored in a database. Go to the Repository Admin Console, right-click on the repository, then choose
Delete. A dialog box is displayed; fill in the user name (the database user where your repository resides), give the
password and fill in all the fields; thereafter you can delete it. If any queries, please mail me: sajjan.s25@gmail.com
=======================================
393.Informatica - What is pre-session and post-session?
QUESTION #393
No best answer available. Please pick the good answer available or submit your
answer.
November 13, 2007 23:55:12 #1
vizaik Member Since: March 2007 Contribution: 30
INF basic data flow means extraction, transformation and loading of data from the source to the target.
Cheers
Nick
=======================================
395.Informatica - which activities can be performed using the
repository manager?
QUESTION #395
No best answer available. Please pick the good answer available or submit your
answer.
October 27, 2007 00:47:25 #1
karuna
You can join the data from multiple tables in the same database by using a lookup override.
=======================================
A lookup override can be used to get some specific records (using filters in the WHERE clause) from the lookup
table. The advantage is that the whole table need not be looked up.
=======================================
398.Informatica - What are tracing levels in transformation?
QUESTION #398
No best answer available. Please pick the good answer available or submit your
answer.
November 19, 2007 00:18:10 #1
Abhishek Shukla
Thanks
Abhishek Shukla
=======================================
Tracing level in the case of Informatica specifies the level of detail of information that is recorded
in the session log file while executing the workflow:
1. Normal: specifies the initialization and status information, summarization of the success rows
and target rows, and information about the skipped rows due to transformation errors.
2. Terse: specifies only the initialization information, error messages, and notification of rejected data.
3. Verbose initialization: in addition to normal tracing, specifies the location of the data cache
files and index cache files that are created, and detailed transformation statistics for each and every
transformation within the mapping.
4. Verbose data: along with verbose initialization, records each and every row processed by the
Informatica server.
For better performance of mapping execution the tracing level should be specified as TERSE.
Verbose initialization and verbose data are used for debugging purposes.
=======================================