Implementing HR Analytics using Universal Adaptors
A technical document covering various aspects of the product as it applies to Oracle Business Intelligence Applications HR Universal Adaptors
Oracle Corporation |
8.1. Core Workforce Fact Process ... 15
8.1.1. ETL Flow ... 15
8.1.2. Workforce Fact Staging (W_WRKFC_EVT_FS) ... 18
8.1.3. Workforce Base Fact (W_WRKFC_EVT_F) ... 20
8.1.4. Workforce Age Fact (W_WRKFC_EVT_AGE_F) ... 21
8.1.5. Workforce Period of Work Fact (W_WRKFC_EVT_POW_F) ... 22
8.1.6. Workforce Merge Fact (W_WRKFC_EVT_MERGE_F) ... 23
8.1.7. Workforce Month Snapshot Fact (W_WRKFC_EVT_MONTH_F) ... 24
8.1.8. Workforce Aggregate Fact (W_WRKFC_BAL_A) ... 25
8.1.9. Workforce Aggregate Event Fact (W_WRKFC_EVT_A) ... 27
8.1.10. Handling Deletes ... 29
8.1.11. Propagating to derived facts ... 30
8.1.12. Date-tracked Deletes ... 30
8.1.13. Purges ... 30
8.1.14. Primary Extract ... 31
8.1.15. Identify Delete ... 31
8.1.16. Soft Delete ... 32
8.1.17. Date-Tracked Deletes - Worked Example ... 33
8.2. Recruitment Fact Process ... 34
8.2.1. ETL Flow ... 34
8.2.2. Job Requisition Event & Application Event Facts (W_JOB_RQSTN_EVENT_F & W_APPL_EVENT_F) ... 36
8.2.3. Job Requisition Accumulated Snapshot Fact (W_JOB_RQSTN_ACC_SNP_F) ... 37
8.2.4. Applicant Accumulated Snapshot Fact (W_APPL_ACC_SNP_F) ... 38
8.2.5. Recruitment Pipeline Event Fact (W_RCRTMNT_EVENT_F) ... 39
8.2.6. Recruitment Job Requisition Aggregate Fact (W_RCRTMNT_RQSTN_A) ... 42
8.3. Absence Fact Process ... 47
8.3.1. ETL Flow ... 47
8.3.2. Absence Event Fact (W_ABSENCE_EVENT_F) ... 49
8.4. Learning Fact Process ... 52
8.4.1. ETL Flow ... 52
8.4.2. Learning Enrollment Accumulated Snapshot Fact (W_LM_ENROLLMENT_ACC_SNP_F) ... 54
8.4.3. Learning Enrollment Event Fact (W_LM_ENROLLMENT_EVENT_F) ... 55
8.5. Payroll Fact Process ... 56
8.5.1. ETL Flow ... 56
8.5.2. Payroll Fact (W_PAYROLL_F) ... 58
8.5.3. Payroll Aggregate Fact (W_PAYROLL_A) ... 59
The purpose of this document is to provide the information one might need when attempting an implementation of HR Analytics using the Oracle BI Applications Universal Adaptors. There are several myths around what needs to be done when implementing Universal Adaptors: where things can go wrong if not configured correctly, which columns must be populated, how to provide the delta data set for incremental ETL runs, and so on. All of these topics are discussed in this document.
Apart from understanding the entry points that are required to implement HR Analytics, it also helps to know the process details of some key components of HR Analytics. A few of these key facts and dimensions are also discussed, and an overview of their processing and usage is provided towards the end.
Knowing the list of files/tables to be populated (entry points), the grain of such tables, and the kind of data they expect has always been a key pain point for implementers. This is also discussed in detail, and an Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx) that addresses all such needs is provided along with this document.
This document is intended for Oracle BI Applications releases 7.9.6, 7.9.6.1, 7.9.6.2 and 7.9.6.3. It will be updated for upcoming releases in due course.
Oracle BI Applications provides packaged ETL mappings against source OLTP systems such as Oracle E-Business Suite, PeopleSoft, JD Edwards and Siebel, across various business areas such as Human Resources, Supply Chain & Procurement, Order Management, Financials, Service and so on. However, Oracle BI Applications acknowledges that there can be quite a few other source systems, including home-grown ones, typically used by SMB customers. Some enterprise customers may also be using SAP as their source. Until Oracle BI Applications can deliver pre-built ETL adaptors against each of these source systems, the Universal Adaptor is a viable choice.
A mixed OLTP landscape, where one system has a pre-built adaptor and another doesn't, is also a scenario that calls for the Universal Adaptors. For instance, the core portion of Human Resources may be in a PeopleSoft system, while the Talent Management portion is maintained in a 3rd-party system like Taleo.
To enable customers to pull data from non-supported source systems into the Data Warehouse, Oracle BI Applications created the so-called Universal Adaptor. This was doable in the first place because the ETL architecture of Oracle BI Applications has inherent support for it. The Oracle BI Applications Data Warehouse consists of a huge set of fact, dimension and aggregate tables. The portion of the ETL that loads these end tables is typically "Source Independent" (loaded using the Informatica folder SILOS). These ETL maps start from a staging table and load data incrementally into the corresponding end table. Aggregates are created upstream, and have no relation to which source system the data came from. The "Source Dependent Extract" portion of the ETL, which extracts into these staging tables (also called Universal Stage Tables), is the part that goes against a given source system, such as EBS or PSFT. For Universal, it goes against a similarly structured CSV file. Take any adaptor: the universal stage tables are structurally identical, and the grain expectation is exactly the same for all adaptors. As long as these conditions are met, the SILOS part will seamlessly load the data (extracted from Universal) from the universal stage tables.
Why did Oracle BI Applications decide to source from CSV files? In short, to complete the end-to-end extract-transform-load story. We will cover this, and what the options are, in a bit more detail in the next section.
One myth that implementers have when implementing Universal Adaptors is that data for the universal staging tables should always be presented to Oracle BI Applications in the required CSV file format.
If your source data is already present in a relational database, why dump it to CSV files and give it to
Oracle BI Applications? You will anyway have to write brand new ETL mappings that read from those
relational tables to get to the right grain and right columns. Then why target those to CSV files and then
use the Oracle BI Applications Universal Adaptor to read from them and write to the universal staging
tables? Why not directly target those custom ETL maps to the universal staging tables? In fact, when
your source data is in relational tables, this is the preferred approach.
[Diagram: two options for an OLTP (relational DB) source: (1) a custom extract directly to the Data Warehouse staging tables, or (2) a custom extract to CSV files, from which the Universal Adaptor loads the staging tables.]
However, if your source data comes from 3rd-party sources to which you have outsourced, with agreements for them to send you data files/reports periodically, and if that 3rd-party source doesn't allow you to access their relational schema, then CSV files are probably the only alternative. A typical example is Payroll data. Many organizations outsource their Payroll to 3rd-party companies such as ADP. In those cases, ask for the data in the same format that the Oracle BI Applications CSV files expect.
Also, if your source data lies in IBM mainframe systems, where it is typically easier to write COBOL programs (or similar) to extract the data into flat files, presenting CSV files to the Oracle BI Applications Universal Adaptor is probably easier. Irrespective of how you populate the universal staging tables (relational sources or CSV sources), some very important points should always be kept in mind:
The grain of the universal staging tables is met properly.
The uniqueness of records exists in the (typically) INTEGRATION_ID columns.
The mandatory columns are populated the way they should be.
Note: For the rest of the document, we will assume that you are taking the CSV file approach, although, reiterating, if your source data is stored in a relational database it is recommended that you write your own extract mappings.
There are several entry points when implementing HR Analytics using Universal Adaptors. The base dimension tables and base fact tables have corresponding CSV files in which you should configure the data at the right grain and to the right expectations. Other kinds of tables include Exchange Rate and Codes. Exchange Rate (W_EXCH_RATE_G) has its own single CSV file, whereas the Codes table (W_CODE_D) has one CSV file per code category. To see all code names properly in the dashboards/reports, you should configure all the required code CSV files [the list of such files is provided in the associated spreadsheet (HR_Analytics_UA_Lineage.xlsx)].
[Diagram: entry points: common HR dimensions, fact-specific HR dimensions, HR event dimensions, code dimensions, other common dimensions and common class dimensions feed the Workforce fact and the other facts, supported by domain CSV files.]
Key points:
Start with populating the core HR dimension CSV files, like Employee, Job, Pay Grade, HR Position, Employment etc.
Then configure subject-area-specific HR dimensions, like Job Requisition, Recruitment Source etc. (when implementing Recruitment), Learning Grade, Course (when implementing Learning), Pay Type etc. (when implementing Payroll), or Absence Event, Absence Type Reason (when implementing Absence) and so on.
Two important HR event dimensions should go next: Workforce Event Type (mandatory for all implementations) and Recruitment Event Type (if implementing Recruitment). These tables can decide the overall success of the implementation.
Identifying events from your source system and stamping them with the domain values known to Oracle Business Intelligence are the key aspects here.
Then configure the related COMMON class dimensions applicable to all, like Internal Organizations (logical/applicable partitions being Department, Company Organization, Job Requisition Organization, Business Unit etc.) or Business Locations (logical/applicable partition being Employee Locations).
Consider other shared dimensions and helper dimensions like Status (logical/applicable partition being Recruitment Status), Exchange Rate etc.
Then consider the code dimensions. By this time you already know which dimensions you plan to implement, and hence can save time by configuring the CSVs for only the corresponding code categories.
For facts, start with configuring the Workforce Event fact. Since the dimensions are already configured, the natural keys of the dimensions are already known to you, and hence it should be easy to configure them in the fact.
The Workforce fact should be followed by the subject-area-specific HR facts, like Payroll, Job Requisition Event, Applicant Event, Learning Enrollment, and Learning Enrollment Accumulated Snapshot. Note that the Absence fact does not need a CSV file to be populated; it is populated from the Absence Event dimension, the Workforce Event fact and the time dimension as a Post Load process.
Now that all the CSV files for facts, dimensions and helper tables are populated, you should move your focus towards the domain values. For the E-Business Suite and PeopleSoft Adaptors, we do mid-stream lookups against preconfigured lookup CSV files; the map between source values/codes and their corresponding domain values/codes comes pre-configured in these lookup files. However, for the Universal Adaptor, no such lookup files exist, because we expect that the accurate domain values/codes will be configured along with configuring the base dimension tables where they apply. Since everything is from a CSV file, there is no need for the overhead of an additional lookup file acting in the middle. Domain value columns begin with W_ [excluding the system columns like W_INSERT_DT and W_UPDATE_DT]; normally they are mandatory, cannot be null, and the value set cannot be changed or extended. We do relax the extension part on a case-by-case basis, but in no way can the values be changed. The recommendation at this stage is to go to the DMR (Data Model Reference) guide, get the list of table-wise domain values, understand the relationships clearly in case there exist any hierarchical or orthogonal relations, identify the tables where they apply and their corresponding CSV files, look at the source data, and configure the domain values in the same CSV files. Note that if your source data is in a relational database and you have chosen the alternative route of creating all extract mappings yourself, the recommendation is to follow what we have done for the E-Business Suite and PeopleSoft Adaptors: create separate domain value lookup CSV files and do a mid-stream lookup.
Last, but not least, configure the parameters in DAC (Data Warehouse Administration Console). Read the documentation for these parameters, understand their expectations, study your own business requirements, and then set the values accordingly.
Domain values constitute a very important foundation for Oracle Business Intelligence Applications. We use this concept heavily across the board to equalize similar aspects from a variety of source systems. Oracle Business Intelligence Applications provide packaged data warehouse solutions for various source systems such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. We attempt to provide a "source dependent extract" type of mapping that leads to a "source independent load" type of mapping, followed by a "post load" (also source independent) type of mapping. With data possibly coming in from a variety of source systems, this equalization is necessary. Moreover, the reporting metadata (OBIEE RPD) is also source independent, and so are the metric calculations.
The following diagram shows how a worker status code/value is mapped onto a warehouse domain to conform to a single target set of values. The domain is then re-used by any measures that are based on worker status.
[Diagram: source-to-domain mapping. OLTP 1: Active → ACTIVE, Inactive → INACTIVE. OLTP 2: Active → ACTIVE, Suspended → INACTIVE, Terminated → INACTIVE. The conformed domain is then reused by the "Active" measures in the Data Warehouse.]
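The mapping above can be pictured as a simple conformance table. A minimal sketch, using only the example values from the diagram (the real value sets come from the DMR guide):

```python
# Worker status conformance: source value -> warehouse domain value.
# The system names and value sets here are illustrative placeholders.
WORKER_STATUS_DOMAIN = {
    "OLTP1": {"Active": "ACTIVE", "Inactive": "INACTIVE"},
    "OLTP2": {"Active": "ACTIVE", "Suspended": "INACTIVE", "Terminated": "INACTIVE"},
}

def to_domain(source_system, source_value):
    # Fail loudly rather than guessing: an unmapped source value would
    # otherwise silently break domain-based metrics downstream.
    try:
        return WORKER_STATUS_DOMAIN[source_system][source_value]
    except KeyError:
        raise ValueError(f"no domain value for {source_system}/{source_value}")
```

Equalizing both systems onto one value set is what lets a single "Active Headcount" metric work regardless of which OLTP the row came from.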
Domain values help us equalize similar aspects or attributes as they come from different source systems. We use these values in our ETL logic, sometimes even as hard-coded filters, and in defining our reporting layer metrics. Hence, not configuring these domain value columns, configuring them incorrectly, or changing their values from what we expect will lead to unpredictable results.
Oracle BI Applications mostly use Name and Description columns in the out-of-the-box dashboards and reports; Codes are used only during calculations, wherever required. Therefore, if the names and descriptions do not resolve against their codes during the ETL, you will see blank attribute values (or, depending on the DAC parameter setting, strings like <Source_Code_Not_Supplied> or <Source_Name_Not_Supplied>).
Another point to keep in mind is that all codes should have distinct name values. If two or more codes have the same name value, they will appear merged at the report level. The metric values may sometimes appear on different lines of the report, because the OBIEE Server typically adds a GROUP BY clause on the lowest attribute (the code).
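One simplified view of the merge can be illustrated with a toy report: when only the name attribute is selected, two distinct codes sharing a name collapse into a single line with a combined metric (values are illustrative).

```python
from collections import defaultdict

def report_by_name(rows):
    """rows: (code, name, headcount) tuples; returns a report grouped on name.

    Two distinct codes with the same name collapse into one line, which is
    why codes should have distinct name values.
    """
    out = defaultdict(int)
    for code, name, headcount in rows:
        out[name] += headcount
    return dict(out)
```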
Once implemented, you are free to promote the code columns from the logical layer to the presentation layer. You might do this when you know your business users are more acquainted with code values than with name values. But that is a separate business decision; it is not the general behavior.
Although you can choose to supply the entire data set during incremental runs, for all practical purposes this is not recommended. Firstly, the ETL then has to process all the records and determine what needs to be applied and what can be rejected. Secondly, the decision the ETL takes may not be accurate. ETL decisions are based only on the values of the system date columns CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT. We do not explicitly compare column by column to determine whether an update is required. We assume that if something has changed, one of these date columns must have changed, and in that case we simply update. If all 5 date columns are the same, we reject the record. The basis of this decision is the correctness of the date columns. If your source system does not track the last-updated date on a record well enough, it becomes your responsibility to force an update, no matter what. An easy way to do this is to set SESSSTARTTIME in one of these columns during extract. This forces a change to be detected, and we end up updating.
Of course, this is not the best idea. By all means, you should provide the true delta data set during every incremental run. A small amount of overlap is acceptable, especially when you deal with flat files. Our generally accepted rules for facts or large dimensions are either:
The customer does their own version of persisted staging, so they can determine changes at the earliest opportunity and load only changes into the universal staging tables.
If it is absolutely impossible to determine the delta or to go the persisted staging route, the customer does only full loads; doing a full extract every time and processing it incrementally will take longer.
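The persisted-staging option above can be sketched as follows: keep a hash of every record previously sent, and emit only new or changed records on each run. The table layout and key column are illustrative.

```python
import hashlib

def row_hash(row: dict) -> str:
    # Deterministic fingerprint of all column values in a record.
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(payload.encode()).hexdigest()

def delta(current_rows, persisted):
    """current_rows: the full extract; persisted: {integration_id: hash}
    carried over from the previous run. Returns only new/changed rows and
    updates the persisted hashes in place."""
    changed = []
    for row in current_rows:
        h = row_hash(row)
        key = row["INTEGRATION_ID"]  # illustrative record key
        if persisted.get(key) != h:
            changed.append(row)
            persisted[key] = h
    return changed
```

This determines changes at the earliest opportunity, so the universal staging tables receive a true delta even when the source cannot provide reliable last-updated dates.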
[Diagram: Workforce fact W_WRKFC_EVT_F with dimension aggregates W_EMPLOYMENT_STAT_CAT_D and W_WRKFC_EVENT_GROUP_D and the month dimension W_MONTH_D.]
[Table: target tables and their primary sources; the Grain and Description columns were not recovered.]
W_WRKFC_EVT_FS: flat file (source adaptors)
W_WRKFC_EVT_F: W_WRKFC_EVT_FS
W_WRKFC_EVT_AGE_F: W_AGE_BAND_D, W_WRKFC_EVT_F
W_WRKFC_EVT_POW_F: W_PRD_OF_WRK_BAND_D, W_WRKFC_EVT_F
W_WRKFC_EVT_MERGE_F: W_WRKFC_EVT_F, W_WRKFC_EVT_AGE_F, W_WRKFC_EVT_POW_F
W_WRKFC_EVT_MONTH_F: W_WRKFC_EVT_MERGE_F, W_MONTH_D
W_WRKFC_EVT_EQ_TMP: W_WRKFC_EVT_FS
W_WRKFC_EVT_MONTH_EQ_TMP: W_WRKFC_EVT_EQ_TMP, W_WRKFC_EVT_F, W_MONTH_D
W_WRKFC_BAL_A_EQ_TMP: W_WRKFC_EVT_MONTH_EQ_TMP, W_WRKFC_EVT_MONTH_F, W_EMPLOYMENT_STAT_CAT_D
W_WRKFC_EVT_A_EQ_TMP: W_WRKFC_EVT_EQ_TMP, W_WRKFC_EVT_MERGE_F, W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVENT_GROUP_D
W_EMPLOYMENT_STAT_CAT_D: W_EMPLOYMENT_D, W_WRKFC_EVT_F, W_WRKFC_EVT_MONTH_F
W_WRKFC_EVENT_TYPE_D: W_WRKFC_EVENT_TYPE_DS
W_WRKFC_EVENT_GROUP_D: W_WRKFC_EVENT_TYPE_D
W_WRKFC_BAL_A: W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVT_MONTH_F
W_WRKFC_EVT_A: W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVENT_GROUP_D, W_WRKFC_EVT_MERGE_F
The following table documents the minimum setup required for the target snapshot fact to be loaded successfully. For other functionality to work, it is necessary to perform other setup as documented in the installation guide. If this is not done, it may be necessary to re-run the initial load after completing the additional setup.
[Table: minimum setup. DAC system parameters: INITIAL_EXTRACT_DATE, HR_WRKFC_EXTRACT_DATE, HR_WRKFC_SNAPSHOT_DT, HR_WRKFC_SNAPSHOT_TO_WID. Domains: Age Band. The Name/Description details of the table were not recovered.]
Notes
1) Workforce extract date should be the earliest date from which HR data is required for reporting
(including all HR facts e.g. Absences, Payroll, Recruitment). This can be later than initial extract
date if other non-HR content loads need an earlier initial extract date.
2) Snapshots should be generated for recent years only in order to improve ETL performance and
reduce the size of the snapshot fact.
When loading the workforce fact staging table from a flat file, the following information about the rows and columns to populate should be taken into account.
Workforce Fact Incremental Load
No unnecessary changes should be staged
If there have been no change events for an assignment (an instance of a worker in a job), then the fact staging table should not contain any rows for that assignment. The staging table is used as a reference for what data has actually changed and needs to be refreshed in the downstream facts. Adding unnecessary changes will slow down the performance of the incremental refresh.
Always extract changes and subsequent events from the source transaction system.
Keep a record in the staging area of each change event type that has previously been processed in the warehouse (this method is implemented in the Oracle and PeopleSoft adaptors).
Extract only the changes into the workforce fact staging table, and then use the data already loaded in the workforce fact to fill in the subsequent change events.
[Diagram: incremental-only flow: W_WRKFC_EVT_FS feeds W_WRKFC_EVT_EQ_TMP and W_WRKFC_EVT_F.]
The workforce base fact is refreshed from the workforce fact staging table.
[Diagram: W_WRKFC_EVT_AGE_F loaded from W_AGE_BAND_D and W_WRKFC_EVT_F.]
The age fact contains one starting row plus one row each time an assignment moves from one age band to the next. For example, if the last age band is 55+ years, then an event is generated for each assignment on the 55th birthday of the worker (BIRTH_DT + 55 years). Any worker hired beyond the age of 55 will have no additional band-change events, just the starting row.
Note that the age bands are completely configurable, but because of the dependencies between the age bands and the facts, any change to the configuration will require a reload (initial load).
This fact is refreshed for an assignment whenever there is a change to the worker's date of birth on the hire record (or the first record, if the hire occurred before the fact initial extract date).
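The band-change logic described above can be sketched as follows. The band edges used here are assumptions for illustration, not the packaged configuration, and leap-day birthdays are ignored for brevity.

```python
from datetime import date

# Illustrative age bands as [min_age, max_age) in years; the real bands
# come from W_AGE_BAND_D and are fully configurable.
AGE_BANDS = [(0, 25), (25, 35), (35, 45), (45, 55), (55, None)]

def band_change_events(birth_dt: date, hire_dt: date, as_of: date):
    """One starting event at hire, plus one event on each birthday where
    the worker crosses into the next band (leap-day birthdays ignored)."""
    events = [hire_dt]  # the starting row
    for min_age, _ in AGE_BANDS:
        # birthday on which the worker enters this band
        edge = date(birth_dt.year + min_age, birth_dt.month, birth_dt.day)
        if hire_dt < edge <= as_of:
            events.append(edge)
    return events
```

A worker already past the last band edge at hire gets only the starting row, matching the 55+ example above.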
Initial Load Sessions
PLP_WorkforceEventFact_Age_Full (loads new records)
Incremental Load Sessions
PLP_WorkforceEventFact_Age_Mntn (deletes records to be refreshed or obsolete)
PLP_WorkforceEventFact_Age (loads changed records)
[Diagram: W_WRKFC_EVT_POW_F loaded from W_PRD_OF_WRK_BAND_D and W_WRKFC_EVT_F.]
The period of work fact contains one starting row plus one row each time an assignment moves from one service band to the next. For example, if the first service band is 0-1 years, then an event is generated for each assignment exactly one year after hire (POW_START_DT).
Note that the period of work bands are completely configurable, but because of the dependencies between the service bands and the facts, any change to the configuration will require a reload (initial load).
This fact is refreshed whenever there is a change to the hire record (or the first record, if the hire was before the fact initial extract date).
Initial Load Sessions
PLP_WorkforceEventFact_Pow_Full (loads new records)
Incremental Load Sessions
PLP_WorkforceEventFact_Pow_Mntn (deletes records to be refreshed)
PLP_WorkforceEventFact_Pow (loads changed records)
[Diagram: W_WRKFC_EVT_MERGE_F loaded from W_WRKFC_EVT_F, W_WRKFC_EVT_AGE_F and W_WRKFC_EVT_POW_F.]
This fact contains the change events from the base, age and service facts. It is refreshed based on the
combination of assignments and (earliest) event dates in the fact staging table.
Initial Load Sessions
PLP_WorkforceEventFact_Merge_Full (loads new records)
Incremental Load Sessions
PLP_WorkforceEventFact_Merge_Mntn (deletes records to be refreshed)
PLP_WorkforceEventFact_Merge (loads changed records)
[Diagram: W_WRKFC_EVT_MONTH_F holds monthly snapshots (W_MONTH_D) and workforce events (W_WRKFC_EVT_MERGE_F).]
This fact contains the merged change events plus a generated snapshot record on the first of every month on or after the HR_WRKFC_SNAPSHOT_DT parameter. To allow future-dated reporting, snapshots are created up to 6 months in advance.
This fact is refreshed based on:
The combination of assignments and (earliest) event dates in the fact staging table
Any snapshots required for active assignments since the last load (e.g. if the incremental load has not run for a while, or the system date has moved into a new month since the last load)
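The snapshot generation rule above can be sketched as follows; the snapshot_start argument stands in for HR_WRKFC_SNAPSHOT_DT, and function names are illustrative.

```python
from datetime import date

def month_index(y, m):
    # Number of months since year 0, for easy month arithmetic.
    return y * 12 + (m - 1)

def snapshot_dates(snapshot_start, today, months_ahead=6):
    """First-of-month snapshot dates from the snapshot start date through
    months_ahead months beyond the current month (future-dated reporting)."""
    start = month_index(snapshot_start.year, snapshot_start.month)
    if snapshot_start.day > 1:
        start += 1  # first snapshot is the first of the *next* month
    end = month_index(today.year, today.month) + months_ahead
    return [date(i // 12, i % 12 + 1, 1) for i in range(start, end + 1)]
```

Because the end of the range follows the system date, a load run after a long gap naturally backfills the months that were missed, matching the refresh rule above.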
[Diagram: W_WRKFC_BAL_A load flow: a PLP dimension aggregate load (FULL and INCR) from W_WRKFC_EVT_MONTH_F, a PLP parent-level update (FULL and INCR) of dimension W_EMPLOYMENT_D, and a PLP incremental-only fact load.]
The Aggregate Fact table (W_WRKFC_BAL_A) is based on the Snapshot Fact table W_WRKFC_EVT_MONTH_F and the Aggregate dimension W_EMPLOYMENT_STAT_CAT_D, so as to improve the performance of the Fact table W_WRKFC_EVT_MONTH_F.
Aggregate Fact W_WRKFC_BAL_A is loaded directly from Dimension W_EMPLOYMENT_D (essentially remaining at the grain of the Dimension Aggregate W_EMPLOYMENT_STAT_CAT_D) and the Workforce Month Snapshot Fact W_WRKFC_EVT_MONTH_F.
PLP_EmploymentDimension_ParentLevelUpdate_Full (Aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates parent dimension W_EMPLOYMENT_D table)
PLP_EmploymentDimension_ParentLevelUpdate (Aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates parent dimension W_EMPLOYMENT_D table)
[Diagram: W_WRKFC_EVT_A load flow: PLP dimension aggregate loads (FULL and INCR) build the workforce dimension aggregates W_EMPLOYMENT_STAT_CAT_D and W_WRKFC_EVENT_GROUP_D; PLP parent-level updates (FULL and INCR) maintain dimensions W_EMPLOYMENT_D and W_WRKFC_EVENT_TYPE_D; the fact is loaded (INCR only) from W_WRKFC_EVT_MERGE_F and W_WRKFC_EVT_EQ_TMP.]
PLP_EmploymentDimension_ParentLevelUpdate_Full (Aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates parent dimension W_EMPLOYMENT_D table)
PLP_WorkforceEventAggregateFact_Full (loads new records into the Event Aggregate Fact table (W_WRKFC_EVT_A) based on the Workforce Fact table (W_WRKFC_EVT_MERGE_F), the Aggregate Dimension (W_EMPLOYMENT_STAT_CAT_D) and the Aggregate Dimension (W_WRKFC_EVENT_GROUP_D). Although it is loaded directly from W_EMPLOYMENT_D and W_WRKFC_EVENT_TYPE_D, the Event Aggregate Fact remains at the grain of the Aggregate Dimensions (W_EMPLOYMENT_STAT_CAT_D and W_WRKFC_EVENT_GROUP_D))
PLP_EmploymentDimension_ParentLevelUpdate (Aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates parent dimension W_EMPLOYMENT_D table)
PLP_WorkforceEventGroupDimension_ParentLevelUpdate (Aggregate dimension W_WRKFC_EVENT_GROUP_D updates parent-level dimension W_WRKFC_EVENT_TYPE_D)
[Diagram: delete processing: integration keys or assignments are extracted from the source OLTP into W_WRKFC_EVT_F_PE; records to be deleted are identified into W_WRKFC_EVT_DEL_F; the delete flag is then set on W_WRKFC_EVT_F.]
All the standard OBIA mappings are provided for processing deletes (Primary Extract, Identify Delete, and Soft Delete). However, because of the added complexity of maintaining the date-track (a continuous set of effective start/end dates per assignment), the functionality differs slightly.
There are two types of delete to distinguish between:
Date-tracked delete: a single record is deleted for an assignment, but others remain
Purge: all records for an assignment are deleted; the assignment no longer exists on the source transaction system
These are discussed in more detail below.
8.1.13. Purges
To purge all records for an assignment using the standard delete mappings, the distinct assignment ids should be extracted into the primary extract table. The identify delete mapping will then compare the primary extract table with the fact table, and the soft delete mapping will flag as deleted all records for assignments in the fact which are not in the primary extract table.
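The purge flow amounts to a set difference followed by a flag update. A minimal sketch with illustrative column names:

```python
def identify_purged(fact_rows, primary_extract_ids):
    """Identify Delete step: assignment ids present in the fact but
    missing from the primary extract are purge candidates."""
    fact_ids = {r["ASSIGNMENT_ID"] for r in fact_rows}
    return fact_ids - set(primary_extract_ids)

def soft_delete(fact_rows, purged_ids):
    """Soft Delete step: flag (rather than physically remove) every fact
    record belonging to a purged assignment."""
    for r in fact_rows:
        if r["ASSIGNMENT_ID"] in purged_ids:
            r["DELETE_FLG"] = "Y"
    return fact_rows
```

Because every record of a purged assignment is flagged at once, the date-track of the remaining assignments is untouched, which is why the purge route is the safer of the two delete types.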
[Diagram: W_WRKFC_EVT_F_PE is populated from the source OLTP with either DATASOURCE_NUM_ID + INTEGRATION_ID (ASSIGNMENT_ID) or DATASOURCE_NUM_ID + INTEGRATION_ID.]
Extract from the source OLTP either the valid assignments or the valid integration keys for the fact. The delete process will delete fact records with no valid assignment (purge) and no valid integration key (individual record delete). This step can be skipped if there is an alternative method (e.g. a source trigger) of detecting the purges or deletes and pushing the fact keys to delete directly to the W_WRKFC_EVT_F_DEL table.
The recommendation is to use the purge option only, i.e. extract the distinct valid assignment ids. If the other option is used, then care should be taken to leave the fact consistent. See the worked example below.
[Diagram: Identify Delete: W_WRKFC_EVT_F is compared with W_WRKFC_EVT_F_PE, and the records to be deleted are written to W_WRKFC_EVT_DEL_F.]
[Diagram: Soft Delete: the delete flag is set on W_WRKFC_EVT_F for records in W_WRKFC_EVT_DEL_F.]
This updates the delete flag to 'Y' (Yes) for fact records present in the delete table.
Incremental Load Sessions
SIL_WorkforceEventFact_SoftDelete
Assignment Start Date   End Date      Change Type   Organization   Salary
01-Jan-2000             31-Dec-2000   HIRE                         5000
01-Jan-2001             31-Dec-2001   REVIEW                       6000
01-Jan-2002             31-Dec-2002   TRANSFER                     6000
01-Jan-2003             01-Jan-3714   REVIEW                       7000
Now suppose the transfer record was deleted on the source transaction system. If this was handled by the primary extract / identify delete / soft delete mappings, then the following records would be left in the fact table (delete flag = N):
Assignment Start Date   End Date      Change Type   Organization   Salary
01-Jan-2000             31-Dec-2000   HIRE                         5000
01-Jan-2001             31-Dec-2001   REVIEW                       6000
01-Jan-2003             01-Jan-3714   REVIEW                       7000
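The deleted TRANSFER record leaves the date-track broken: nothing covers calendar year 2002. A consistency check like the following sketch (a hypothetical helper, not an OBIA mapping) makes the problem concrete: each record's start date must be the day after the previous record's end date.

```python
# Illustrative check: verify that an assignment's date-track is
# continuous, i.e. each record starts the day after the previous ends.
from datetime import date, timedelta

def date_track_gaps(records):
    """Return (end_date, next_start_date) pairs where the track is broken."""
    gaps = []
    ordered = sorted(records, key=lambda r: r["start"])
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt["start"] != prev["end"] + timedelta(days=1):
            gaps.append((prev["end"], nxt["start"]))
    return gaps

after_delete = [
    {"start": date(2000, 1, 1), "end": date(2000, 12, 31)},  # HIRE
    {"start": date(2001, 1, 1), "end": date(2001, 12, 31)},  # REVIEW
    {"start": date(2003, 1, 1), "end": date(3714, 1, 1)},    # REVIEW
]
# the deleted TRANSFER record leaves 2002 uncovered
print(date_track_gaps(after_delete))
# → [(datetime.date(2001, 12, 31), datetime.date(2003, 1, 1))]
```

This is why a plain soft delete is not enough for a date-tracked delete: the surviving records also need their effective dates stitched back together.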
8.2. Recruitment
[Diagram: Recruitment aggregates - Recruitment Job Requisition Aggregate (W_RCRTMNT_RQSTN_A), Recruitment Applicant Aggregate (W_RCRTMNT_APPL_A), Recruitment Hire Aggregate (W_RCRTMNT_HIRE_A).]
Terminology
Assignment refers to an instance of a person in a job. It should not be an updateable key on the source transaction system.
Applicant refers to the person who is applying for the posted vacancy. He/she can be an existing employee, an ex-employee of the organization, or an external candidate.
Hiring Manager refers to the person to whom the incumbent would report once hired.
Table                    Primary Sources
W_JOB_RQSTN_FS           Flat file (source adaptors)
W_APPL_EVENT_FS          Flat file (source adaptors)
W_JOB_RQSTN_ACC_SNP_F    W_JOB_RQSTN_F
W_APPL_ACC_SNP_F         W_APPL_EVENT_F
W_RCRTMNT_EVENT_F        W_JOB_RQSTN_F, W_APPL_EVENT_F, W_JOB_RQSTN_ACC_SNP_F, W_APPL_ACC_SNP_F
W_RCRTMNT_RQSTN_A        W_RCRTMNT_EVENT_F, W_MONTH_D
W_RCRTMNT_APPL_A         W_RCRTMNT_EVENT_F, W_MONTH_D
W_RCRTMNT_HIRE_A         W_RCRTMNT_EVENT_F, W_MONTH_D
All the set up and configuration steps that are required for core Workforce also apply to Recruitment (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately.
[Diagram: Recruitment ETL flow - Applicant Generated Events (FULL), FULL load process (Job Req.), FULL load process (Applicant), Applicant Generated Events (INCR), Applicant POW Events (INCR), INCR load process (Job Req.), INCR load process (Applicant).]
This table stores the de-normalized dates for various job requisition events from the Job Requisition Events base fact table. After the pseudo Age Band Change events are populated in the base Job Requisition fact table, those dates are also reflected in the accumulated snapshot fact table. Any changes to the Hiring Manager Position Hierarchy are also updated in this accumulated snapshot fact. Note that the updates due to hierarchy changes do not apply during a full ETL run.
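As a rough illustration of the hierarchy refresh described above (the data shapes and helper name are hypothetical; the real work is done by the ETL mapping), existing snapshot rows are re-stamped with the current hierarchy, and the step is skipped on a full load because the rows were just built from current data:

```python
# Hypothetical sketch: when a hiring manager's position moves in the
# hierarchy, the denormalized columns on existing snapshot rows are
# re-stamped. Runs incrementally only.

def apply_hierarchy_changes(snapshot_rows, position_hierarchy, full_load=False):
    """position_hierarchy: {position: supervisor_position}. During a full
    load the rows were built from current data, so there is nothing to do."""
    if full_load:
        return snapshot_rows          # hierarchy updates skipped on full ETL
    for row in snapshot_rows:
        row["supervisor_pos"] = position_hierarchy.get(row["hiring_mgr_pos"])
    return snapshot_rows

rows = [{"hiring_mgr_pos": "P10", "supervisor_pos": "P99"}]
hier = {"P10": "P20"}                 # P10 re-parented from P99 to P20
print(apply_hierarchy_changes(rows, hier))
# → [{'hiring_mgr_pos': 'P10', 'supervisor_pos': 'P20'}]
```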
[Diagram: Applicant Load (FULL & INCR) and Period of Work Band Events (FULL & INCR).]
[Diagram: full load process (Job Req.) and full load process (Applicant).]
[Diagram: incremental flow - Pre Image Process (Applicant), Pre Image Process (Job Req.), incremental load process (Job Req.), incremental load process (Applicant), Post Image Process.]
This table stores aggregated measures applicable to Job Requisitions, at a monthly level. The load of this table is driven from the temporary table W_RCRTMNT_EVENT_F_TMP, which was populated while the Pipeline fact was being loaded.
During full load, the metrics are aggregated into a temporary table W_RCRTMNT_RQSTN_A_TMP2, which is subsequently updated to set the effective to date column, and finally loaded into the end aggregate table.
During incremental load, an additional process, driven by the pre-populated temporary table W_RCRTMNT_EVENT_F_TMP that tracks the changes made to the Pipeline fact in the current ETL run, loads yet another temporary table W_RCRTMNT_RQSTN_A_TMP1. The subsequent aggregation of metrics into the second temporary table W_RCRTMNT_RQSTN_A_TMP2 is similar to that of the full load, and so are the remaining processes (updating effective to dates, and loading the end aggregate table).
The aggregate table has an EFFECTIVE_FROM_DT and an EFFECTIVE_TO_DT column. In order to cater for balance metrics (non-event metrics that are non-additive), these dates help avoid creating unnecessary monthly snapshots when nothing has changed for a Job Requisition. The effective from date is the date of the last event that happened in the month, and the effective to date is the last day of that month minus one day.
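The effective-date rule above can be sketched as follows. The column names match the aggregate table; the helper itself is hypothetical and only illustrates the date arithmetic.

```python
# A minimal sketch of the effective-date rule for one snapshot row:
# EFFECTIVE_FROM_DT = date of the last event in the month,
# EFFECTIVE_TO_DT   = last day of that month minus one day.
import calendar
from datetime import date

def snapshot_effective_dates(event_dates):
    """Given the event dates for one Job Requisition in one month,
    derive the EFFECTIVE_FROM_DT and EFFECTIVE_TO_DT values."""
    last_event = max(event_dates)
    last_dom = calendar.monthrange(last_event.year, last_event.month)[1]
    effective_from = last_event                      # last event in the month
    effective_to = date(last_event.year, last_event.month, last_dom - 1)
    return effective_from, effective_to

frm, to = snapshot_effective_dates([date(2011, 3, 4), date(2011, 3, 17)])
# → EFFECTIVE_FROM_DT = 17-Mar-2011, EFFECTIVE_TO_DT = 30-Mar-2011
```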
[Diagram: Recruitment Requisition Aggregate load - full flow: Derive Process (FULL) with Time Dimension - Day (W_DAY_D), Load process (FULL) with Time Dimension - Month (W_MONTH_D), into the Recruitment Requisition Aggregate (W_RCRTMNT_RQSTN_A); Update process (COMMON); incremental flow: Derive process (INCR) with W_DAY_D, Load process (INCR) into W_RCRTMNT_RQSTN_A.]
[Diagram: aggregate load flow - Time Dimension - Day (W_DAY_D), Time Dimension - Month (W_MONTH_D), Extract Process (INCR).]
This mapping aggregates the applicable recruitment pipeline metrics, grouped by all the dimensions in the Hire Analysis Aggregate fact table, at a monthly grain. Only applicants that are hired get into this aggregate table. During incremental load, the process deletes the records that are about to be impacted by changes in the Pipeline fact table, then re-processes them.
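The delete-then-reprocess pattern can be sketched like this (the data shapes, key structure, and helper name are invented for the example): rows whose source changed in the current ETL run are dropped from the aggregate and rebuilt from the pipeline fact.

```python
# Illustrative delete-then-reprocess refresh: aggregate rows keyed by
# (requisition, month) are removed for changed keys, then re-aggregated
# from the pipeline fact; only hired applicants are counted.

def refresh_hire_aggregate(aggregate, pipeline_fact, changed_keys):
    """Delete impacted aggregate rows, then re-aggregate hires for
    those keys from the pipeline fact."""
    kept = {k: v for k, v in aggregate.items() if k not in changed_keys}
    for row in pipeline_fact:
        key = (row["requisition"], row["month"])
        if key in changed_keys and row["hired"]:
            kept[key] = kept.get(key, 0) + 1   # only hired applicants count
    return kept

agg = {("REQ1", "2011-01"): 2, ("REQ2", "2011-01"): 1}
fact = [
    {"requisition": "REQ1", "month": "2011-01", "hired": True},
    {"requisition": "REQ1", "month": "2011-01", "hired": False},
]
print(refresh_hire_aggregate(agg, fact, {("REQ1", "2011-01")}))
# → {('REQ2', '2011-01'): 1, ('REQ1', '2011-01'): 1}
```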
[Diagram: Recruitment Hire Aggregate load - Time Dimension - Day (W_DAY_D), Time Dimension - Month (W_MONTH_D), Extract Process (INCR).]
8.3. Absence
[Diagram: Absence data model - Absence Fact (W_ABSENCE_EVENT_F), Day Dimension (W_DAY_D), Workforce Fact (W_WRKFC_EVT_F), Absence Event Dimension (CSV file input), Absence Type Reason Dimension (CSV file input).]
Terminology
Absence Occurrence refers to one instance of Absence for an employee, from start to end.
Absence Duration is the time lost away from work due to the Absence.
Absence Type is a category of Absence that the system tracks, such as illness, vacation, and leave.
Absence Reason: within each Absence Type, you can create a set of Absence Reasons that further classify Absences. For example, if you create an Absence Type called Illness, you might set up reasons such as cold, flu, and stress.
Table                   Primary Sources                           Description
W_ABSENCE_EVENT_DS      Flat file (source adaptors)
W_ABSENCE_TYPE_RSN_DS   Flat file (source adaptors)
W_ABSENCE_TYPE_RSN_D    W_ABSENCE_TYPE_RSN_DS                     Stores Absence type, reason and category information.
W_ABSENCE_EVENT_D       W_ABSENCE_EVENT_DS, W_ABSENCE_TYPE_RSN_D
W_ABSENCE_EVENT_F       W_ABSENCE_EVENT_D, W_DAY_D, W_WRKFC_EVT_F
All the set up and configuration steps that are required for core Workforce also apply to Absence (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately.
Please refer to the more detailed configuration of the Absence Event Dimension in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).
This table is loaded from the two dimension tables W_ABSENCE_EVENT_D and W_ABSENCE_TYPE_RSN_D along with the time dimension. The dimension tables are loaded via their corresponding Universal staging tables (W_ABSENCE_EVENT_DS and W_ABSENCE_TYPE_RSN_DS). The columns to be populated, whether they are mandatory, and all other related information exist in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).
[Diagram: Absence full load - SIL load processes (FULL) load W_ABSENCE_TYPE_RSN_D and W_ABSENCE_EVENT_D from W_ABSENCE_TYPE_RSN_DS and W_ABSENCE_EVENT_DS; the load process (FULL) then loads W_ABSENCE_EVENT_F.]
[Diagram: Absence incremental load - W_ABSENCE_EVENT_DS and W_ABSENCE_TYPE_RSN_DS feed W_ABSENCE_EVENT_D; the Event Queue process (INCR) populates W_ABSENCE_EVENT_EQ_TMP and W_WRKFC_EVT_EQ_TMP; the Load process (INCR) and the Mntn process (INCR) maintain W_ABSENCE_EVENT_F.]
8.4. Learning
[Diagram: Learning data model - FILE_LEARNING_GRADE_BAND (file input), FILE_ROW_GEN_BAND (file input), Learning Enrollment SNP Fact (CSV file input).]
Terminology
Learning Course describes the different courses in a learning management system. It has attributes such as Code, Name, and Description.
Learning Activity describes an instance (class) of a course.
Learning Enrollment Status describes the different enrollment statuses in a learning management system, for example Enrolled, Completed, and Approved.
Learning Program represents a significant learning goal that can be achieved by completing multiple learning activities.
Table                        Primary Sources              Description
W_LM_ENROLLMENT_ACC_SNP_FS   Flat file (source adaptors)
W_LM_GRADE_BAND_D            Flat file sources            Learning Grade Band Dimension; stores data for Grade/Scoring Bands for Learning Activities.
W_LM_ENROLLMENT_ACC_SNP_F    W_LM_ENROLLMENT_ACC_SNP_FS   Accumulated snapshot fact table; captures each learner's enrollment to a learning activity.
W_LM_ENROLLMENT_EVENT_F      W_LM_ENROLLMENT_ACC_SNP_F    Stores the status changes for the learning enrollment process.
All the set up and configuration steps that are required for core Workforce also apply to Learning (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately.
The W_LM_ENROLLMENT_ACC_SNP_F accumulated snapshot fact table captures each learner's enrollment to a learning activity and the status changes. The grain of this table is at the Learner/Employee + Learning Activity level. For example, if an employee requests, enrolls in, and completes a learning activity, there will be one row in this table.
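The single-row-per-enrollment shape can be sketched as follows. This is an illustration of the accumulated-snapshot idea only; the column names (`requested_dt` and so on) are invented, not the table's real columns.

```python
# A sketch of how an accumulated snapshot collapses status events into a
# single row per learner and activity, with one date column per status.

def accumulate_enrollment(events):
    """Fold enrollment status events into one snapshot row keyed by
    (learner, activity)."""
    snapshot = {}
    for ev in events:
        key = (ev["learner"], ev["activity"])
        row = snapshot.setdefault(key, {})
        row[ev["status"] + "_dt"] = ev["date"]   # one date column per status
    return snapshot

events = [
    {"learner": "E1", "activity": "SQL101", "status": "requested", "date": "2011-01-03"},
    {"learner": "E1", "activity": "SQL101", "status": "enrolled",  "date": "2011-01-10"},
    {"learner": "E1", "activity": "SQL101", "status": "completed", "date": "2011-02-01"},
]
snap = accumulate_enrollment(events)
# one row for (E1, SQL101) carrying all three status dates
```

The separate event fact, W_LM_ENROLLMENT_EVENT_F, keeps each status change as its own row instead of folding them into one.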
[Diagram: W_LM_ENROLLMENT_ACC_SNP_F is loaded from W_LM_ENROLLMENT_ACC_SNP_FS; the Position Hierarchy update process runs in INCR only.]
[Diagram: W_LM_ENROLLMENT_EVENT_F is loaded from W_LM_ENROLLMENT_ACC_SNP_F.]
8.5. Payroll
[Diagram: Payroll Fact (W_PAYROLL_F), CSV file input.]
Terminology
Pay Type describes the various types of compensation or deduction that typically appear in a pay stub. Examples include Earnings, Bonus, and Taxes.
Pay Item Detail describes whether the line item in the payroll fact is at a detail level (such as 401K deductions, Medicare deductions, Social Security deductions, or Health Insurance deductions) or at the higher level of a group (such as DEDUCTIONS, EARNINGS, or TAXES).
Table             Primary Sources
W_PAYROLL_FS      Flat file (source adaptors)
W_PAYROLL_F       W_PAYROLL_FS
W_PAYROLL_A_TMP   W_PAYROLL_F
W_PAYROLL_A       W_PAYROLL_F, W_PAYROLL_A_TMP, W_EMP_DEMOGRAPHICS_D, W_JOB_CATEGORY_D, W_PAY_TYPE_GROUP_D
All the set up and configuration steps that are required for core Workforce also apply to Payroll (see the same section for Workforce). The time grain of the payroll aggregate table (Monthly out of the box) can be configured to be Weekly, Quarterly, or Yearly. Check the configuration steps for the parameter $$GRAIN.
The W_PAYROLL_F fact table stores the base payroll transactions. Examples of fact information stored in this table include Pay Check Date, Pay Item Amount, Currency Codes, and Exchange Rates. The grain of this table is typically at an Employee - Pay Type - Pay Period Start Date - Pay Period End Date level. For a given employee and pay period, each record in this table stores the amount associated with one pay type (line item).
[Diagram: Payroll Fact (W_PAYROLL_F) load - SIL load process (FULL and INCR); Position Hierarchy update process (INCR only).]
The W_PAYROLL_A aggregate fact table stores payroll transactions aggregated at a monthly level on top of the base fact table W_PAYROLL_F. The grain of this table is monthly (Period Start and End Dates) out of the box (though configurable), at the Employee Demographics, Job Category, and Pay Type Group aggregate dimension levels.
[Diagram: W_PAYROLL_A load - PLP process (FULL and INCR) from the Payroll Fact (W_PAYROLL_F) and the Employee Demographics aggregate dimension (W_EMP_DEMOGRAPHICS_D); PLP process (INCR only) via the Payroll Aggregate temporary table (W_PAYROLL_A_TMP).]
granularity chosen, this mapping looks up the correct time bucket. With these two steps done, the final Payroll Aggregate refresh becomes simpler.)
PLP_PayrollAggregate_Load (Refreshes the Payroll Aggregate table W_PAYROLL_A, driving from the temporary table loaded in the prior step, W_PAYROLL_A_TMP. The incremental refresh policy relies on the fact that, for payroll, there can practically be no updates; a change arrives instead as an adjustment run, a reversal run, or the like. The ITEM_AMT value in the base payroll transaction carries the appropriate sign to indicate whether the adjustment had a negative or a positive effect; a reversal run typically comes with a negative ITEM_AMT. With this assumption, when a repeat record (key match) comes in, we simply update the value of ITEM_AMT as:
Final ITEM_AMT <aggregate> = Old ITEM_AMT <aggregate> + New ITEM_AMT <temporary>)
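The incremental refresh rule above can be sketched as follows (the key structure is invented for the example; in the real mapping the key is the aggregate's dimension grain):

```python
# Sketch of the additive merge: on a key match the aggregate amount is
# simply incremented, since payroll rows are never updated in place
# (adjustments and reversals arrive as new, signed ITEM_AMT values).

def merge_payroll_aggregate(aggregate, incoming):
    """aggregate / incoming: {key: ITEM_AMT}. New keys are inserted;
    matching keys are added (a reversal carries a negative ITEM_AMT)."""
    for key, amt in incoming.items():
        aggregate[key] = aggregate.get(key, 0) + amt
    return aggregate

agg = {("E1", "2011-01", "BONUS"): 1000.0}
tmp = {("E1", "2011-01", "BONUS"): -1000.0,   # reversal run
       ("E1", "2011-02", "BONUS"): 1200.0}
print(merge_payroll_aggregate(agg, tmp))
# → {('E1', '2011-01', 'BONUS'): 0.0, ('E1', '2011-02', 'BONUS'): 1200.0}
```

Note that the merge never overwrites an existing amount; a correction must therefore always be sourced as a delta, which is exactly the assumption the text states.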