
HANA DB - SLT Configuration

Table of Contents
1 Business Context, Document Purpose and Scope
1.1 Document Purpose & Scope
2 General remarks
2.1 High level architecture
2.2 Pre-requisites
3 Configuration
4 Configuration checks
5 Replication scenarios
5.1 Replication scenario ERP → SLT → BW
5.2 Replication scenario ERP → SLT → CAR
5.3 Replication scenario CRM → SLT → CAR
5.4 Schema Mappings
6 CAR table partitioning
6.1 Partitioning
6.2 Relocating Partitions
6.3 Archiving
6.4 Important notes and transactions
7 Tips & Tricks
7.1 List of replicated tables
7.2 Maintain DB connection in SLT
7.3 Reload data, but don't re-create table in HANA
7.4 Transformation rules during replication
1 Business Context, Document Purpose and Scope
1.1 Document Purpose & Scope
This document contains the configuration required to enable SLT replication from a source system to the target
HANA database.
This configuration is required in the context of replicating ECC and CRM tables to the CAR HANA database using
real-time replication mode in SLT.
2 General remarks
2.1 High level architecture
SLT is able to replicate data from multiple sources to multiple targets – SAP and non-SAP. The following
shows the typical connections between SLT and the source and target systems:
The following roles and authorizations have to be assigned to the different users to ensure a proper replication
scenario:
2.2 Pre-requisites
1. Create an RFC destination from SLT to the source system of the replication
2. Collect and define the following details about the target HANA database for the replication
a. Configuration name (corresponds to the name of the schema generated in HANA)
b. Number of Data Transfer jobs (based on number of tables to be replicated)
c. HANA Database Hostname (master node in case of scale-out)
d. Instance/system number
e. Password of HANA System user
3. Activate the following services from transaction SICF
a. /sap/public/myssocntl
b. /sap/bc/webdynpro/sap/iuuc_replication_config
c. /sap/bc/webdynpro/sap/iuuc_helpcenter_document
d. /sap/bc/webdynpro/sap/iuuc_repl_mon_powl
e. /sap/bc/webdynpro/sap/iuuc_helpcenter
f. /sap/public/mysso/cntl
g. /sap/bc/webdynpro/sap/iuuc_repl_mon_schema_oif
h. /sap/public/bc/ur
i. /sap/public/bc/icons
j. /sap/public/bc/icons_rtl
k. /sap/public/bc/webicons
l. /sap/public/bc/pictograms
m. /sap/public/bc/webdynpro
3 Configuration
1. Go to transaction LTRC
2. Click the Create New Configuration button
3. Alternatively, choose Action → Create New Configuration from the menu bar
4. Fill in all the required inputs as shown below
The selection of “Allow Multiple Usage” is quite important. It allows the same RFC connection to be
used for multiple replication configurations into different targets, e.g. ERP data into BW and into CAR
at the same time.
“Read from Single Client” restricts the load to data from the selected source client. Since the client
field is replicated along with all replicated data anyway, the target application takes care of using only
client-specific data in the target scenario.
The rule of thumb for the number of transfer jobs is roughly 1 job per 10 tables to be replicated.
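The 1-job-per-10-tables heuristic can be expressed as a small helper (a sketch: the `max(1, …)` floor and the example table count are our assumptions, not part of the guide):

```python
import math

def transfer_jobs(table_count: int) -> int:
    """Rule of thumb from the guide: roughly 1 data transfer job per 10 tables.
    The floor of 1 job is an assumption for very small table sets."""
    return max(1, math.ceil(table_count / 10))

# Example: a scenario replicating 133 tables would need about 14 jobs.
print(transfer_jobs(133))  # -> 14
```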
5. Save the configuration.
4 Configuration checks
The configuration can be tested using the following checks.
1. From transaction LTR, check the status of the newly created configuration.
2. From transaction LTRC, access the configuration just created (providing the correct Mass Transfer
ID) and check that the initial technical tables are in replication mode.
3. Log in to the target HANA database and check the schema just created (same name as the
Configuration Name). In particular, check that the tables DD02L, DD02T and DD08L have been created
and contain entries.
To speed up the initial data load of tables, the number of calculation jobs can be increased until the initial load
is complete.
5 Replication scenarios
5.1 Replication scenario ERP → SLT → BW
As an example, a screenshot has been attached showing the created configuration for ERP replication into BW
in the pre-production environment.
The following tables were requested by the Reporting stream to be replicated into BW with the real-time
replication setting:

5.2 Replication scenario ERP → SLT → CAR


5.3 Replication scenario CRM → SLT → CAR
5.4 Schema Mappings
The schema mappings have to be done via HANA Studio. This is quite important to ensure that the applications
are able to access the appropriate data.
The physical schema name depends on the SLT configuration name and has to be mapped to the
SAP_XXX authoring schema name.
6 CAR table partitioning
The following note is relevant for partitioning the tables TLOGF and TLOGF_EXT:
1719282 - POS TLOG Table Partitioning Information.
This note provides information on how to partition the following TLOG tables:
• /POSDW/TLOGF: Stores transaction information received from the POS on the SAP HANA database
• /POSDW/TLOGF_EXT: Stores customer extensions on the SAP HANA database
The second partitioning level serves to remain within the physical bounds that apply to table size with respect
to the number of entries, the physical bound on the overall number of partitions, and a heuristic to keep the
overhead of consolidating partitioning results low. We recommend that you only use partitioning if a table or
partition has more than 250 million entries. The maximum number of entries for a table or partition should be
1,000 million in order to stay clear of the physical limit of 2^31 entries.
To determine the number of partitions required for the /POSDW/TLOGF and /POSDW/TLOGF_EXT tables,
consider the following inputs:
• D: number of days data is kept in the system = 45 days
• T: number of transactions created per day = 21,000
• E: average number of table entries per transaction, E = MAX(15 + 5*N, X), where N is the number of
items per transaction and X is the number of extension fields per transaction
o E = 21,000 transactions per day * 36 stores * (15 + 5*6) entries per transaction = 34,020,000 per
day = 1,530,900,000 table entries per 45 days
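The sizing arithmetic above can be reproduced as follows (a sketch; the variable names are ours, the input values are taken from the worked example above):

```python
# Sizing inputs from the worked example in the text.
DAYS_KEPT = 45                  # D: retention period in days
TRANSACTIONS_PER_DAY = 21000    # T: transactions created per day
STORES = 36
ITEMS_PER_TRANSACTION = 6       # N
ENTRIES_PER_TRANSACTION = 15 + 5 * ITEMS_PER_TRANSACTION  # 15 + 5*N = 45

entries_per_day = TRANSACTIONS_PER_DAY * STORES * ENTRIES_PER_TRANSACTION
total_entries = entries_per_day * DAYS_KEPT

print(entries_per_day)  # 34020000
print(total_entries)    # 1530900000
```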
Creating empty partitions will not harm the system.
It is also recommended to plan for the future and start with the future setup right away rather than
re-partitioning later on. Adding the empty partitions is definitely the right choice!
Only the slave nodes shall contain data.
In the given setup, this means that the data is distributed over 2 nodes (in the note’s formula: variable M = 2).
Consider the number of slave nodes (variable M) when determining the target time range for partitioning.
Weekly partitioning would result in ~119 million records per partition (34,020,000 records/day * 7 days / 2).
To get closer to the minimum recommended partition size, one could choose two weeks per partition.
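The two-week ranges used in the ALTER TABLE statements below can be generated programmatically rather than typed by hand. The following is an illustrative sketch only (Python used for illustration; the function name and date handling are our assumptions):

```python
from datetime import date, timedelta

def biweekly_partitions(start: date, end: date):
    """Return (lower, upper) YYYYMMDD bound pairs for two-week RANGE
    partitions, matching the style of the ALTER TABLE statements below."""
    bounds = []
    lower = start
    while lower < end:
        upper = min(lower + timedelta(days=14), end)
        bounds.append((lower.strftime("%Y%m%d"), upper.strftime("%Y%m%d")))
        lower = upper
    return bounds

# Covers the calendar year used in the example partitioning statements.
parts = biweekly_partitions(date(2015, 1, 1), date(2015, 12, 31))
print(parts[0])    # ('20150101', '20150115')
print(parts[-1])   # ('20151217', '20151231')
print(len(parts))  # 26 partitions, as in the statements below
```

Each pair maps to one `PARTITION <lower> <= VALUES < <upper>` clause; a trailing `PARTITION OTHERS` still has to be appended manually.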
6.1 Partitioning
alter table "SAP<sid>"."/POSDW/TLOGF" partition by HASH ( mandt,retailstoreid, businessdaydate )
PARTITIONS GET_NUM_SERVERS(), RANGE (businessdaydate ) (
PARTITION 20150101 <= VALUES < 20150115,
PARTITION 20150115 <= VALUES < 20150129,
PARTITION 20150129 <= VALUES < 20150212,
PARTITION 20150212 <= VALUES < 20150226,
PARTITION 20150226 <= VALUES < 20150312,
PARTITION 20150312 <= VALUES < 20150326,
PARTITION 20150326 <= VALUES < 20150409,
PARTITION 20150409 <= VALUES < 20150423,
PARTITION 20150423 <= VALUES < 20150507,
PARTITION 20150507 <= VALUES < 20150521,
PARTITION 20150521 <= VALUES < 20150604,
PARTITION 20150604 <= VALUES < 20150618,
PARTITION 20150618 <= VALUES < 20150702,
PARTITION 20150702 <= VALUES < 20150716,
PARTITION 20150716 <= VALUES < 20150730,
PARTITION 20150730 <= VALUES < 20150813,
PARTITION 20150813 <= VALUES < 20150827,
PARTITION 20150827 <= VALUES < 20150910,
PARTITION 20150910 <= VALUES < 20150924,
PARTITION 20150924 <= VALUES < 20151008,
PARTITION 20151008 <= VALUES < 20151022,
PARTITION 20151022 <= VALUES < 20151105,
PARTITION 20151105 <= VALUES < 20151119,
PARTITION 20151119 <= VALUES < 20151203,
PARTITION 20151203 <= VALUES < 20151217,
PARTITION 20151217 <= VALUES < 20151231,
PARTITION OTHERS);
alter table "SAP<sid>"."/POSDW/TLOGF_EXT" partition by HASH ( mandt,retailstoreid, businessdaydate )
PARTITIONS GET_NUM_SERVERS(), RANGE (businessdaydate ) (
PARTITION 20150101 <= VALUES < 20150115,
PARTITION 20150115 <= VALUES < 20150129,
PARTITION 20150129 <= VALUES < 20150212,
PARTITION 20150212 <= VALUES < 20150226,
PARTITION 20150226 <= VALUES < 20150312,
PARTITION 20150312 <= VALUES < 20150326,
PARTITION 20150326 <= VALUES < 20150409,
PARTITION 20150409 <= VALUES < 20150423,
PARTITION 20150423 <= VALUES < 20150507,
PARTITION 20150507 <= VALUES < 20150521,
PARTITION 20150521 <= VALUES < 20150604,
PARTITION 20150604 <= VALUES < 20150618,
PARTITION 20150618 <= VALUES < 20150702,
PARTITION 20150702 <= VALUES < 20150716,
PARTITION 20150716 <= VALUES < 20150730,
PARTITION 20150730 <= VALUES < 20150813,
PARTITION 20150813 <= VALUES < 20150827,
PARTITION 20150827 <= VALUES < 20150910,
PARTITION 20150910 <= VALUES < 20150924,
PARTITION 20150924 <= VALUES < 20151008,
PARTITION 20151008 <= VALUES < 20151022,
PARTITION 20151022 <= VALUES < 20151105,
PARTITION 20151105 <= VALUES < 20151119,
PARTITION 20151119 <= VALUES < 20151203,
PARTITION 20151203 <= VALUES < 20151217,
PARTITION 20151217 <= VALUES < 20151231,
PARTITION OTHERS);
6.2 Relocating Partitions
Move the partitions across the nodes with SQL statements like
alter table "SAP<SID>"."/POSDW/TLOGF"
move partition 1 to HANASRV2:3<instance_no>03 physical;
alter table "SAP<SID>"."/POSDW/TLOGF_EXT"
move partition 1 to HANASRV2:3<instance_no>03 physical;
6.3 Archiving
As the POS data is transferred from the POS via the WS middleware as individual IDocs in real time, roughly 50
million IDocs will be stored per day in CAR.
The attached document describes the archiving procedure for IDocs as well as for the TLOGF and TLOGF_EXT tables.
Relevant objects:
- Archiving TLOGF - Chapter 2, without deletion
- Archiving Inbound IDoc /POSDW/POSTR_CREATEMULTIPLE05 and optionally all Outbound IDocs
WPU* - Chapter 4
Please pay attention to system growth and empty/drop old partitions after the 45
days to keep the system memory at a stable, meaningful capacity level!
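As an illustration of this housekeeping, the following sketch identifies partitions whose date range lies entirely outside the 45-day retention window (the helper name, the partition-bound list and the example dates are hypothetical):

```python
from datetime import date, timedelta

RETENTION_DAYS = 45  # from the sizing assumptions in section 6

def expired_partitions(partitions, today):
    """Return partitions whose upper bound (exclusive) is older than the
    retention cutoff, i.e. candidates to be emptied/dropped.
    YYYYMMDD strings compare correctly in lexicographic order."""
    cutoff = (today - timedelta(days=RETENTION_DAYS)).strftime("%Y%m%d")
    return [p for p in partitions if p[1] <= cutoff]

# Example with the first three two-week ranges from section 6.1:
ranges = [("20150101", "20150115"),
          ("20150115", "20150129"),
          ("20150129", "20150212")]
print(expired_partitions(ranges, date(2015, 3, 16)))
# -> the first two ranges, which ended before the 45-day cutoff
```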
6.4 Important notes and transactions
Note 1719282 – POS TLOG Partitioning Information
§ Hash on client, retailstoreid, businessdaydate; range on businessdaydate
§ Required for initial partitioning
§ Basic information about partitioning for /POSDW/TLOGF and /POSDW/TLOGF_EXT
§ Determine the average number of "lines" per receipt in each table (default in sizing: 42)
§ Determine a period range that yields around 250-500 million rows in each partition
Note 1980718 – SAP CAR on SAP HANA SP06: Landscape redistribution
§ Automated distribution of partitions across nodes in a multi-node environment
§ Required for the initial distribution of partitions across the nodes
§ Required if nodes are added or removed
§ Transaction code: /POSDW/CLASSIFY_TLOG
Note 2014446 – Maintain POS Transaction tables level 2 range partitions
§ Report to create new or remove existing (empty) partition ranges
§ Transaction code: /POSDW/PARTITIONTLOG
7 Tips & Tricks
7.1 List of replicated tables
7.1.1 ERP tables to CAR
1900038 - Create ECC SLT tables in SAP Customer Activity Repository
1897024 - Replicate ERP tables for SAP Customer Activity Repository
1 SAP_ECC,ADR6
2 SAP_ECC,ADRC
3 SAP_ECC,CABN
4 SAP_ECC,CAWN
5 SAP_ECC,CAWNT
6 SAP_ECC,DD02L
7 SAP_ECC,DD02T
8 SAP_ECC,DD07L
9 SAP_ECC,DD07T
10 SAP_ECC,EINA
11 SAP_ECC,EKET
12 SAP_ECC,EKKO
13 SAP_ECC,EKPO
14 SAP_ECC,FRE_TS_REL_ST
15 SAP_ECC,LFA1
16 SAP_ECC,KLAH
17 SAP_ECC,KNA1
18 SAP_ECC,KONBBYH
19 SAP_ECC,KONBBYPRQ
20 SAP_ECC,KONBBYT
21 SAP_ECC,KONDN
22 SAP_ECC,KONMATGRP
23 SAP_ECC,KONMATGRPT
24 SAP_ECC,KONV
25 SAP_ECC,KOTN203
26 SAP_ECC,KSSK
27 SAP_ECC,MAKT
28 SAP_ECC,MARA
29 SAP_ECC,MARC
30 SAP_ECC,MARD
31 SAP_ECC,MARM
32 SAP_ECC,MAST
33 SAP_ECC,MBEW
34 SAP_ECC,MEAN
35 SAP_ECC,STPO
36 SAP_ECC,SWOR
37 SAP_ECC,T000
38 SAP_ECC,T001
39 SAP_ECC,T001K
40 SAP_ECC,T001W
41 SAP_ECC,T005
42 SAP_ECC,T005K
43 SAP_ECC,T005S
44 SAP_ECC,T005T
45 SAP_ECC,T005U
46 SAP_ECC,T006
47 SAP_ECC,T006A
48 SAP_ECC,T006D
49 SAP_ECC,T006T
50 SAP_ECC,T009
51 SAP_ECC,T009B
52 SAP_ECC,T023
53 SAP_ECC,T023T
54 SAP_ECC,T024E
55 SAP_ECC,T134
56 SAP_ECC,T134M
57 SAP_ECC,T134T
58 SAP_ECC,T171
59 SAP_ECC,T171T
60 SAP_ECC,T179
61 SAP_ECC,T179T
62 SAP_ECC,T685
63 SAP_ECC,T685T
64 SAP_ECC,T6WFG
65 SAP_ECC,T6WFGT
66 SAP_ECC,T6WSP
67 SAP_ECC,T6WST
68 SAP_ECC,TAUUM
69 SAP_ECC,TCLA
70 SAP_ECC,TCURC
71 SAP_ECC,TCURF
72 SAP_ECC,TCURM
73 SAP_ECC,TCURN
74 SAP_ECC,TCURR
75 SAP_ECC,TCURT
76 SAP_ECC,TCURV
77 SAP_ECC,TCURW
78 SAP_ECC,TCURX
79 SAP_ECC,TSAD3T
80 SAP_ECC,TSPA
81 SAP_ECC,TSPAT
82 SAP_ECC,TVAU
83 SAP_ECC,TVAUT
84 SAP_ECC,TVAK
85 SAP_ECC,TVAKT
86 SAP_ECC,TVAPT
87 SAP_ECC,TVKO
88 SAP_ECC,TVKOT
89 SAP_ECC,TVKOV
90 SAP_ECC,TVPT
91 SAP_ECC,TVTW
92 SAP_ECC,TVTWT
93 SAP_ECC,TWAA
94 SAP_ECC,TWAAT
95 SAP_ECC,TWAT
96 SAP_ECC,TWATT
97 SAP_ECC,TWPT
98 SAP_ECC,TWPTT
99 SAP_ECC,TWTY
100 SAP_ECC,TWTYT
101 SAP_ECC,TWZLA
102 SAP_ECC,USRBF2
103 SAP_ECC,UST12
104 SAP_ECC,VBAK
105 SAP_ECC,VBAP
106 SAP_ECC,VBEP
107 SAP_ECC,VBKD
108 SAP_ECC,VBRK
109 SAP_ECC,VBRP
110 SAP_ECC,VBUK
111 SAP_ECC,VBUP
112 SAP_ECC,VEDA
113 SAP_ECC,WAKH
114 SAP_ECC,WAKP
115 SAP_ECC,WAKR
116 SAP_ECC,WAKT
117 SAP_ECC,WALE
118 SAP_ECC,WLK2
119 SAP_ECC,WRF_BRANDS
120 SAP_ECC,WRF_BRANDS_T
121 SAP_ECC,WRF_CHARVAL
122 SAP_ECC,WRF_CHARVALT
123 SAP_ECC,WRF_CHARVAL_HEAD
124 SAP_ECC,WRF_MATGRP_HIER
125 SAP_ECC,WRF_MATGRP_HIERT
126 SAP_ECC,WRF_MATGRP_PROD
127 SAP_ECC,WRF_MATGRP_SKU
128 SAP_ECC,WRF_MATGRP_STRCT
129 SAP_ECC,WRF_MATGRP_STRUC
130 SAP_ECC,WRF1
Separate note
131 TWSAI
132 WRFST_TYP
133 WRFT_APC_CDT
7.1.2 CRM tables to CAR
1938004 - Create CRM tables in SAP Customer Activity Repository
1897025 - Replicate CRM tables for SAP Customer Activity Repository
•ADRC
•BUT000
•BUT021_FS
•CRMM_BUT_CUSTNO
•LOYD_CRD_CARD
•LOYD_MSH_MEMS
•LOYD_MSH_MS_TIER
7.1.3 ERP tables to BW
Replicate the ECC table into BW schema SAP_ECC (via schema mapping):
SAP_ECC.T001
SAP_ECC.TCURF
SAP_ECC.TCURN
SAP_ECC.TCURR
SAP_ECC.TCURV
SAP_ECC.TCURX
SAP_ECC.BKPF
SAP_ECC.BSAD
SAP_ECC.BSID
SAP_ECC.DD02L
SAP_ECC.DD02T
7.2 Maintain DB connection in SLT
Use transaction DBCO to display and maintain DB connection settings.
7.3 Reload data, but don't re-create table in HANA
Use case:
A load ended in ERROR and you want to keep the target table structure and do e.g. one of the following:
• re-create the error
• check if the error still exists
• restart the load once the error has been corrected
Why only a data reload?
• the table structure could have been changed in HANA by the customer
• 'stop' + 'replicate' in HANA Studio leads to a drop/re-create of the table (customer changes are lost)
Process
This only works if the table is still in 'load' status (TA IUUC_SYNC_MON -> relevant tables -> process option =
COMPLETE; it does not work if in DELTA)!
• delete the data in HANA (only the data, not the table):
• right-click on the table
• select 'Delete'
• select 'Delete All Rows'
• reset the FAILED/ERROR flag in SLT (TA MWBMON -> Expert Functions -> Reset all Loaded Indicator;
set ACP ID to 1)
• the reload restarts automatically
7.4 Transformation rules during replication
In the current scenario the data is replicated from client to client without changing any information such as the
client field.
Transaction LTRS allows you to apply rules of any kind during the replication process, for example mapping the
ERP client to the CAR client.
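Conceptually, such a client-mapping rule behaves like the sketch below. Note this is purely illustrative: real SLT rules are maintained as ABAP coding in transaction LTRS, and the field name MANDT plus the example client values are assumptions.

```python
# Hypothetical illustration of what a client-mapping rule does conceptually.
CLIENT_MAP = {"100": "200"}  # ERP client -> CAR client (example values)

def map_client(row: dict) -> dict:
    """Return a copy of the row with the client field remapped; rows from
    unmapped clients pass through unchanged."""
    mapped = dict(row)
    mapped["MANDT"] = CLIENT_MAP.get(row["MANDT"], row["MANDT"])
    return mapped

print(map_client({"MANDT": "100", "MATNR": "ART0001"}))
# -> {'MANDT': '200', 'MATNR': 'ART0001'}
```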
