
An in-memory database is one that stores all of its data in main memory (RAM) rather than on disk.

For your CPU to start executing your program, the required information must be available
in your RAM. If the program was not called recently, the information will not be available
there and needs to be fetched from the persistent disk – your hard drive.
Consequently, when you call a program for the first time, the information is loaded into the
RAM and then processed by the CPU. The second time you call it, it is already
there in the RAM and hence gets processed really fast.
RAM access is really fast and any data present there is referred to as data in-memory.
Now before you throw your disks out for being a disappointment, it’s important to
understand that the RAM is volatile memory, i.e. it loses its data on loss of power. Thus,
it’s important for backups to be taken to persistent disks. Backups are scheduled jobs
executed as per the configurations to make sure no data is lost in case of SAP HANA DB
down times.

The beauty of SAP HANA lies in the fact that it does most of its calculations in-memory
at the database layer instead of the application layer as done traditionally. SAP HANA is
not just a database. It consists of different engines that crunch calculations efficiently
and return results to the application layer.

In the column store, each column in the table acts as an individual table and gets stored
separately. When each column acts as an individual table, each of these individual
mini-tables can be indexed (= sorted) and compressed (= the process of removing
duplicates).

Property      | Row Store              | Column Store | Reason
Memory usage  | Higher                 | Lower        | Compression
Transactions  | Faster                 | Slower       | Modifications require updates to multiple column tables
Analytics     | Slower even if indexed | Faster       | Smaller dataset to scan, inherent indexing
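If you ever want to pick the storage type explicitly, both can be requested at table creation time. A minimal sketch, assuming a schema named SHYAM and made-up table names:

CREATE COLUMN TABLE "SHYAM"."SALES_COL" ("ID" INTEGER, "AMOUNT" DECIMAL(15,2));
CREATE ROW TABLE "SHYAM"."SALES_ROW" ("ID" INTEGER, "AMOUNT" DECIMAL(15,2));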


SAP HANA is not just a database – it’s an appliance which consists of different servers.
The index server is the primary component, but there are several other servers as well.
Let’s understand what each of these servers is responsible for.
SAP HANA Index Server

The most important server in the architecture is the Index server. The actual data is
contained here and the engines that process this data are also present in the index server
itself. It has multiple components and its architecture will be taken up as a separate topic
in the next tutorial.
SAP HANA Name Server
Stores information about the topology of the SAP HANA System. SAP HANA can also run
on multiple hosts. In these cases, the name server knows where each component runs
and also knows which data is located on which server.
SAP HANA Preprocessor Server
Whenever a text analysis functionality is called upon by the index server, the preprocessor
server answers such requests.
SAP HANA Statistics Server
This collects data about the resource allocation, performance and status of the SAP HANA
system. It provides a regular health check of the HANA appliance.
SAP HANA XS Server / XS Engine
The XS Engine allows external applications to access the data models in the SAP HANA
database via XS Engine clients. It transforms the persistence model in the database into
a consumption model that is exposed to external clients via HTTP/HTTPS.

The IMCE (In-Memory Computing Engine) is another name for the index server. The data is
contained and processed in this server. Its main components are described below.

Connection/Session Management
To work with the SAP HANA database, users must use an application of their choice. This
component creates and manages these sessions for the database clients. SQL Statements
are used to communicate with the HANA Database.
The Authorization Manager
The authorization manager makes sure that data security is enforced based on the user
roles/authorizations that have been granted to the database user ID.

Replication Server
The replication server is responsible for replicating the table data and metadata (structure)
from the source system.
Metadata Manager
The term “metadata” stands for data about data. This includes information about table
structures, view structures, datatypes, field descriptions and so on. All this metadata is
stored and maintained by the Metadata Manager.
Transaction Manager
This component manages database transactions and keeps track of running and closed
transactions. It co-ordinates with the other engines on database COMMITs and
ROLLBACKs.

Request Processing and Execution Control


This is an important block of components that receives the request from
Applications/Clients and directs it to the correct sub-block for further processing.
The sub-blocks are as listed below.
SQL Processor
SQL requests to the SAP HANA in-memory computing engine are processed by this component.
Any kind of insertion, update and deletion of datasets is handled by this processor.
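As a rough sketch of the kind of DML the SQL processor handles (the CUSTOMERS table, its columns and the values below are hypothetical):

INSERT INTO "SHYAM"."CUSTOMERS" ("CUST_ID", "FIRST_NAME", "REVENUE_USD") VALUES ('C001', 'Shyam', 5000);
UPDATE "SHYAM"."CUSTOMERS" SET "REVENUE_USD" = 6000 WHERE "CUST_ID" = 'C001';
DELETE FROM "SHYAM"."CUSTOMERS" WHERE "CUST_ID" = 'C001';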
SQLScript
This block symbolizes the internal language of the SAP HANA Database. SAP HANA
SQLScript optimizes operations by parallel processing of queries.
Multidimensional Expressions (MDX)
The MDX language is used to query and manipulate multidimensional OLAP cubes.
SAP HANA Planning Engine
This engine allows HANA to execute organizational planning operations in the
database layer. The scope of these applications can range from simple manual data entry
through to complex planning scenarios.
SAP HANA Calculation engine
After initial processing by the SQLScript, MDX and planning engines, the data models are
converted into calculation models, which create an optimal, parallel-processing-enabled
logical execution plan.
The Row Store
As discussed in the initial tutorials, the row store is a row-based storage of data in a
serial manner.
The Column Store
Column-based storage is a hallmark of in-memory databases and enables faster querying of data.
Revisit the row store vs. column store tutorial to know more.
Persistence Layer & Disk Logger

We learnt in the first tutorial that SAP HANA is an in-memory database which is similar to
the RAM of a PC that you may use. This also means that the main memory in SAP HANA
is volatile, i.e in case of a power outage or restart, all data in it would be lost. Thus, there
is a persistence layer to periodically save all the data in a permanent/persisted manner.
Logs of the system are stored in log volumes, whereas data volumes store SQL data,
undo log information and SAP HANA information modeling data.
The SAP HANA in-memory computing engine saves all data changes to the persistent disk
at periodic points in time called savepoints. The default frequency of these savepoints is every 5
minutes, which can be changed as per requirement. If a system restart or power outage
ever occurs, data from the last successful savepoint can be read from the data volumes,
and the redo log entries written to the log volumes can be replayed.
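If you ever need to check or adjust this frequency, it is controlled via global.ini. A minimal sketch, assuming the standard persistence/savepoint_interval_s parameter (value in seconds):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('persistence', 'savepoint_interval_s') = '300' WITH RECONFIGURE;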

Smart data access (SDA) may not be exactly a data provisioning technique, as no data is
being replicated – this method replicates only the table metadata from the
source, not its data. This means that you get the table structure, on which you can
create a “Virtual table” for your developments. This table looks and feels exactly like the
source table, with the difference that the data that appears to be inside it is actually not
stored in HANA but remotely in its source system.
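A minimal SQL sketch of how such a virtual table comes into being. The remote source name, the connection details and the source table VBAK are assumptions for illustration only:

CREATE REMOTE SOURCE "ERP_SOURCE" ADAPTER "hanaodbc"
CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=remotehost:30015'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=REMOTE_USER;password=secret';

CREATE VIRTUAL TABLE "SHYAM"."VT_VBAK" AT "ERP_SOURCE"."<NULL>"."SOURCE_SCHEMA"."VBAK";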

A workspace is nothing but a folder where all your offline work and configurations get
saved.

There are four sections or folder-like icons that you see here. Let’s discuss their relevance.
1. Catalog: This is where all the source metadata (Tables, views etc) is grouped under.
Here you can do data previews on source system tables that have been replicated or
available as virtual tables in case of SDA.
2. Content: This is where all your HANA development takes place. The HANA models that
you create go under here.
3. Provisioning: This is mostly used for Smart data access. All the source systems
connected via SDA will have their tables displayed here like a “Menu Card” in a
restaurant. You can choose which one you want and build a virtual table for it in the
“Catalog” section we discussed just now.
4. Security: This is mostly for security consultants to maintain users and roles according
to your role in the project – Developers, administrators, testers and so on.

Prior to the advent of HANA, a model called extended star schema was used in SAP BW
which was the best practice for BW running on a legacy database. With BW on HANA, it is
no longer relevant and is not discussed in this tutorial. As of BW on HANA 7.5 and
Enterprise HANA SP11, a lean star schema approach is what must be followed in all data
models. Well, BW does it by itself anyways when on HANA so you can leave it up to the
application.

Schemas are usually used to group tables of a similar source or similar purpose. Each
username gets its own schema by default, but you can also create your own.
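Creating one is a one-liner; the schema name below is just an example:

CREATE SCHEMA "SALES_DATA";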

Note: Any system object like table names, field names, procedure names etc. should be
enclosed in double quotes, and any values for character fields like ‘Apples’ go in single
quotes.
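A small sketch to illustrate the quoting rules (the PRODUCTS table and its columns are hypothetical):

SELECT "PROD_NAME", "PRICE"
FROM "SHYAM"."PRODUCTS"
WHERE "PROD_NAME" = 'Apples';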

Note: Null, shown as ‘?’ in HANA’s default settings, denotes the non-existence of a value for
that field. Null is not zero or blank ‘’ but a symbol for the non-existence of any value.

The package name is also prefixed to the table name automatically. That’s something
exclusive to HDB tables.

CDS-based tables are the best practice as of this date. These can be HDBDD tables (described in the next
tutorial) or HDBCDS tables (for XSA-based projects).

A CDS table is created by creating a file in the Content folder.

namespace TEACHMEHANA;

@Schema: 'SHYAM'
context TABLES {
    entity CUST_REV_CDS {
        CUST_ID     : String(10);
        FIRST_NAME  : String(20);
        LAST_NAME   : String(20);
        REVENUE_USD : Integer;
    };
};

SAP HANA Type (hdbtable) | CDS Type (hdbdd)
NVARCHAR                 | String
SHORTTEXT                | String
NCLOB                    | LargeString
TEXT                     | LargeString
VARBINARY                | Binary
BLOB                     | LargeBinary
INTEGER                  | Integer
INT                      | Integer
BIGINT                   | Integer64
DECIMAL(p,s)             | Decimal(p,s)
DECIMAL                  | DecimalFloat
DOUBLE                   | BinaryFloat
DAYDATE                  | LocalDate
DATE                     | LocalDate
SECONDTIME               | LocalTime
TIME                     | LocalTime
SECONDDATE               | UTCDateTime
LONGDATE                 | UTCTimestamp
TIMESTAMP                | UTCTimestamp
ALPHANUM                 | hana.ALPHANUM
SMALLINT                 | hana.SMALLINT
TINYINT                  | hana.TINYINT
SMALLDECIMAL             | hana.SMALLDECIMAL
REAL                     | hana.REAL
VARCHAR                  | hana.VARCHAR
CLOB                     | hana.CLOB
BINARY                   | hana.BINARY
ST_POINT                 | hana.ST_POINT
ST_GEOMETRY              | hana.ST_GEOMETRY

This table can also be loaded using the hdbti method in the same way as the hdb tables.

How is HDBDD different from the HDBTABLE method? What do you mean by reusing datatypes and increasing
maintainability? Can’t we do the same thing with HDBTABLE?

In one HDBDD file, I can declare and define multiple tables, but in an HDBTABLE file, I can only declare one table.
In HDBDD, I can also define a datatype, let’s say apple, which is of type varchar(20).
Now if varchar(20) is a frequently used datatype, I can declare fields with the datatype apple, which will be the same as
varchar(20). Of course, this is a simple example and there are further complex use cases, as sketched below.
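A minimal HDBDD sketch of this reuse, assuming a file called TYPES_DEMO.hdbdd in the TEACHMEHANA package (the type and entity names are made up):

namespace TEACHMEHANA;

@Schema: 'SHYAM'
context TYPES_DEMO {
    // reusable datatype declared once
    type ShortText : String(20);

    entity CUSTOMERS {
        key CUST_ID : String(10);
        FIRST_NAME  : ShortText;   // reuses the type
        LAST_NAME   : ShortText;
    };

    entity PRODUCTS {
        key PROD_ID : String(10);
        PROD_NAME   : ShortText;   // same type reused in a second entity
    };
};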

Import flat file (CSV) with HDB Table – Table Import Configuration

This method only works with HDB tables. The CSV file will reside on your SAP HANA server
and not on your local desktop. Open HANA Studio and go to the Repositories tab. If you don’t
see this tab, you are not in the developer perspective.

Linking the SAP HANA HDB table with this CSV file –
HDBTI configuration
To do this, stay in the repository tab and right click on your package. Again select New->
Other as shown below.
Write ‘configuration’ in the search bar as shown below. A list of options will open up. Click
on Table Import Configuration inside the Database Development folder and press Next.

The syntax of this hdbti file is as given below:

import = [
    {
        hdbtable = "<hdbtable_package_path>::<hdbtable_name>";
        file = "<csvfile_package_path>:<csvfilename>";
        header = <header_existence>;
        delimField = "<delimiter_symbol>";
    }
];
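For reference, a filled-in sketch of such a file; the package, table and CSV file names below are assumptions:

import = [
    {
        hdbtable = "TEACHMEHANA::CUST_REV";
        file = "TEACHMEHANA:CUST_REV_DATA.csv";
        header = false;
        delimField = ",";
    }
];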
Operator | Description
=        | Equal
<>       | Not equal
>        | Greater than
<        | Less than
>=       | Greater than or equal
<=       | Less than or equal
BETWEEN  | Between an inclusive range
LIKE     | Search for a pattern
IN       | Matches any value in a given list of values for a column

Wildcard Characters: % and _

SQL wildcards must be used with the SQL LIKE operator. % can be filled with any number of
characters. For example, using LIKE ‘%am’ means that it will return all data where the
string ends with am. The data output may be ‘shyam’, ‘xysfasdaam’ or just ‘am’.
_ has to be filled with exactly one character. For example, using LIKE ‘_hyam’ can return
data where the string ends with hyam. So the output can be shyam, xhyam, yhyam.

These can be used at the start of the string, at the end of it, and in between as well. You can also
combine them, as in LIKE ‘s__%’. This one tells the SAP HANA system that the string we are looking for
should be at least of length 3, should start with a lowercase s, and can have more
characters at the end due to the % wildcard. So, the result can be shyam, s12, s56testchar
and so on.
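A quick sketch of how this looks in a query (the CUSTOMERS table and FIRST_NAME column are assumptions):

SELECT "FIRST_NAME"
FROM "SHYAM"."CUSTOMERS"
WHERE "FIRST_NAME" LIKE 's__%';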

NOT LIKE example

SQL alias for table names

We already learnt about field aliases, where we rename individual fields or formulas to
some other name that we desire. Similarly, we can create table aliases, where we give the
table name some other (usually shorter) name so that it’s easier to type when used
again and again in the same statement. This also increases code readability.

For example, to select employee ID and first name from the EMP_NAMES table, we can provide
an alias by placing the alias (ename in this case) after the table name, separated by
a blank space. Then you can also write the EMP_ID field name as ename.EMP_ID, which tells
the system that you need to pick up the EMP_ID field from ename, which is an alias
of the EMP_NAMES table. It makes no sense to do this when there is only one table in the
select statement, since the system knows it needs to get this field from the only available
table. But when you have two or more tables, it is always better to use aliases: if you have
the same field in multiple tables, the alias specifies where to pick it from.
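A minimal sketch of such an alias, using the EMP_NAMES table mentioned above (the SHYAM schema is assumed):

SELECT ename."EMP_ID", ename."FIRST_NAME"
FROM "SHYAM"."EMP_NAMES" ename;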
Now, what if you needed to add a filter too? The WHERE condition should come ahead of the
GROUP BY. In this example, I add the filter to pick only country codes IN and FR. As
always, field names are in double quotes, character values in single quotes, and integer
values without quotes.

There also comes a need at times to sort one or more columns of data. We use the ORDER BY
clause for this operation. ORDER BY should appear at the end of your select statement, after
your WHERE, GROUP BY and HAVING clauses if any or all of them exist.
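Putting the clauses together, a sketch assuming a CUSTOMER_REV table with COUNTRY and REVENUE columns:

SELECT "COUNTRY", SUM("REVENUE") AS "TOTAL_REVENUE"
FROM "SHYAM"."CUSTOMER_REV"
WHERE "COUNTRY" IN ('IN', 'FR')
GROUP BY "COUNTRY"
ORDER BY "TOTAL_REVENUE" DESC;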

In HANA views, we need some placeholder to hold the tables. Such placeholders are called
“Projections”.

In calculation view, if you set the data category as CUBE, the default node is Aggregation.
You can change the default node to Star Join by selecting the checkbox.

If you are working with tables from SAP source systems, they will most probably have the
MANDT field as explained earlier. If such tables need to be read across clients, change
the Default Client setting to “Cross Client” instead of “Session Client” as shown by the
green arrow below.

Also on the Execute In drop down, select “SQL Engine” for best performance.
The client is a logical partition ID of the database. Just like your laptop hard disk can be
partitioned into multiple drive IDs like C:, D: and so on, there can be multiple SAP
applications installed on different partitions of the same database. Thus, it’s important to
tell the system which partition the BW application you are trying to connect to resides
on.

Now, try to apply a filter in the Join node as shown below. Notice that there is no ‘Apply
filter’ option. This is just to emphasize the fact that only Projections allow filters. If you
need a filter on a join result, you would need to add another projection after the join and
filter it there.

If a dynamic filter needs to be applied after all results have been calculated (top-level
filter) , SAP HANA variables are the answer. By dynamic filter, I mean that when executed,
this view would ask the user for filter values. In this case, the executing user has full
control of the filter being applied.
if a field has not been selected from the table, it will not be available to act as an attribute
for creating a new variable. This happens because SAP HANA variables are created on the
top level of the node and at the top, only selected fields exist.

An important property that you might have observed from the execution of SAP HANA
variables is that the filter is applied after the view finishes its calculations. For example,
let’s say a view pulls up 1 million records and you now apply a variable on it at the top,
which causes it to return only 1 record on execution. The filter was only applied after the
1 million records were already processed, just before the output.

HANA throws us a warning that from now on, filters for this projection can only be
maintained as expression code. This means that the right-click + apply filter
functionality will no longer work in this node. Every time you need a filter (even a static one), you
would have to add a small bit of code here. We do this because input parameters can only
be added as filters via the expression editor. Press OK to move ahead.

Notice that, whenever used in code, the input parameter is always enclosed in single quotes
with double dollar signs on either side.
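For example, a filter expression using an input parameter named P_AUART (the one used later in this tutorial) would look roughly like this in the expression editor:

"AUART" = '$$P_AUART$$'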
Major differences between Input parameters and
variables in SAP HANA
The differences are:
1. Variables apply filter after execution of all nodes till the semantics (at the top level)
whereas Input parameters can apply filters at any projection level.
2. Variables are bound to attributes/specific fields whereas an input parameter is
independent of any field in the view.
3. Variables have a sole purpose of filtering data whereas filtering is only one of the
reasons to use an input parameter.

Input parameters can be used in dynamic calculations to make any statement dynamic. A simple example would be
to have an input parameter called bonus_factor. Let’s say there’s a new calculated field BONUS which should be
equal to SALARY * bonus_factor. So bonus_factor comes in as part of user input and helps derive a calculated
field. Variables can’t do this.
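A sketch of how bonus_factor could be supplied at query time; the view name and package below are assumptions:

SELECT "EMP_ID", "SALARY", "BONUS"
FROM "_SYS_BIC"."teachmehana/CV_SALARY"
('PLACEHOLDER' = ('$$bonus_factor$$', '1.5'));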

Variables are a bit tricky. They may or may not push down the filter based on how that field gets propagated from
the source to the target. There are many factors: types of joins, whether any calculations were performed on it, and
so on. Variables are propagated to SAP reporting tools like WebI but don’t work with 3rd party tools like Cognos. I
use input parameters extensively and don’t really use variables, but as I said, in SAP BusinessObjects based
reporting scenarios, they may work out.

Concatenate more strings to it using the + operator. The plus (+) operator works as a
string concatenator for string fields and as an arithmetic plus for numerical datatypes like
decimals and integers. We needed AUART and WAERK to be separated by a dash. Hence,
after the plus symbol, we add a dash surrounded by single quotes since it is a character
value. Then add another plus for the next field, WAERK. Double click WAERK to add it at
the end too. The end result should resemble the below.
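As a sketch, the finished expression would look roughly like this:

"AUART" + '-' + "WAERK"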

Calculated columns can be created at any node.

Restricted column
P_AUART is an input parameter.

Rank Node

The first setting is the sort direction. Here you specify whether you wish to sort it with the
highest value first or with the lowest value by choosing Descending or Ascending
respectively. Since we need to pick up the maximum score, we keep this at Descending.

Next, we set the “Order By” field. This is the field we need to sort Descending (as per our
previous setting). In our case, this field is SCORE. We need to sort SCORE descending to
find out the top score.
Next, we need to set the “Partition By” column. This is the field we wish to group by. Thus,
we sort SCORE descending within each EMP_ID, and then the first row for each such
employee ID would be his/her top score.
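In plain SQL terms, the Rank node is doing roughly the following (the EMP_SCORES table and SCORE_DATE column are assumptions):

SELECT "EMP_ID", "SCORE", "SCORE_DATE"
FROM (
    SELECT "EMP_ID", "SCORE", "SCORE_DATE",
           RANK() OVER (PARTITION BY "EMP_ID" ORDER BY "SCORE" DESC) AS "RNK"
    FROM "SHYAM"."EMP_SCORES"
) "RANKED"
WHERE "RNK" = 1;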

Once done, save and activate the view. Now, check the data preview to confirm the
results. As seen below, the view has ranked and picked up only the top score of each
employee and also the date on which they achieved this feat. Congratulations, employee
1003!

Aggregation Node
Aggregation Types
Let’s channel our curiosity and try to switch this setting so that we get the average of all
available SCORE values for each employee. All the available values for aggregation types
are as shown below. The common ones are COUNT, MIN, MAX, AVG which are used to find
the count, minimum value, maximum value and average values of measures respectively.
VAR and STDDEV are Variance and Standard deviations for advanced statistical analysis
in rare cases.
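The SQL equivalent of that setting is a simple sketch like this (EMP_SCORES is an assumed table name):

SELECT "EMP_ID", AVG("SCORE") AS "AVG_SCORE"
FROM "SHYAM"."EMP_SCORES"
GROUP BY "EMP_ID";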

Measures aren’t the only fields capable of getting aggregated. The aggregation node also
helps remove duplicates. When two rows in the incoming data set contain the same data,
the aggregation node also works as a duplicate row remover. For example, from
Projection_1 let’s disable all fields except EMP_ID.

The UNION node here doesn’t work like a SQL UNION. It works as a UNION ALL operator.
This means that it keeps piling one data set below the next without aggregating them.

We finally have the option of right-clicking and creating a new schema, by the way, in the HANA
Web IDE. I always wondered why they missed that in HANA Studio.

The SAP HANA scripted calculation view is a two-node structure. The bottom node is where
you write the HANA SQLScript, and the semantics node, as always, is the output node.
There cannot be any additional nodes in this data flow. All the necessary logic must be
written into the Script_View node.
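A minimal sketch of what such a Script_View body could look like, assuming the EMP_SCORES table used earlier and an output structure with matching columns:

/********* Begin Procedure Script ************/
BEGIN
    var_out = SELECT "EMP_ID", "SCORE"
              FROM "SHYAM"."EMP_SCORES";
END
/********* End Procedure Script ************/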

In Attribute View, a “Data Foundation” is a node which you cannot remove. It is purely
used to include tables into the view. You can use a singular table here or have more by
specifying the join condition inside this “Data Foundation”. There is no separate JOIN node
that you can insert here. You cannot insert views into the “Data Foundation”. It only
accepts tables.

In the analytic view, we have a Semantics node as always, a data foundation where tables
are added, and in addition there is a Star Join node which, as you see below, already has
the Data Foundation as one of its inputs by default.
As in the attribute view, the data foundation only accepts tables. No views can be added
here to the join. But, in an analytic view, there must be exactly one central transaction table.
This means that you can add more transaction tables to the join provided they only supply
attributes and all of their measure fields are disabled. Usually, there is only one transaction
table and other master data tables. The result of this join passes on to the Star join.
The star join contains the data foundation already. It also accepts attribute views but no
individual tables can be added here. It is called a star join because an analytic view is
basically a star schema structure in itself –a central transactional data table surrounded by
master data.

A Delivery Unit is a container used by the Life Cycle Manager (LCM) to transport repository objects
between SAP HANA systems.

Hierarchies are used to structure and define the relationships among attributes in a
modeling view.
Organizations define hierarchies for information classification, allowing roll-up
and drill-down analysis. For example, a sales organization might allocate a
sales person to a country and a country to a region. Sales data can then be
aggregated and analyzed by region, country, or sales person.
There are two types of hierarchies:

Level Hierarchies are hierarchies that are rigid in nature, where the root
and the child nodes can be accessed only in the defined order. For example,
organizational structures, and so on.
Parent/Child Hierarchies are value hierarchies, that is, hierarchies derived
from the value of a node. For example, a Bill of Materials (BOM) contains
Assembly and Part hierarchies, and an Employee Master record contains
Employee and Manager data. The hierarchy can be explored based on a
selected parent; there are also cases where the child can be a parent.
