Over twenty major techniques, new and modified, were needed. Interestingly, only
one was about using memory rather than disks; there was much more to it than that.
Exploiting modern microchip features that appeared from about 2005 onward was
key: this could yield speed improvements of 100,000 times or more.
The dramatic speed increase eliminated the need for aggregates and indexes in
applications; other research told us these caused 60-95% or more of application
complexity, so removing them is a massive simplification. The speedup is so great
that even after producing aggregates dynamically on the fly, there is still enough
performance to spare to be hundreds or thousands of times faster than traditional systems.
But this had to be based on column stores, which are notoriously difficult to update,
so a unique technique was invented that is suitable for running thousands or millions
of transactions per second.
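One widely used way to make a column store updatable is a delta-merge scheme: writes land in a small write-optimized delta buffer, reads combine the delta with the read-optimized main store, and a background merge periodically folds the delta into the main store. The sketch below is purely illustrative, not SAP HANA's actual implementation:

```python
# Minimal sketch of a delta-merge column store (illustrative only,
# not SAP HANA's actual implementation).
class ColumnStore:
    def __init__(self, columns):
        # Main store: read-optimized, one contiguous list per column.
        self.main = {c: [] for c in columns}
        # Delta store: write-optimized append-only buffer for fast inserts.
        self.delta = {c: [] for c in columns}

    def insert(self, row):
        # Transactions append to the small delta store, so writes stay cheap.
        for col, value in row.items():
            self.delta[col].append(value)

    def scan(self, col):
        # Reads see the union of main and delta, so queries are always current.
        return self.main[col] + self.delta[col]

    def merge(self):
        # Periodically fold the delta into the main store, restoring the
        # read-optimized layout without blocking new inserts for long.
        for col in self.main:
            self.main[col].extend(self.delta[col])
            self.delta[col] = []

store = ColumnStore(["id", "amount"])
store.insert({"id": 1, "amount": 100})
store.insert({"id": 2, "amount": 250})
total_before = sum(store.scan("amount"))  # scans delta + main
store.merge()
total_after = sum(store.scan("amount"))   # same result after the merge
```

The key design point is that query results are identical before and after the merge; the merge only changes where the data physically lives.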
Thus Hybrid Transactional and Analytical Processing (HTAP) became possible, with
both types of processing being done on the same single copy of the data, which
allows for huge landscape simplification. Operational reporting can now return to the
operational systems, with large system savings and the agility of reporting on real-time
operational data.
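The single-copy HTAP idea can be sketched with an in-memory SQLite database standing in for HANA: transactional inserts and an on-the-fly analytic aggregate run against the very same rows, with no precomputed aggregate table or index in between:

```python
import sqlite3

# HTAP sketch: transactional writes and analytic reads against the same
# single in-memory copy of the data (sqlite3 stands in for HANA here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")

# OLTP side: individual transactions insert rows.
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APJ", 50.0)])
conn.commit()

# OLAP side: the aggregate is computed on the fly from the same rows;
# no pre-built aggregate table is maintained.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APJ', 50.0), ('EMEA', 200.0)]
```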
The speedup also benefited text processing, spatial processing, planning, predictive
analytics, and graph processing; all can be mixed together at will, in-memory and in parallel.
This allows for greater agility and productivity, not just through the speed of
execution but by having all these techniques at your fingertips, ready to be used and
combined.
The SAP development organization, working 24/7 following the sun in multiple
centers around the world, expanded the research into today's fully featured HANA
Platform.
This work produced an enterprise system that is fundamentally simpler, and
that enables high productivity, much greater agility, higher performance and lower
TCO.
Capabilities
1. Functional capabilities
High-performance computing
Leverage the latest hardware and software innovations to accelerate performance for
all applications.
Comprehensive data processing
Embed multiple data processing engines and predictive libraries to maximize value
from Big
Data and the Internet of Things (IoT).
OLAP and OLTP support
Allow processing for transactional and analytic workloads on the same system with
online transaction processing (OLTP) and online analytical processing (OLAP).
Administration and security
Monitoring system health and providing network security are key tasks for
administrators. They're both built into SAP HANA.
Integration services
All of your data sources can be integrated into SAP HANA to complement your
SAP HANA applications or to perform in-depth analyses.
Technical capabilities
SAP HANA Enterprise 1.0 is an in-memory computing appliance that combines SAP
database software with pre-tuned server, storage, and networking hardware from one of
several SAP hardware partners. It is designed to support real-time analytic and
transactional processing.
What technical components make up HANA?
The heart of SAP HANA Enterprise 1.0 is the SAP In-Memory Database 1.0, a massively
parallel processing data store that melds row-based, column-based, and object-based
storage techniques. Other components of SAP HANA Enterprise 1.0 include:
SAP In-Memory Computing Studio,
SAP Host Agent 7.2,
SAPCAR 7.10,
Sybase Replication Server 15,
SAP HANA Load Controller 1.00, and,
SAP Landscape Transformation 1 - SHC for ABA.
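The hybrid row-based/column-based storage mentioned above can be sketched in a few lines. The column layout keeps each attribute contiguous, so an analytic aggregate touches only the one column it needs; the row layout keeps each record contiguous, which suits transactional lookups. This is an illustrative model only:

```python
# Row-based vs column-based layout for the same table (illustrative sketch).
rows = [  # row store: each record is contiguous; good for OLTP lookups
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APJ",  "amount": 250},
    {"id": 3, "region": "EMEA", "amount": 50},
]

# Column store: each attribute is contiguous; an aggregate reads only the
# column it needs, which is what makes analytic scans fast.
columns = {key: [r[key] for r in rows] for key in rows[0]}

total_row_store = sum(r["amount"] for r in rows)  # touches whole records
total_col_store = sum(columns["amount"])          # touches one column
```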
SAP HANA runs the SUSE Linux Enterprise Server 11 SP1 operating system. It is
generally delivered as an on-premise appliance and is available now.
Concepts and support
SAP HANA is designed to quickly replicate and ingest structured data from SAP and non-SAP
relational databases, applications, and other systems. One of three styles of data
replication (trigger-based, ETL-based, or log-based) is used, depending on the source
system and desired use case. The replicated data is then stored in RAM rather than loaded
onto disk, the traditional form of application data storage. Because the data is stored
in-memory, it can be accessed in near real time by analytic and transactional applications that
sit on top of HANA.
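Of the three replication styles, the trigger-based one is the easiest to sketch: a trigger on the source table captures every change into a log table, which a replicator process would then drain and apply to the target system. The example below uses SQLite purely to illustrate the mechanism, not any actual HANA replication component:

```python
import sqlite3

# Trigger-based replication sketch: a database trigger captures each change
# into a log table; a replicator would ship that log to the target system.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE repl_log (op TEXT, id INTEGER, name TEXT);
    CREATE TRIGGER trg_customers_ins AFTER INSERT ON customers
    BEGIN
        INSERT INTO repl_log VALUES ('INSERT', NEW.id, NEW.name);
    END;
""")

# Normal transactional writes; the trigger logs them as a side effect.
src.execute("INSERT INTO customers VALUES (1, 'Acme')")
src.execute("INSERT INTO customers VALUES (2, 'Globex')")

# The replicator drains the log and applies it to the target.
changes = src.execute("SELECT op, id, name FROM repl_log").fetchall()
```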
What applications does HANA support?
SAP has delivered several next-generation, targeted analytic applications designed
specifically to leverage the real-time functionality offered by HANA, including SAP
Smart Meter Analytics and SAP CO-PA Accelerator. It is developing others focused on
analytics related to retail, financial, telecommunications, and other industries and to
horizontal use-cases such as human capital management.
HANA is highly optimized to interface with the SAP BusinessObjects portfolio of
reporting, dashboarding, and other analytic products. SAP plans to add HANA support for
Business Warehouse in November, eliminating the need for Business Warehouse
Accelerator in most customer environments. SAP is also migrating Business ByDesign,
SAP's on-demand business suite for small and mid-sized businesses, onto HANA.
Currently, HANA does not easily support non-SAP analytic or transactional applications
without significant application re-architecting.
What does HANA cost and how large can it scale?
SAP has not publicly released specific pricing information regarding HANA, but early
estimates indicate customers can initially have HANA up and running for under $300,000,
including hardware, software, and services. Depending on scale, pricing levels can reach
up to $2 million or more. HANA is not capable of storing petabyte levels of data.
However, due to its advanced compression capabilities, HANA deployments can store tens
of terabytes of data or more, which is considered a large data volume in most current SAP
customer environments.
What is the HANA value proposition to customers?
Enterprises collect large volumes of structured data via legacy ERP, CRM, and other
systems. Most struggle to make use of the data while spending large sums to store and
protect it. One option to make use of this data is to extract, transform, and load subsets
into a traditional enterprise data warehouse for analysis. This process is time-consuming
and requires significant investment in related proprietary hardware. The result is often an
expensive, bloated EDW that provides little more than backward-looking views of
company data.
SAP HANA offers enterprises a new approach to harnessing the value of all that corporate
data. As mentioned above, HANA runs on inexpensive commodity hardware from any of
several SAP partners, including IBM, Dell, and HP. Its data replication and integration
capabilities vastly speed up the process of loading data into the database. And because it
uses in-memory storage, applications on top of HANA can access data in near-real time,
meaning end-users can gain meaningful insight while there is still time to take meaningful
action. HANA can also perform predictive analytics to help organizations plan for future
market developments.
How is it different from competing offerings from Oracle?
Oracle unveiled an in-memory analytic appliance of its own, called Exalytics, at Oracle
OpenWorld in October 2011. Among the important differences compared to SAP HANA,
Exalytics is designed to run on Sun-only hardware, it is a mash-up of various existing
Oracle technologies, and there are few, if any, systems in production. As with all Oracle
technologies, the risk of vendor lock-in is high, and the cost is significantly higher than
comparable HANA deployments.
What are compelling SAP HANA use cases?
Real-time analytics as supported by SAP HANA have numerous potential use cases
including:
Profitability reporting and forecasting,
Retail merchandizing and supply-chain optimization,
Security and fraud detection,
Energy use monitoring and optimization, and,
Telecommunications network monitoring and optimization.
What are HANA's limitations?
SAP HANA is not a platform for loading, processing, and analyzing huge volumes
(petabytes or more) of unstructured data, commonly referred to as big data. Therefore,
HANA is not suited for social networking and social media data analytics. For such use
cases, enterprises are better off looking to open-source big-data approaches such as
Apache Hadoop or LexisNexis HPCC Systems, or even MPP-based next-generation data
warehousing appliances like EMC Greenplum or Teradata AsterData.
While SAP has promised a slew of new HANA-optimized applications, currently only a
handful are on the market. It is incumbent upon SAP to follow through on its commitment
with practical applications that address real-world business problems. Also, SAP HANA is
not pre-optimized to support non-SAP applications, which requires significant application
re-engineering on the part of enterprise IT groups.
What does HANA mean for SAP's product direction?
Enterprises are increasingly demanding real-time analytic and transactional processing
capabilities from business applications. HANA puts SAP in a good position to deliver such
functionality for its customer-base of traditional enterprises. But SAP must balance
innovation in the form of HANA and related applications with continuing support for its
legacy back-office ERP and other business applications that form the backbone of many
an enterprise IT environment. Further, as unstructured data processing and analytics
become more commonplace at traditional (read: non-Web 2.0) enterprises, SAP will
need to address this gap in HANA's capabilities.
Open environment
Support standard JDBC/ODBC and RESTful web services to help you build applications
that can be easily integrated with your legacy systems.
Componentized data integration
Allow maximum flexibility and shrink TCO with componentized data integration.
Efficient systems management
Integrate development, administration, and monitoring tools to manage systems more
efficiently.
SAP HANA certifications
The SAP HANA Hardware Directory lists all hardware that has been certified or is
supported under the following scenarios:
Hardware that has been certified within the SAP HANA hardware certification
program
Previously validated hardware based on Westmere technology, as reflected earlier in
the Product Availability Matrix (PAM)
Supported entry-level systems: only Intel Xeon E5 v2/v3 based 2-socket single-node
systems with a minimum of 8 cores per CPU; this is valid for particular SPS releases
The certification is valid for a particular group of appliances or storage family of the
hardware manufacturer, wherein multiple models might be included. For further details,
such as released CPU types, please compare the scenario pages in SCN:
SAP HANA Hardware Certification - Appliance (HANA-HWC-AP SU) for SUSE Linux Enterprise
Server (SLES) (see details & disclaimer)
SAP HANA Hardware Certification - Appliance (HANA-HWC-AP RH) for Red Hat
Enterprise Linux (RHEL)
SAP HANA Hardware Certification - Enterprise Storage Scenario (HANA-HWC-ES)
(see details & disclaimer)
Details in the listings are as provided by the partner on the certification/publishing date.
They are subject to change and may be changed by SAP at any time without notice. They
are not intended to be binding upon SAP to any particular course of business, product
strategy, and/or development. The certification is valid for the stipulated time period.
Any errors in a listing do not result in the right to get support for a particular
configuration. The supported entry-level systems are valid for specific service packs.
The hardware was tested by the hardware partner with the SAP Linux Lab, and the systems
are supported for SAP HANA.
For SAP HANA compute nodes, memory chips have to be homogeneous, spread symmetrically
across all CPUs, and must provide maximum bandwidth. The hardware is required to have a
valid SAP HANA hardware certification at the point of purchase by the customer. Once the
validity date of the certification has passed, the hardware will continue to be supported
by the partner until the end of maintenance as indicated by the partner.
Certified Appliances
The certification is valid for a particular group of appliances from
the hardware manufacturer wherein multiple models might be included. During the
preparation phase of the certification Partner and SAP can jointly agree on the group of
appliances. The group of appliances can be defined as a set of appliance models that share
and fulfill all of the following criteria:
They are based on the same architecture
Within one group of certified appliances different CPU models within the same
microarchitecture can be used as released by SAP HANA reference architecture:
Nehalem EX architecture: Intel X7560
Westmere EX architecture: Intel E7-#870 (# stands for 2, 4, or 8)
Ivy Bridge EX architecture: Intel E7-#880v2, E7-#890v2 (# stands for 2, 4, or 8)
Haswell EX architecture: Intel E7-8880v3, E7-8890v3, or E7-8880Lv3
Broadwell EX architecture: Intel E7-8880v4, E7-8890v4
Empty CPU slots may be filled up to the maximum number supported by the
appliance
There are several scenarios available for SAP HANA enterprise storage certification.
Please see details below for the test procedure which comes with each scenario
version applicable to the HANA Hardware Certification Check Tool (HWCCT).
1. Scenario Version HANA-HWC-ES 1.0
For certification tests, HWCCT based on HANA SPS8 or SPS9 and related
revisions is used.
2. Scenario Version HANA-HWC-ES 1.1
For certification tests, HWCCT based on HANA SPS10 and related revisions is
used.
A new KPI table is introduced.
The Enterprise Storage Certification Scenario HANA-HWC-ES 1.1 is the successor of
HANA-HWC-ES 1.0, with an updated testing method and an adapted set of KPIs.
Certifications that were released under scenario 1.0 remain valid until their expiration
date and may be used with all SAP HANA revisions. From July 14, 2015 onward,
certification scenario 1.1 is mandatory.
SAP HANA is an in-memory data platform that is deployable as an on-premise appliance
or in the cloud. This in-memory database is highly suited to performing real-time
analytics and to developing and deploying real-time applications. It is powered by the
real-time HANA database platform, which is fundamentally different from any other
database engine currently available; the platform architecture is shown below.
Introduction to In-memory computing
In-memory computing technology combines hardware and software technology
innovations. Hardware innovations include blade servers and CPUs with multicore
architecture and memory capacities measured in terabytes for massive parallel scaling.
Software innovations include an in-memory database with highly compressible row and
column storage specifically designed by SAP to maximize in-memory computing
technology. Parallel processing takes place in the database layer rather than in the
application layer.
In-memory computing is the storage of information in the main random access memory
(RAM) of dedicated servers rather than in complicated relational databases operating on
comparatively slow disk drives. In-memory computing helps business customers,
including retailers, banks and utilities, to quickly detect patterns, analyze massive data
volumes on the fly, and perform their operations quickly. The drop in memory prices in
the present market is a major factor contributing to the increasing popularity of in-memory
computing technology. This has made in-memory computing economical among a wide
variety of applications. Many technology companies are making use of this technology.
For example, the in-memory computing technology developed by SAP, called the
High-Performance Analytic Appliance (HANA), uses sophisticated data compression to
store data in random access memory. HANA's performance is up to 10,000 times faster
than that of standard disk-based systems, which allows companies to analyze data in a
matter of seconds.
In the past, bad movies have been able to enjoy a great opening weekend before crashing
the second weekend, when negative word-of-mouth feedback has cooled off the initial
enthusiasm. That week-long grace period is about to disappear for silver-screen flops.
In the future, consumer feedback won't take a week, a day, or an hour. The very
second showing of a movie could suffer from a noticeable falloff in attendance due to
consumer criticism piped instantaneously through the new technologies. Since such
rapid response is not possible with old-fashioned movie reels, changes in technology,
both disruptive and accelerated, will surge through other industries besides IT.
Another example worth mentioning harks back to McDermott's middleman: it will
no longer be good enough to have the weekend numbers ready for executives on
Monday morning. Executives will run their own reports on revenue, Twitter their
reviews over the weekend, and by Monday morning have acted on their decisions.
Our final example is from the utilities industry: The most expensive energy that a
utilities company provides is energy to meet unexpected demand during peak periods
of consumption. In those cases, the provider may have to buy additional energy to
support the power grid, which can get expensive. However, if the company could
analyze trends in electrical power consumption based on real-time meter reading, it
could offer its consumers, in real time, extra-low rates for the week or month if
they reduce their consumption during the following few hours. Consumers then have
the option to save money by modifying their immediate consumption patterns,
perhaps by switching off the power at their residence and going to a movie. By
giving consumers an informed choice and an incentive, utilities companies have a
chance to moderate peaks in energy consumption. This advantage will become much
more dramatic when we switch to electric cars; predictably, those cars are going to be
recharged the minute the owners return home from work, which could be within a
very short period of time. In order to establish an effective BI strategy, IT
professionals must ask themselves which half of the money they spend on BI
investments is working. Within the marketing organization, you probably use focus
groups to determine what works and what does not. By using SAP BusinessObjects
BI solutions, you can get help from powerful statistics tools to focus your efforts
quickly.
SAP In-Memory Computing Technology
Total cost is expected to be 30% lower than traditional relational database technology
due to:
Leaner hardware and less system capacity required, as mixed workloads of analytics,
operations, and performance management are handled within a single system, which also
reduces redundant data storage
Reduced extract, transform, and load (ETL) processes between systems and fewer
prebuilt reports, reducing the support effort required to run the software
Replacing traditional databases in SAP applications with in-memory computing technology
has resulted in report runtime improvements of up to a factor of 1,000 and compression
rates of up to a factor of 10.
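One common technique behind such columnar compression ratios is dictionary encoding: a repetitive column is stored as a small dictionary of distinct values plus one small integer code per row. The sketch below is illustrative only; HANA combines several compression schemes:

```python
# Dictionary-encoding sketch: replace repetitive string values with small
# integer codes into a dictionary of distinct values (lossless).
column = ["EMEA", "APJ", "EMEA", "EMEA", "AMER", "APJ", "EMEA"]

dictionary = sorted(set(column))          # ['AMER', 'APJ', 'EMEA']
code_of = {v: i for i, v in enumerate(dictionary)}
codes = [code_of[v] for v in column]      # small ints instead of strings

# Decoding is a simple dictionary lookup, so the round trip is exact.
decoded = [dictionary[c] for c in codes]
```

Because the codes are small integers, they compress and scan far better than the original strings, which is part of why columnar aggregates are so fast.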
It is no longer necessary to load data from your transactional database into your
reporting database, or even to build traditional tuning structures to enable that
reporting. As transactions are happening, administrators can report against them live.
By consolidating two landscapes
(OLAP and OLTP) into a single database, SAP HANA provides companies with massively
lower TCO in addition to mind-blowing speed. But even more important is the new
application programming paradigm enabled for extreme applications. Since the SAP
HANA database resides entirely in-memory all the time, additional complex calculations,
functions and data-intensive operations can happen on the data directly in the database,
without requiring time-consuming and costly movements of data between the database and
applications. This incredible simplification and optimization of the data layer is the killer
feature of SAP HANA because it removes multiple layers of technology and significant
human effort to get incredible speed. It also has the benefit of reducing the overall TCO
of the entire solution. Some other database engines on the market today might claim to
provide one or another benefit that SAP HANA brings. However, none of them can deliver
on all of them. This is real-time computing, and customers can take advantage of this
today via SAP BW on SAP HANA, Accelerators on SAP HANA and native SAP HANA
applications.
Impactful architecture
When enterprise architecture plays a role in setting up and maintaining BI projects, it
helps reduce costs by considering the overall context of the project. It examines business
needs, IT landscape, performance requirements, data model complexity, and the tools and
software required to meet reporting demands. In seeing that these requirements are met,
it helps establish a unified BI platform that supports administration, incremental
growth, and compatibility with current and legacy landscapes, thereby significantly
trimming the TCO for a BI project.
In-memory computing technology provides scaling and flexibility of
hardware for higher performance. On-the-fly aggregations relieve IT staff from manual
query tuning and data aggregation tasks. In cases where an enterprise data warehouse is
not in place, SAP HANA provides instant access to real-time data via replication from
ERP software from SAP with no need for complex ETL processes. By contrast, in
traditional data warehouse environments, ever higher performance and functional
requirements lead to the acquisition of additional hardware, software, and performance
tuning tasks. In highly heterogeneous environments, multiple BI solution sets require
additional independent lifecycle management, which adds to solution maintenance efforts.
you must understand your business strategy, then establish an IT portfolio that reinforces
that strategy with every IT decision made. In doing so, your IT can help align company
priorities across lines of business and corporate support organizations, achieve business
objectives, and keep lines of business profitable. The objectives for IT portfolio
managers include executing project work, maintaining budget accountability, aligning
existing and planned projects with these guidelines, and prohibiting all IT and business
initiatives not aligned with the guidelines of the IT portfolio.
If a company intends to leverage in-memory computing technology, its IT portfolio
management becomes even more important. We give two examples that illustrate this.
Formerly, operational reporting functionality was transferred from OLTP applications
to a data warehouse. With in-memory computing technology, this functionality is
integrated back into the transaction system. The consequence is that transaction
processing functionality will have to be aligned much more closely with the integrated
BI functionality. SAP BusinessObjects Event Insight, which uses in-memory computing,
requires a tightly woven network of data processes and data exploration to identify
specific thresholds of performance measures. Reaching a threshold then triggers additional
steps in one or more business processes. In order to implement an end-to-end business
process with embedded BI, both business process effort and IT effort must be carefully
orchestrated. Enterprise architecture emerges where business requirements are formally
and rigorously sustained by IT. In advanced enterprise architectural processes, no
technology is implemented that has not been vetted and approved by the enterprise
architecture office with regard to strategic viability and medium- to long-term benefit for
the enterprise. However, those organizations wanting a head-start in in-memory
computing must deploy at least one in-memory computing project as soon as possible to
develop know-how, resources, and a feel for how the new technology will impact their
unique situation. Obviously, it will take some time before the first applications developed
specifically to exploit in-memory computing affect the enterprise application architecture.
Yet in-memory computing technology will have a major impact on BI and data warehouse
application architecture as well as on OLTP applications. This is something enterprise
architects should take stock of earlier rather than later. Real-time data access to BI
information and the repurposing of data warehouses will be key to orchestrating their
on-premise, on-device, and on-demand architecture successfully. And they will find that
in-memory computing will open up new avenues and change the way they regard three-tier
architecture for data processing.
A Clean, Spare Architecture
Adopting in-memory computing results in an uncluttered architecture based on a few,
tightly aligned core systems enabled by service-oriented architecture (SOA) to provide
harmonized, valid metadata and master data across business processes. Some of the most
salient shifts and trends in future enterprise architectures will be:
A shift to BI self-service applications, like data exploration, instead of rolling out
static report solutions for structured and unstructured data
Full integration of planning business processes with instant BI provisioning from the
source applications, replacing data warehouses for operational data and near-real-time
data transfers
Substitution of traditional ETL architecture and expansive data cleansing and
harmonization processes with real-time data validation during all manual and automated
data input processes
Central metadata and master-data repositories that define the data architecture,
allowing data stewards to work effectively across all business units and all
architecture platforms
Instantaneous analysis of real-time trending algorithms with direct impact on live
execution of business processes
Offline long-term historic trending that can impact future execution of business
processes
Construction of an event insight grid architecture combining live business applications
across on-premise, on-device, and on-demand architectures for proactive use of BI
instead of analyzing historic events after the fact
What specific changes are introduced to existing landscapes depends on how functional
requirements, such as high availability or disaster recovery, were implemented. The
technical specifications of the hardware making up the landscape also play a role.
Another factor is whether a company's data center is committed to a single vendor's
technology or is prepared to incorporate technology from a range of vendors.
It is most likely that future deployments of SAP NetWeaver BW will not require separate
hardware to run SAP NetWeaver BW Accelerator. However, to what extent existing
hardware components can be reused for SAP NetWeaver BW Accelerator will depend on
how that hardware exploits the in-memory computing technology of the application.
Real-time in-memory computing technology will most probably cause a decline in the sheer
numbers of Structured Query Language (SQL) satellite databases. The purpose of those
databases as flexible, ad hoc, more business-oriented, less IT-static tools might still be
required, but their offline status will be too much of a disadvantage and will delay data
updates. Some might argue that satellite systems equipped with in-memory computing
technology will take over from satellite SQL databases. For limited sandbox purposes, that
is a possibility. However, because in-memory computing technology can process massive
quantities of real-time data to provide instantaneous results, traditional satellite
architectures will always be at least one step behind. They are also likely to inherit
undesired transformations made during the ETL process.
SAP HANA system architecture
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise
Server. It consists of multiple servers, the most important of which is the Index
Server; the full set comprises the Index Server, Name Server, Statistics Server,
Preprocessor Server, and XS Engine.
Preprocessor Server:
The index server uses the preprocessor server for analyzing text data and extracting the
information on which the text search capabilities are based.
Name Server:
The name server owns the information about the topology of SAP HANA system. In a
distributed system, the name server knows where the components are running and which
data is located on which server.
Statistics Server:
The statistics server collects information about status, performance, and resource
consumption from the other servers in the system. The statistics server also provides a
history of measurement data for further analysis.
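The statistics server's role, sampling metrics from the other servers and keeping a bounded measurement history for later analysis, can be sketched as follows. This is an illustrative model, not HANA's actual component:

```python
from collections import deque

# Illustrative statistics-collector sketch: sample metrics from named
# servers and keep a bounded history for later analysis.
class StatsCollector:
    def __init__(self, history_size=1000):
        # deque(maxlen=...) evicts the oldest sample once the bound is hit.
        self.history = deque(maxlen=history_size)

    def sample(self, server, metrics):
        # Record one measurement for one server.
        self.history.append({"server": server, **metrics})

    def history_for(self, server):
        # Return the retained measurement history for one server.
        return [s for s in self.history if s["server"] == server]

stats = StatsCollector(history_size=3)
stats.sample("indexserver", {"cpu": 0.42, "mem_gb": 512})
stats.sample("nameserver", {"cpu": 0.05, "mem_gb": 8})
stats.sample("indexserver", {"cpu": 0.57, "mem_gb": 514})
stats.sample("indexserver", {"cpu": 0.61, "mem_gb": 515})  # oldest evicted
```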
Session and Transaction Manager:
The transaction manager coordinates database transactions and keeps track of running
and closed transactions. When a transaction is committed or rolled back, the transaction
manager informs the involved storage engines about this event so they can execute
necessary actions.
XS Engine:
XS Engine is an optional component. Using the XS Engine, clients can connect to the
SAP HANA database to fetch data via HTTP.
Analytics and Applications
Real-time analytics. The categories of analytics in which HANA specializes:
1. Operational Reporting (real-time insights from transaction systems such as
custom or SAP ERP). This covers Sales Reporting (improving fulfillment rates
and accelerating key sales processes), Financial Reporting (immediate insights
across revenue, customers, accounts payable, etc.), Shipping Reporting (better
enabling complete stock overview analysis), Purchasing Reporting (complete
real-time analysis of complete order history) and Master Data Reporting (real-time
ability to impact productivity and accuracy).
2. Data Warehousing (SAP NetWeaver BW on HANA) BW customers can run
their entire BW application on the SAP HANA platform, leading to unprecedented BW performance (queries run 10-100 times faster; data loads 5-10 times faster; calculations run 5-10 times faster), a dramatically simplified IT landscape (leading to greater operational efficiency and reduced waste), and a
business community able to make faster decisions. Moreover, not only is the
BW investment of these customers preserved but also super-charged. Customers
can migrate with ease to the SAP HANA database without impacting the BW
application layer at all.
3. Predictive and Text analysis on Big Data - To succeed, companies must go
beyond focusing on delivering the best product or service and uncover
customer/employee/vendor/partner trends and insights, anticipate behavior and
take proactive action. SAP HANA provides the ability to perform predictive and
text analysis on large volumes of data in real-time. It does this through the
power of its in-database predictive algorithms and its R integration capability.
With its text search/analysis capabilities SAP HANA also provides a robust way
to leverage unstructured data.
SAP HANA enables real-time insight into business processes. Its extremely fast response times ensure performance indicators are available within seconds, allowing companies to adapt to any changes in their business operations
without delay. It also paves the way for never-before-seen applications that build on the
ability to analyze all available data with unmatched flexibility and speed, for example for
complex material requirements planning processes, the real-time analysis of customer
behavior, or available-to-promise checks to find out if an order can actually be delivered to
the customer in the specified delivery time. Companies can turn to REALTECH as an
independent partner with extensive hands-on experience in the delivery of in-memory
projects. We provide consulting services to help you implement, transition to, and run an
in-memory platform. We will work with you to develop a custom in-memory strategy that is
unique to your business and allows you to optimize your business processes and drive
value. Tap into our knowledge, gained from hundreds of customer projects, and our
expertise as an SAP partner for SAP HANA projects.
The benefits
Cost optimization: SAP HANA has the capability to deliver higher levels of cost reduction and a significant reduction in the database footprint without losing any information. It also means that you can dramatically reduce the complexity of your system landscape, which translates into lower purchase and maintenance costs and less time spent on system and lifecycle management.
High levels of flexibility: Assists in streamlining the IT landscape and reducing total cost of ownership (TCO).
In manufacturing enterprises, in-memory computing technology will connect the shop floor to the boardroom, and the shop floor associate will have instant access to the same data as the board member. The technology supports this by integrating on-premise, on-demand, and on-device architectures. Once the appropriate business processes are in place, empowered shop floor staff can take immediate action based on real-time data to make whatever adjustments on the shop floor are necessary. They will then see the results of their actions reflected immediately in the relevant KPIs. You could call this true 360-degree BI, as it eliminates the middleman as well as the need to create any reports other than whatever statutory or legal reports may be required.
An apt case can be entertainment companies. Conventionally, bad movies have been able
to enjoy a great opening weekend before crashing the second weekend when negative
word-of-mouth feedback has cooled off the initial enthusiasm. That week-long grace
period is about to disappear for silver-screen flops. In the future, consumer feedback won't take a week, a day, or an hour. The very second showing of a movie could suffer a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies; such rapid response is simply not possible with old-fashioned movie analytics.
Another case relates to the utility segment. The most expensive energy that a utilities
company provides is energy to meet unexpected demand during peak periods of
consumption. In those cases, the provider may have to buy additional energy to support
the power grid, which can get expensive. However, if the company could analyze trends in
electrical power consumption based on real-time meter readings, it could offer its consumers, in real time, extra-low rates for the week or month if they reduce their consumption during the next few hours.
SAP HANA Architecture
The in-memory computing engine of the SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise Server. The database consists of multiple servers, of which the most important is the Index Server, and is powered by the following components:
Index server
Name server
Statistics server
Pre-processor server
The functionalities are listed below.
(Figure: SAP HANA database components: index server, persistence layer, preprocessor server, name server, statistics server)
Index server
Connection and Session Management
This component is responsible for creating and managing sessions and connections for
the database clients. Once a session is established, clients can communicate with the
SAP HANA database using SQL statements. For each session a set of parameters are
maintained, like auto-commit, current transaction isolation level, etc. Users are authenticated either by the SAP HANA database itself (login with user and password), or authentication can be delegated to an external authentication provider such as an LDAP directory.
The Authorization Manager: This component is invoked by other SAP HANA
database components to check whether the user has the required privileges to execute
the requested operations. SAP HANA allows granting of privileges to users or roles. A
privilege grants the right to perform a specified operation (such as create, update,
select, execute, and so on) on a specified object (for example a table, view, SQLScript
function, and so on). The SAP HANA database supports Analytic Privileges that
represent filters or hierarchy drilldown limitations for analytic queries. Analytic
privileges grant access to values with a certain combination of dimension attributes.
This is used to restrict access to a cube with some values of the dimensional attributes.
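The row-filtering effect of an analytic privilege can be sketched as below. The cube data and the shape of the privilege (a mapping from dimension name to allowed values) are invented for the example; the real mechanism is enforced inside the database, not in client code.

```python
# Illustrative sketch of the idea behind analytic privileges: restrict the
# rows of a cube a user may see to certain dimension-attribute values.
def apply_analytic_privilege(rows, privilege):
    """Keep only rows whose dimension values match the granted filters."""
    def allowed(row):
        return all(row.get(dim) in values for dim, values in privilege.items())
    return [row for row in rows if allowed(row)]

cube = [
    {"region": "EMEA", "year": 2024, "revenue": 100},
    {"region": "APJ",  "year": 2024, "revenue": 80},
    {"region": "EMEA", "year": 2023, "revenue": 90},
]
# Hypothetical grant: this user may only see EMEA data for 2024.
priv = {"region": {"EMEA"}, "year": {2024}}
print(apply_analytic_privilege(cube, priv))  # one row remains
```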
Request Processing and Execution Control: The client requests are analyzed and executed by the set of components summarized as Request Processing and Execution
Control. The Request Parser analyses the client request and dispatches it to the responsible
component. The Execution Layer acts as the controller that invokes the different engines
and routes intermediate results to the next execution step.
SQL Processor: Incoming SQL requests are received by the SQL Processor. Data
manipulation statements are executed by the SQL Processor itself. Other types of requests
are delegated to other components. Data definition statements are dispatched to the
Metadata Manager, transaction control statements are forwarded to the Transaction
Manager, planning commands are routed to the Planning Engine and procedure calls are
forwarded to the stored procedure processor.
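The dispatch described above can be sketched as a simple classifier over the statement's leading keyword. The keyword lists and component names are simplified for illustration and do not reflect the actual internal interfaces.

```python
# Rough sketch of SQL request routing: DML is handled by the SQL processor
# itself; DDL, transaction control, and procedure calls are forwarded to
# the responsible component. Keyword coverage is deliberately minimal.
def route_statement(sql):
    head = sql.strip().split()[0].upper()
    if head in ("SELECT", "INSERT", "UPDATE", "DELETE"):
        return "sql_processor"
    if head in ("CREATE", "ALTER", "DROP"):
        return "metadata_manager"
    if head in ("COMMIT", "ROLLBACK"):
        return "transaction_manager"
    if head == "CALL":
        return "procedure_processor"
    return "unknown"

print(route_statement("SELECT * FROM sales"))     # sql_processor
print(route_statement("CREATE TABLE t (i INT)"))  # metadata_manager
print(route_statement("COMMIT"))                  # transaction_manager
```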
SQLScript: The SAP HANA database has its own scripting language, named SQLScript, that is designed to enable optimizations and parallelization. SQLScript is a collection of extensions to SQL. SQLScript is based on side-effect-free functions that operate on tables using SQL queries for set processing. The motivation for SQLScript is to offload data-intensive application logic into the database.
Multidimensional Expressions (MDX):
MDX is a language for querying and manipulating the multidimensional data stored
in OLAP cubes. Incoming MDX requests are processed by the MDX engine and also
forwarded to the Calc Engine.
Planning Engine:
Planning Engine allows financial planning applications to execute basic planning
operations in the database layer. One such basic operation is to create a new version of a
data set as a copy of an existing one while applying filters and transformations. For
example: planning data for a new year is created as a copy of the data from the previous
year. Another example for a planning operation is the disaggregation operation that
distributes target values from higher to lower aggregation levels based on a distribution
function.
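The disaggregation operation described above can be sketched in a few lines: a target value at a higher aggregation level is split across lower-level members in proportion to a reference distribution. The regions and numbers are invented for the example.

```python
# Sketch of a planning-engine disaggregation: distribute a target value
# from a higher aggregation level to lower levels using a reference
# distribution (e.g. last year's actuals).
def disaggregate(target, reference):
    """Split `target` across the keys of `reference` proportionally."""
    total = sum(reference.values())
    return {k: target * v / total for k, v in reference.items()}

# Hypothetical data: last year's actuals drive this year's plan split.
last_year = {"EMEA": 50.0, "APJ": 30.0, "AMER": 20.0}
plan = disaggregate(200.0, last_year)
print(plan)  # {'EMEA': 100.0, 'APJ': 60.0, 'AMER': 40.0}
```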
Calc engine:
The SAP HANA database features such as SQLScript and Planning operations are
implemented using a common infrastructure called the Calc engine. The SQLScript,
MDX, Planning Model and Domain-Specific models are converted into Calculation
Models. The Calc Engine creates a logical execution plan for calculation models. The calculation engine will break up a model, for example some SQLScript, into operations that can be processed in parallel.
Transaction Manager:
In HANA database, each SQL statement is processed in the context of a transaction. New
sessions are implicitly assigned to a new transaction. The Transaction Manager
coordinates database transactions, controls transactional isolation and keeps track of
running and closed transactions. When a transaction is committed or rolled back, the
transaction manager informs the involved engines about this event so they can execute
necessary actions.
The transaction manager also cooperates with the persistence layer to achieve atomic and
durable transactions.
Metadata Manager:
Metadata can be accessed via the Metadata Manager component. In the SAP HANA
database, metadata comprises a variety of objects, such as definitions of relational tables,
columns, views, indexes and procedures. Metadata of all these types is stored in one
common database catalog for all stores. The database catalog is stored in tables in the Row Store. The features of the SAP HANA database, such as transaction support and multi-version concurrency control, are also used for metadata management. In the center of the figure you see the different data stores of the SAP HANA database. A store is a subsystem of the SAP HANA database which includes in-memory storage, as well as the components that manage that storage.
Persistence Layer:
The Persistence Layer is responsible for durability and atomicity of transactions. This
layer ensures that the database is restored to the most recent committed state after a restart
and that transactions are either completely executed or completely undone. To achieve this
goal in an efficient way, the Persistence Layer uses a combination of write-ahead logs,
shadow paging and save points. The Persistence Layer offers interfaces for writing and
reading persisted data. It also contains the Logger component that manages the transaction
log. Transaction log entries are written explicitly by using a log interface or implicitly
when using the virtual file abstraction.
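The recovery guarantee described above can be modeled in miniature: after a restart, the state is rebuilt from the last savepoint plus the write-ahead log, and writes of transactions that never committed are discarded. The log entry format here is invented and enormously simplified compared to a real persistence layer.

```python
# Toy model of savepoint + write-ahead-log recovery: committed work is
# replayed on top of the savepoint, uncommitted work simply vanishes.
def recover(savepoint, wal):
    state = dict(savepoint)
    pending = {}
    for entry in wal:
        if entry[0] == "write":          # ("write", txn, key, value)
            _, txn, key, value = entry
            pending.setdefault(txn, {})[key] = value
        elif entry[0] == "commit":       # ("commit", txn)
            state.update(pending.pop(entry[1], {}))
    return state                         # atomicity: all-or-nothing per txn

savepoint = {"a": 1}
wal = [("write", "t1", "a", 2), ("commit", "t1"),
       ("write", "t2", "b", 9)]         # t2 never committed
print(recover(savepoint, wal))  # {'a': 2} -- t2's write is undone
```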
Architecture overview
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise
Server. SAP HANA database consists of multiple servers and the most important
component is the Index Server. SAP HANA database consists of Index Server, Name
Server, Statistics Server, Preprocessor Server and XS Engine. The SAP HANA database
has been developed on C++ and runs on SUSE Linux Enterprise Server. SAP HANA
database consists of multiple servers and the most important component is the Index
Server. SAP HANA database consists of Index Server, Name Server, Statistics Server,
Preprocessor Server and XS Engine.The SAP Hanna server architecture is designed to
provide a sturdy, high available, and reliable, in-memory computing facility for the
enterprises, and especially designing large and complex computer architecture, the
proprietary in-memory computing is designed to serve the purpose. And the adoption of
SAP Hanna database has increased exponentially over the years and it has turned out to be
a industry leader and industry standard for in-memory computing requirements, across all
business verticals and all relevant requirements. Shown is the Server architecture of SAP
Hanna
SAP HANA Studio
HANA Studio is an Eclipse-based, integrated development environment (IDE) that is used
to develop artifacts in a HANA server. It enables technical users to manage the SAP
HANA database, to create and manage user authorizations, to create new or modify
existing models of data etc.
Supported platforms
The SAP HANA studio runs on the Eclipse platform 3.6. We can use the SAP HANA
studio on the following platforms:
Microsoft Windows x32 and x64 versions of: Windows XP, Windows Vista,
Windows 7
SUSE Linux Enterprise Server SLES 11: x86 64-bit version
System requirements
Java JRE 1.6 or 1.7 must be installed to run the SAP HANA studio. The Java runtime must
be specified in the PATH variable. Make sure to choose the correct Java variant for
installation of SAP HANA studio:
For a 32-bit installation, choose a 32-bit Java variant.
For a 64-bit installation, choose a 64-bit Java variant.
SAP HANA clients
HANA Client is the piece of software which enables you to connect any other entity,
including Non-Native applications to a HANA server. This other entity can be, say, an
NW Application Server, an IIS server etc.
The HANA Client installation also provides JDBC, ODBC drivers. This enables
applications written in .Net, Java etc. to connect to a HANA server, and use the server as a
remote database. So, consider client as the primary connection enabler to HANA server.
HANA Client is installed separately from the HANA studio.
The SAP HANA studio is installed to the following default paths:
Microsoft Windows 32-bit -> C:\Program Files\sap\hdbstudio
Microsoft Windows 64-bit -> C:\Program Files\sap\hdbstudio
Microsoft Windows 32-bit (x86) -> C:\Program Files (x86)\sap\hdbstudio
Linux:
1. Open a shell and go to the installation directory, such as /usr/sap/hdbstudio
2. Execute the command ./hdbstudio
SAP HANA Studio
What is SAP HANA Studio? It is an Eclipse-based integrated development environment (IDE) that brings different tools together in a unified environment, backed by the large Eclipse ecosystem of tools, with extensibility, multi-platform support, and broad adoption. It is the integrated environment for administration and for end-to-end application and content development on the SAP HANA Platform. The SAP HANA Studio tools are the basic components for design-time SAP HANA repository interaction and for access to run-time objects in the SAP HANA database catalog; domain-specific editors for HANA development artifacts are composed in Eclipse perspectives such as Administration, Development, and Modeler. The SAP HANA Systems view is one of the basic elements within SAP HANA Studio. You can use the SAP HANA Systems view to display the contents of the SAP HANA repository that is hosting your development project artifacts. The catalog displays the database objects that have been activated, for example, from design-time objects or from SQL DDL statements. The objects are divided into schemas, which is a way to organize activated database objects.
The SAP HANA Repositories view enables you to browse the contents of the repository
on a specific SAP HANA system; you can display the package hierarchy and use the
Checkout feature to download files to the workspace on your local file system. The SAP HANA Repositories view is a list of repository workspaces that you have created for development purposes on various SAP HANA systems. Generally, you create a workspace, check out files from the repository, and then do most of your development work there. The Project Explorer view shows you the development files located
in the repository workspace you create on your workstation. You use the Project Explorer
view to create and modify development files. Using context-sensitive menus, you can also
commit the development files to the SAP HANA repository and activate them.
Data provisioning
It is highly necessary to be aware of the different provisioning techniques that can be employed, and essential to have a clear idea of the data provisioning options available for HANA: both the in-built tools offered by the SAP HANA platform and external tools offered by different vendors. Broadly, they can be classified as:
SAP HANA in-built tools
External Tools
And listed below are some of the options
Flat file upload
Remote data sync
Smart data streaming: The SAP HANA smart data streaming option processes high-velocity, high-volume event streams in real time, allowing us to filter, aggregate, and enrich raw data before committing it to your database. With SAP HANA smart data streaming, you can accept data input from a variety of sources including data feeds, business applications, sensors, IT monitoring infrastructure and so on, apply business logic and analysis to the streaming data, and store your results directly in SAP HANA. This option is available from the SAP HANA SPS 9 revision.
Smart data access (SDA): This option is used to remotely access data from any source without physically loading it into SAP HANA, and can be used to build modeling objects on top of the data. This is achieved by creating a remote connection and then virtual tables on top of the source tables. The restriction with virtual tables is that they can only be used to build calculation views in SAP HANA.
Direct Extractor Connection (DXC): SAP HANA ensures that the SAP Business Content DataSource extractors, which form the fundamentals for data modeling and data acquisition in SAP Business Warehouse, have been made available; with DXC, these extractors deliver data directly to SAP HANA. DXC is a batch-oriented data acquisition method and should be considered a form of extraction, transformation, and load, although the transformation capabilities available to users are limited. A key point about DXC is that, in many setups, the acquisition occurs every 15 minutes.
Leveraging the pre-existing foundational data models of SAP Business Suite entities for use in SAP HANA data mart scenarios significantly reduces the complexity of data modeling tasks in SAP HANA, besides accelerating timelines for SAP HANA implementation projects. DXC provides semantically rich data from SAP Business Suite to SAP HANA: (a) it ensures that data appropriately represents the state of business documents from ERP; (b) application logic that gives the data the appropriate contextual meaning is already built into many extractors. It reduces TCO by re-using the existing extraction, transformation, and load mechanisms built into SAP Business Suite systems over a simple http(s) connection to SAP HANA; no additional server or application is needed in the system landscape. Change data capture (delta handling): efficient data acquisition brings only new or changed data into SAP HANA, and DXC provides a mechanism to properly handle data from all delta processing types.
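The change-data-capture idea can be sketched as below: only new or changed records are shipped, and the target applies them by key. The record layout and the delta flags ("N" new, "U" update, "D" delete) are invented for the example and are not DXC's actual wire format.

```python
# Sketch of delta handling: apply a batch of keyed change records
# (new / update / delete) to a target table held as a dict.
def apply_delta(target, delta):
    for rec in delta:
        key, mode = rec["key"], rec["mode"]
        if mode in ("N", "U"):       # new or changed record
            target[key] = rec["value"]
        elif mode == "D":            # deletion
            target.pop(key, None)
    return target

store = {1: "open", 2: "open"}
delta = [{"key": 2, "mode": "U", "value": "closed"},
         {"key": 3, "mode": "N", "value": "open"},
         {"key": 1, "mode": "D", "value": None}]
print(apply_delta(store, delta))  # {2: 'closed', 3: 'open'}
```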
Default DXC Configuration for SAP Business Suite
DXC is available in different configurations based on the SAP Business Suite system. The default configuration is available for SAP Business Suite systems based on SAP NetWeaver 7.0 or higher, such as ECC 6.0. The alternative configuration is available for SAP Business Suite systems based on releases lower than SAP NetWeaver 7.0, such as SAP ERP 4.6. An SAP Business Suite system is based on SAP NetWeaver. As of SAP NetWeaver version 7.0, SAP Business Warehouse (BW) is part of SAP NetWeaver itself, which means a BW system exists inside SAP Business Suite systems such as ERP (ECC 6.0 or higher). This BW system is referred to as an embedded BW system. Typically, this embedded BW system inside SAP Business Suite systems is not actually utilized, since most customers who run BW have it installed on a separate server and rely on that one. With the default DXC configuration, SAP HANA systems adopt the scheduling and monitoring features of this embedded BW system, but do not utilize its other aspects such as storing data, data warehousing, or reporting / BI. DXC extraction processing essentially bypasses the normal dataflow and instead sends data to SAP HANA. The following illustration depicts the default configuration of DXC.
DXC Configuration 1
An In-Memory Data Store Object (IMDSO) is generated in SAP HANA, which corresponds directly to the structure of the DataSource currently in use. This IMDSO consists of several tables and an activation mechanism. The active data table of the IMDSO can be utilized as the basis for building data models in SAP HANA (attribute views, analytical views, and calculation views). Data is transferred from the source SAP Business Suite system using an HTTP connection. Generally, the extraction and load process is virtually the same as when extracting and loading into SAP Business Warehouse: you rely on InfoPackage scheduling, the data load monitor, process chains, etc., which are all well known from operating SAP Business Warehouse. Shown below is the ETL replication architecture.
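The two-table-plus-activation idea behind an IMDSO can be modeled in miniature: incoming records land in an inbound queue, and an explicit activation step merges them by key into the active data table used for modeling. The class and record shapes are invented for the example.

```python
# Toy model of an IMDSO's activation mechanism: load fills an inbound
# queue; activate() merges it, by key, into the active data table.
class IMDSO:
    def __init__(self):
        self.inbound = []      # activation queue table
        self.active = {}       # active data table

    def load(self, records):
        self.inbound.extend(records)

    def activate(self):
        for key, value in self.inbound:
            self.active[key] = value   # newer records overwrite older
        self.inbound.clear()

dso = IMDSO()
dso.load([("order-1", "NEW"), ("order-2", "NEW")])
dso.load([("order-1", "SHIPPED")])
dso.activate()
print(dso.active)  # {'order-1': 'SHIPPED', 'order-2': 'NEW'}
```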
Multiple SAP HANA Systems on One Host (MCOS)
In this deployment type, more than one SAP HANA system runs on one host, so that each application, scenario, or component runs on its own SAP HANA system. This deployment type is available, with restrictions, for production SAP HANA systems.
SAP HANA system types
The number of hosts in an SAP HANA system landscape determines the SAP HANA system type. The host provides links to the installation directory, data directory, and log directory, or to the storage itself. The storage needed for an installation does not have to be on the host; in particular, shared data storage is required for distributed systems. An SAP HANA system can be configured as one of the following types:
Single-host system - One SAP HANA instance on one host.
Distributed system (multiple-host system) - Multiple SAP HANA instances
distributed over multiple hosts, with one instance per host.
Technical representations of single-host and multiple-host systems are shown.
If the system consists of multiple connected hosts, it is called a distributed system. The
following graphic shows the file system for a distributed installation:
Hosts are addressed by their internal host names. The host names of the other site must always be resolvable, for
example, through configuration in SAP HANA or corresponding entries in the /etc/hosts
file.
Host Name Resolution for Client Communications
Client applications communicate with SAP HANA servers from different platforms and
types of clients via a client library (such as SQLDBC, JDBC, ODBC, DBSL, ODBO or
ADO.NET) for SQL or MDX access. In distributed systems, the application has a logical connection to the SAP HANA system: that is, the client library may in fact use multiple connections to different servers or change to a different underlying connection.
The client library supports load balancing and minimizes communication overhead by
Selecting connections based on load data
Routing statements based on information about the location of data
Communication with SAP HANA hosts from a Web browser or a mobile application uses the HTTP protocol, which enables access to SAP HANA Extended Application Services (SAP HANA XS).
Public Host Name Resolution
By design, an SQL client library always connects to the first available host specified in the connect string. From this host, the client library then receives a list of all the hosts. During operations, statements may be sent to any of these hosts; this works as long as there is only one external network. If a host name or IP address is irresolvable, the client library falls back on the host names in the connect string. In single-host systems, the user doesn't normally notice this; in rare cases, the connection attempt does not fail immediately but waits for a TCP timeout, making the first statement run very slowly. In distributed systems, performance is impaired because statements must first be sent to the initial host and then forwarded on the server side to the right host.
Connect String with Multiple Hostnames
In a distributed SAP HANA system consisting of more than one host, a list of hosts (host:port) is specified in the SQL client library connect string. All hosts that can take the role of the active master, that is, the three configured master candidates, must be listed in the connect string to allow an initial connection to any of them in the event of a host auto-failover. A host auto-failover is an automatic switch from a crashed host to a standby host in the same system. One (or more) standby hosts are added to an SAP HANA system and configured to work in standby mode. As long as they are in standby mode, these hosts do not contain any data and do not accept requests or queries. When an active (worker) host fails, a standby host automatically takes its place. Inclusion of the standby hosts in the connect string is mandatory if they are master candidates, and otherwise optional. The client connection code (ODBC/JDBC) uses a round-robin approach to reconnection, ensuring that clients can always access the SAP HANA database, even after failover. The following diagram illustrates how host auto-failover works: an active host fails (in this example, Host 2), and the standby host takes over its role by starting its database instance using the persisted data and log files of the failed host.
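The reconnect behaviour described above can be sketched as walking the host list from the connect string until one host answers. The host names and the notion of a host being "up" are simulated for the example; a real client library's retry and timeout handling is far richer.

```python
# Sketch of round-robin reconnection: try each host from the connect
# string in turn and use the first one that is reachable.
def connect(hosts, is_up):
    """Return the first reachable host, or raise if none answers."""
    for host in hosts:
        if is_up(host):
            return host
    raise ConnectionError("no host in the connect string is reachable")

connect_string = ["host1:30015", "host2:30015", "standby:30015"]
up = {"host1:30015": False, "host2:30015": False, "standby:30015": True}
# host1 and host2 have failed; the standby (now active) answers.
print(connect(connect_string, lambda h: up[h]))  # standby:30015
```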
Tight coupling: Tight coupling is the first choice for designers, and is generally implemented through data warehousing. In this case, data is pulled over from disparate sources into a single physical location through the process of ETL (Extraction, Transformation and Loading). The single physical location provides a uniform interface for querying the data. The ETL layer helps to map the data from the sources so as to provide a semantically uniform data warehouse.
Loose coupling: In direct contrast to the tight coupling approach, a virtual mediated schema provides an interface that takes the query input from the user, transforms the query into a form the source database can understand, and then sends the query directly to the source databases.
Tight coupling
Advantages: Independence (less dependency on source systems, since data is physically copied over); faster query processing; complex query processing; advanced data summarization and storage possible; high-volume data processing.
Disadvantages
Loose coupling
Advantages
Disadvantages
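The query-transformation step of the loose-coupling approach can be sketched as below: a mediated schema rewrites the attribute names of a global query into the column names each source actually understands before forwarding it. The attribute and column names are invented for the example.

```python
# Sketch of a virtual mediated schema: translate a query expressed in
# global attribute names into each source's own column names.
def rewrite_query(wanted_attrs, mapping):
    """Map mediated-schema attributes to one source's column names."""
    return [mapping[a] for a in wanted_attrs if a in mapping]

mediated_query = ["customer_name", "order_total"]
# Hypothetical per-source mappings maintained by the mediator.
source_a = {"customer_name": "CUST_NM", "order_total": "ORD_AMT"}
source_b = {"customer_name": "name", "order_total": "total_gross"}
print(rewrite_query(mediated_query, source_a))  # ['CUST_NM', 'ORD_AMT']
print(rewrite_query(mediated_query, source_b))  # ['name', 'total_gross']
```

The mediator would then send each rewritten query to its source and combine the answers, which is exactly what makes this approach loosely coupled: no data is copied in advance.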
Amazon Web Services (AWS) provides SAP customers and partners with on-demand access to servers, storage, and networking in the cloud to run their SAP systems. AWS is completely self-service, and customers pay only for the resources they actually require. Since all instances are offered on a subscription model, infrastructure planning is simplified: whatever instances are required can be purchased and configured within minutes, instead of waiting hours or in some cases days. SAP use cases range from running a single SAP test system to hosting a complete SAP production environment. Here are some of the most common uses.
Disaster recovery
Task
How to?
Sign-up
1. Create a new account: Create a new account to log into the SAP Cloud Appliance
Library. Select Azure as the cloud provider and enter your subscription ID.
2. Download the certificate.
3. Log into the Azure Portal using the subscription you provided and upload the
certificate to Management Certificates under Settings.
Start working
Once completed, your new instance of SAP HANA Developer Edition will be up and
running on Azure.
SAP HANA offers one of the best in-memory cloud computing platforms, with a wide
range of services. Designers and architects can build, run, and host both critical and
non-critical, business and non-business applications and services on the SAP HANA
Cloud Platform, which makes it one of the best platforms for designers and application
developers. Users can create, build, and run critical, non-critical, and modern business
applications on the cloud platform. Powered by in-memory technology, the SAP HANA
Cloud Platform platform-as-a-service offers comprehensive capabilities to help business
users and developers create better, more agile applications in less time. Listed below are
the capabilities:
Integration services
User experience
For developers and operations
Internet of Things
Static requirement
Dynamic requirement
certified hardware configurations already take these rules into account, so there is no need
to perform this disk sizing.
CPU sizing
CPU sizing only has to be performed in addition to the memory sizing if a massive
number of users working on a relatively small amount of data is expected. Choose the
T-shirt configuration size that satisfies both the memory and CPU requirements. The CPU
sizing is user-based: the SAP HANA system has to support 300 SAPS for each
concurrently active user. The servers used for the IBM Systems Solution for SAP HANA
support about 60 - 65 concurrently active users per CPU, depending on the server model.
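The user-based sizing rule above reduces to simple arithmetic; a small sketch, using the 300 SAPS-per-user figure and the conservative end of the 60-65 users-per-CPU range quoted above (the function name and rounding choice are mine, not SAP's):

```python
import math

SAPS_PER_ACTIVE_USER = 300   # CPU sizing rule: 300 SAPS per concurrently active user
USERS_PER_CPU = 60           # conservative end of the quoted 60-65 users per CPU

def cpu_sizing(concurrent_users):
    """Return (total SAPS demand, CPUs required) for a given user count."""
    saps_required = concurrent_users * SAPS_PER_ACTIVE_USER
    cpus_required = math.ceil(concurrent_users / USERS_PER_CPU)  # round up whole CPUs
    return saps_required, cpus_required

# e.g. 500 concurrently active users:
print(cpu_sizing(500))   # → (150000, 9)
```

The result is then matched against the T-shirt sizes: pick the smallest configuration that satisfies both this CPU figure and the memory sizing.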
SAP HANA Migration
Before migration, the following aspects need to be considered, and a final strategy has to
be put in place. The plan should guide the entire migration process, covering the planning
of environments and the migration procedure of SAP systems to SAP HANA in an
on-premise landscape. The plan should be detailed, should involve all stakeholders, and
should include each and every requirement of the migration; importantly, it should not be
a copy-paste job. Such a document serves as the starting point: it should begin with an
overview of available migration path options, explore options and alternatives, and
incorporate recommendations from different stakeholders. It is also essential that the
organization hire professional consultants to arrive at a rugged and fool-proof migration
strategy and plan.
Migration path
Option 1:
SAP HANA offers several migration strategies. For instance, if the requirement is to
transform the existing architecture, SAP has multiple choices to offer, viz., SAP Landscape
Transformation, where you install a new system for the transformation, such as
performing a step-wise or partial migration of an SAP system, or the consolidation of
several systems into one system running on SAP HANA.
Option 2:
The classical migration of SAP systems to SAP HANA (that is, the heterogeneous system
copy using the classical migration tools software provisioning manager 1.0 and R3load)
uses reliable and established procedures for exchanging the database of an existing
system; it is constantly improved, especially for the migration to SAP HANA.
Option 3:
SAP also offers a one-step procedure that combines system update and database
migration for the migration to SAP HANA. This is provided with the database migration
option (DMO) of Software Update Manager (SUM). The common recommendation is to
adopt DMO as the default for migrations to SAP HANA: the organization benefits from a
simplified migration performed by one tool, with minimized overall project cost and only
one downtime window.
The general recommendation is to use the database migration option of SUM, as it has
become the standard procedure for migrations to SAP HANA. As a reasonable
alternative, in case the database migration option of SUM does not fit your requirements,
consider using the classical migration procedure with software provisioning manager,
which is also continuously improved especially for the migration to SAP HANA. Reasons
might be that the database migration option of SUM does not support your source
release, or that you prefer a separation of concerns over the big-bang approach offered
by DMO of SUM. As a possible exception, there are further migration procedures for
special use cases, such as the consolidation of SAP systems in the course of the
migration project or the step-wise migration to SAP HANA, as outlined above.
Perform an individual assessment
For migrations, SAP already provides an exhaustive set of guidelines, possibilities, and
alternatives that can be adapted to a particular environment; all that is required is to
choose the correct fit for that environment and its specific requirements. SAP has a
specialized team, called Professional Services, which will guide the organization towards
an incident-free migration. Based on the standard recommendation from SAP, find the
best option depending on your individual requirements and the boundary conditions you
are facing. To support you in this process, SAP provides a decision matrix in the
End-to-End Implementation Roadmap for SAP NetWeaver AS ABAP guide (SMP login
required), which is intended to highlight important aspects of your decision on the right
migration procedure (see the guide for the latest version of the matrix). Listed below are
a few such considerations:
What is the release and Support Package level of your existing SAP system? Is an
update mandatory or desired as part of the migration procedure?
Is your existing SAP system already Unicode?
Do you plan any landscape changes - such as changing the SAPSID of your SAP
system or the hardware of your application server - as part of the migration or do you
rather prefer an in-place migration?
Do you plan the migration of your complete system or a partial migration?
Are your operating system and your database versions supported according to the
Product Availability Matrix (PAM) of the target release or are corresponding updates
required?
Do you expect a significant downtime due to a large database volume?
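The questions above feed the choice between DMO of SUM and the classical migration. A hedged sketch of that decision as a rule of thumb follows; the real decision matrix is in SAP's roadmap guide, and the inputs and rules here are illustrative assumptions, not SAP's criteria.

```python
# Illustrative decision sketch: pick a migration path from a few of the
# decision-matrix questions. Parameter names and rules are assumptions.
def choose_migration_path(source_release_supported_by_dmo,
                          update_desired,
                          prefer_separation_of_concerns):
    classical = "classical migration (software provisioning manager + R3load)"
    # DMO cannot be used at all if it does not support the source release.
    if not source_release_supported_by_dmo:
        return classical
    # DMO is a combined update+migration ("big bang"); if no update is wanted
    # and the team prefers separated steps, the classical path fits better.
    if prefer_separation_of_concerns and not update_desired:
        return classical
    return "DMO of SUM (one tool, one downtime window)"

print(choose_migration_path(True, True, False))
# → DMO of SUM (one tool, one downtime window)
```

In practice each question in the list (Unicode status, landscape changes, downtime expectations, PAM support) adds further branches; the sketch only shows how the matrix turns answers into a recommendation.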
Data Center Integration
TDI stands for Tailored Datacenter Integration and describes a program that allows HANA
customers to leverage existing hardware and infrastructure components for their HANA
environment. Typically, a HANA appliance comes with all necessary components
pre-configured and is provided by certified HANA hardware partners. TDI targets the use
of certain hardware and infrastructure components already existing in a customer's
landscape instead of the corresponding components delivered with a HANA appliance.
SAP HANA tailored data center integration offers you more openness and freedom of
choice to configure the layers for SAP HANA depending on your existing data center layout.
In addition to SAP HANA as a standardized and highly optimized appliance, you can use
the tailored data center integration approach, which is more open and flexible, to run SAP
HANA in your data center. This option enables a reduction in hardware and operational
costs through the reuse of existing hardware components and operational processes.
One of the easiest ways to integrate the organization's HANA system into the data center
is to work with your VCE SAP specialist to get a "data center in a box" that is configured
and ready for deployment. The flexibility of SAP HANA TDI, combined with a converged
infrastructure from VCE, delivers the best of both worlds: by incorporating TDI, the IT
team gets the single point of experience of an appliance, without the limitations of an
appliance.
VCE works with the in-house IT team and managers to consolidate in-house HANA and
non-HANA application requirements into a single-point-of-contact converged
infrastructure that includes compute, network, storage, backup, and replication options.
Because VCE systems support both bare-metal and virtualized environments, nearly any
SAP application requirement can be covered on this single platform.
If the IT team decides to design and build its own SAP HANA system, the following steps
can be followed in order to achieve a reliable operational infrastructure for your SAP
HANA system. These steps incorporate the fundamental principles and design aspects
that were discussed previously, and leverage hardware and software components
provided by EMC and its partners.
The following needs to be established:
Platform & appliance methodology (installation & update)
Persistence
Backup & Recovery (System Copy)
High Availability
Disaster Recovery
Monitoring & Administration
Security & Auditing
Roles and Responsibilities
With the appliance model, SAP distributes all support requests regarding any component
of SAP HANA to the correct part of the support organization. With tailored data center
integration, the customer is responsible for defining support agreements with the various
partners and organizing all aspects of support.
Service and Support
Customers should work with their hardware partners to ensure hardware support
requirements are being fulfilled. A supportability tool called the SAP HANA HW
Configuration Check Tool is provided by SAP, which allows you to check whether the
hardware is optimally configured to meet the requirements of SAP HANA.
Installation
A number of requirements have to be fulfilled before proceeding with the installation of
SAP HANA tailored data center integration.
The server must be certified and listed in the hardware directory.
All storage devices must have successfully passed the hardware certification for SAP
HANA.
The exam SAP Certified Technology Specialist - SAP HANA Installation
(E_HANAINS) needs to be successfully passed for a person to perform SAP HANA
software installations.
SAP HANA hardware partners and their employees do not need this certificate.
Companies, or their employees, who are sub-contractors of hardware partners
must be certified to perform SAP HANA software installations.
Change management
Updating and patching the Operating System
With tailored data center integration the customer is responsible for updating and
patching the Operating System.
Updating and patching the SAP HANA Software
With tailored data center integration the customer is responsible for installing,
updating, and patching the SAP HANA software.
Recommended steps to be performed
If the IT team decides to design and build its own SAP HANA system, the following steps
can be followed in order to achieve a reliable operational infrastructure. At a minimum,
the following questions need to be discussed before the design steps are undertaken:
What is the RAM requirement of your production HANA system?
Which systems in your landscape must meet TDI KPIs?
Which non-production systems require the same performance and support from SAP?
Which non-production systems require no SAP performance support, and what is the
required performance percentage compared to production (50%, 25%, etc.)?
What is your RTO/RPO for local and remote scenarios?
What level of data protection is required?
What is your system refresh frequency?
What other systems have to be in sync for your system refresh?
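The answers to the questions above can be captured as one structured record per system before design work starts. A minimal sketch follows; the record type and all field names are illustrative assumptions, not an SAP or VCE artifact.

```python
from dataclasses import dataclass

# Sketch: one design-input record per system in the landscape, mirroring
# the TDI questionnaire above. Field names are made up for illustration.
@dataclass
class TdiDesignInput:
    system: str
    ram_gb: int                    # RAM requirement of the system
    must_meet_tdi_kpis: bool       # must this system meet the TDI KPIs?
    perf_vs_production_pct: int    # 100 for production; e.g. 50 or 25 for non-prod
    rto_hours: float               # recovery time objective
    rpo_minutes: float             # recovery point objective
    refresh_frequency_days: int    # system refresh cadence

prod = TdiDesignInput("PRD", ram_gb=2048, must_meet_tdi_kpis=True,
                      perf_vs_production_pct=100, rto_hours=4,
                      rpo_minutes=15, refresh_frequency_days=90)
print(prod.ram_gb)   # → 2048
```

Collecting one such record per system (production, QA, sandbox, and so on) makes it straightforward to total RAM demand and to separate the systems that must meet TDI KPIs from those that do not.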
SAP HANA Deployment
Deployment of SAP HANA is a complicated task; it needs to be undertaken by top-class
professionals and preceded by a very detailed planning strategy. It is always
recommended that SAP Professional Services be engaged, as they will handle the
planning and implementation in a highly professional manner.
Upgrading an existing enterprise SAP ERP system to the HANA platform may be
challenging in terms of cost as well as various risk factors. To mitigate this, a phase-wise
migration plan can be best for many enterprises. But where to start? What options are
available for the first step?
When evaluating SAP HANA for your business, remember to keep the following in mind:
(1) How can HANA help your business? (2) What type of HANA solution do you need to
achieve this? (3) How should you plan your deployment strategy?
Before deploying SAP HANA, an organization should take time to understand where SAP
HANA can deliver maximum benefit based on the business goals.
Steps in the migration should primarily be as follows:
1. Understand the opportunities and their value in your business.
2. Map the above to an SAP HANA solution.
3. Deploy SAP HANA.
In the best possible scenario, your migration plan could start with an early-phase
deployment of the SAP CO-PA Accelerator. It is a Profitability Analysis tool with
In-Memory Computing from SAP. Its implementation is easy and quick, and will help you
not only technically but also strategically in the migration plan. It will enable the
organization to dig deep and structure parameters such as the revenue which a product
or service generates and the costs entailed to it.
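The kind of drill-down CO-PA performs, revenue per product or service minus the costs entailed to it, can be sketched in a few lines. The data set and field names below are invented for the example; the real accelerator runs this aggregation in-memory over live transactional data.

```python
# Illustrative profitability analysis: contribution per product =
# sum of revenue minus sum of costs, aggregated over line items.
sales = [
    {"product": "A", "revenue": 900.0, "costs": 650.0},
    {"product": "A", "revenue": 300.0, "costs": 200.0},
    {"product": "B", "revenue": 500.0, "costs": 520.0},
]

def profitability_by_product(rows):
    out = {}
    for r in rows:
        out[r["product"]] = out.get(r["product"], 0.0) + r["revenue"] - r["costs"]
    return out

print(profitability_by_product(sales))   # → {'A': 350.0, 'B': -20.0}
```

Even this toy aggregate shows the value of the drill-down: product B loses money on every sale, a fact that is invisible in total revenue alone.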
Challenges in SAP HANA Adoption
One of the biggest challenges when planning an SAP HANA adoption strategy stems,
ironically, from the flexibility of SAP HANA itself. Because SAP has transformed SAP
HANA from a relatively straightforward in-memory data warehouse into a platform
capable of running many enterprise applications, choosing which SAP product to start
with can be mind-boggling. The adoption strategy should necessarily take into
consideration the deployment logistics: how the database will be installed, run, managed,
and monitored; which hardware and which platform are suitable; and which purchase
decisions need to be made. Should a customer buy a standalone appliance, integrate
with an existing data center environment, or tap the cloud? The answers, it turns out,
start with the fundamentals: a good business case. Unfortunately, according to Gartner,
establishing a compelling business case is challenging for companies considering SAP
HANA adoption. The reason the business case for SAP HANA is such a major challenge
is that many organizations have large investments in their ERP implementation.
Organizations can deploy the SAP HANA in-memory database on premise to power
real-time insights across the business and to control systems and data behind their own
firewall. Choose a certified appliance provided by one of SAP's partners for the fastest
implementation, or deploy over your existing IT landscape to maximize your current data
center investments.
SAP HANA Use Cases / Application Scenarios
SAP HANA, an in-memory platform, enables managing, analyzing, and processing of big
data, allowing applications to run analytics directly on transactional data.
The SAP HANA-powered technology will allow much more in-depth HR solutions and
management, including: retention management, spans & layers, spotting rising leaders,
M&A integration, succession planning, data auditing & cleansing, self-discovery,
institutional balance, and job grade optimization.
For companies, these functionalities cover a breadth of organizational needs:
Retention Management: Shows in visual terms those who are prone to flight risk
and highlights the highest performers in the organization who have low compensation
ratios. Matches to this query are then displayed and can be viewed layer-by-layer or
by division, making it easier to spot higher concentrations of flight-risk employees.
Spans & Layers: This will show concentrations and patterns of high spans of
control. No matter whether your optimal spans-and-layers scenario is 88, 1010, or 46,
OrgInsight highlights those who fall outside the metric.
Spot Rising Leaders: It will be much more efficient to find future leaders within the