SELECT host,
component,
sum(used_memory_size) used_mem_size
FROM PUBLIC.M_SERVICE_COMPONENT_MEMORY
GROUP BY host,
component
ORDER BY sum(used_memory_size) desc;
Database resident
Resident memory is the physical memory actually in operational use by a process.
SELECT SUM(PHYSICAL_MEMORY_SIZE/1024/1024/1024) "Database Resident" FROM
M_SERVICE_MEMORY;
Find the total resident memory and physical memory size on each node
SELECT
HOST,
ROUND(USED_PHYSICAL_MEMORY/1024/1024/1024, 2) AS "Resident GB",
ROUND((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY)/1024/1024/1024, 2) AS "Physical Memory GB"
FROM PUBLIC.M_HOST_RESOURCE_UTILIZATION;
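It can be useful to compare used memory against the configured service allocation limits. A hedged sketch, assuming the TOTAL_MEMORY_USED_SIZE and EFFECTIVE_ALLOCATION_LIMIT columns of M_SERVICE_MEMORY are available on your revision:
SELECT
HOST,
SERVICE_NAME,
ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024/1024, 2) AS "Used Memory GB",
ROUND(EFFECTIVE_ALLOCATION_LIMIT/1024/1024/1024, 2) AS "Effective Allocation Limit GB"
FROM SYS.M_SERVICE_MEMORY;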
Find total Resident
SELECT
T1.HOST,
(T1.USED_PHYSICAL_MEMORY + T2.SHARED_MEMORY_ALLOCATED_SIZE)/1024/1024/1024 "Total Resident"
FROM M_HOST_RESOURCE_UTILIZATION AS T1 JOIN (SELECT
M_SERVICE_MEMORY.HOST,
SUM(M_SERVICE_MEMORY.SHARED_MEMORY_ALLOCATED_SIZE) AS SHARED_MEMORY_ALLOCATED_SIZE
FROM SYS.M_SERVICE_MEMORY
GROUP BY M_SERVICE_MEMORY.HOST) AS T2 ON T2.HOST = T1.HOST;
Find max peak used memory
SELECT
ROUND(SUM("M")/1024/1024/1024, 2) as "Max Peak Used Memory GB"
FROM (SELECT
SUM(CODE_SIZE+SHARED_MEMORY_ALLOCATED_SIZE) AS "M"
FROM SYS.M_SERVICE_MEMORY
UNION SELECT
SUM(INCLUSIVE_PEAK_ALLOCATION_SIZE) AS "M"
FROM M_HEAP_MEMORY
WHERE DEPTH = 0);
Triggering the garbage collector frees up memory; it does not unload the tables.
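On older revisions this garbage collection was triggered manually through the hdbcons memory management console, as discussed in the comment thread at the end of this post; a hedged sketch of the console call (the exact flag syntax may vary by revision, and recent releases handle this automatically):
hdbcons 'mm gc -f'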
Schema/Tables Monitoring
Find Tables loaded into memory & delta records
When used: To see which tables are loaded into memory at any given time. If a report is running slowly, check whether the table is loaded into memory; although tables are loaded lazily, it is a best practice to have frequently used tables loaded into memory (see the LOAD example after the query below).
SELECT
LOADED,
TABLE_NAME,
RECORD_COUNT,
RAW_RECORD_COUNT_IN_DELTA ,
MEMORY_SIZE_IN_TOTAL,
MEMORY_SIZE_IN_MAIN,
MEMORY_SIZE_IN_DELTA
from M_CS_TABLES
where schema_name = 'SCHEMA'
order by RAW_RECORD_COUNT_IN_DELTA Desc
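If a table turns out not to be loaded, it can be loaded into memory explicitly. A minimal sketch with a hypothetical schema and table name; LOAD ... ALL loads all columns, and UNLOAD removes the table from memory again:
LOAD "SCHEMA"."MY_TABLE" ALL;
UNLOAD "SCHEMA"."MY_TABLE";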
To drill down further and see which columns are loaded or not loaded, use the query below:
Select top 100 LOADED,
HOST,
TABLE_NAME,
COLUMN_NAME,
MEMORY_SIZE_IN_TOTAL
from PUBLIC.M_CS_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA'
AND LOADED <> 'TRUE'
MERGE DELTA
Check whether there is a delta to be merged; RAW_RECORD_COUNT_IN_DELTA provides the delta record count.
SELECT
LOADED,
TABLE_NAME,
RECORD_COUNT,
RAW_RECORD_COUNT_IN_DELTA ,
MEMORY_SIZE_IN_TOTAL,
MEMORY_SIZE_IN_MAIN,
MEMORY_SIZE_IN_DELTA
from M_CS_TABLES
where schema_name = 'SCHEMA'
order by RAW_RECORD_COUNT_IN_DELTA Desc
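If a large delta has accumulated, a merge can also be forced manually for a single table (the same statement appears in the comment thread below). A minimal sketch with a placeholder table name; a forced merge runs immediately, so use it with care on a busy system:
MERGE DELTA OF <table_name>;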
Smart merge
MERGE DELTA OF <table_name> WITH PARAMETERS ('SMART_MERGE' = 'ON');
Find Compression
When used: To see the uncompressed size and the compression ratio in HANA for the loaded tables.
SELECT top 100 "SCHEMA_NAME",
sum("DISTINCT_COUNT") RECORD_COUNT,
sum("MEMORY_SIZE_IN_TOTAL") COMPRESSED_SIZE,
sum("UNCOMPRESSED_SIZE") UNCOMPRESSED_SIZE,
(sum("UNCOMPRESSED_SIZE")/sum("MEMORY_SIZE_IN_TOTAL")) as COMPRESSION_RATIO,
100*(sum("UNCOMPRESSED_SIZE")/sum("MEMORY_SIZE_IN_TOTAL")) as COMPRESSION_PERCENTAGE
FROM "SYS"."M_CS_ALL_COLUMNS"
GROUP BY "SCHEMA_NAME"
having sum("UNCOMPRESSED_SIZE") >0
ORDER BY UNCOMPRESSED_SIZE DESC ;
To drill down to column level and identify which type of compression is applied on each column, and its ratio, use the query below:
select
COLUMN_NAME,
LOADED,
COMPRESSION_TYPE,
MEMORY_SIZE_IN_TOTAL,
UNCOMPRESSED_SIZE,
COMPRESSION_RATIO_IN_PERCENTAGE as COMPRESSION_FACTOR
from M_CS_COLUMNS
where schema_name = 'SCHEMA'
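If a large table shows a poor compression ratio, a re-evaluation of the compression can be requested manually. A hedged sketch with a placeholder table name; the optimization itself is applied with the next delta merge, and supported parameter values can differ by revision:
UPDATE <table_name> WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'YES');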
Expensive Statements
Ensure the expensive statement trace is ON
When used: To troubleshoot a report or SQL failure and understand why it failed, to monitor the expensive SQL statements executed in HANA, and to identify opportunities for performance optimization.
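The trace itself is controlled through global.ini. A hedged sketch of enabling it and setting the duration threshold in microseconds; the expensive_statement section and key names are assumptions that may differ by revision:
alter system alter configuration ('global.ini', 'SYSTEM')
set ('expensive_statement', 'enable') = 'true',
('expensive_statement', 'threshold_duration') = '1000000' with reconfigure;
Once the trace is on, failed expensive statements can be listed: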
"ALLOC_MEM_SIZE_COLSTORE",
"MEMORY_SIZE",
"REUSED_MEMORY_SIZE",
"CPU_TIME"
FROM "PUBLIC"."M_EXPENSIVE_STATEMENTS"
WHERE ERROR_CODE > 0
ORDER BY START_TIME DESC;
CONNECTIONS
Find running connections
SELECT "HOST", "PORT", "CONNECTION_ID", "TRANSACTION_ID", "START_TIME", "IDLE_TIME",
"CONNECTION_STATUS", "CLIENT_HOST", "CLIENT_IP", "CLIENT_PID", "USER_NAME",
"CONNECTION_TYPE", "OWN", "IS_HISTORY_SAVED", "MEMORY_SIZE_PER_CONNECTION",
"AUTO_COMMIT", "LAST_ACTION", "CURRENT_STATEMENT_ID", "CURRENT_OPERATOR_NAME",
"FETCHED_RECORD_COUNT", "AFFECTED_RECORD_COUNT", "SENT_MESSAGE_SIZE",
"SENT_MESSAGE_COUNT", "RECEIVED_MESSAGE_SIZE", "RECEIVED_MESSAGE_COUNT",
"CREATOR_THREAD_ID", "CREATED_BY", "IS_ENCRYPTED", "END_TIME",
"PARENT_CONNECTION_ID", "CLIENT_DISTRIBUTION_MODE", "LOGICAL_CONNECTION_ID",
"CURRENT_SCHEMA_NAME", "CURRENT_THREAD_ID"
FROM "PUBLIC"."M_CONNECTIONS"
WHERE CONNECTION_STATUS = 'RUNNING'
ORDER BY "START_TIME" DESC
Resetting Connections
Find the connection
SELECT CONNECTION_ID, IDLE_TIME
FROM M_CONNECTIONS
WHERE CONNECTION_STATUS = 'IDLE' AND CONNECTION_TYPE = 'Remote'
ORDER BY IDLE_TIME DESC
Disconnect or cancel a session
ALTER SYSTEM DISCONNECT SESSION '203927'; -- closes the connection
ALTER SYSTEM CANCEL SESSION '237048'; -- cancels the currently running statement but keeps the connection
PASSWORD Policy
Disable the password lifetime check for a user. This is used when you do not want the password expiration policy applied to a particular user; the password is then valid for the lifetime of the user.
ALTER USER <user_name> DISABLE PASSWORD LIFETIME;
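To revert this later, the lifetime check can be switched back on; a minimal sketch with the same placeholder user name:
ALTER USER <user_name> ENABLE PASSWORD LIFETIME;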
Audit Policy
Configure
Enable global auditing
alter system alter configuration ('global.ini', 'SYSTEM')
set ('auditing configuration', 'global_auditing_state') = 'true' with reconfigure;
Set the auditing file type
alter system alter configuration ('global.ini', 'SYSTEM')
set ('auditing configuration', 'default_audit_trail_type') = 'CSVTEXTFILE' with reconfigure;
Set the audit target path
alter system alter configuration ('global.ini', 'SYSTEM')
set ('auditing configuration', 'default_audit_trail_path') = 'path' with reconfigure;
Policy enable/disable
ALTER AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 ENABLE;
ALTER AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 DISABLE;
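To check which audit policies exist and whether they are active, the AUDIT_POLICIES system view can be queried; a minimal sketch:
SELECT * FROM "SYS"."AUDIT_POLICIES";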
Sohail Ahmed
May 19, 2016 5:18 PM
Very useful commands. Appreciate your effort putting them together.
Vipul Kapadia
Mar 18, 2016 1:28 AM
This is a wonderful collection of useful commands!
Is there a way to find out which tables are referenced (used) the most by views? For example, if we have the MARA table, we want to know whether it is used by 20 views or 200 views. Any help with this would be great. Thank you.
Anindya Bose
Dec 29, 2015 7:23 PM
very good article. thanks for sharing
Sanjeev Nagalikar
Dec 29, 2015 7:14 PM
Many queries in one location. Thanks for all these. Add some more about adding the XS engine, checking the ports used by tenant DBs, etc.
Vinod Nair
May 18, 2015 8:19 PM
Hi Srikanth, I agree, the command I had was from much earlier revisions, SP4 or SP5 (not sure now), where I had run into memory and thread management issues and had to manually call GC for some troubleshooting. You don't have to do any of this now on recent releases, as HANA does a good job of that and manages it automatically.
Thanks,
Vinod
srikanth mandalapu
Aug 28, 2014 11:18 PM
Hi Nair,
regarding Garbage Collection,
your advice was to run the command below,
mm gc f
Isn't this automated by setting up the below parameter in the indexserver ini file?
mvcc_aged_checker_timeout
Appreciate your thoughts.
Thanks
Srikanth M
Lucas Oliveira
Apr 15, 2014 6:48 PM
Hi All,
There's a neat set of SQL statements in collective Note 1969700 - SQL statement collection for SAP HANA.
The Note is surely complementary to the information described in this post.
Cheers,
Lucas de Oliveira
Justin Molenaur in response to Purnaram Kodavatiganti
Apr 2, 2014 9:19 PM
Thanks for the info, this helps.
Keeping what you said in mind, do you expect this table to grow over time after the initial load, or is the initial load the final dataset?
If you didn't partition the table, the auto merge will still be very expensive going forward if new data keeps arriving in the table. You are going to hit the same problem: as the table grows, the merge time and resources will increase to a point where it is unmanageable.
I would advise considering your partitioning options to mitigate that risk. That would also help with any future initial loads, since no manual process would be needed.
Regards,
Justin
Purnaram Kodavatiganti in response to Justin Molenaur
Apr 2, 2014 9:11 PM
Hi Justin,
In my case the tables are not partitioned (not my choice; the client decided not to do so at this point).
Yes, you are absolutely right, I am disabling the writes to I/O by issuing MEMORY_MERGE ON. All these steps are done only for the initial load, and we as a team decided to take the risk. Yes, we wanted to make sure we load data in chunks of one year at a time and commit immediately after every year. The reason for issuing EXEC('COMMIT') was to free up the memory in the delta. If it is not issued, the delta still grows, but not at an exponential rate.
Once the initial load is done we are moving back to the auto merge option.
Regards
PK
Lars Breddemann in response to Purnaram Kodavatiganti
Apr 1, 2014 3:20 PM
Sorry, but can't agree.
Generally speaking, the delta merge process should be left to its own devices.
Things like mass data loading optimization, delta merge fine tuning and transaction handling in stored procedures all belong to a rather specific and currently popular use case in SAP HANA: rebuilding basic data warehousing functionality in SAP HANA SQLScript.
This surely does not apply to every HANA user.
- Lars
Purnaram Kodavatiganti in response to Stefan Seemann
Mar 27, 2014 7:45 PM
Hi All,
Execute immediate('COMMIT');
MERGE DELTA OF <TABLE NAME> WITH PARAMETERS ('MEMORY_MERGE' = 'ON');
Finally, after all updates are done, run the following command:
MERGE DELTA OF <TABLE NAME>
This sequence gives even better performance in large data loads, with the delta merge data in memory not growing huge.
Hopefully this helps everyone.
PK
Stefan Seemann
Mar 27, 2014 6:05 PM
Handy collection indeed.
Instead of
where schema_name = 'SCHEMA'
you can use
where schema_name = CURRENT_USER
This automatically uses the schema name of the connected user.
Lin Hu
Mar 26, 2014 2:10 PM
It is very nice.
I would suggest a small correction for the section
Find which node is active
SELECT HOST,PORT,CONNECTION_ID FROM M_CONNECTIONS WHERE OWN = 'TRUE';
It will tell you which node your session is connected to, and not anything else. It is useful when you need to know your session's connection_id.