
Goals for Tuning

February 13, 2011 by samadhan

Objectives of Tuning the System: Tuning a system serves one or both of the following objectives.

1) To reduce the response time for end users of the system. Pay special attention to this if your database response time is not optimal.
2) To reduce the resources used to process the same work. Pay special attention to this if you have limited hardware resources.

We can achieve these objectives in three ways.

A) Reduce the Workload: SQL tuning commonly involves finding more efficient ways to process the same workload. It is often possible to change the execution plan of a statement, without altering its functionality, to reduce resource consumption.

B) Balance the Workload: Systems tend to have peak usage in the daytime, when real users are connected to the system, and low usage at night. If noncritical reports and batch jobs can be scheduled to run at night, reducing their concurrency during the day, resources are freed up for the more critical programs that run in the day.

C) Parallelize the Workload: Queries that access large amounts of data (typical data warehouse queries) can often be parallelized. This is extremely useful for reducing response time in a low-concurrency data warehouse. However, in OLTP environments, which tend to be high concurrency, parallelism can adversely impact other users by increasing the overall resource usage of the program.
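One way to shift noncritical work to the nighttime, as suggested in point B, is the Oracle scheduler. The sketch below is a minimal, hypothetical example; the job name, procedure name, and schedule are assumptions for illustration, not objects from this post.

```sql
-- Hypothetical example: run a reporting procedure at 01:00 every night
-- instead of during business hours. All names are illustrative only.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_SALES_REPORT',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REPORTS_PKG.BUILD_SALES_REPORT',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=0',
    enabled         => TRUE);
END;
/
```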

Three basic steps of SQL tuning


February 13, 2011 by samadhan

You have probably heard many times about performance tuning of Oracle, and a major part of performance tuning is tuning SQL statements so that they generate better execution plans and perform well. Tuning SQL statements becomes easier if we remember the three basic steps of SQL tuning, listed here.

Step 1: First we need to identify the high-load SQL statements (top SQL) that are responsible for the performance problem, in other words the SQL that consumes most of the application workload and system resources. We can identify them by reviewing the past SQL execution history available in the system.

Step 2: Generate execution plans for those high-load SQL statements and verify that the plans produced by the query optimizer perform reasonably.

Step 3: Implement corrective actions to generate better execution plans for poorly performing SQL statements. There are many considerations when taking corrective actions; hopefully I will discuss these considerations in my blog one by one.
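As a sketch of step 1, recent execution history can be pulled from the V$SQL view (cumulative statistics for cursors still in the shared pool). The ordering column and the cutoff of ten rows are arbitrary choices; ordering by BUFFER_GETS or DISK_READS instead is equally valid.

```sql
-- Top 10 statements by total elapsed time currently in the shared pool
SELECT *
  FROM (SELECT sql_id,
               executions,
               buffer_gets,
               ROUND(elapsed_time / 1000000, 1) AS elapsed_secs,
               SUBSTR(sql_text, 1, 60)          AS sql_text
          FROM v$sql
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;
```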

The Possible Causes of Excessive Undo Generation


February 13, 2011 by samadhan

1) Transactions that generate huge undo. It is normal to see high undo usage when there are huge transactions. If that is the case, this growth is expected behavior.

2) A high setting of UNDO_RETENTION. A higher value of UNDO_RETENTION retains more undo, because undo extents cannot be marked EXPIRED until the UNDO_RETENTION period has passed.

3) Autoextensible undo data files. Disabling autoextend on the datafiles of the active undo tablespace allows Oracle to reuse UNEXPIRED extents when the undo tablespace comes under space pressure. If the datafiles are autoextensible, Oracle will not reuse that space for new transactions; instead, new transactions will allocate fresh space and your undo tablespace will keep growing.

4) The undo tablespace uses the AUTOALLOCATE option of locally managed tablespaces. With autoallocate, as the number of extents in a segment goes up, the extent size is increased too: when the number of extents reaches the hundreds the extents get bigger, and in the thousands bigger still.
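To check which of these causes applies, it helps to see how much undo is currently ACTIVE, UNEXPIRED, and EXPIRED. A minimal diagnostic query, assuming SELECT privileges on the DBA views:

```sql
-- Undo space broken down by extent status
SELECT tablespace_name,
       status,                         -- ACTIVE / UNEXPIRED / EXPIRED
       ROUND(SUM(bytes) / 1024 / 1024) AS mb,
       COUNT(*)                        AS extents
  FROM dba_undo_extents
 GROUP BY tablespace_name, status
 ORDER BY tablespace_name, status;
```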

Troubleshoot an Unusable Index in Oracle


February 13, 2011 by samadhan If a procedural object, such as a stored PL/SQL function or a view, becomes invalid, the DBA does not necessarily have to do anything: the first time it is accessed, Oracle will attempt to recompile it, and this may well succeed. But if an index becomes unusable for any reason, it must always be repaired explicitly before it can be used.

This is because an index consists of the index key values, sorted into order, each with the relevant rowid. The rowid is the physical pointer to the location of the row to which the index key refers. If the rowids of the table change, then the index is marked as unusable. This can occur for a number of reasons. Perhaps the most common is that the table has been moved with the ALTER TABLE ... MOVE command. This changes the physical placement of all the rows, so the index entries will be pointing to the wrong place. Oracle is aware of this and therefore does not permit use of the index.

Identifying Unusable Indexes

In earlier releases of the Oracle database, if a session attempted to use an unusable index while executing a SQL statement, it would immediately return an error, and the statement would fail. Release 10g of the database changes this behavior. If a statement attempts to use an unusable index, the statement reverts to an execution plan that does not require the index. Thus, statements will always succeed, but perhaps at the cost of greatly reduced performance. This behavior is controlled by the instance parameter SKIP_UNUSABLE_INDEXES, which defaults to TRUE. The exception is an index that is necessary to enforce a constraint: if the index on a primary key column becomes unusable, the table will be locked for DML.

To detect indexes which have become unusable, query the DBA_INDEXES view:

SQL> select owner, index_name from dba_indexes where status='UNUSABLE';

Repairing Unusable Indexes

To repair the index, it must be re-created with the ALTER INDEX ... REBUILD command. This makes a pass through the table, generating a new index with correct rowid pointers for each index key. When the new index is complete, the original unusable index is dropped. The syntax of the REBUILD command has several options. The more important ones are TABLESPACE, ONLINE, and NOLOGGING.
By default, the index will be rebuilt within its current tablespace, but by specifying a tablespace with the TABLESPACE keyword it can be moved to a different one. Also by default, during the course of the rebuild the table is locked for DML. This can be avoided by using the ONLINE keyword.

1) Create a table and insert a row into it:

SQL> create table test (a number primary key);
Table created.

SQL> insert into test values (1);
1 row created.

SQL> commit;
Commit complete.

2) Check the index status:

SQL> select index_name, status from user_indexes;
INDEX_NAME    STATUS
SYS_C0044513  VALID

3) Move the table and check the status again:

SQL> alter table test move;
Table altered.

SQL> select index_name, status from user_indexes;
INDEX_NAME    STATUS
SYS_C0044513  UNUSABLE

4) Rebuild the index:

SQL> alter index SYS_C0044513 rebuild online;
Index altered.

SQL> select index_name, status from user_indexes;
INDEX_NAME    STATUS
SYS_C0044513  VALID
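When many indexes become unusable at once (for example after moving several tables), the rebuild statements can be generated from the dictionary. A sketch; note that partitioned indexes are tracked in DBA_IND_PARTITIONS instead and are not covered here:

```sql
-- Generate ALTER INDEX ... REBUILD statements for all unusable indexes
SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
       ' REBUILD ONLINE;' AS rebuild_ddl
  FROM dba_indexes
 WHERE status = 'UNUSABLE';
```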

Improve INSERT Performance in Oracle


February 14, 2011 by samadhan

One day we had an issue that required increasing the speed of insert statements. This activity was carried out at one of our bank clients. We had to insert about 500K records into the database as quickly as possible, but we were inserting at a rate of only 10 records/sec. I considered the following approaches to gain speed.

1. Use a large block size
By defining large (i.e. 16k or 32k) block sizes for the target tables, we can reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.

>DROP TABLESPACE sam_tbs INCLUDING CONTENTS AND DATAFILES;
>CREATE TABLESPACE sam_tbs DATAFILE '/mnt/extra/test1.dbf' SIZE 1024M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED BLOCKSIZE 16K;

2. Increase the size of the UNDO tablespace

>CREATE UNDO TABLESPACE UNDOTBS DATAFILE '/mnt/extra/test_undo1.dbf' SIZE 200M BLOCKSIZE 16K;
>alter system set undo_tablespace=UNDOTBS scope=both;
>DROP TABLESPACE UNDOTBSP1 INCLUDING CONTENTS AND DATAFILES;

3. APPEND into tables
Using the APPEND hint ensures that Oracle always grabs fresh data blocks by raising the high-water mark for the table.

>insert /*+ append */ into customer values ('hello','there');

4. Put the table into NOLOGGING mode
Putting the table into NOLOGGING mode allows Oracle to avoid almost all redo logging.

>SELECT logging FROM user_tables WHERE table_name = 'LOGIN';
>ALTER TABLE login NOLOGGING;

Again, to enable logging:

>ALTER TABLE login LOGGING;

5. Disable/drop indexes
It's far faster to rebuild indexes after the data load, all at once. Indexes also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.

6. Parallelize the load
We can invoke parallel DML (i.e. using the PARALLEL and APPEND hints together) to have multiple processes insert into the same table. For this INSERT optimization, make sure to define multiple freelists and use the APPEND hint.
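Point 6 can be sketched as follows. The table names and degree of parallelism are assumptions; parallel DML must be enabled at the session level, and the direct-path insert locks the table until commit.

```sql
-- Hypothetical parallel direct-path load into target_table
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_table t
SELECT * FROM source_table;

COMMIT;  -- the loaded rows cannot be queried by this session until commit
```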

Latch Contention in Oracle


February 14, 2011 by samadhan

What is a latch? Well, the hardware definition of a latch is a window or door lock, and the electronic definition is a circuit used to store information. Then what is a latch in Oracle? The term is much less familiar there, because the problem it solves is quite sophisticated.

A latch is a very quick (it can be acquired and released in nanoseconds) lock, or more precisely a serialization mechanism, that protects Oracle's shared memory in the SGA. Basically, a latch prevents the same area of the SGA from being updated by more than one process at a time. Now the question is: what is protected, and why? Each Oracle operation needs to read and update the SGA. For example:

1. When a query reads a block from disk, it modifies a free block in the buffer cache and adjusts the buffer cache LRU chain.
2. When a new SQL statement is parsed, it is added to the library cache within the SGA.
3. When DML is issued and blocks are modified, the changes are placed in the redo buffer.
4. The database writer periodically writes dirty buffers from memory to disk and updates their status from dirty to clean.
5. The redo log writer writes blocks from the redo buffer to the redo logs.

Latches prevent any of these operations from colliding and possibly corrupting the SGA. If a specific latch is already held by another process, Oracle retries very frequently, up to a certain number of attempts (controlled by a hidden parameter) called the spin count. If a process fails to acquire the latch after spinning, it sleeps and attempts to awaken after 10 milliseconds; subsequent waits increase in duration and may reach seconds in extreme cases. This affects response time and throughput. The most common latches under contention are the cache buffers chains latch and the cache buffers LRU chain latch, which protect heavily accessed blocks, called hot blocks. Contention on these latches is typically caused by concurrent access to a very hot block, and the most common such hot block is an index root or branch block.
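To see which latches a system is actually struggling with, the cumulative statistics in V$LATCH can be examined; sleeps are usually the best indicator of real contention. A minimal sketch:

```sql
-- Latches ordered by sleeps; a high miss percentage suggests contention
SELECT name, gets, misses, sleeps,
       ROUND(misses * 100 / NULLIF(gets, 0), 2) AS miss_pct
  FROM v$latch
 WHERE gets > 0
 ORDER BY sleeps DESC;
```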

Optimize Data Access Path in Oracle


February 14, 2011 by samadhan

To achieve optimum performance for data-intensive queries, materialized views and indexes are essential when tuning SQL statements. The SQL Access Advisor enables you to optimize the data access paths of SQL queries by recommending the proper set of materialized views, materialized view logs, and indexes for a given workload. A materialized view provides access to table data by storing the results of a query in a separate schema object; it contains the rows resulting from a query against one or more base tables or views. A materialized view log is a schema object that records changes to a master table's data, so that a materialized view defined on the master table can be refreshed incrementally.

The SQL Access Advisor also recommends bitmap, function-based, and B-tree indexes. A bitmap index provides reduced response time for many types of ad hoc queries, and reduced storage requirements compared to other indexing techniques. A function-based index derives the indexed value from the table data. For example, to find character data in mixed cases, a function-based index can be used to look for the values as if they were all in uppercase characters.
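As an illustration of the objects mentioned above, the following sketch creates a materialized view log on a hypothetical sales table and a fast-refreshable aggregate materialized view on top of it. Table and column names are assumptions; fast refresh of an aggregate view has additional requirements (for instance the COUNT columns included below).

```sql
-- Hypothetical base table: sales(cust_id, amount)
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (cust_id, amount)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_sales_by_cust
  REFRESH FAST ON COMMIT
AS
SELECT cust_id,
       SUM(amount)   AS total_amount,
       COUNT(amount) AS cnt_amount,   -- required for fast refresh
       COUNT(*)      AS cnt
  FROM sales
 GROUP BY cust_id;
```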

When You Would Make an Index and When Not


February 14, 2011 by samadhan

* Create an index if you frequently want to retrieve less than 15% of the rows in a large table.
* To improve performance on joins of multiple tables, index the columns used for joins.
* Small tables do not require indexes.

Some columns are strong candidates for indexing. Columns with one or more of the following characteristics are candidates:

* Values are relatively unique in the column.
* There is a wide range of values (good for regular indexes).
* There is a small range of values (good for bitmap indexes).
* The column contains many nulls, but queries often select the rows that have a value. In this case, the phrase

WHERE COL_X > -9.99 * power(10,125)

is preferable to

WHERE COL_X IS NOT NULL

because the first can use an index on COL_X (assuming that COL_X is a numeric column).

Columns with the following characteristic are less suitable for indexing:

* There are many nulls in the column and you do not search on the non-null values.

The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in a data block.

Other considerations:

1. The order of columns in the CREATE INDEX statement can affect query performance. In general, specify the most frequently used columns first. If you create a single index across columns to speed up queries that access, for example, col1, col2, and col3, then queries that access just col1, or just col1 and col2, are also sped up. But a query that accesses just col2, just col3, or just col2 and col3 does not use the index.

2. There is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.
3. Drop indexes that are no longer required.
4. Using different tablespaces (on different disks) for a table and its indexes produces better performance than storing the table and index in the same tablespace, because disk contention is reduced.
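Function-based indexes, useful for instance for case-insensitive searches, can complement the guidelines above. A sketch with hypothetical table and column names:

```sql
-- Index on the uppercased value so case-insensitive lookups can use it
CREATE INDEX emp_upper_name_i ON employees (UPPER(last_name));

-- This predicate matches the index expression and can use the index
SELECT * FROM employees WHERE UPPER(last_name) = 'SMITH';
```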

Sample Table Scans in Oracle


February 14, 2011 by samadhan

A sample table scan retrieves a random sample of data from a simple table or a complex SELECT statement, such as a statement involving joins and views. This access path is used when a statement's FROM clause includes the SAMPLE clause or the SAMPLE BLOCK clause. To perform a sample table scan when sampling by rows with the SAMPLE clause, Oracle reads a specified percentage of rows in the table. When sampling by blocks with the SAMPLE BLOCK clause, Oracle reads a specified percentage of table blocks.

Example:

SQL> select * from test_skip_scan sample(.2);
20 rows selected.

Execution Plan
Plan hash value: 571935661

| Id | Operation           | Name           | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT    |                |   20 |   180 |     6   (0)| 00:00:01 |
|  1 | TABLE ACCESS SAMPLE | TEST_SKIP_SCAN |   20 |   180 |     6   (0)| 00:00:01 |
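Sampling by blocks rather than rows only changes the clause. For example, reusing the table from the example above:

```sql
-- Read roughly 10% of the table's blocks rather than 10% of its rows
SELECT * FROM test_skip_scan SAMPLE BLOCK (10);
```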

Index Skip, Full, Fast Full, Index Join, and Bitmap Index Scans
February 16, 2011 by samadhan

A) Index Skip Scan

As the name suggests, an index skip scan does not scan the complete index; it scans subindexes instead. Index skip scan lets a composite index be split logically into smaller subindexes. In skip scanning, the initial column of the composite index is not specified in the query; in other words, it is skipped. The number of logical subindexes is determined by the number of distinct values in the initial column. Skip scanning is advantageous if there are few distinct values in the leading column of the composite index and many distinct values in the nonleading key of the index.

Suppose I make a composite index on the two columns sex and id. The leading column sex contains only two distinct values. If I now query on the non-leading column, id, then an index skip scan will be used. Example:

SQL> create table test_skip_scan (sex varchar2(1), id number, address varchar2(20));
Table created.

SQL> create index test_skip_scan_I on test_skip_scan(sex,id);
Index created.

SQL> begin
       for i in 1 .. 10000 loop
         insert into test_skip_scan
         values (decode(remainder(abs(round(dbms_random.value(2,20),0)),2),0,'M','F'), i, null);
       end loop;
     end;
     /
PL/SQL procedure successfully completed.

SQL> analyze table test_skip_scan estimate statistics;
Table analyzed.

SQL> select * from test_skip_scan where id=1;

Execution Plan
Plan hash value: 2410156502

| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT            |                  |    1 |     9 |     4   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | TEST_SKIP_SCAN   |    1 |     9 |     4   (0)| 00:00:01 |
|* 2 | INDEX SKIP SCAN             | TEST_SKIP_SCAN_I |    1 |       |     3   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("ID"=1)
       filter("ID"=1)

B) Index Fast Full Scan

Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. Unlike a full index scan, it reads the entire index using multiblock reads and can be parallelized. Fast full index scans cannot be performed against bitmap indexes. You can enable fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or request them with the INDEX_FFS hint. Example:

SQL> select /*+INDEX_FFS(test_skip_scan)*/ sex,id from test_skip_scan;
10000 rows selected.

Execution Plan
Plan hash value: 4280781105

| Id | Operation            | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT     |                  | 10000 | 40000 |     7   (0)| 00:00:01 |
|  1 | INDEX FAST FULL SCAN | TEST_SKIP_SCAN_I | 10000 | 40000 |     7   (0)| 00:00:01 |

C) Index Joins

An index join is a hash join of several indexes that together contain all the table columns that are referenced in the query. If an index join is used, then no table access is needed, because all the relevant column values can be retrieved from the indexes. An index join cannot be used to eliminate a sort operation. You can request an index join with the INDEX_JOIN hint.

SQL> select sex,id from test_skip_scan where id in (select col1 from test_tab);
1000 rows selected.

Execution Plan
Plan hash value: 1059662925

| Id | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT     |                |   999 |  6993 |     9  (12)| 00:00:01 |
|* 1 | HASH JOIN RIGHT SEMI |                |   999 |  6993 |     9  (12)| 00:00:01 |
|  2 | INDEX FAST FULL SCAN | TEST_TAB_I     |  1000 |  3000 |     2   (0)| 00:00:01 |
|  3 | TABLE ACCESS FULL    | TEST_SKIP_SCAN | 10000 | 40000 |     6   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - access("ID"="COL1")

D) Bitmap Indexes

A bitmap index uses a bitmap for key values and a mapping function that converts each bit position to a rowid. Bitmaps can efficiently merge indexes that correspond to several conditions in a WHERE clause, using Boolean operations to resolve AND and OR conditions.

Bitmap indexes and bitmap join indexes are available only if you have purchased the Oracle Enterprise Edition.
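A bitmap index is created with a dedicated keyword. A sketch on hypothetical low-cardinality columns; bitmap indexes suit read-mostly tables, since DML against them locks ranges of rows:

```sql
-- Hypothetical customers table with low-cardinality columns
CREATE BITMAP INDEX cust_gender_bix ON customers (gender);
CREATE BITMAP INDEX cust_region_bix ON customers (region);

-- The optimizer can combine both bitmaps with a Boolean AND
SELECT COUNT(*) FROM customers WHERE gender = 'F' AND region = 'EAST';
```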

What and When Index Scans Are Used


February 16, 2011 by samadhan

In an index scan, a row is retrieved by traversing the index, using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table. The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan. Index scans come in various types:

1) Index Unique Scans
2) Index Range Scans
3) Index Range Scans Descending
4) Index Skip Scans
5) Full Scans
6) Fast Full Index Scans
7) Index Joins
8) Bitmap Indexes

1) Index Unique Scans

This scan returns, at most, a single rowid. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed. This access path is used when all columns of a unique (B-tree) index, or of an index created as a result of a primary key constraint, are specified with equality conditions. There is an example later in this section.

2) Index Range Scans

An index range scan is a common operation for accessing selective data. Data is returned in the ascending order of the index columns. Multiple rows with identical values are sorted in ascending order by rowid.

If data must be sorted in a particular order, then use the ORDER BY clause, and do not rely on an index. If an index can be used to satisfy an ORDER BY clause, then the optimizer uses this option and avoids a sort.

The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions such as col1=1, col1<1, col1>1, or (col1=1 AND col2=99 AND ...). Range scans can use unique or non-unique indexes. Range scans avoid sorting when the index columns constitute the ORDER BY/GROUP BY clause. The hint INDEX(table_alias index_name) instructs the optimizer to use a specific index.

Note that a leading wildcard, as in '%text', does not result in a range scan, but 'text%' might. Look at the following examples: '88%' used a range scan but '%88' did not.

SQL> create table table_a(n number, k varchar2(15));
Table created.

SQL> begin
       for i in 1 .. 10000 loop
         insert into table_a values (i, 'pc-'||round(dbms_random.value(1,20000),0));
       end loop;
     end;
     /
PL/SQL procedure successfully completed.

SQL> create index table_a_I_K on table_a(k);
Index created.

SQL> select * from table_a where k like '88%';
no rows selected

Execution Plan
Plan hash value: 1124802227

| Id | Operation                   | Name    | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT            |         |    1 |    22 |     1   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | TABLE_A |    1 |    22 |     1   (0)| 00:00:01 |

|* 2 | INDEX RANGE SCAN            | TABLE_A_I_K |    1 |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("K" LIKE '88%')
       filter("K" LIKE '88%')

Note
- dynamic sampling used for this statement

SQL> select * from table_a where k like '%88';
102 rows selected.

Execution Plan
Plan hash value: 1923776651

| Id | Operation         | Name    | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT  |         |  102 |  2244 |     8   (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL | TABLE_A |  102 |  2244 |     8   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("K" LIKE '%88')

Note
- dynamic sampling used for this statement

3) Index Range Scans Descending

An index range scan descending is identical to an index range scan, except that the data is returned in descending order. Indexes, by default, are stored in ascending order. The optimizer uses an index range scan descending when an ORDER BY ... DESC clause can be satisfied by an index.

The hint INDEX_DESC(table_alias index_name) requests an index range scan descending. Example:

SQL> select * from table_a where k like '8888%';
8 rows selected.

Execution Plan
Plan hash value: 1124802227

| Id | Operation                   | Name        | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT            |             |    1 |     9 |     4   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | TABLE_A     |    1 |     9 |     4   (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN            | TABLE_A_I_K |    1 |       |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("K" LIKE '8888%')
       filter("K" LIKE '8888%')

SQL> select /*+index_desc(table_a)*/ * from table_a where k like '8888%';
8 rows selected.

Execution Plan
Plan hash value: 3364135956

| Id | Operation                    | Name        | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT             |             |    1 |     9 |     4   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID  | TABLE_A     |    1 |     9 |     4   (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN DESCENDING  | TABLE_A_I_K |    1 |       |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("K" LIKE '8888%')
       filter("K" LIKE '8888%' AND "K" LIKE '8888%')

4) Index Skip Scans, 5) Full Scans, 6) Fast Full Index Scans, 7) Index Joins, and 8) Bitmap Indexes are discussed in the previous post.

To illustrate with an example, create a table and make one of its columns the primary key. Then put the indexed column in the WHERE clause with an equality operator, and note that an index unique scan is used.

SQL> create table test_tab2 as select level col1, level col2 from dual connect by level<=100;
Table created.

Case 1: There is no index, so a full table scan is performed.

SQL> select * from test_tab2 where col1=99;

Execution Plan
Plan hash value: 700767796

| Id | Operation         | Name      | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT  |           |    1 |    26 |     3   (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL | TEST_TAB2 |    1 |    26 |     3   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("COL1"=99)

Note
- dynamic sampling used for this statement

Now create a non-unique index on the table.

SQL> create index test_tab2_I on test_tab2(col1);
Index created.

Case 2: As there is a non-unique index on col1, a range scan is performed.

SQL> select * from test_tab2 where col1=99;

Execution Plan
Plan hash value: 465564947

| Id | Operation                   | Name        | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT            |             |    1 |    26 |     2   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | TEST_TAB2   |    1 |    26 |     2   (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN            | TEST_TAB2_I |    1 |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("COL1"=99)

Note
- dynamic sampling used for this statement

Now drop the index and add a primary key on the table.

SQL> drop index test_tab2_I;
Index dropped.

SQL> alter table test_tab2 add primary key(col1);
Table altered.

Case 3: With a primary key and an equality condition on the column, an index unique scan is used.

SQL> select * from test_tab2 where col1=99;

Execution Plan
Plan hash value: 1384425796

| Id | Operation                   | Name        | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT            |             |    1 |    26 |     1   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | TEST_TAB2   |    1 |    26 |     1   (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN           | SYS_C006487 |    1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("COL1"=99)

How to Adjust the High Watermark in Oracle 10g: ALTER TABLE ... SHRINK
February 17, 2011 by samadhan

Yesterday one of my junior team members was confused about the fragmentation and high watermark concepts. There was also a good comment on my fragmentation post, so it inspired me to write something about the high watermark and the Oracle 10gR1 new feature, segment shrinking. Fragmentation I have already discussed in my previous post.

The high watermark is the maximum fill grade a table has ever reached. Above the high watermark there are only empty blocks; these blocks can be formatted or unformatted.

First let's have a look at when space is allocated:
- When you create a table, at least one extent (a set of contiguous blocks) is allocated to the table.
- If you have specified MINEXTENTS, that number of extents is allocated immediately.
- If you have not specified MINEXTENTS, exactly one extent is allocated.

Immediately after creation of the segment (table), the high watermark is at the first block of the first extent, as long as no inserts have been made. When you insert rows into the table, the high watermark is bumped up step by step. This is done by the server process that performs the inserts.

Now let us look at when space is released again from a segment such as a table or index. Let's assume that we have filled a table with 1,000,000 rows, and that we deleted 50,000 rows afterwards.

In this case the high watermark will have reached the level of 1,000,000 rows and will have stayed there, which means that we now have empty blocks below the high watermark. Oracle has a good reason for this: it might happen that you delete rows and immediately afterwards insert rows into the same table. In that case it is good that the space was not released by the deletes, because it would have had to be reallocated for the following inserts, which would mean permanent changes to the data dictionary (dba_free_space, dba_extents, dba_segments). Furthermore, the physical addresses of the deleted rows get recycled by new rows.

These empty blocks below the high watermark can become annoying in a number of situations, because they are not used by direct loads and direct path loads:

1. Serial direct load:
INSERT /*+ APPEND */ INTO hr.employees NOLOGGING SELECT * FROM oe.emps;

2. Parallel direct load:
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(hr.employees,2) */ INTO hr.employees NOLOGGING SELECT * FROM oe.emps;

3. Direct path load:
sqlldr hr/hr control=lcaselutz.ctl direct=y   (the default is direct=n)

All of the above actions cause the inserts to bypass the SGA and use the PGA instead: temporary segments are filled and dumped into newly formatted blocks above the high watermark. So we might want to get the high watermark down before we load data into the table, in order to use the free empty blocks for the loading.

So how can we release unused space from a table? There are a number of options that were already available before Oracle 10g:
- We can always export and import the segment. After an import the table will have only one extent, the rows will have new physical addresses, and the high watermark will be adjusted.
- Another option is to TRUNCATE the table.

With TRUNCATE we would lose all the rows in the table, so we cannot use it if we want to keep existing records.

With Oracle 9i another possibility was implemented:

ALTER TABLE emp MOVE TABLESPACE users;

This statement also causes:
- the rows to get new physical addresses, and
- the high watermark to be adjusted.
But for this:
- we need a full (exclusive) table lock, and
- the indexes will be left with the status UNUSABLE (because they contain the old rowids) and must be rebuilt.

Starting with Oracle 10gR1 we can use a new feature for adjusting the high watermark. It is called segment shrinking and is only possible for segments that use ASSM, in other words, that are located in tablespaces that use Automatic Segment Space Management. In such a tablespace a table does not really have a single high watermark; it uses two watermarks instead:
- the high high watermark, referred to as HHWM, above which all blocks are unformatted, and
- the low high watermark, referred to as LHWM, below which all blocks are formatted.
We can thus have unformatted blocks in the middle of a segment. ASSM was introduced in Oracle 9iR2, and it was made the default for tablespaces in Oracle 10gR2.

With the table shrinking feature we can get Oracle to move rows that are located in the middle or at the end of a segment further down toward the beginning of the segment, and by this make the segment more compact. For this we must first allow Oracle to change the ROWIDs of these rows by issuing:

ALTER TABLE emp ENABLE ROW MOVEMENT;

ROWIDs are normally assigned to a row at insert time, for the lifetime of the row. After we have given Oracle the permission to change the ROWIDs, we can issue a shrink statement:

ALTER TABLE emp SHRINK SPACE;

This statement proceeds in two steps:
- The first step makes the segment compact by moving rows further down to free blocks at the beginning of the segment.
- The second step adjusts the high watermark.
For the second step Oracle needs an exclusive table lock, but only for a very short moment. Table shrinking:
- will adjust the high watermark
- can be done online

- will cause only row locks during the operation, plus a very short full table lock at the end of the operation
- indexes will be maintained and remain usable
- can be done in one go
- can be done in two steps (this can be useful if you cannot get a full table lock during certain hours: you perform only the first step, and adjust the high watermark later when it is more convenient):
  - ALTER TABLE emp SHRINK SPACE; (only the emp table)
  - ALTER TABLE emp SHRINK SPACE CASCADE; (all dependent objects as well)
  - ALTER TABLE emp SHRINK SPACE COMPACT; (only performs the first step, moving the rows)

How are the indexes maintained? In the first phase Oracle scans the segment from the back to find the position of the last row, then scans the segment from the front to find the position of the first free slot in a block. If the two positions are the same, there is nothing to shrink. If the two positions differ, Oracle deletes the row from the back and inserts it into the free position at the front of the segment. Oracle then scans the segment from the back and the front again and again, until the two positions meet. Since DML statements are used to move the rows, the indexes are maintained at the same time. Only row-level locks are used for these operations in the first phase of the SHRINK SPACE statement.

The following restrictions apply to table shrinking:
1) It is only possible in tablespaces with ASSM.
2) You cannot shrink:
- UNDO segments
- temporary segments
- clustered tables
- tables with a column of datatype LONG
- LOB indexes
- IOT mapping tables and IOT overflow segments
- tables with materialized views with ON COMMIT refresh
- tables with materialized views that are based on ROWIDs

Oracle 10g comes with a Segment Advisor utility. Enterprise Manager Database Control even has a wizard that can search for shrink candidates.
This advisor is run automatically by an autotask job on a regular basis in the default maintenance window.
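Putting the statements above together, a typical shrink session on the emp table from the examples might look like this (the two-step variant is shown commented out as an alternative):

```sql
-- Allow Oracle to change rowids, then compact the segment and move the HWM
ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE;

-- Two-step alternative for busy hours:
-- ALTER TABLE emp SHRINK SPACE COMPACT;  -- step 1: move rows only
-- ALTER TABLE emp SHRINK SPACE;          -- later: adjust the high watermark

-- Include dependent objects (e.g. indexes) in one statement:
-- ALTER TABLE emp SHRINK SPACE CASCADE;
```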

You can use the built-in package DBMS_SPACE to run the advisor manually as well. I will blog about this some time later.
