
NetApp Best Practice Guidelines for Oracle

NetApp, Inc. TR-3369, Revised May 2008.

Eric Barrett, Technical Global Advisor
Bikash R. Choudhury, Technical Global Advisor
Bruce Clarke, Consulting Systems Engineer
Blaine McFadden, Technical Marketing
Tushar Patel, Technical Marketing
Ed Hsu, Systems Engineer
Christopher Slater, Database Consulting Systems Engineer
Michael Tatum, Database Consulting Systems Engineer

Table of Contents
1. Introduction
2. NetApp System Configuration
   2.1. Appliance Network Settings
      2.1.1. Ethernet: Gigabit Ethernet, Autonegotiation, and Full Duplex
   2.2. Volume, Aggregate Setup and Options
      2.2.1. Databases
      2.2.2. Aggregates and FlexVol Volumes or Traditional Volumes
      2.2.3. Volume Size
      2.2.4. Recommended Volumes for Oracle Database Files and Logfiles
      2.2.5. Oracle Optimal Flexible Architecture (OFA) on NetApp Storage
      2.2.6. Oracle Home Location
      2.2.7. Best Practices for Control and Log Files
   2.3. RAID Group Size
      2.3.1. Traditional RAID
      2.3.2. RAID-DP
   2.4. Snapshot and SnapRestore
   2.5. Snap Reserve
   2.6. System Options
      2.6.1. The minra Option
      2.6.2. File Access Time Update
      2.6.3. NFS Settings
3. Operating Systems
   3.1. Linux
      3.1.1. Linux: Recommended Versions
      3.1.2. Linux: Kernel Patches
      3.1.3. Linux: OS Settings
      3.1.4. Linux Networking: Full Duplex and Autonegotiation
      3.1.5. Linux Networking: Gigabit Ethernet Network Adapters
      3.1.6. Linux Networking: Jumbo Frames with GbE
      3.1.7. Linux NFS Protocol: Mount Options
      3.1.8. iSCSI Initiators for Linux
      3.1.9. FCP SAN Initiators for Linux
   3.2. Sun Solaris Operating Systems
      3.2.1. Solaris: Recommended Versions
      3.2.2. Solaris: Kernel Patches
      3.2.3. Solaris: OS Settings
      3.2.4. Solaris Networking: Full Duplex and Autonegotiation
      3.2.5. Solaris Networking: Gigabit Ethernet Network Adapters
      3.2.6. Solaris Networking: Jumbo Frames with GbE
      3.2.7. Solaris Networking: Improving Network Performance
      3.2.8. Solaris IP Multipathing (IPMP)
      3.2.9. Solaris NFS Protocol: Mount Options
      3.2.10. iSCSI Initiators for Solaris
      3.2.11. Fibre Channel SAN for Solaris
   3.3. Microsoft Windows Operating Systems
      3.3.1. Windows Operating System: Recommended Versions
      3.3.2. Windows Operating System: Service Packs
      3.3.3. Windows Operating System: Registry Settings
      3.3.4. Windows Networking: Autonegotiation and Full Duplex
      3.3.5. Windows Networking: Gigabit Ethernet Network Adapters
      3.3.6. Windows Networking: Jumbo Frames with GbE
      3.3.7. iSCSI Initiators for Windows
      3.3.8. FCP SAN Initiators for Windows
4. Oracle Database Settings
   4.1. DISK_ASYNCH_IO
   4.2. DB_FILE_MULTIBLOCK_READ_COUNT
   4.3. DB_BLOCK_SIZE
   4.4. DBWR_IO_SLAVES and DB_WRITER_PROCESSES
   4.5. DB_BLOCK_LRU_LATCHES
5. Backup, Restore, and Disaster Recovery
   5.1. How to Back Up Data from a NetApp System
   5.2. Creating Online Backups Using Snapshot Copies
   5.3. Recovering Individual Files from a Snapshot Copy
   5.4. Recovering Data Using SnapRestore
   5.5. Consolidating Backups with SnapMirror
   5.6. Creating a Disaster Recovery Site with SnapMirror
   5.7. Creating Nearline Backups with SnapVault
   5.8. NDMP and Native Tape Backup and Recovery
   5.9. Using Tape Devices with NetApp Systems
   5.10. Supported Third-Party Backup Tools
   5.11. Backup and Recovery Best Practices
      5.11.1. SnapVault and Database Backups
   5.12. SnapManager for Oracle: Backup and Recovery Best Practices
      5.12.1. SnapManager for Oracle: ASM-based Backup and Restore
      5.12.2. SnapManager for Oracle: RMAN-based Backup and Restore
      5.12.3. SnapManager for Oracle: Cloning
References
Revision History

1. Introduction
Thousands of Network Appliance (NetApp) customers have successfully deployed Oracle Databases on NetApp filers for their mission- and business-critical applications. NetApp and Oracle have worked over the past several years to validate Oracle products on NetApp filers and a range of server platforms. NetApp and Oracle support have established a joint escalations team that works hand in hand to resolve customer support issues in a timely manner. In the process, the team discovered that most escalations are due to failure to follow established best practices when deploying Oracle Databases with NetApp filers. This document describes best-practice guidelines for running Oracle Databases on NetApp storage systems with platforms such as Solaris, HP-UX, AIX, Linux, and Windows. It reflects work done by NetApp and Oracle engineers at various joint customer sites. This document should be treated as a starting reference point; it describes the minimum requirements for deploying Oracle on NetApp. This guide assumes a basic understanding of the technology and operation of NetApp products and presents options and recommendations for planning, deployment, and operation of NetApp products to maximize their effective use.

2. NetApp System Configuration


2.1. Appliance Network Settings
When configuring network interfaces for new systems, it is best to run the setup command, which automatically brings up the interfaces and updates the /etc/rc and /etc/hosts files. The setup command requires a reboot to take effect. If a system is in production and cannot be rebooted, network interfaces can instead be configured with the ifconfig command. If a NIC is currently online and needs to be reconfigured, it must first be brought down. To minimize downtime on that interface, a series of commands can be entered on a single command line, separated by semicolons (;). Example:

filer> ifconfig e0 down; ifconfig e0 `hostname`-e0 mediatype auto netmask 255.255.255.0 partner e0

When configuring or reconfiguring NICs or VIFs in a cluster, it is imperative to include the appropriate partner <interface> or VIF name in the configuration of the cluster partner's NIC or VIF to ensure fault tolerance in the event of cluster takeover. Consult your NetApp support representative for assistance. A NIC or VIF being used by a database should not be reconfigured while the database is active; doing so can result in a database crash.
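As an illustration only, the relevant /etc/rc lines on one node of a clustered pair might look like the following sketch. The interface names (e0, e5, e9), the VIF name dbvif, and the netmask are examples rather than values from this document, and the corresponding partner entries must also exist on the partner filer.

# /etc/rc excerpt on FILER1 (hostnames resolve through /etc/hosts)
hostname FILER1
vif create multi dbvif e5 e9
ifconfig dbvif `hostname`-dbvif netmask 255.255.255.0 partner dbvif
ifconfig e0 `hostname`-e0 mediatype auto netmask 255.255.255.0 partner e0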

2.1.1. Ethernet: Gigabit Ethernet, Autonegotiation, and Full Duplex


Any database using NetApp storage should utilize Gigabit Ethernet on both the filer and the database server. NetApp Gigabit II, III, and IV cards are designed to autonegotiate interface configurations and can intelligently configure themselves if the autonegotiation process fails. For this reason, NetApp recommends that Gigabit Ethernet links on clients, switches, and NetApp systems be left in their default autonegotiation state, unless no link is established, performance is poor, or other conditions arise that might warrant further troubleshooting. Flow control should by default be set to full on the filer in its /etc/rc file by including the following entry (assuming the Ethernet interface is e5):

ifconfig e5 flowcontrol full

If the output of the ifstat -a command does not show full flow control, the switch port must also be configured to support it. (The ifconfig command on the filer always shows the requested setting; ifstat shows the flow control that was actually negotiated with the switch.)

2.2. Volume, Aggregate Setup and Options

2.2.1. Databases


There is currently no empirical data to suggest that splitting a database into multiple physical volumes enhances or degrades performance. Therefore, the decision on how to structure the volumes used to store a database should be driven by backup, restore, and mirroring requirements. A single database instance should not be hosted on multiple unclustered filers, because a database with sections on multiple filers makes maintenance that requires filer downtime (even for short periods) hard to schedule and increases the impact of downtime. If a single database instance must be spread across several separate filers for performance, care should be taken during planning so that the impact of filer maintenance or backup can be minimized. Segmenting the database so that the portions on a specific filer can periodically be taken offline is recommended whenever feasible.

2.2.2. Aggregates and FlexVol Volumes or Traditional Volumes


With Data ONTAP 7G, Network Appliance supports pooling a large number of disks into an aggregate and then building virtual volumes (FlexVol volumes) on top of those disks. These have many benefits for Oracle database environments; refer to [1]. For Oracle databases, NetApp recommends pooling all of your disks into a single large aggregate and using FlexVol volumes for your database datafiles and logfiles, as described below. This provides much simpler administration, particularly for growing and shrinking volumes, without affecting performance. For more details on exact layout recommendations, refer to [2].
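For illustration only, a minimal command sketch follows; the aggregate name, volume names, disk count, and sizes are examples and should be sized for your environment.

filer> aggr create dbaggr -t raid_dp -r 16 28
filer> vol create oradata dbaggr 400g
filer> vol create oralog dbaggr 100g

Here dbaggr pools 28 disks into a single RAID-DP aggregate, and the oradata and oralog FlexVol volumes draw space from it and can be grown or shrunk later with the vol size command.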

2.2.3 Volume Size


NetApp recommends that customers choose volume sizes based on requirements for backup and recovery, as well as other aspects of the storage system design.

2.2.4 Recommended Volumes for Oracle Database Files and Logfiles


Based on our testing, we found the following layouts adequate for most scenarios. The general recommendation is to have a single aggregate containing all the flexible volumes containing database components.

Flexible Volumes and Aggregates

Component             | Volume layout            | Notes
Database binaries     | Dedicated FlexVol volume |
Database config files | Dedicated FlexVol volume | Multiplex with transaction logs
Transaction log files | Dedicated FlexVol volume | Multiplex with config files
Archive logs          | Dedicated FlexVol volume | Use SnapMirror
Data files            | Dedicated FlexVol volume |
Temporary datafiles   | Dedicated FlexVol volume | Do not make Snapshot copies of this volume
Cluster related files | Dedicated FlexVol volume |

Traditional Volumes
For traditional volumes, we generally recommend that you create one volume each for datafiles and logs. Create an additional volume if ORACLE_HOME will reside on the storage system.

2.2.5. Oracle Optimal Flexible Architecture (OFA) on NetApp Storage


Distribute files on multiple volumes on physically separate disks to achieve I/O load balancing:
- Separate high-I/O Oracle files from system files for better response times.
- Ease backup and recovery of Oracle data and log files by putting them in separate logical volumes.
- Ensure fast recovery from a crash to minimize downtime.
- Maintain logical separation of Oracle components to ease maintenance and administration.
- The OFA architecture works well with a multiple Oracle home (MOH) layout.

For more information about Oracle OFA for RAC and non-RAC environments, and about differences between Oracle9i and Oracle10g, visit the following links:
OFA for non-RAC: http://download-west.oracle.com/docs/html/B14399_01/app_ofa.htm#i633126
OFA ORACLE_HOME changes for RAC: http://download-west.oracle.com/docs/cd/B19306_01/install.102/b14203/apa.htm#CHDCDGFE

ORACLE_BASE
    ORACLE_HOME (home1)
        /DBS
        /LOG
        /Admin
    ORACLE_HOME (home2)
        /DBS
        /LOG
        /Admin

Type of Files | Description | OFA-Compliant Mount Point | Location
ORACLE_HOME | Oracle libraries and binaries | /u01/app/oracle/product/9.2.0/ or /u01/app/oracle/product/10.1.0/db_unique_name | Local filesystem or storage system
Database files | Oracle database files | /u02/oradata | NFS mount on storage subsystem
Log Files | Oracle redo archive logs | /u03/oradata | NFS mount on storage subsystem
CRS_HOME (for 10.1.x.x RAC) | Oracle CRS HOME | /u01/app/oracle/product/10.1.0/crs_1 | NFS mount on storage subsystem
CRS_HOME (for 10.2.x.x RAC) | Oracle CRS HOME | /u04/crs/product/10.2.0/app/ (Oracle 10g R2) | NFS mount on storage subsystem
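As a minimal sketch, these OFA mount points might map to filer volumes as follows; the filer host name and volume names are examples, and the recommended mount options are covered in sections 3.1.7 and 3.2.9.

mount filer1:/vol/orabin /u01/app/oracle
mount filer1:/vol/oradata /u02/oradata
mount filer1:/vol/oralog /u03/oradata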

2.2.6. Oracle Home Location


The OFA structure is flexible enough that ORACLE_HOME can reside either on the local file system or on an NFS-mounted volume. For Oracle 10g, ORACLE_HOME can be shared in a RAC configuration, where a single set of Oracle binaries and libraries is shared by multiple instances of the same database. Some details about a shared ORACLE_HOME are discussed below.

What is a shared ORACLE_HOME? A shared ORACLE_HOME is an ORACLE_HOME directory shared by two or more hosts: an Oracle software install directory, mounted from an NFS server, that is accessed by two or more hosts through the same directory path. It typically includes the Oracle binaries, libraries, network files (listener.ora, tnsnames.ora, and so on), oraInventory, dbs, and related directories. According to the OFA, an ORACLE_HOME directory looks similar to /u01/app/oracle/product/10.2.0/db_1.

What does Oracle support on Oracle 10g?
- Single instance (Oracle 10g): an NFS-mounted ORACLE_HOME mounted on a single host.
- Oracle RAC (Oracle 10g): an NFS-mounted ORACLE_HOME mounted on one or more hosts.

What are the advantages of sharing the ORACLE_HOME in Oracle 10g?
- Redundant copies are no longer needed for multiple hosts. This is extremely efficient in a testing type of environment where quick access to the Oracle binaries from a similar host system is necessary.
- Disk space savings.
- Patch application for multiple systems can be completed more rapidly. For example, this is beneficial when testing 10 systems that all need to run exactly the same Oracle Database version.
- It is easier to add nodes.

What are the disadvantages of sharing the ORACLE_HOME in Oracle 10g?
- Patching one ORACLE_HOME directory requires bouncing all databases that use that home.
- In a high-availability environment, a shared ORACLE_HOME can cause downtime on a greater number of servers if it is impacted.

What does NetApp support regarding sharing the ORACLE_HOME?
- NetApp DOES support a shared ORACLE_HOME in a RAC environment.
- NetApp DOES support a shared ORACLE_HOME for single-instance Oracle when it is mounted on a single host system.
- NetApp DOES NOT support a shared ORACLE_HOME in a production environment that requires high availability for a single-instance Oracle setup. In other words, multiple databases should not share a single NFS-mounted ORACLE_HOME while any of the databases are running in production mode.
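As a hedged sketch, sharing an ORACLE_HOME between two RAC nodes might look like the following; the volume name /vol/orabin and the host names dbnode1 and dbnode2 are examples.

filer> exportfs -p rw=dbnode1:dbnode2,root=dbnode1:dbnode2 /vol/orabin

# On each RAC node:
mount filer1:/vol/orabin /u01/app/oracle/product/10.2.0/db_1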

2.2.7. Best Practices for Control and Log Files


Online Redo Log Files
Multiplex your log files. To do that, follow these recommendations:
1. Create a minimum of two online redo log groups, each with three members. Put the first online redo log group on one volume and the next on another volume.
The LGWR (Log Writer) instance process flushes the redo log buffer, which contains both committed and uncommitted transactions, to all members of the current online redo log group. When the group is full, LGWR performs a log switch to the next group and writes to all members of that group until it fills up, and so on. Checkpoints do not cause log switches; in fact, many checkpoints can occur while a log group is being filled, and a checkpoint occurs when a log switch occurs.
Suggested layout:
Redo Grp 1: $ORACLE_HOME/Redo_Grp1 (on filer volume /vol/oralog)
Redo Grp 2: $ORACLE_HOME/Redo_Grp2 (on filer volume /vol/oralog)

Archived Log Files
1. Set your init parameter LOG_ARCHIVE_DEST to a directory in the log volume, such as $ORACLE_HOME/log/ArchiveLog (on filer volume /vol/oralog).


Control Files
Multiplex your control files. To do that:
1. Set your init parameter CONTROL_FILES to point to destinations on at least two different filer volumes:
Dest 1: $ORACLE_HOME/Control_File1 (on the local filesystem or on filer volume /vol/oralog)
Dest 2: $ORACLE_HOME/log/Control_File2 (on filer volume /vol/oradata)
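For example, the corresponding init.ora entries might look like the following sketch; the file names and paths are illustrative only and should follow your own volume layout.

# Multiplexed control files on two different filer volumes
control_files = ('/u02/oradata/control01.ctl', '/u03/oradata/control02.ctl')
# Archive log destination on the log volume
log_archive_dest = '/u03/oradata/arch'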

2.3. RAID Group Size


When reconstruction rate (the time required to rebuild a disk after a failure) is an important factor, smaller RAID groups should be used. Below we recommend the best RAID group sizes based on whether you are using traditional NetApp RAID or RAID-DP.

2.3.1 Traditional RAID


Network Appliance recommends using the default RAID group size of eight disks for most applications. Larger RAID group sizes increase the impact from disk reconstruction due to:
- An increased number of reads required
- Increased RAID resources required
- An extended period during which I/O performance is impacted (reconstruction in a larger RAID group takes longer; therefore I/O performance is compromised for a longer period)

These factors will result in a larger performance impact to normal user workloads and/or slower reconstruction rates. Larger RAID groups also increase the possibility that a double disk failure will lead to data loss. (The larger the RAID group, the greater the chance that two disks will fail at the same time in the same group.)

2.3.2 RAID-DP
With the release of Data ONTAP 6.5, double-parity RAID, or RAID-DP, was introduced. With RAID-DP, each RAID group is allocated an additional parity disk. Given this additional protection, the likelihood of data loss due to a double disk failure has been nearly eliminated, and therefore larger RAID group sizes can be supported. With Data ONTAP 6.5 or later, RAID group sizes of up to 16 disks can be safely configured using RAID-DP; NetApp recommends the default RAID group size of 16 for RAID-DP.
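For illustration, the RAID group size can be set when an aggregate or traditional volume is created, or adjusted later; the names and disk count below are examples.

filer> aggr create dbaggr -t raid_dp -r 16 32
filer> aggr options dbaggr raidsize 16
filer> vol options oratrad raidsize 8

The first command creates a RAID-DP aggregate with a RAID group size of 16, the second changes the RAID group size of an existing aggregate, and the third sets a traditional volume to the recommended default of eight disks per RAID group.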

2.4. Snapshot and SnapRestore


NetApp strongly recommends using Snapshot and SnapRestore for Oracle Database backup and restore operations. Snapshot provides a point-in-time copy of the entire database in seconds without incurring any performance penalty, while SnapRestore can instantly restore an entire database to a point in time in the past. In order for Snapshot copies to be effectively used with Oracle Databases, they must be coordinated with the Oracle hot backup facility. For this reason, NetApp recommends that automatic Snapshot copies be turned off on volumes that are storing data files for an Oracle Database. To turn off automatic Snapshot copies on a volume, issue the following command:

vol options <volname> nosnap on

If you want to make the .snapshot directory invisible to clients, issue the following command:

vol options <volname> nosnapdir on

With automatic Snapshot copies disabled, regular Snapshot copies are created as part of the Oracle backup process when the database is in a consistent state. For additional information on using Snapshot and SnapRestore to back up and restore an Oracle Database, see [3].
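The following is a simplified sketch only; the tablespace and volume names are examples, and the complete procedure in [3] places every tablespace in hot backup mode and also handles archive logs and control files.

SQL>  alter tablespace users begin backup;
filer> snap create oradata nightly_0
SQL>  alter tablespace users end backup;

To revert the entire volume to that Snapshot copy (with the database shut down first):

filer> snap restore -s nightly_0 oradata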


2.5. Snap Reserve


Setting the snap reserve on a volume sets aside part of the volume for the exclusive use of Snapshot copies. Note: Snapshot copies may consume more space than allocated with snap reserve, but user files may not consume the reserved space. To see the snap reserve size on a volume, issue this command:

snap reserve

To set the volume snap reserve size (the default is 20%), issue this command:

snap reserve <volume> <percentage>

Do not use a percent sign (%) when specifying the percentage. The snap reserve should be adjusted to reserve slightly more space than the Snapshot copies of a volume consume at their peak. The peak Snapshot copy size can be determined by monitoring a system over a period of a few days when activity is high. The snap reserve may be changed at any time. Don't raise the snap reserve to a level that exceeds free space on the volume; otherwise client machines may abruptly run out of storage space. NetApp recommends that you frequently observe the amount of snap reserve being consumed by Snapshot copies. Do not allow the amount of space consumed to exceed the snap reserve. If the snap reserve is exceeded, consider increasing the percentage of the snap reserve or deleting Snapshot copies until the amount of space consumed is less than 100%. NetApp DataFabric Manager (DFM) can aid in this monitoring.
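For example, to set and then monitor the reserve on a volume (the volume name is an example):

filer> snap reserve oradata 25
filer> df /vol/oradata
filer> snap list oradata

The df output reports the active file system and the Snapshot reserve on separate lines, and snap list shows how much space each Snapshot copy consumes.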

2.6. System Options

2.6.1. The minra Option


When the minra option is enabled, it minimizes the number of blocks that are prefetched for each read operation. By default, minra is turned off, and the system performs aggressive read ahead on each volume. The effect of read ahead on performance depends on the I/O characteristics of the application. If data is being accessed sequentially, as when a database performs full table and index scans, read ahead will increase I/O performance. If data access is completely random, read ahead should be disabled, since it may decrease performance by prefetching disk blocks that are never used, thereby wasting system resources. The following command is used to enable minra on a volume and turn read ahead off:

vol options <volname> minra on

Generally, the read ahead operation is beneficial to databases, and the minra option should be left alone. However, NetApp recommends experimenting with the minra option to observe the performance impact, since it is not always possible to determine how much of an application's activity is sequential versus random. This option is transparent to client access and can be changed at will without disrupting client I/O. Be sure to allow two to three minutes for the cache on the appliance to adjust to the new minra setting before looking for a change in performance.

2.6.2. File Access Time Update


Another option that can improve performance is disabling file access time updates. If an application does not require or depend upon maintaining accurate access times for files, this option can be disabled. Use this option only if the application generates heavy read I/O traffic. The following command is used to disable file access time updates:

vol options <volname> no_atime_update on


2.6.3. NFS Settings


For database files and mountpoints, NetApp supports the use of TCP as the data transport mechanism with the current NFS V3.0 client software on the host. UDP is not supported for database files and mountpoints.

3. Operating Systems
For a complete, up-to-date list of the platforms certified on Oracle databases, refer to http://www.netapp.com/partners/oracle/tech.html

3.1. Linux
For additional information about getting the most from Linux and NetApp technologies, see [4].

3.1.1. Linux: Recommended Versions


The various Linux operating systems are based on the underlying kernel. With all the distributions available, it is important to focus on the kernel to understand features and compatibility.

Kernel 2.4
The NFS client in this kernel has many improvements over the 2.2 client, most of which address performance and stability problems. The NFS client in kernels later than 2.4.16 has significant changes to help improve performance and stability. There have been recent controversial changes in the 2.4 branch that have prevented distributors from adopting late releases of the branch. Although there were significant improvements to the NFS client in 2.4.15, Torvalds also replaced parts of the VM subsystem, making the 2.4.15, 2.4.16, and 2.4.17 kernels unstable for heavy workloads. Many recent releases from Red Hat and SUSE include the 2.4.18 kernel. The use of 2.4 kernels on hardware with more than 896MB of memory should include a special kernel compile option known as CONFIG_HIGHMEM, which is required to access and use memory above 896MB. The Linux NFS client has a known problem in these configurations in which an application or the whole client system can hang at random. This issue has been addressed in the 2.4.20 kernel but still haunts kernels contained in distributions from Red Hat and SUSE that are based on earlier kernels.

Linux Kernel Recommendation
NetApp has tested many kernel distributions, and those based on 2.6 are currently recommended. Recommended distributions include Red Hat Enterprise Linux Advanced Server 3.0 and 4.0 as well as SUSE Linux Enterprise Server 9 (SLES 9). This section will be revisited in the future with further recommendations.

Manufacturer | Version             | Tested | Recommended
Red Hat      | Advanced Server 2.1 | Yes    | No
Red Hat      | Advanced Server 3.0 | Yes    | Yes
Red Hat      | Advanced Server 4.0 | Yes    | Yes
SUSE         | 7.2                 | Yes    | No
SUSE         | SLES 8              | Yes    | No
SUSE         | SLES 9              | Yes    | Yes


3.1.2. Linux: Kernel Patches


In all circumstances, the kernel patches recommended by Oracle for the particular database product being run should be applied first. In general, those recommendations will not conflict with the ones here, but if a conflict does arise, check with Oracle or NetApp customer support for resolution before proceeding. The uncached I/O patch was introduced in Red Hat Advanced Server 2.1, update 3, with kernel errata e35 and up. It is mandatory to use uncached I/O when running Oracle9i RAC with NetApp filers in a NAS environment. Uncached I/O does not cache data in the Linux file system buffer cache during read/write operations for volumes mounted with the "noac" mount option. To enable uncached I/O, add the following entry to the /etc/modules.conf file and reboot the cluster nodes:

options nfs nfs_uncached_io=1

The volumes used for storing Oracle Database files should still be mounted with the "noac" mount option for Oracle9i RAC databases. The uncached I/O patch was developed by Red Hat and tested by Oracle, NetApp, and Red Hat.

3.1.3. Linux: OS Settings


3.1.3.1. Enlarging a Client's Transport Socket Buffers
Enlarging the transport socket buffers that Linux uses for NFS traffic helps reduce resource contention on the client, reduces performance variance, and improves maximum data and operation throughput. In future releases of the client, the following procedure will not be necessary, as the client will automatically choose an optimal socket buffer size.
1. Become root on the client.
2. cd into /proc/sys/net/core
3. echo 1048576 > rmem_max
4. echo 262143 > wmem_max
5. echo 1048576 > rmem_default
6. echo 262143 > wmem_default
7. Remount the NFS file systems on the client.
Red Hat distributions after 7.2 contain a file called /etc/sysctl.conf where changes such as these can be added so that they are executed after every system reboot. Add these lines to the /etc/sysctl.conf file on these Red Hat systems:

net.core.rmem_max = 1048576
net.core.wmem_max = 262143
net.core.rmem_default = 1048576
net.core.wmem_default = 262143

3.1.3.2. Other TCP Enhancements
The following settings can help reduce the amount of work clients and filers do when running NFS over TCP:

echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps


These operations disable optional features of TCP to save a little processing time and network bandwidth. When building kernels, be sure that CONFIG_SYNCOOKIES is disabled. SYN cookies slow down TCP connections by adding extra processing on both ends of the socket. Some Linux distributors provide kernels with SYN cookies enabled. Linux 2.2 and 2.4 kernels support large TCP windows (RFC 1323) by default. No modification is required to enable large TCP windows.

3.1.4. Linux Networking: Full Duplex and Autonegotiation


Most network interface cards use autonegotiation to obtain the fastest settings allowed by the card and the switch port to which it attaches. Sometimes, chipset incompatibilities may result in constant renegotiation or negotiating half duplex or a slow speed. When diagnosing a network problem, be sure the Ethernet settings are as expected before looking for other problems. Avoid hard coding the settings to solve autonegotiation problems, because it only masks a deeper problem. Switch and card vendors should be able to help resolve these problems.

3.1.5. Linux Networking: Gigabit Ethernet Network Adapters


If Linux servers are using high-performance networking (gigabit or faster), provide enough CPU and memory bandwidth to handle the interrupt and data rate. The NFS client software and the gigabit driver reduce the resources available to the application, so make sure resources are adequate. Most gigabit cards that support 64-bit PCI or better should provide good performance. Any database using NetApp storage should utilize Gigabit Ethernet on both the filer and the database server to achieve optimal performance. NetApp has found that the following Gigabit Ethernet cards work well with Linux:
- SysKonnect. The SysKonnect SK-98XX series cards work very well with Linux and support single- and dual-fiber and copper interfaces for better performance and availability. A mature driver for this card exists in the 2.4 kernel source distribution.
- Broadcom. Many cards and switches use this chipset, including the ubiquitous 3Com solutions. This provides a high probability of compatibility between network switches and Linux clients. The driver software for this chipset appeared in the 2.4.19 Linux kernel and is included in Red Hat distributions with earlier 2.4 kernels. Be sure the chipset firmware is up to date.
- AceNIC Tigon II. Several cards, such as the NetGear GA620T, use this chipset, but none are still being manufactured. A mature and actively maintained driver for this chipset exists in the kernel source distribution.
- Intel EEPro/1000. This appears to be the fastest gigabit card available for Intel-based systems, but the card's driver software is included only in recent kernel source distributions (2.4.20 and later) and may be somewhat unstable. Driver software for earlier kernels can be found on the Intel Web site. There are reports that the jumbo frame MTU for Intel cards is only 8998 bytes, not the standard 9000 bytes.

3.1.6. Linux Networking: Jumbo Frames with GbE


All of the cards described above support the jumbo frames option of Gigabit Ethernet. Using jumbo frames can improve performance in environments where Linux NFS clients and NetApp systems are together on an unrouted network. Be sure to consult the command reference for each switch to make sure it is capable of handling jumbo frames. There are some known problems in Linux drivers and the networking layer when using the maximum frame size (9000 bytes). If unexpected performance slowdowns occur when using jumbo frames, try reducing the MTU to 8960 bytes.
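For illustration, assuming the Linux interface is eth1 and the filer interface is e5 (both names are examples), jumbo frames might be enabled as follows; the switch ports in the path must be configured for the same MTU.

# On the Linux host (persist the setting in the distribution's network scripts)
ifconfig eth1 mtu 9000 up

# On the NetApp system (add the same line to /etc/rc to make it persistent)
ifconfig e5 mtusize 9000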


3.1.7. Linux NFS Protocol: Mount Options


For background on NFS and a brief explanation of what the various NFS client mount options do, consult the NetApp technical report Using the Linux NFS Client with NetApp [4]. The table at the following link summarizes the recommended NFS client-side mount options for various combinations of Oracle version and OS platform:
http://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb7518 (NOW login required)
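The KB article above is the authoritative source for the option list. As a representative illustration only, mirroring the Solaris recommendations in section 3.2.9, an /etc/fstab entry for a single-instance database volume might look like this (the filer, volume, and mountpoint names are examples; Oracle9i RAC additionally requires noac, as noted in section 3.1.2):

filer1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768  0 0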

3.1.8. iSCSI Initiators for Linux


iSCSI support for Linux is just now becoming available in a number of different forms. Both hardware and software initiators are starting to appear but have not reached a level of adoption to merit a great deal of attention. Testing is insufficient to recommend any best practices at this time. This section will be revisited in the future for any recommendations or best practices for running Oracle Databases on Linux with iSCSI initiators.

3.1.9. FCP SAN Initiators for Linux


NetApp supports Fibre Channel storage access for Oracle Databases running on a Linux host. Connections to NetApp storage can be made through a Fibre Channel switch (SAN) or direct-attached. NetApp currently supports Red Hat Enterprise Linux 2.1 and 3.0 and SUSE Linux Enterprise Server 8 on a Linux host with NetApp storage running Data ONTAP 6.4.1 and up. For more information about system requirements and installation, refer to [4]. NetApp recommends using Fibre Channel SANs for Oracle Databases on Linux where there is an existing investment in Fibre Channel infrastructure or where the sustained throughput requirement for the database server is greater than 1Gb per second (approximately 110MB per second).

3.2. Sun Solaris Operating Systems

3.2.1. Solaris: Recommended Versions


Manufacturer | Solaris Version | Tested   | Recommended
Sun          | 2.6             | Obsolete | No
Sun          | 7               | Yes      | No
Sun          | 8               | Yes      | No
Sun          | 9               | Yes      | Yes
Sun          | 10              | Yes      | Yes

NetApp recommends the use of Solaris 9 Update 5 and above for optimal server performance.

3.2.2. Solaris: Kernel Patches


Sun patches are frequently updated, so any list is almost immediately obsolete. The patch levels listed are considered the minimally acceptable level for a particular patch; later revisions will contain the desired fixes but may introduce unexpected issues. NetApp recommends installing the latest revision of each Sun patch; however, report any problems encountered and back out the patch to the revision specified below to see if the problem is resolved. These recommendations are in addition to, not a replacement for, the Solaris patch recommendations included in the Oracle installation or release notes.


List of desired Solaris 8 patches as of March 16, 2006:

117000-05 | SunOS 5.8: kernel patch (obsoletes 108813-17)
108806-20 | SunOS 5.8: Sun Quad FastEthernet qfe driver
108528-29 | SunOS 5.8: kernel update patch
116959-13 | SunOS 5.8: nfs and rpcmod patch (116959-05 addresses Solaris NFS client caching [wcc] bug 4407669: VERY important performance patch)
111883-34 | SunOS 5.8: Sun GigaSwift Ethernet 1.0 driver patch

List of desired Solaris 9 patches as of March 16, 2006:

112817-27 | SunOS 5.9: Sun GigaSwift Ethernet 1.0 driver patch
113318-21 | SunOS 5.9: nfs patch (addresses Solaris NFS client caching [wcc] bug 4407669; also addresses Solaris bug 4960336 fdio: VERY important performance patch)
113459-03 | SunOS 5.9: udp patch
112233-12 | SunOS 5.9: kernel patch
112854-02 | SunOS 5.9: icmp patch
117171-17 | SunOS 5.9: patch /kernel/sys/kaio
112764-08 | SunOS 5.9: Sun Quad FastEthernet qfe driver

List of desired Solaris 10 patches as of March 16, 2006:

120030-01 | SunOS 5.10: mountd patch
118375-06 | SunOS 5.10: nfs patch
118822-30 | SunOS 5.10: kernel patch

Failure to install the patches listed above can result in database crashes and/or slow performance; they must be installed. Please note that the "Sun EAGAIN bug" (Sun Alert 41862, referenced in patch 108727) can result in Oracle Database crashes accompanied by this error message:

SVR4 Error 11: Resource temporarily unavailable

The patches listed here may have other dependencies that are not listed. Read all installation instructions for each patch to ensure that any dependent or related patches are also installed.

3.2.3. Solaris: OS Settings


There are a variety of Solaris settings that a system administrator or database administrator can use to get the most performance, availability, and simplicity out of a Sun and NetApp environment. Solaris file descriptors:


rlim_fd_cur. "Soft" limit on the number of file descriptors (and sockets) that a single process can have open.

rlim_fd_max. "Hard" limit on the number of file descriptors (and sockets) that a single process can have open.

Setting these values to 1024 is STRONGLY recommended to avoid database crashes resulting from Solaris resource deprivation.

Solaris kernel "maxusers" setting: The Solaris kernel parameter "maxusers" controls the allocation of several major kernel resources, such as the maximum size of the process table and the maximum number of processes per user.
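For example, these limits are typically set in /etc/system and take effect after a reboot:

set rlim_fd_cur=1024
set rlim_fd_max=1024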

3.2.4. Solaris Networking: Full Duplex and Autonegotiation


The settings in this section only apply to back-to-back connections between NetApp and Sun without connecting through a switch. Solaris GbE cards must have autonegotiation forced off and transmit flow control forced on. This is true for the Sun "ge" cards and is assumed to still be the case with the newer Sun ce cards. NetApp recommends disabling autonegotiation, forcing the flow control settings, and forcing full duplex.

3.2.5. Solaris Networking: Gigabit Ethernet Network Adapters


Sun provides Gigabit Ethernet cards in both PCI and SBUS configurations. The PCI cards deliver higher performance than the SBUS versions. NetApp recommends the use of the PCI cards wherever possible. Any database using NetApp storage should utilize Gigabit Ethernet on both the filer and database server to achieve optimal performance. SysKonnect is a third-party NIC vendor that provides Gigabit Ethernet cards. The PCI versions have proven to deliver high performance. Sun servers with Gigabit Ethernet interfaces should ensure that they are running with full flow control (some require setting both send and receive to ON individually). On a Sun server, set Gigabit flow control by adding the following lines to a startup script (such as one in /etc/rc2.d/S99*) or modify these entries if they already exist:

ndd -set /dev/ge instance 0
ndd -set /dev/ge ge_adv_pauseRX 1
ndd -set /dev/ge ge_adv_pauseTX 1
ndd -set /dev/ge ge_intr_mode 1
ndd -set /dev/ge ge_put_cfg 0

Note: The instance may be other than 0 if there is more than one Gigabit Ethernet interface on the system. Repeat for each instance that is connected to NetApp storage. For servers using /etc/system, add these lines:

set ge:ge_adv_pauseRX=1


set ge:ge_adv_pauseTX=1
set ge:ge_intr_mode=1
set ge:ge_put_cfg=0

Note that placing these settings in /etc/system changes every Gigabit interface on the Sun server. Switches and other attached devices should be configured accordingly.

3.2.6. Solaris Networking: Jumbo Frames with GbE


SysKonnect provides SK-98xx cards that do support jumbo frames. To enable jumbo frames, execute the following steps:
1. Edit /kernel/drv/skge.conf and uncomment this line: JumboFrames_Inst0=On;
2. Edit /etc/rcS.d/S50skge and add this line: ifconfig skge0 mtu 9000
3. Reboot.

If using jumbo frames with a SysKonnect NIC, use a switch that supports jumbo frames and enable jumbo frame support on the NIC on the NetApp system.

3.2.7. Solaris Networking: Improving Network Performance


Adjusting the following settings can have a beneficial effect on network performance. Most of these settings can be displayed using the Solaris ndd command and set by either using ndd or editing the /etc/system file.

/dev/udp udp_recv_hiwat. Determines the maximum value of the UDP receive buffer. This is the amount of buffer space allocated for UDP received data. The default value is 8192 (8kB). It should be set to 65,535 (64kB).

/dev/udp udp_xmit_hiwat. Determines the maximum value of the UDP transmit buffer. This is the amount of buffer space allocated for UDP transmit data. The default value is 8192 (8kB). It should be set to 65,535 (64kB).

/dev/tcp tcp_recv_hiwat. Determines the maximum value of the TCP receive buffer. This is the amount of buffer space allocated for TCP receive data. The default value is 8192 (8kB). It should be set to 65,535 (64kB).

/dev/tcp tcp_xmit_hiwat. Determines the maximum value of the TCP transmit buffer. This is the amount of buffer space allocated for TCP transmit data. The default value is 8192 (8kB). It should be set to 65,535 (64kB).

/dev/ge adv_pauseTX 1. Forces transmit flow control for the Gigabit Ethernet adapter. Transmit flow control provides a means for the transmitter to govern the amount of data sent; "0" is the default for Solaris, unless it becomes enabled as a result of autonegotiation between the NICs. NetApp strongly recommends that transmit flow control be enabled. Setting this value to 1 helps avoid dropped packets or retransmits, because this setting forces the NIC card to perform flow control. If the NIC gets overwhelmed with data, it will signal the sender to pause. It may sometimes be beneficial to set this parameter to 0 to determine if the sender (the NetApp system) is overwhelming the client. Recommended settings were described in section 2.2.6 of this document.

/dev/ge adv_pauseRX 1. Forces receive flow control for the Gigabit Ethernet adapter. Receive flow control provides a means for the receiver to govern the amount of data received. A setting of "1" is the default for Solaris.


/dev/ge adv_1000fdx_cap 1. Forces full duplex for the Gigabit Ethernet adapter. Full duplex allows data to be transmitted and received simultaneously. This should be enabled on both the Solaris server and the NetApp system. A duplex mismatch can result in network errors and database failure.

sq_max_size. Sets the maximum number of messages allowed for each IP queue (STREAMS synchronized queue). Increasing this value improves network performance. A safe value for this parameter is 25 for each 64MB of physical memory in a Solaris system, up to a maximum value of 100. The parameter can be optimized by starting at 25 and incrementing by 10 until network performance reaches a peak.

nstrpush. Determines the maximum number of modules that can be pushed onto a stream and should be set to 9.

ncsize. Determines the size of the DNLC (directory name lookup cache). The DNLC stores lookup information for files in the NFS-mounted volume. A cache miss may require a disk I/O to read the directory when traversing the pathname components to get to a file. Cache hit rates can significantly affect NFS performance; getattr, setattr, and lookup usually represent greater than 50% of all NFS calls. If the requested information isn't in the cache, the request will generate a disk operation that results in a performance penalty as significant as that of a read or write request. The only limit to the size of the DNLC cache is available kernel memory. Each DNLC entry uses about 50 bytes of extra kernel memory. Network Appliance recommends that ncsize be set to 8000.

nfs:nfs3_max_threads. The maximum number of threads that the NFS V3 client can use. The recommended value is 24.

nfs:nfs3_nra. The read-ahead count for the NFS V3 client. The recommended value is 10.

nfs:nfs_max_threads. The maximum number of threads that the NFS V2 client can use. The recommended value is 24.

nfs:nfs_nra. The read-ahead count for the NFS V2 client. The recommended value is 10.
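Pulling the values above together, a sketch of the corresponding /etc/system entries (applied at the next reboot) and ndd commands (placed in an rc script so they persist) might look like the following; the sq_max_size value of 100 assumes at least 256MB of physical memory, per the sizing rule above.

* /etc/system entries
set sq_max_size=100
set nstrpush=9
set ncsize=8000
set nfs:nfs3_max_threads=24
set nfs:nfs3_nra=10

# rc script entries
ndd -set /dev/tcp tcp_xmit_hiwat 65535
ndd -set /dev/tcp tcp_recv_hiwat 65535
ndd -set /dev/udp udp_xmit_hiwat 65535
ndd -set /dev/udp udp_recv_hiwat 65535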

3.2.8. Solaris IP Multipathing (IPMP)


Solaris has a facility that allows the use of multiple IP connections in a configuration similar to a NetApp virtual interface (VIF). In some circumstances, use of this feature can be beneficial. IPMP can be configured either in a failover configuration or in a load-sharing configuration. The failover configuration is fairly self-explanatory and straightforward to set up. Two interfaces are allocated to a single IP address, with one interface on standby (referred to in the Solaris documentation as deprecated) and one interface active. If the active link goes down, Solaris transparently moves the traffic to the second interface. Since this is done within the Solaris kernel, applications utilizing the interface are unaware and unaffected when the switch is made. NetApp has tested the failover configuration of Solaris IPMP and recommends its use where failover is required, the interfaces are available, and standard trunking (e.g., Cisco Etherchannel) capabilities are not available. The load-sharing configuration utilizes a trick wherein the outbound traffic to separate IP addresses is split across interfaces, but all outbound traffic contains the return address of the primary interface. Where a large amount of writing to a filer is occurring, this configuration sometimes yields improved performance. Because all traffic back into the Sun returns on the primary interface, heavy read I/O is not accelerated at all. Furthermore, the mechanism that Solaris uses to detect failure and trigger failover to the surviving NIC is incompatible with NetApp cluster solutions.


NetApp recommends against the use of IPMP in a load-sharing configuration due to its current incompatibility with NetApp cluster technology, its limited ability to improve read I/O performance, and its complexity and associated inherent risks.
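As a hedged sketch of a link-based active/standby IPMP failover configuration (the interface names ce0 and ce1, the group name ora_ipmp, and the host name dbhost are examples; consult the Sun IPMP documentation for probe-based configurations that use test addresses):

/etc/hostname.ce0 (active):   dbhost group ora_ipmp up
/etc/hostname.ce1 (standby):  group ora_ipmp standby up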

3.2.9. Solaris NFS Protocol: Mount Options


Getting the right NFS mount options can have a significant impact on both the performance and the reliability of the I/O subsystem. Below are a few tips to aid in choosing the right options. Mount options are set either manually, when a file system is mounted on the Solaris system, or, more typically, specified in /etc/vfstab for mounts that occur automatically at boot time. The latter is strongly preferred, since it ensures that a system that reboots for any reason will return to a known state without operator intervention. To specify mount options:
1. Edit /etc/vfstab.
2. For each NFS mount participating in a high-speed I/O infrastructure, make sure the mount options specify NFS version 3 over TCP with transfer sizes of 32kB: hard,bg,intr,vers=3,proto=tcp,rsize=32768,wsize=32768
Note: These values are the default NFS settings for Solaris 8 and 9. Specifying them is not actually required but is recommended for clarity.

hard. The "soft" option should never be used with databases; it may result in incomplete writes to data files and database file connectivity problems. The hard option specifies that I/O requests will retry forever if they fail on the first attempt. This forces applications doing I/O over NFS to hang until the required data files are accessible. This is especially important where redundant networks and servers (e.g., NetApp clusters) are employed.

bg. Specifies that the mount should move into the background if the NetApp system is not available, to allow the Solaris boot process to complete. Because the boot process can complete without all the file systems being available, care should be taken to ensure that required file systems are present before starting the Oracle Database processes.

intr. Allows operations waiting on an NFS operation to be interrupted. This is desirable for rare circumstances in which applications utilizing a failed NFS mount need to be stopped so that they can be reconfigured and restarted. If this option is not used and an NFS connection mounted with the hard option fails and does not recover, the only way for Solaris to recover is to reboot the Sun server.

rsize/wsize. Determine the NFS request size for reads and writes. The values of these parameters should match the value of nfs.tcp.xfersize on the NetApp system. A value of 32,768 (32kB) has been shown to maximize database performance in a NetApp and Solaris environment. In all circumstances, the NFS read/write size should be the same as or greater than the Oracle block size. For example, a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a database block size of 8kB results in a read buffer size (rsize) of 32kB. NetApp recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set from 1 to 4 for an OLTP database and from 16 to 32 for DSS.

vers. Sets the NFS version to be used. Version 3 yields optimal database performance with Solaris.

proto. Tells Solaris to use either TCP or UDP for the connection. Currently, only TCP is supported for Oracle files over NFS. Previously, UDP gave better performance but was restricted to very reliable connections. TCP has more overhead but handles errors and flow control better. In recent versions of Solaris (8, 9, and 10) the performance difference is negligible.

forcedirectio. A new option introduced with Solaris 8. It allows the application to bypass the Solaris kernel cache, which is optimal for Oracle. This option should only be used with volumes containing data files; it should never be used to mount volumes containing executables. Using it with a volume containing Oracle executables will prevent all executables stored on that volume from being started. If programs that normally run suddenly won't start and immediately core dump, check to see whether they reside on a volume mounted using forcedirectio.

The introduction of forced direct I/O with Solaris 8 is a tremendous benefit. Direct I/O bypasses the Solaris file system cache. When a block of data is read from disk, it is read directly into the Oracle buffer cache and not into the file system cache. Without direct I/O, a block of data is read into the file system cache and then into the Oracle buffer cache, double-buffering the data and wasting memory space and CPU cycles; Oracle does not use the file system cache. Using system monitoring and memory statistics tools, NetApp has observed that without direct I/O enabled on NFS-mounted file systems, large numbers of file system pages are paged in. This adds system overhead in context switches, and system CPU utilization increases. With direct I/O enabled, file system page-ins and CPU utilization are reduced. Depending on the workload, a significant increase can be observed in overall system performance; in some cases the increase has been more than 20%. Direct I/O for NFS is new in Solaris 8, although it was introduced in UFS in Solaris 6. Direct I/O should only be used on mountpoints that house Oracle Database files, not on nondatabase files or Oracle executables, and not when doing normal file I/O operations such as dd; normal file I/O operations benefit from caching at the file system level. A single volume can be mounted more than once, so it is possible to have certain operations utilize the advantages of forcedirectio while others don't. However, this can create confusion, so care should be taken. NetApp recommends the use of forcedirectio on selected volumes where the I/O pattern associated with the files under that mountpoint does not lend itself to NFS client caching. In general, these will be data files with access patterns that are mostly random, as well as any online redo log files and archive log files. The forcedirectio option should not be used for mountpoints that contain executable files, such as the ORACLE_HOME directory; using it on such mountpoints will prevent the programs from executing properly.

NetApp recommended mount options for an Oracle single-instance database on Solaris:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio

NetApp recommended mount options for Oracle9i RAC on Solaris:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio,noac
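Putting these recommendations into /etc/vfstab form, a sketch for a single-instance database might look like the following; the filer host name, volume names, and mountpoints are examples, and the binaries mount deliberately omits forcedirectio.

filer1:/vol/oradata - /u02/oradata    nfs - yes rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768,forcedirectio
filer1:/vol/oralog  - /u03/oradata    nfs - yes rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768,forcedirectio
filer1:/vol/orabin  - /u01/app/oracle nfs - yes rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768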

Multiple Mountpoints

To achieve the highest performance, transactional OLTP databases benefit from configuring multiple mountpoints on the database server and distributing the load across these mountpoints. The performance improvement is generally from 2% to 9%; this is a very simple change to make, so any improvement justifies the effort. To accomplish this, create another mountpoint to the same file system on the NetApp filer. Then either rename the data files in the database (using the ALTER DATABASE RENAME FILE command) or create symbolic links from the old mountpoint to the new mountpoint.
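As a sketch of this technique (the filer name filer1, the volume /vol/oradata, the mountpoints /u02/oradata and /u03/oradata, and the data file name users01.dbf are hypothetical), a second mountpoint to the same volume and a data file rename might look like this:

# Second /etc/vfstab entry pointing at the same filer volume
filer1:/vol/oradata  -  /u03/oradata  nfs  -  yes  rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768,forcedirectio

-- With the database mounted but not open, repoint a data file to the new mountpoint
SQL> ALTER DATABASE RENAME FILE '/u02/oradata/users01.dbf' TO '/u03/oradata/users01.dbf';

Because both mountpoints expose the same files on the filer, no data is moved; only the path Oracle uses to reach the file changes.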


3.2.10. iSCSI Initiators for Solaris


Currently, NetApp does not support iSCSI initiators on Solaris. This section will be updated in the future when iSCSI initiators for Solaris become available.

3.2.11. Fibre Channel SAN for Solaris


NetApp introduced the industry's first unified storage appliance capable of serving data in either NAS or SAN configurations. NetApp provides Fibre Channel SAN solutions for all platforms, including Solaris, Windows, Linux, HP-UX, and AIX. The NetApp Fibre Channel SAN solution provides the same manageability framework and feature-rich functionality that have benefited our NAS customers for years. Customers can choose either NAS or FC SAN for Solaris, depending on the workload and the current environment. For FC SAN configurations, it is highly recommended to use the latest SAN host attach kit 1.2 for Solaris. The kit comes with the Fibre Channel HBA, drivers, firmware, utilities, and documentation. For installation and configuration, refer to the documentation that is shipped with the attach kit. NetApp has validated the FC SAN solution for Solaris in an Oracle environment. Refer to the Oracle integration guide with NetApp FC SAN in a Solaris environment ([6]) for more details. For performing backup and recovery of an Oracle Database in a SAN environment, refer to [7]. NetApp recommends using Fibre Channel SAN with Oracle Databases on Solaris where there is an existing investment in Fibre Channel infrastructure or the sustained throughput requirement for the database server is more than 1Gb per second (approximately 110MB per second).

3.3. Microsoft Windows Operating Systems

3.3.1. Windows Operating System - Recommended Versions
Microsoft Windows NT 4.0, Windows 2000 Server and Advanced Server, Windows 2003 Server

3.3.2. Windows Operating System - Service Packs


Microsoft Windows NT 4.0: Apply Service Pack 5
Microsoft Windows 2000: SP2 or SP3
Microsoft Windows 2000 AS: SP2 or SP3
Microsoft Windows 2003: Standard or Enterprise

3.3.3. Windows Operating System - Registry Settings


The following changes to the registry will improve the performance and reliability of Windows. Make the following changes and reboot the server:

The /3GB switch should not be present in C:\boot.ini.

\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\MaxMpxCt
Datatype: DWORD
Value: to match the setting above for cifs.max_mpx

\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize
Datatype: DWORD
Value: 64240 (0xFAF0)

The following explains these items and offers tuning suggestions:

MaxMpxCt: The maximum number of outstanding requests a Windows client can have against a NetApp system. This must match cifs.max_mpx. Watch the Performance Monitor Redirector/Current item; if it is constantly running at the current value of MaxMpxCt, increase this value.

TcpWindowSize: The maximum transfer size for data across the network. This value should be set to 64,240 (0xFAF0).
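As a hedged illustration, the same registry changes can be applied from a command prompt with reg.exe (included with Windows Server 2003 and available in the Support Tools or Resource Kit on earlier releases). The MaxMpxCt data value below is only an example and must match the cifs.max_mpx setting on the filer; reboot the server afterward:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v MaxMpxCt /t REG_DWORD /d 1124
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpWindowSize /t REG_DWORD /d 64240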

3.3.4. Windows Networking - Autonegotiation and Full Duplex


Go to Control Panel -> Network -> Services tab -> Server and click the Properties button. Set the option that maximizes throughput for network applications. NetApp recommends that customers use either iSCSI or Fibre Channel to run their Oracle Databases on Windows.

3.3.5. Windows Networking - Gigabit Ethernet Network Adapters


Any database using NetApp storage should utilize Gigabit Ethernet on both the filer and database server to achieve optimal performance. NetApp has tested the Intel PRO/1000 F Server Adapter. The following settings can be tuned on this adapter. Each setting should be tested and optimized as necessary to achieve optimal performance.

Coalesce buffers = 32: The number of buffers available for transmit acceleration.

Flow control = receive pause frame: The flow control method used. This should match the setting for the Gigabit Ethernet adapter on the NetApp system.

Jumbo frames = disable: Enabling jumbo frames would allow larger Ethernet packets to be transmitted. NetApp filers support this in Data ONTAP 6.1 and later releases.

Receive descriptors = 32: The number of receive buffers and descriptors that the driver allocates for receiving packets.

Transmit descriptors = 32: The number of transmit buffers and descriptors that the driver allocates for sending packets.

3.3.6. Windows Networking - Jumbo Frames with GbE


Note: Be very careful when using jumbo frames with Microsoft Windows 2000.


If jumbo frames are enabled on the filer and the Windows server running Oracle and authentication is being done through a Windows 2000 domain, then authentication could be going out the interface that has jumbo frames enabled to a domain controller, which is typically not configured to use jumbo frames. This could result in long delays or errors in authentication when using CIFS.

3.3.7. iSCSI Initiators for Windows


NetApp recommends using either the Microsoft iSCSI initiator or the Network Appliance iSCSI host attach kit 2.0 for Windows over a high-speed dedicated Gigabit Ethernet network on platforms such as Windows 2000, Windows 2000 AS, and Windows 2003 with Oracle Databases. For platforms such as Windows NT, which does not have iSCSI support, NetApp supports CIFS for Oracle Database and application storage. It is recommended to upgrade to Windows 2000 or later and use an iSCSI initiator (either software or hardware). NetApp currently supports Microsoft initiator 1.02 and 1.03, available from www.microsoft.com.

3.3.8. FCP SAN Initiators for Windows


Network Appliance supports Fibre Channel SAN on Windows for use with Oracle Databases. NetApp recommends using Fibre Channel SAN with Oracle Databases on Windows where there is an existing investment in Fibre Channel infrastructure. NetApp also recommends considering Fibre Channel SAN solutions for Windows when the sustained throughput requirement for the Oracle Database server is more than 1Gb per second (approximately 110MB per second).

4. Oracle Database Settings


This section describes settings that are made to the Oracle Database application, usually through parameters in the init.ora file. The reader is assumed to know how to set these parameters correctly and to understand their effects. The settings described here are the ones most frequently tuned when using NetApp storage with Oracle Databases.

4.1. DISK_ASYNCH_IO
Enables or disables Oracle asynchronous I/O. Asynchronous I/O allows processes to proceed with the next operation without waiting for an issued write operation to complete, improving system performance by minimizing idle time. This setting may improve performance depending on the database environment. If DISK_ASYNCH_IO is set to FALSE, then DB_WRITER_PROCESSES and DB_BLOCK_LRU_LATCHES (Oracle versions prior to 9i) or DBWR_IO_SLAVES should be used to compensate, as described below. The calculation looks like this:

DB_WRITER_PROCESSES = 2 * number of CPUs

Recent performance findings on Solaris 8 patched to 108813-11 or later and on Solaris 9 have shown that setting:

DISK_ASYNCH_IO = TRUE
DB_WRITER_PROCESSES = 1

can result in better performance than when DISK_ASYNCH_IO is set to FALSE. NetApp recommends enabling asynchronous I/O for Solaris 8 (2.8) and above.
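A minimal init.ora sketch reflecting the Solaris 8/9 finding above; the commented alternative applies when asynchronous I/O must be disabled, and the CPU-based value is a placeholder rather than a recommendation:

# Asynchronous I/O enabled (Solaris 8 with patch 108813-11 or later, or Solaris 9)
disk_asynch_io = TRUE
db_writer_processes = 1

# Alternative when asynchronous I/O is disabled:
# disk_asynch_io = FALSE
# db_writer_processes = <2 * number of CPUs>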

4.2. DB_FILE_MULTIBLOCK_READ_COUNT
Determines the maximum number of database blocks read in one I/O operation during a full table scan. The number of database bytes read is calculated by multiplying DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans but degrade performance for OLTP databases where full table scans are seldom (if ever) performed.

Setting this number to a multiple of the NFS read/write size specified in the mount will limit the amount of fragmentation that occurs in the I/O subsystem. Be aware that this parameter is specified in database blocks, and the NFS setting is in bytes, so adjust as required. As an example, specifying a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a DB_BLOCK_SIZE of 8kB will result in a read buffer size of 32kB. NetApp recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set from 1 to 4 for an OLTP database and from 16 to 32 for DSS.
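For example, an OLTP-oriented init.ora fragment consistent with the sizing above (the values simply restate the 4 x 8kB = 32kB example and are not a universal recommendation):

db_block_size = 8192                   # fixed at database creation time
db_file_multiblock_read_count = 4      # 4 * 8kB = 32kB, matching rsize/wsize = 32768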

4.3. DB_BLOCK_SIZE
For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Solaris page size is 4096:

DB_BLOCK_SIZE = 4096 * n

The NFS rsize and wsize options specified when the file system is mounted should also be a multiple of this value; under no circumstances should they be smaller. For example, if the Oracle DB_BLOCK_SIZE is set to 16kB, the NFS read and write size parameters (rsize and wsize) should be set to either 16kB or 32kB, never to 8kB or 4kB.
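The operating system page size can be verified on Solaris with the pagesize command; the output below is illustrative:

$ pagesize
8192

With an 8192-byte page size, a DB_BLOCK_SIZE of 8192 (n = 1) and NFS rsize/wsize of 32768 satisfy the multiple-of-block-size rule described above.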

4.4. DBWR_IO_SLAVES and DB_WRITER_PROCESSES


DB_WRITER_PROCESSES is useful for systems that modify data heavily. It specifies the initial number of database writer processes for an instance. If DBWR_IO_SLAVES is used, only one database writer process is allowed, regardless of the setting for DB_WRITER_PROCESSES. Multiple DBWRs and DBWR I/O slaves cannot coexist; one or the other should be used to compensate for the performance loss resulting from disabling DISK_ASYNCH_IO. Metalink note 97291.1 provides guidelines on usage.

The first rule of thumb is to always enable DISK_ASYNCH_IO if it is supported on the OS platform. Next, check whether it is supported for NFS or only for block access (FC/iSCSI). If it is supported for NFS, consider enabling async I/O at both the Oracle level and the OS level and measure the performance gain. If performance is acceptable, use async I/O for NFS. If async I/O is not supported for NFS, or if the performance is not acceptable, consider enabling multiple DBWRs or DBWR I/O slaves as described above.

NetApp recommends that DBWR_IO_SLAVES be used for single-CPU systems and that DB_WRITER_PROCESSES be used with systems having multiple CPUs.
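The two mutually exclusive configurations described above might be sketched in init.ora as follows (the slave and writer counts are illustrative only):

# Single-CPU server, asynchronous I/O disabled:
# disk_asynch_io = FALSE
# dbwr_io_slaves = 4

# Multi-CPU server, asynchronous I/O disabled:
# disk_asynch_io = FALSE
# db_writer_processes = 2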

4.5. DB_BLOCK_LRU_LATCHES
The number of DBWRs cannot exceed the value of the DB_BLOCK_LRU_LATCHES parameter:

DB_BLOCK_LRU_LATCHES = DB_WRITER_PROCESSES

Starting with Oracle9i, DB_BLOCK_LRU_LATCHES is obsolete and need not be set.

5. Backup, Restore, and Disaster Recovery


For additional information about strategies for designing backup, restore, and disaster recovery architectures, see [8], [9], and [10].


For additional information about implementing instantaneous backup and recovery of an Oracle Database running on UNIX, see [3] and [7].

5.1. How to Back Up Data from a NetApp System


Data that is stored on a NetApp system can be backed up to online storage, nearline storage, or tape. The protocol used to access data while a backup is occurring must always be considered. When NFS or CIFS is used to access data, Snapshot and SnapMirror always produce consistent copies of the file system; however, they must be coordinated with the state of the Oracle Database to ensure database consistency. With the Fibre Channel or iSCSI protocols, Snapshot copies and SnapMirror commands must always be coordinated with the server: the file system on the server must be blocked and all data flushed to the filer before invoking the Snapshot command. Data can be backed up within the same NetApp filer, to another NetApp filer, to a NearStore system, or to a tape storage device. Tape storage devices can be directly attached to an appliance, or they can be attached to an Ethernet or Fibre Channel network, and the appliance can be backed up over the network to the tape device.

Possible methods for backing up data on NetApp systems include:
Use SnapManager for Oracle to create online or offline backups
Use automated Snapshot copies to create online backups
Use scripts on the server that rsh to the NetApp system to invoke Snapshot copies for online backups
Use SnapMirror to mirror data to another filer or NearStore system
Use SnapVault to vault data to another NetApp filer or NearStore system
Use server operating system-level commands to copy data to create backups
Use NDMP commands to back up data to a NetApp filer or NearStore system
Use NDMP commands to back up data to a tape storage device
Use third-party backup tools to back up the filer or NearStore system to tape or other storage devices

5.2. Creating Online Backups Using Snapshot Copies


NetApp Snapshot technology makes extremely efficient use of storage by storing only the block-level changes between successive Snapshot copies. Because the Snapshot process is virtually instantaneous, backups are fast and simple. Snapshot copies can be scheduled automatically, called from a script running on a server, or created via SnapDrive or SnapManager. Data ONTAP includes a scheduler to automate Snapshot backups. Use automatic Snapshot copies to back up nonapplication data, such as home directories. Database and other application data should be backed up when the application is in its backup mode. For Oracle Databases this means placing the database tablespaces into hot backup mode prior to creating a Snapshot copy. NetApp has several technical reports that contain details on backing up an Oracle Database. For additional information on determining data protection requirements, see [11].

NetApp recommends using Snapshot copies for performing cold or hot backups of Oracle Databases. No performance penalty is incurred for creating a Snapshot copy. It is recommended to turn off the automatic Snapshot scheduler and coordinate the Snapshot copies with the state of the Oracle Database. For more information on integrating Snapshot technology with Oracle Database backup, refer to [3] and [7].
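A minimal hot backup sketch along these lines (the filer name filer1, the volume oradata, the tablespace name, and the Snapshot name are hypothetical; a complete SnapVault-based procedure appears in section 5.11.1):

filer1> snap sched oradata 0 0 0          # turn off the automatic Snapshot scheduler

SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ rsh filer1 snap create oradata nightly
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;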

5.3. Recovering Individual Files from a Snapshot Copy


Individual files and directories can be recovered from a Snapshot copy by using native commands on the server, such as the UNIX cp command, or dragging and dropping in Microsoft Windows. Data can also be recovered using the single-file SnapRestore command. Use the method that works most quickly.

5.4. Recovering Data Using SnapRestore


SnapRestore quickly restores a file system to an earlier state preserved by a Snapshot copy. SnapRestore can be used to recover an entire volume of data or individual files within that volume. When using SnapRestore to restore a volume of data, the data on that volume should belong to a single application. Otherwise operation of other applications may be adversely affected. The single-file option of SnapRestore allows individual files to be selected for restore without restoring all of the data on a volume. Be aware that the file being restored using SnapRestore cannot exist anywhere in the active file system. If it does, the appliance will silently turn the single-file SnapRestore into a copy operation. This may result in the single-file SnapRestore taking much longer than expected (normally the command executes in a fraction of a second) and also requires that sufficient free space exist in the active file system. NetApp recommends using SnapRestore to instantaneously restore an Oracle Database. SnapRestore can restore the entire volume to a point in time in the past or can restore a single file. It is advantageous to use SnapRestore on a volume level, as the entire volume can be restored in minutes, and this reduces downtime while performing Oracle Database recovery. If using SnapRestore on a volume level, it is recommended to store the Oracle log files, archive log files, and copies of control files on a separate volume from the main data file volume and use SnapRestore only on the volume containing the Oracle data files. For more information on using SnapRestore for Oracle Database restores, refer to [3] and [7].
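As an illustration (the volume, Snapshot, and file names are hypothetical, and the database should be shut down or the affected files taken offline first), volume-level and single-file SnapRestore operations look like this on the filer console:

filer1> snap restore -t vol -s nightly.0 oradata
filer1> snap restore -t file -s nightly.0 /vol/oradata/users01.dbf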

5.5. Consolidating Backups with SnapMirror


SnapMirror mirrors data from a single volume or qtree to one or more remote NetApp systems simultaneously. It continually updates the mirrored data to keep it current and available. SnapMirror is an especially useful tool to deal with shrinking backup windows on primary systems. SnapMirror can be used to continuously mirror data from primary storage systems to dedicated nearline storage systems. Backup operations are transferred to systems where tape backups can run all day long without interrupting the primary storage. Since backup operations are not occurring on production systems, backup windows are no longer a concern.

5.6. Creating a Disaster Recovery Site with SnapMirror


SnapMirror continually updates mirrored data to keep it current and available. SnapMirror is the correct tool to use to create disaster recovery sites. Volumes can be mirrored asynchronously or synchronously to systems at a disaster recovery facility. Application servers should be mirrored to this facility as well. In the event that the DR facility needs to be made operational, applications can be switched over to the servers at the DR site and all application traffic directed to these servers until the primary site is recovered. Once the primary site is online, SnapMirror can be used to transfer the data efficiently back to the production filers. After the production site takes over normal application operation again, SnapMirror transfers to the DR facility can resume without requiring a second baseline transfer. For more information on using SnapMirror for DR in an Oracle environment, refer to [12].
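A hedged sketch of the corresponding SnapMirror commands (the system and volume names are hypothetical, and /etc/snapmirror.conf entries or schedules are assumed to be in place): the baseline and updates run on the DR-site system, and the resync runs on whichever system is to become the mirror destination when transferring data back:

dr-filer> snapmirror initialize -S prod-filer:oradata dr-filer:oradata_mir
dr-filer> snapmirror update dr-filer:oradata_mir
prod-filer> snapmirror resync -S dr-filer:oradata_mir prod-filer:oradata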


5.7. Creating Nearline Backups with SnapVault


SnapVault provides a centralized disk-based backup solution for heterogeneous storage environments. Storing backup data in multiple Snapshot copies on a SnapVault secondary storage system allows enterprises to keep weeks of backups online for faster restoration. SnapVault also gives users the power to choose which data gets backed up, the frequency of backup, and how long the backup copies are retained. SnapVault software builds on the asynchronous, block-level incremental transfer technology of SnapMirror with the addition of archival technology. This allows data to be backed up via Snapshot copies on a filer and transferred on a scheduled basis to a destination filer or NearStore appliance. These Snapshot copies can be retained on the destination system for many weeks or even months, allowing recovery operations to the original filer to occur nearly instantaneously. For additional references on data protection strategies using SnapVault, refer to [8], [9], and [11].

5.8. NDMP and Native Tape Backup and Recovery


The Network Data Management Protocol, or NDMP, is an open standard for centralized control of enterprise-wide data management. The NDMP architecture allows backup application vendors to control native backup and recovery facilities in NetApp appliances and other file servers by providing a common interface between backup applications and file servers. NDMP separates the control and data flow of a backup or recovery operation into separate conversations. This allows for greater flexibility in configuring the environment used to protect the data on NetApp systems. Since the conversations are separate, they can originate from different locations, as well as be directed to different locations, resulting in extremely flexible NDMP-based topologies. Available NDMP topologies are discussed in detail in [13]. If an operator does not specify an existing Snapshot copy when performing a native or NDMP backup operation, Data ONTAP will create one before proceeding. This Snapshot copy will be deleted when the backup completes. When a file system contains FCP data, a Snapshot copy that was created at a point in time when the data was consistent should always be specified. As mentioned earlier, this is ideally done in script by quiescing an application or placing it in hot backup mode before creating the Snapshot copy. After Snapshot copy creation, normal application operation can resume, and tape backup of the Snapshot copy can occur at any convenient time. When attaching an appliance to a Fibre Channel SAN for tape backup, it is necessary to first ensure that NetApp certifies the hardware and software in use. A complete list of certified configurations is available on the Network Appliance data protection portal. Redundant links to Fibre Channel switches and tape libraries are not currently supported by NetApp in a Fibre Channel tape SAN. Furthermore, a separate host bus adapter must be used in the filer for tape backup. This adapter must be attached to a separate Fibre Channel switch that contains only filers, NearStore appliances, and certified tape libraries and tape drives. The backup server must either communicate with the tape library via NDMP or have library robotic control attached directly to the backup server.

5.9. Using Tape Devices with NetApp Systems


NetApp filers and NearStore systems support backup and recovery from local, Fibre Channel, and Gigabit Ethernet SAN-attached tape devices. Support for most existing tape drives is included as well as a method for tape vendors to dynamically add support for new devices. In addition, the RMT protocol is fully supported, allowing backup and recovery to any capable system. Backup images are written using a derivative of the BSD dump stream format, allowing full file system backups as well as nine levels of differential backups.

5.10. Supported Third-Party Backup Tools


NetApp has partnered with the following vendors to support NDMP-based backup solutions for data stored on NetApp systems:

Atempo Time Navigator (www.atempo.com)
BakBone NetVault (www.bakbone.com)
CommVault Galaxy (www.commvault.com)
Computer Associates BrightStor Enterprise Backup (www.ca.com)
Legato NetWorker (www.legato.com)
SyncSort Backup Express (www.syncsort.com)
VERITAS NetBackup (www.veritas.com)
Workstation Solutions Quick Restore (www.worksta.com)

5.11. Backup and Recovery Best Practices


This section combines the NetApp data protection technologies and products described above into a set of best practices for performing Oracle hot backups (online backups) for backup, recovery, and archival purposes using primary storage (filers with high-performance Fibre Channel disk drives) and nearline storage (NearStore systems with low-cost, high-capacity ATA and SATA disk drives). This combination of primary storage for production databases and nearline disk-based storage for backups of the active data set improves performance and lowers the cost of operations. Periodically moving data from primary to nearline storage increases free space and improves performance, while generating considerable cost savings. Note: If NetApp NearStore nearline storage is not part of your backup strategy, then refer to [6] for information on Oracle backup and recovery on filers based on Snapshot technology. The remainder of this section assumes both filers and NearStore systems are in use.

5.11.1. SnapVault and Database Backups


Oracle Databases can be backed up while they continue to run and provide service, but must first be put into a special hot backup mode. Certain actions must be taken before and after a Snapshot copy is created on a database volume. Since these are the same steps taken for any other backup method, many database administrators probably already have scripts that perform these functions. While SnapVault Snapshot schedules can be coordinated with appropriate database actions by synchronizing clocks on the filer and database server, it is easier to detect potential problems if the database backup script creates the Snapshot copies using the SnapVault snap create command. In this example, a consistent image of the database is created every hour, keeping the most recent five hours of Snapshot copies (the last five copies). One Snapshot version is retained per day for a week, and one weekly version is retained at the end of each week. On the SnapVault secondary software, a similar number of SnapVault Snapshot copies is retained. Procedure for performing Oracle hot backups with SnapVault:


1. Set up the NearStore system to talk to the filer.
2. Set up the schedule for the number of Snapshot copies to retain on each of the storage devices using the script-enabled SnapVault schedule on both the filer and NearStore systems.
3. Start the SnapVault process between the filer and NearStore system.
4. Create shell scripts to drive Snapshot copies through SnapVault on the filer and NearStore to perform Oracle hot backups.
5. Create a cron-based schedule script on the host to drive the hot backup scripts for SnapVault-driven Snapshot copies, as described above.

Step 1: Set up the NearStore system to talk to the filer.
The example in this subsection assumes the primary filer for database storage is named descent and the NearStore appliance for database archival is named rook. Perform the following steps:

1. License SnapVault and enable it on the filer, descent:
descent> license add ABCDEFG
descent> options snapvault.enable on
descent> options snapvault.access host=rook

2. License SnapVault and enable it on the NearStore appliance, rook:
rook> license add ABCDEFG
rook> options snapvault.enable on
rook> options snapvault.access host=descent

3. Create a volume for use as a SnapVault destination on the NearStore appliance, rook:
rook> vol create vault -r 10 10
rook> snap reserve vault 0

Step 2: Set up schedules (disable automatic Snapshot copies) on the filer and NearStore system.

1. Disable the normal Snapshot schedule on the filer and the NearStore system, which will be replaced by SnapVault Snapshot schedules:
descent> snap sched oracle 0 0 0
rook> snap sched vault 0 0 0

2. Set up a SnapVault Snapshot schedule to be script driven on the filer, descent, for the oracle volume. Each command disables the automatic schedule and also specifies how many of the named Snapshot copies to retain.
descent> snapvault snap sched oracle sv_hourly 5@-
This schedule creates a Snapshot copy called sv_hourly and retains the most recent five copies, but does not specify when to create the Snapshot copies. That is done by a cron script, described later in this procedure.
descent> snapvault snap sched oracle sv_daily 1@-
Similarly, this schedule creates a Snapshot copy called sv_daily and retains only the most recent copy. It does not specify when to create the Snapshot copy.
descent> snapvault snap sched oracle sv_weekly 1@-
This schedule creates a Snapshot copy called sv_weekly and retains only the most recent copy. It does not specify when to create the Snapshot copy.


3. Set up the SnapVault Snapshot schedule to be script driven on the NearStore appliance, rook, for the SnapVault destination volume, vault. Each schedule also specifies how many of the named Snapshot copies to retain.
rook> snapvault snap sched -x vault sv_hourly 5@-
This schedule creates a Snapshot copy called sv_hourly and retains the most recent five copies, but does not specify when to create the Snapshot copies. That is done by a cron script, described later in this procedure.
rook> snapvault snap sched -x vault sv_daily 1@-
Similarly, this schedule creates a Snapshot copy called sv_daily and retains only the most recent copy. It does not specify when to create the Snapshot copy.
rook> snapvault snap sched -x vault sv_weekly 1@-
This schedule creates a Snapshot copy called sv_weekly and retains only the most recent copy. It does not specify when to create the Snapshot copy.

Step 3: Start the SnapVault process between the filer and the NearStore appliance.
At this point, the schedules have been configured on both the primary and secondary systems, and SnapVault is enabled and running. However, SnapVault does not know which volumes or qtrees to back up or where to store them on the secondary. Snapshot copies will be created on the primary, but no data will be transferred to the secondary. To provide SnapVault with this information, use the snapvault start command on the secondary:
rook> snapvault start -S descent:/vol/oracle/- /vol/vault/oracle

Step 4: Create the Oracle hot backup script driven by SnapVault.
Here is the sample script defined in /home/oracle/snapvault/sv-dohot-daily.sh:

#!/bin/csh -f
# Place all of the critical tablespaces in hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @begin.sql
# Create a new SnapVault Snapshot copy of the database volume on the primary filer.
rsh -l root descent snapvault snap create oracle sv_daily
# Simultaneously 'push' the primary filer Snapshot copy to the secondary NearStore system.
rsh -l root rook snapvault snap create vault sv_daily
# Remove all affected tablespaces from hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @end.sql

Note that the begin.sql and end.sql scripts contain SQL commands to put the database's tablespaces into hot backup mode (begin.sql) and then to take them out of hot backup mode (end.sql).

Step 5: Use a cron script to drive the Oracle hot backup scripts from step 4.
A scheduling application such as cron on UNIX systems or the task scheduler program on Windows systems is used to create an sv_hourly Snapshot copy each day at every hour except 11:00 p.m. and a single sv_daily Snapshot copy each day at 11:00 p.m., except on Saturday evenings, when an sv_weekly Snapshot copy is created instead.


Sample cron script:

# Sample crontab entries for Oracle hot backup using SnapVault,
# a NetApp filer (descent), and a NetApp NearStore (rook)
# Hourly Snapshot copy/SnapVault at the top of each hour except 11:00 p.m.
0 0-22 * * * /home/oracle/snapvault/sv-dohot-hourly.sh
# Daily Snapshot copy/SnapVault at 11:00 p.m. every day except Saturday
0 23 * * 0-5 /home/oracle/snapvault/sv-dohot-daily.sh
# Weekly Snapshot copy/SnapVault at 11:00 p.m. every Saturday
0 23 * * 6 /home/oracle/snapvault/sv-dohot-weekly.sh

In step 4 above, there is a sample script for daily backups, sv-dohot-daily.sh. The hourly and weekly scripts are identical to the daily script except for the Snapshot copy name (sv_hourly and sv_weekly, respectively).

5.12. SnapManager for Oracle - Backup and Recovery Best Practices


SnapManager for Oracle is a host- and client-based product recently released by Network Appliance, Inc. that integrates with Oracle9i R2 and Oracle Database 10g R2. It allows the DBA or storage administrator to manage database backup, restore, recovery, and cloning while maximizing storage utilization. SnapManager for Oracle uses NetApp Snapshot, SnapRestore, and FlexClone technologies and integrates with the latest Oracle releases. SnapManager automates and simplifies the complex, manual, and time-consuming processes typically performed by Oracle DBAs, helping meet aggressive backup and recovery service-level agreements (SLAs). SnapManager for Oracle is protocol agnostic and works seamlessly with the NFS and iSCSI protocols. It also integrates with native Oracle technologies such as RAC, ASM, and RMAN. To use the SnapManager for Oracle product, the following software and licenses are required:

SnapManager for Oracle
Red Hat Enterprise Linux 3.0 Update 4
SnapDrive for UNIX 2.1 (Red Hat Enterprise Linux)
NetApp Data ONTAP 7.0 or later
Oracle9i R2 or Oracle Database 10g R2 using NFS or iSCSI LUNs
NetApp Host Agent 2.2.1
FlexClone license
NFS or iSCSI license
SnapRestore license


NetApp recommends using SnapManager for Oracle if you are running Red Hat Enterprise Linux 3.0 Update 4 (RHEL 3.0 Update 4) or later and want to address mission-critical enterprise requirements for Oracle backup, recovery, and cloning. For more information on SnapManager for Oracle in an Oracle environment, refer to [14].

5.12.1 SnapManager for Oracle ASM based backup and restore


SnapManager for Oracle provides capabilities that enable seamless backups of Oracle ASM-based databases deployed on Network Appliance storage systems. This allows customers running Oracle 10g R2 with Automatic Storage Management (ASM) databases on iSCSI LUNs to leverage Network Appliance Snapshot and SnapRestore technology through the SnapManager for Oracle software, while maintaining the same flexibility and simplification that an Oracle ASM database was designed to achieve. NetApp requires that the ASMLib driver be used on Red Hat Enterprise Linux 3 when ASM is used with SnapManager for Oracle. The ASMLib driver is a dependency of SnapManager for Oracle, which will not function without it.

5.12.2 SnapManager for Oracle RMAN based backup and restore


SnapManager for Oracle integrates with the Oracle RMAN architecture by allowing SnapManager-based backups to be registered with the RMAN catalog. This allows the DBA to use Network Appliance Snapshot and SnapRestore technology for database backup and recovery through SnapManager while retaining access to familiar RMAN capabilities; in particular, the ability to do block-level recovery using RMAN is not sacrificed. NetApp requires that all data files, log files, and archive log files of the database to be backed up are stored on the Network Appliance storage system in a flexible volume for any backup or recovery to be completed.

5.12.3 SnapManager for Oracle Cloning


SnapManager for Oracle allows cloning of existing Oracle9i R2 and Oracle Database 10g R2 databases. Database cloning uses Network Appliance FlexClone technology and is driven by the SnapManager product. The clone process is executed by providing both the old and the new database SID as well as a map file that allows the DBA or storage administrator to specify the new location of the files along with the new file names. NetApp requires that a database clone be created from a backup taken while the database was offline. Hot database cloning will be available in a future release of SnapManager for Oracle.

References
1. NetApp supported NFS mount options for Oracle database files: http://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb7518
2. Data ONTAP 7G: The Ideal Platform for Database Applications: http://www.netapp.com/library/tr/3373.pdf
3. Database Layout with FlexVol and FlexClones: http://www.netapp.com/library/tr/3411.pdf
4. Oracle9i for UNIX: Backup and Recovery Using a NetApp Filer: http://www.netapp.com/library/tr/3130.pdf
5. Using the Linux NFS Client with NetApp: Getting the Best from Linux and NetApp: http://www.netapp.com/library/tr/3183.pdf
6. Installation and Setup Guide 1.0 for Fibre Channel Protocol on Linux: http://now.netapp.com/NOW/knowledge/docs/hba/fcp_linux/fcp_linux10/pdfs/install.pdf
7. Oracle9i for UNIX: Integrating with a NetApp Filer in a SAN Environment: http://www.netapp.com/library/tr/3207.pdf
8. Oracle9i for UNIX: Backup and Recovery Using a NetApp Filer in a SAN Environment: http://www.netapp.com/library/tr/3210.pdf
9. Data Protection Strategies for Network Appliance Filers: http://www.netapp.com/library/tr/3066.pdf
10. Data Protection Solutions Overview: http://www.netapp.com/library/tr/3131.pdf
11. Simplify Application Availability and Disaster Recovery: www.netapp.com/partners/docs/oracleworld.pdf
12. SnapVault Deployment and Configuration: http://www.netapp.com/library/tr/3240.pdf
13. Oracle8i for UNIX: Providing Disaster Recovery with NetApp SnapMirror Technology: http://www.netapp.com/library/tr/3057.pdf
14. NDMPCopy Reference: http://now.netapp.com/NOW/knowledge/docs/ontap/rel632/html/ontap/dpg/ndmp11.htm#1270498
15. SnapManager for Oracle: http://www.netapp.com/library/tr/3426.pdf


Revision History
Date        Name                       Description
06/03/08    Padmanabhan Sadagopan      Update
12/01/07    NetApp team                Creation

2008 NetApp. All rights reserved. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, Data ONTAP, FilerView, FlexClone, FlexVol, NOW, SnapMirror, Snapshot, and WAFL are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Windows is a registered trademark of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. Intel and Xeon are registered trademarks of Intel Corporation. Oracle is a registered trademark of Oracle Corporation. UNIX is a registered trademark of The Open Group. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-3369

www.netapp.com