
ASAP Go-live Preparedness Checklist

Introduction
The ASAP checklist validates that operational best practices and key procedures are in place prior to
going live or performing a major release upgrade. These operational best practices cover non-functional
success factors such as performance, configuration, purging and DB statistics management.
Oracle recommends that implementation project teams refer to the ASAP Installation Guide and
check the memory and disk space requirements and the recommended OS kernel settings. They should also
review this material well ahead of the target go-live to ensure that project plans incorporate the required
activities.
Oracle also recommends that implementation project teams plan to provide an explicit response to this
checklist, providing up-front clarity on key success factors and risks to help Oracle provide better
support and a faster emergency response in case of problems.
At the implementation project team's discretion, additional information (e.g., performance test results)
can be shared with Oracle to provide better overall context on the deployment. All such information will
be maintained confidentially according to the Oracle Information Protection Policy. It can be deleted
upon request.
Please provide the checklist response, as well as any additional information, by opening an ASAP SR with
a title such as "Go-live Preparedness Checklist" at least one month prior to go-live. Oracle may request
clarifications through the SR, which can then be closed.
Upgrade to a Recent ASAP Patch Prior to Go-Live
Going live on a recent ASAP patch ensures that the deployment will not be susceptible to known issues
encountered by other customers, which may or may not manifest themselves in pre-go-live tests. It also
greatly facilitates the resolution of any new issues that may be encountered, which will generally be
resolved on recent patches.
For ASAP releases in Premier Support with active customers, Oracle will continue to release patch sets
(e.g., 7.2.0.4.x) at regular intervals. The patch set interval will depend on the level of customer activity
for a given release; for example, it may be every 6 months or so. Patch sets are
where the majority of bugs are fixed and where important operations are delivered.
Oracle will also release interim patches to address specific problems that are blocking specific
customers. These interim patches are released on a code branch for the given patch set. All interim
patches are cumulative (i.e., a superset) of prior interim patches for that branch. Changes released in an
interim patch are then forward-ported into the next upcoming patch set.
Oracle generally only releases interim patches for the last two patch sets of a given release.
It is therefore important to plan to go live or upgrade on the latest patch set (preferred).

In general, if the implementation project is expected to take more than 6 months, we recommend planning
an activity to upgrade to a recent patch (on the same release) prior to go-live (e.g., before
UAT). Oracle can provide advance notice of planned patch sets to assist in this planning. For example,
if a project started 6 months ago using ASAP 7.2.0.3 (the prior patch set) and plans to go
live in the next 3 months, we would recommend checking with Oracle when a 7.2.0.4 patch set may be
released and planning to upgrade prior to go-live, to avoid going live on an older patch set.
Checklist:
If you are going live or upgrading within the next 3 months and are not on the latest patch set,
check the Patches section of My Oracle Support (MOS).
If you have a long-running project (more than 6 months), plan to upgrade to a patch of the most
recent patch set prior to going live or upgrading.
Useful information to share:

What is the five-digit ASAP patch level targeted for the go-live or upgrade?

Deployment Architecture
Understanding the ASAP deployment architecture is of great value in supporting customers and may
shorten the resolution time for any configuration-related production issue. Diagram(s) depicting this
information would be ideal.
Checklist:
Refer to the ASAP Installation Guide's Pre-Production Checklist, which lists the items that should be
checked or modified before moving ASAP to production.
Consider using fast solid-state drive (SSD) storage for critical DB files (e.g., REDO logs)
Application logs (including WLS and ASAP logs) are written to a local file system
ASAP requires a 32-bit Oracle client even if ASAP is installed on a 64-bit platform. Ensure
that the Oracle client is 32-bit.
Ensure that you are running on a supported/valid DB configuration: RAC Active-Passive. (With
the non-supported RAC Active-Active configuration, there may be up to 70% degradation in
system performance.)

Useful information to share:


At a high level, what are the northbound/external systems interfacing with ASAP?
Is the optional SRT component part of the solution? If yes, please clarify the need for it and the number of
DB lookup tables implemented within the SRT component.
How many concurrent connections are opened to ASAP by the northbound systems (such as Order
Management and other equivalent systems)?
How many NEPs are currently configured per ASAP instance? Also, how many NEs are
managed by each of these NEPs, and what is the maximum number of concurrent connections to the NEs? (Note
that the number of NEs that can be managed by a particular NEP is limited only by available
system resources.)
Is RAC used and if so what RAC configuration is used (e.g., Active/Passive)?

What is the hardware specification of the DB server?


Are VMs used? If so, which product and version? (Note: Oracle does not certify ASAP for VMware;
please refer to KM Note: Support Position for Oracle Products Running on VMWare
Virtualized Environments (Doc ID 249212.1).)
Are multiple instances of ASAP (e.g., Prepaid, Postpaid) deployed?
What are the version numbers of other key software (OSM, WLS, OS, DB, VM)?
What RAID storage configuration is used (e.g. RAID 1+0)?
What security provider configuration is used (e.g., internal or external LDAP directory) by ASAP?
What is your High Availability strategy?
(Note: ASAP does not support an Active-Active HA deployment configuration. The ASAP installer
supports deployment into either one admin or one managed WebLogic server.
ASAP can be deployed in a cold standby configuration. Cold standby allows ASAP to run on one
server at a time; the active server processes orders while the standby server is installed with the
same configuration as the active server. Failover from the active server to the cold standby server is a
manual process. For additional availability functionality, customers may run ASAP in a clustered
environment using third-party clustering software. This accomplishes a warm standby
configuration with automatic failover. Implementations of ASAP using third-party high-availability
clustering software have been accomplished by systems integrators with solutions
that are tailored to the customer's needs. The procurement of third-party high-availability
clustering software is not included with ASAP, and the action scripts to start, stop and monitor
ASAP are project specific.)

ASAP Workload & Performance


It is crucial to validate the ASAP deployment at the full expected target volume prior to go-live. You may be
interested in KM Note: ASAP JVM FAQs (Doc ID 1346444.1) in My Oracle Support.
For Oracle, understanding the work order volume for an ASAP production system is critical to
troubleshooting any performance-related problems. Gathering this information ahead of time may save
precious investigation time when resolving a production-impacting issue.
Checklist:
Performance tests have confirmed that the full expected maximum hourly order volume can be
sustained on the ASAP target production system
o Ensure that a representative mix of work order types and sizes is used in the test workload
o Ensure that the average size of the work orders (with multiple CSDLs) is aligned with the
expected production order workload
o Ensure that the test order mix includes a sufficient percentage of revision orders as well as
sufficient batch orders, if applicable
o Ensure that testing is performed by targeting all the NEs to be deployed in production to
gauge the network-side impact. If needed, have a plan to add new NEPs
for general performance improvements, especially when there is a large number of NEs.
The rule of thumb is that one NEP may handle up to 50 NEs (see the sketch after this
checklist), but several variables must be considered, such as:
- the type of protocol, the number of simultaneous connections, the order load, etc.
- NE interface/protocol constraints: there are cases where developers are forced to
add a new NEP to avoid runtime library conflicts, i.e., when using CORBA or
other third-party libraries that come with the OSS stack

- In general, the number of NEPs should be arrived at via internal performance testing,
which should also yield a recommended maximum number of NEs to be managed by one
NEP

Performance tests have confirmed that the maximum work order size expected can be
processed
ASAP target production system has been tested with any expected large bulk/batch operations
A longevity test (e.g., 24 hours) has been performed on the ASAP target production system, confirming
that no errors occur, that JMS messages do not accumulate, that Java garbage collection is well behaved,
that DB locks are not observed, and that WebLogic threads do not indicate any locking
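As a starting point for the NEP sizing mentioned in the checklist above, a rough estimate can be derived from the 50-NEs-per-NEP rule of thumb; the final number must still be confirmed through internal performance testing. The sketch below is illustrative only (the NE count of 180 is a made-up example):

    # Minimal sketch: first-cut NEP count from the 50-NEs-per-NEP rule of thumb.
    # Confirm the final figure through internal performance testing.
    import math

    def starting_nep_count(ne_count, nes_per_nep=50):
        """Return a starting estimate of the number of NEPs for ne_count NEs."""
        return math.ceil(ne_count / nes_per_nep)

    # Example: 180 managed NEs -> 4 NEPs as a starting point for testing.
    print(starting_nep_count(180))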
Useful information to share:
What are the:
o ASAP work order volumes per day (average and max), as well as during the busy hour
(average and max)?
o Average number of service actions per work order?
o Average number of network actions per service action?
Indicate the total number of Network Elements currently managed by ASAP on a per-instance
basis. Does each ASAP instance have access to all network elements?
o Number of NEPs
o Number of NEs per NEP
o Number of connections per NE
o Work order distribution
How many service and network cartridges are you planning to deploy in production?
Can you provide the list of Network Elements and their OS versions that are currently managed
by ASAP?
Describe any bulk/batch operations that may occur during the day, including the number of
orders submitted and when (at what time) they will be executed.
Has Oracle previously provided hardware sizing recommendations for the deployment?
Please share a summary of performance test results including how your system responds (CPU,
memory, I/O) to its target peak load with the target number of cartridges deployed. This would
provide a baseline for comparison to investigate any future performance-related issue.
Do you collect any utilization statistics on a regular basis, for example, ASAP Work
Orders per period and ASDLs per period? Are you monitoring the CPU utilization for each of the ASAP
instances? (CPU/memory utilization statistics can be viewed using the asap_utils Technical Utilities, i.e.,
options 100 to 109.)
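If no reporting of these figures is in place yet, a small scheduled script can capture them. The sketch below is illustrative only: it assumes the python-oracledb driver and read access to the ASAP database, and the table and column names (tbl_wrk_ord, comp_date) are placeholders that must be adapted to the actual schema of your ASAP version.

    # Minimal sketch: append the number of work orders completed in the last 24
    # hours to a CSV so that daily volumes can be tracked over time.
    # The table and column names are placeholders; adapt them to your schema.
    import csv
    import datetime
    import oracledb  # python-oracledb driver (assumed available)

    def snapshot_daily_volume(dsn, user, password, out_csv="wo_volume.csv"):
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT COUNT(*) FROM tbl_wrk_ord "
                    "WHERE comp_date >= SYSDATE - 1"
                )
                completed_last_24h = cur.fetchone()[0]
        with open(out_csv, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.date.today().isoformat(), completed_last_24h]
            )

    # Example: run once per day from cron or an enterprise scheduler.
    # snapshot_daily_volume("dbhost:1521/ASAPDB", "asap_reader", "password")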
Managing Database Capacity
Deciding on a purging policy is key to ensuring that storage capacity is reclaimed in a timely fashion and
that system performance is maintained. Please ensure that you have reviewed KM Note: FAQs - ASAP Database
Purging (Doc ID 1182093.1) in My Oracle Support.
Checklist:
A housekeeping (including backup and restore) and purging strategy is in place. Please refer to KM
Note: Housekeeping Frequency Of ASAP Database Tables (Doc ID 860837.1) in My Oracle
Support.
Operational procedures are defined for purging and for deploying/un-deploying cartridges

The average expected daily and weekly storage consumption rate has been measured (see the sketch below)
If the solution introduces additional DB tables, ensure that you have DB management strategies
in place including purging (if applicable) and backup
Test that the expected data volume can be purged during peak periods.
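One possible way to measure the storage consumption rate referenced above is to record the total size of the ASAP schemas once a day and derive the daily and weekly growth from the history. The sketch below is illustrative only: it assumes the python-oracledb driver, an account with access to DBA_SEGMENTS, and placeholder schema names (SARM, NEP, SRP) that must be adapted to your installation.

    # Minimal sketch: record the total space consumed by the ASAP schemas so the
    # average daily and weekly storage consumption rate can be derived.
    # Schema names are placeholders; requires access to the DBA_SEGMENTS view.
    import csv
    import datetime
    import oracledb  # python-oracledb driver (assumed available)

    def snapshot_schema_size(dsn, user, password, out_csv="schema_size.csv"):
        schemas = ["SARM", "NEP", "SRP"]  # placeholder ASAP schema names
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT NVL(SUM(bytes), 0) / 1024 / 1024 "
                    "FROM dba_segments WHERE owner IN (:1, :2, :3)",
                    schemas,
                )
                size_mb = cur.fetchone()[0]
        with open(out_csv, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.date.today().isoformat(), round(size_mb, 1)]
            )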
Useful information to share:
How long do you plan to retain orders (order retention policy) after they have completed?
How frequently do you plan to purge the database? It is recommended to run the purge script
every day at midnight or during off-peak hours to purge data older than a certain number of
days (e.g., 7 or 14 days).
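For example, with a 7-day retention policy, a nightly off-peak job would purge all data for orders completed more than 7 days ago. The sketch below is a hypothetical wrapper only: the procedure name is a placeholder, and the actual purge must be performed with the purge scripts/procedures supplied with ASAP (see the KM note referenced above).

    # Minimal, hypothetical sketch of a nightly purge wrapper scheduled at
    # midnight or during off-peak hours. ASAP_PURGE_COMPLETED_ORDERS is a
    # placeholder name; use the purge scripts/procedures supplied with ASAP.
    import oracledb  # python-oracledb driver (assumed available)

    def nightly_purge(dsn, user, password, retention_days=7):
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                # Purge data for orders completed more than retention_days ago.
                cur.callproc("ASAP_PURGE_COMPLETED_ORDERS", [retention_days])
            conn.commit()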
Managing the Daily Production Schedule and Database Optimizer Statistics
As ASAP performance depends heavily on optimal DB performance, it is important that DB statistics be
gathered properly on a daily basis. (Note: asap_utils can be used to gather statistics.)
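asap_utils is the documented mechanism for gathering statistics. Purely to illustrate what a daily statistics job does, the sketch below calls the standard Oracle DBMS_STATS.GATHER_SCHEMA_STATS procedure through python-oracledb; the schema name is a placeholder and must be adapted to your installation.

    # Minimal sketch: gather optimizer statistics for an ASAP schema once a day.
    # The schema name is a placeholder; asap_utils remains the documented way to
    # gather statistics and should be preferred.
    import oracledb  # python-oracledb driver (assumed available)

    def gather_schema_stats(dsn, user, password, schema="SARM"):
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                # Standard Oracle-supplied procedure for schema-level statistics.
                cur.callproc(
                    "DBMS_STATS.GATHER_SCHEMA_STATS",
                    keyword_parameters={"ownname": schema},
                )

    # Example: schedule daily, outside peak order processing hours.
    # gather_schema_stats("dbhost:1521/ASAPDB", "system", "password")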

Checklist:
The daily & weekly ASAP production schedule is defined, including expected peak order
processing hours and time of ASAP batch operations
A backup and disaster recovery strategy is in place for this ASAP deployment
Useful information to share:
What is the daily ASAP production schedule (peaks, batch operations, backups and DB stats
gathering)?
What are the key elements of the ASAP DB statistics gathering procedure?
Planning for Outages and Order Failures
It must be possible to stop incoming orders so that they are queued in an upstream system (e.g., OSM)
during a planned ASAP outage or an unplanned incident. There may also be planned outages for network
elements.
A strategy must be in place to identify and correct order-related failures.
Checklist:
Operational procedures are defined for stopping the creation of new orders in ASAP
Plan NE/network-side maintenance and outages (ASAP allows an NE to be placed in "blackout" mode to
support NE maintenance outages; see the standard product documentation)
A strategy, procedures and tools are defined to manage work orders in fallout.
Change Control Management and Pre-Production Environment
As for any mission-critical software, once ASAP is in service it is important that any changes to the
environment (e.g., configuration changes or cartridge deployments) be made in a controlled manner.
Note that in the event of a serious production issue, we may request your Change Control Log to
understand whether any recent changes may have contributed to the issue, as is sometimes
the case. The availability of this information can greatly shorten the resolution of an issue.

It is also important to retain the ability to test the system under volume after go-live, which requires a
separate pre-production environment.
Checklist:
A Change Control Management strategy is in place that defines how changes are tracked,
applied one at a time (or in small groups), validated, approved and monitored after
implementation (in case of problems)
An initial baseline of configuration is captured and a Change Control Log is in place
Procedures are defined for applying patches (ASAP, WLS, DB), including a test strategy
Procedures are defined for gathering system statistics after major changes
A pre-production environment is in place to test future changes and investigate product issues
(including performance) that may arise
Useful information to share:
Describe the pre-production environment and how it compares (e.g., processing capacity) to the
production environment. Is it sufficient to validate performance/volume related issues or
changes?
System and Performance Monitoring
It is important to monitor the ASAP workload and key system metrics to quickly detect any changes in
workload or performance characteristics. We may request performance monitoring information to
investigate any performance-related issue that may arise in production.
Checklist:
A strategy is in place for monitoring and managing ASAP (ASAP server processes, database, process info, file
system, system events, etc.)
Troubleshoot and monitor ASAP performance (for example, using the asap_utils utility).
A method is in place to capture and track the daily and hourly volume of ASAP orders created
and completed
If bulk/batch orders are part of the expected workload, the creation and completion of these
large orders are also captured and tracked
Key system performance metrics such as App Server and DB Server CPU utilization and memory
consumption as well as Storage I/O & capacity are captured and tracked
Refer to KM Note: Are Asap_utils Options Safe For Use In An ASAP System Which Is Processing
Work Orders? (Doc ID 1389269.1) in My Oracle Support.
Ensure that sufficient disk space is available for log files and daily archival policy/cron job is in
place.
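Many sites capture these metrics with their standard enterprise monitoring tooling. As a minimal illustration of capturing and tracking key system metrics over time, the sketch below uses the psutil package (an assumption) to append host CPU, memory and log file-system utilization to a CSV; the log path in the example is a placeholder.

    # Minimal sketch: append host CPU, memory and log file-system utilization to
    # a CSV so that key system metrics are captured and tracked over time.
    # Assumes the psutil package is available on the App/DB server.
    import csv
    import datetime
    import psutil

    def capture_host_metrics(out_csv="host_metrics.csv", log_fs="/"):
        row = [
            datetime.datetime.now().isoformat(timespec="seconds"),
            psutil.cpu_percent(interval=1),     # % CPU over a 1-second sample
            psutil.virtual_memory().percent,    # % memory in use
            psutil.disk_usage(log_fs).percent,  # % space used on the log file system
        ]
        with open(out_csv, "a", newline="") as f:
            csv.writer(f).writerow(row)

    # Example: schedule every few minutes via cron or a monitoring agent.
    # capture_host_metrics(log_fs="/u01/asap/logs")  # placeholder path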
Cartridge Development Best Practices
It is important to develop cartridges following the best practices and guidelines provided in the ASAP cartridge
developer guide and complying with Java coding standards.
Rollback handling - The most important point is to keep ASDLs atomic, to avoid scenarios
with partial success/failure, which complicate the rollback. It is recommended that old
parameters (needed for restoring data on delete/modify) come from the upstream system rather than
by having ASAP query the NE. If, in special cases, ASAP needs to get the old data from the
network, keep the rollback ASDL generic (the same as when the data comes on the order), and plug
in a custom query-NE ASDL that returns the needed rollback data. That ASDL can be removed when the
data becomes available from the upstream system (recommended)
Always return UDETs as output parameters. They come in handy later and can be used in
conditional ASDL execution logic in the solution cartridge
Lessons Learned and Other Risk Factors
ASAP implementation projects may face other known risks. Sharing this information ahead of time with
Oracle may help mitigate these risks. Important lessons may have been learned during performance and
pre-production preparation activities described in this document.
Checklist:
Strategies have been defined to mitigate key additional risks
Lessons learned have been incorporated into procedures
Useful information to share:

Please share additional known risk factors that could impact ASAP operations or complicate
issue resolution
Operational lessons learned through the project that may be useful to Oracle and other ASAP
customers
