Environment Readiness Checklist
PowerCenter Enterprise Grid with HA Installations
Informatica Proprietary
Applicable to PowerCenter Versions 8.1.1 through 9.1

AUTHOR
Charles W. McDonald, Jr.
Systems Architect
Core Product Specialists Team
cmcdonald@informatica.com
cell: +1 (817) 716-2680
direct: +1 (940) 648-5314
Valuable contributions from Symantec
Table of Contents

Table of Contents ... 2
Table of Figures & Tables ... 3
Overview ... 5
Supported Shared File Systems ... 5
Grid vs High Availability ... 6
Disaster Recovery vs High Availability ... 7
High Availability ... 12
Current High Availability Center of Excellence Configuration ... 14
Environment Information ... 14
Environment Breakdown ... 15
Database Tier ... 15
Application Tier ... 15
Virtualization Tier ... 15
Native Software Architecture ... 15
Virtualized Software Architecture ... 16
What about Virtualization? ... 16
Minimum System Requirements ... 17
Clustering ... 18
Cluster File System ... 19
Planning and Sizing your Cluster File System ... 21
Veritas Cluster Server Agent for PowerCenter ... 22
General Cluster Requirements ... 26
Heartbeat Network Requirements ... 26
Cluster Production & VIP Network ... 28
Quorum Clustering - Important Architecture Considerations ... 29
Clustered Storage vs Clustered File System ... 29
NFS and CIFS ... 29
Lustre ... 30
Understanding the Application Stack ... 30
State Management ... 31
App/ETL Tier Example Cluster File Systems ... 32
Binaries ... 32
Infa_shared ... 33
Domain Logs ... 34
Operating System Profiles ... 35
Subnet and Informatica Heartbeats ... 40
Database Tier ... 41
10g Oracle RAC Active/Active - Shared Everything Technology ... 41
SQL Server 2005 (High Availability via Mirroring) Active/Passive ... 42
Shared Nothing Technology when Database Mirroring Exclusively ... 42
Shared Cluster Technology when in an MSCS cluster ... 42
High Safety mode Session with Automatic Failover ... 42
DB2 (High Availability via Clustering & ACR) Active/Passive ... 43
Storage Tier ... 44
Informatica Proprietary
Page 2 of 78
3/31/2011
What about Windows? ... 46
Performance Tuning ... 46
Understanding I/O and Networking Infrastructure Chokepoints ... 47
A Use Case in Network Throughput ... 50
Where can I find more information? ... 51
High Availability/Grid ... 51
Glossary of Terms ... 51
Sources ... 52
Appendix / HACOE Build Script ... 53
Local WS-C3560G-24TS-S switch build (configuring and connecting to core network) ... 53
Base RHEL Install ... 54
O/S Configuration Pre-Install - Veritas SFRAC and SFCFSHA ... 55
Base Veritas Storage Foundation for Oracle RAC (SFRAC) Install ... 62
Oracle RAC Pre-Install Steps ... 75
Oracle RAC Installation/Configuration ... 77
Informatica Enterprise Grid Pre-Install Steps ... 78
Table of Figures & Tables

Figure 1 - Partitioning File System Resources Comparison ... 6
Figure 2 - DR Option 1 - Single (Non-Spanning) Domain Configuration ... 8
Figure 3 - DR Option 1 - Single (Non-Spanning) Domain Configuration Considerations ... 8
Figure 4 - DR Option 2 - Single Spanning Domain Configuration ... 9
Figure 5 - DR Option 2 - Single Spanning Domain Configuration Considerations ... 9
Figure 6 - DR Option 3 - Single Spanning Domain Configuration ... 10
Figure 7 - DR Option 3 - Single Spanning Domain Configuration Considerations ... 10
Figure 8 - DR Option 4 - Single Spanning Domain Configuration ... 11
Figure 9 - DR Option 4 - Single Spanning Domain Configuration Considerations ... 11
Figure 10 - Informatica Uptime ... 13
Figure 11 - HACOE Finished State Architecture ... 14
Figure 12 - HACOE Captured I/O Throughput (Native HW) ... 16
Figure 13 - How to check block size (default allocation size) ... 21
Figure 14 - How to check for column counts/volume - Veritas Server-Side Striping ... 21
Figure 15 - Dependency Tree for the HACOE VCS Cluster ... 23
Figure 16 - Properties for PowerCenter Service Group in HACOE VCS Cluster ... 23
Figure 17 - Properties for PowerCenterSvcMgr Agent Type in HACOE VCS Cluster ... 24
Figure 18 - Multiple Domains Managed within same VCS Service Group in HACOE ... 24
Figure 19 - PowerCenter Service Manager intentionally killed in HACOE on pslxhacoe01 ... 25
Figure 20 - VCS Agent PowerCenterSvcMgr monitors and detects failure event on pslxhacoe01 ... 25
Figure 21 - VCS recovery of Service Manager on pslxhacoe01 ... 25
Figure 22 - PrivNIC Stanza Example - Veritas Storage Foundation Suite for Oracle RAC ... 27
Figure 23 - Reference Architecture - The Application Stack as it relates to PowerCenter ... 31
Figure 24 - Example file system layouts for PowerCenter Enterprise Grid/HA ... 32
Figure 25 - Motherboards - High Level - How They Work ... 48
Figure 26 - Motherboards - A Closer Look at I/O Chokepoints ... 48
Figure 27 - Real Throughput Capabilities Compared - Network to I/O ... 49
Figure 28 - Network Throughput Use Case (Follow the math) ... 50
Table 1 - Summary System Requirements ... 17
Table 2 - Typical HA Cluster Requirements ... 19
Table 3 - Cluster File System HA Requirements ... 20
Table 4 - Heartbeat Network Requirements ... 26
Table 5 - Cluster Production Network Requirements ... 28
Table 6 - Storage Infrastructure Requirements ... 45
Overview

Through BIA (Business Impact Analysis) scoring, more IT divisions are being challenged with the implementation of highly available, very low data latency data warehouses; hence the growing demand for PowerCenter Enterprise Grid with HA. The purpose of this document is to provide additional information to the PowerCenter Pre-Install, Install, and Administration guides, with a focus on best practices for the HA and Grid options for PowerCenter. This document should not be used as a standalone systems requirements document, but as an additional tool combined with the standard Informatica documentation to ensure the smoothest possible installation, the best overall out-of-the-box results, and the highest possible availability of the Informatica product. It should also not be used as a step-by-step "you must do it this way" cookbook, but as a guide to a build example in which best practices are discussed and what to avoid is clearly articulated, so that you have the best possible out-of-box experience with Informatica Enterprise Grid/HA.

As you read through this document, you will notice not only the difference between the recommended and minimum requirements; take note also of the areas highlighted in green, which indicate significant performance/throughput tips. The recommended requirements are considered best practice.

The base requirement for the Enterprise Grid/HA options, and the reason such a detailed technical document is needed, is as follows: a high performance, highly available, shared file system visible to all PowerCenter nodes participating in the grid and fully POSIX compliant (fsync, fcntl, and lockf). This base requirement will be referred to several times throughout this document.
NOTE: Installing PowerCenter Enterprise Grid/HA in an environment that does not meet the best practices set forth in this document may yield less availability and performance from the PowerCenter product with the Enterprise Grid and HA options.
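As a quick, hedged illustration of that base requirement, the fsync portion can be smoke-tested from the shell. This is not a full POSIX compliance check (fcntl and lockf semantics need a dedicated test program), and the mount point shown is a placeholder:

```shell
#!/bin/sh
# Illustrative probe only: exercises write + fsync on a candidate shared
# mount. It does NOT prove fcntl/lockf compliance. MOUNT is a placeholder
# path; substitute your own shared file system mount point.
MOUNT=${MOUNT:-/infa/shared}

probe() {
    f="$1/.grid_probe.$$"
    # conv=fsync forces dd to fsync the file before exiting, the same
    # durability path the product depends on
    dd if=/dev/zero of="$f" bs=1M count=4 conv=fsync 2>/dev/null || return 1
    rm -f "$f"
}

# Example: probe "$MOUNT" && echo "fsync path OK on $MOUNT"
```

Run the same probe from every node that will participate in the grid, and confirm each node can see and remove files written by the others.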
Within the Appendix you will see the build script we used to build Informatica's High Availability Center of Excellence; it is representative of Informatica's requirements using Informatica's IT environment standards.

Other solutions we're working to certify and support:
- GPFS on Linux

Given that there is no CFS for Windows and no CIFS support from Informatica, Windows platforms are not suitable at all at this time for Enterprise Grid/HA deployments of PowerCenter!

Specifically, as it relates to deploying NFS v3.x, the following should be considered:
- It has a known issue that requires the lock management server to be bounced in cases where a host is lost that had a lock on a file on the shared file system. This is not an Informatica issue! It is an issue with NFS and cannot be avoided in v3 of NFS, regardless of whether or not MPFS is used in conjunction.
- At least 1 dedicated half-duplex 10Gb path (or faster) to the NAS storage should be provided per logical node.
- The NAS head should be clustered for HA configuration and transparent failover.
- An open source NFS v3 client should be used on each logical node.
- Performance expectations should be lowered for all potential user groups with expected significant shared file system load. Expected performance is 35%-100% slower than that of a properly configured CFS on high rotational rate drives with a large SAN cache (such as DMX-4 with VxSFCFSHA configured with host-side striping).
- No storage solution other than NetApp WAFL or EMC NAS is considered a supported configuration. For example, configurations explicitly not supported include exporting local RAID groups from one logical host to another and calling that a shared file system for PMRootDir.
- NFS v3.x cannot be used in conjunction with any other sharing technology. For example, it is unsupported to configure a CFS and then export that via NFS v3.x.
All NFS configurations have the same flaw as it relates to partitioning: they do not provide a host striping mechanism to present the storage at the host level as more than 1 inode, regardless of how many physical drives are used on the mount point. Regardless of whether or not the drives are striped on the storage backend, this matters in an Informatica PowerCenter implementation for 2 good reasons:
- The operating system will only leverage 1 thread on the file system per logical node at a time, so regardless of the striping on the backend, performance will be diminished because it is not taking advantage of the bandwidth available to the storage array.
- Partitioning leverages multithreaded operations, and if those operations are directed at anything that might be on the shared file system (source files, tgt files, persistent cache, etc.) then partitioning performance will be restricted by the number of inodes. Thus, you'll see great performance scaling up to the point of one partition per logical node, then it will immediately plateau. You'll still be able to scale the partition counts as much as you like, but the total throughput yield will be virtually the same as what you would have gotten with one partition per logical node. This problem can ONLY be overcome by using a properly configured CFS solution using host-side striping (preferably one stripe/LUN used). See the figure below, taken from the PowerCenter Advanced Workflow Guide:
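The plateau behavior described above can be illustrated with a back-of-the-envelope model. All numbers here are invented for illustration; they show the shape of the curve, not the performance of any real system:

```shell
# Illustrative model only -- throughput figures are made up.
NODES=4           # logical nodes in the grid, one NFS inode each
PER_INODE_MBS=200 # assumed single-thread rate through one inode

# model PARTITIONS -> aggregate MB/s, capped at one stream per node's inode
model() {
    STREAMS=$(( $1 < NODES ? $1 : NODES ))
    echo $(( STREAMS * PER_INODE_MBS ))
}

for PARTS in 1 2 4 8 16; do
    echo "partitions=$PARTS  throughput=$(model "$PARTS") MB/s"
done
```

With these invented numbers, throughput scales 200, 400, 800 MB/s up to 4 partitions (one per node) and then stays flat at 800 MB/s for 8 and 16 partitions, which is exactly the plateau described above.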
Figure 1 - Partitioning File System Resources Comparison
host outage root causes, and the entire network of computing resources (or cluster) hardened against broader outage root causes and capable of recovery, reassignment, and business continuity within a given physical location. HA, in its most simplistic definition, means no single points of failure (SPOFs) in the design or implementation of a single-site environment.

Now, to the extent possible by your chosen clusterware product and presentation layer requirements, your HA cluster may or may not be heterogeneous or inexpensive. The rule of thumb here is that the least common denominator of constraints rules. Thus, if your HA clusterware dictates that in order to build the cluster file system required for the HA option you must have homogeneous systems (OS, patch level, etc.), then that becomes the overriding constraint of your Enterprise Grid, regardless of the definition of Grid computing or Informatica's ability to operate in a heterogeneous environment (not applicable for session-on-grid capability).

In summary, plan very carefully if your intent is to design, build and maintain a heterogeneous Enterprise Grid implementation with HA. The least common denominator of constraints may prohibit such a design.

We see some clients implementing multi-site installations that work in concert with one another in order to achieve the level of availability required by their business. Realize this type of implementation is not the generally accepted definition of HA, but an active/active DR configuration, which may impose certain limitations on the base requirement of the shared file system, and that limitation may seriously impact your target state architecture for PowerCenter HA/Grid. The HA option for PowerCenter is designed to provide high availability within the framework of the generally accepted definition of single-site HA; however, it may have some very significant value in a DR scenario depending on your use case. For example, in an active/passive DR single spanning domain design, the HA option provides automated failover of the services and recovery of any in-process workflows. Without HA, you would manually reassign DI and repository services from primary nodes to DR nodes.

Below are some accepted DR options for your consideration (NOTE: this is not intended to provide the entire universe of DR options and considerations):
Figure 2 - DR Option 1 - Single (Non-Spanning) Domain Configuration

Figure 3 - DR Option 1 - Single (Non-Spanning) Domain Configuration Considerations
Figure 4 - DR Option 2 - Single Spanning Domain Configuration

Figure 5 - DR Option 2 - Single Spanning Domain Configuration Considerations
Figure 6 - DR Option 3 - Single Spanning Domain Configuration

Figure 7 - DR Option 3 - Single Spanning Domain Configuration Considerations
Figure 8 - DR Option 4 - Single Spanning Domain Configuration

Figure 9 - DR Option 4 - Single Spanning Domain Configuration Considerations
High Availability
High Availability (HA), in its most simplistic definition, means no SPOFs: no single points of failure. It is a myth that two of everything constitutes HA. For example, two NICs in a host provide no more availability than one, because without the appropriate layer 3 (OSI stack) binding the operating system will only use the base interface (ce:0, hme:0, qfe:0, eth:0, etc.). The same truism holds for two internal controller cards, as it does for two host bus adapters connecting to the SAN, and so on. High Availability is a design criterion defining the appropriate hardware and software required to work in concert to close all vulnerability gaps (thus no SPOFs) within a given data center implementation.

It is understood that the fewer single points of failure within the application stack, the greater the availability at the presentation (data integration) layer; the most preferred solution is one without any single points of failure anywhere in the stack.
Given that, PowerCenter with the Enterprise Grid & HA options requires a shared file system meeting the base requirement discussed in the overview section. And, given that Informatica supports and recommends as its best practice solution a Veritas Storage Foundation Suite Cluster File System HA configured with hardware-based I/O fencing, this configuration requires (in an interoperable stack):
- a multi-channel/path private heartbeat network: full duplex Gigabit, InfiniBand, or Fast Ethernet, pending clusterware requirements
- a multi-channel/path production network: full duplex Gigabit AT LEAST (10Gb preferred)
- IP load balancing (IPMP/IPBIND, channel bonding, teaming, etc.)
- a multi-channel/path storage network connection: 2-4Gb HBAs (2 or more WW channels/host)
- an HBA load balancing solution (compatible with your SAN choice)
  o Veritas DMP (Dynamic Multi-Pathing)
  o EMC PowerPath
  o etc.
- a Storage Array Network with SCSI-III PGR (persistent group reservation) HDD or SSD drives
- a clusterware solution for creating the clustered file system
  o Veritas Storage Foundation Suite Cluster File System HA
  o IBM GPFS (only supported by Infa on AIX Power Series equipment, not Linux)
- a hardware-based I/O fencing solution (typically provided as a suite sub-product in the clusterware solution)
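As a hedged example of the IP load balancing requirement, Linux channel bonding on RHEL can be configured with interface files like the following sketch. The device names, IP address, and bonding mode are placeholders; match them to your own network design and your clusterware vendor's requirements:

```shell
#!/bin/sh
# Hypothetical RHEL-style active-backup channel bonding fragment.
# All names and addresses below are examples, not a prescribed layout.
write_bond_cfg() {
    dir="$1"   # normally /etc/sysconfig/network-scripts

    cat > "$dir/ifcfg-bond0" <<'EOF'
DEVICE=bond0
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
EOF

    cat > "$dir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF
}

# On a real host: write_bond_cfg /etc/sysconfig/network-scripts
# (RHEL 5 also needs "alias bond0 bonding" in /etc/modprobe.conf,
# then a networking restart; repeat the slave stanza for each NIC.)
```

A second slave interface (e.g. eth1) on a separate switch and power grid is what actually removes the SPOF; one bonded NIC alone provides no more availability than an unbonded one.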
* It should be noted that while Red Hat GFS may, in theory, meet the base requirement of the shared file system referred to in the Overview section, its internal implementations at Informatica have been seriously troubled in both performance and availability when using either their hardware-based or software-based I/O fencing. Some client implementations have yielded similar results, and thus it is not recommended for use with PowerCenter Enterprise Grid/HA.
Below you will find a more complete list of clustered file system options (please refer to the supported shared file systems section before making a selection from the list below):

CFS                                       O/S
Tru64 Cluster 5.x                         HP Tru64
VERITAS Cluster File System               Sun Solaris, HP/UX, AIX, Linux
Sun Cluster 3.0                           Sun Solaris
Generalized Parallel File System (GPFS)   IBM AIX, Linux
DataPlow                                  Linux, Solaris, Windows, IRIX
PolyServe                                 Linux, Windows
GFS                                       Linux
NonStop Cluster                           UnixWare
Peripheral requirements implied to cascade from the main requirements above include:
- separate private switches on separate power grids
- separate public switches on separate power grids
- separate VLANs per channel or path
- a clusterware cluster server solution
- a high availability database solution
  o Oracle RAC Active/Active
  o DB2 (High Availability via Clustering & ACR) Active/Passive
  o SQL Server (Mirrored Solution) Active/Passive

NOTE: One should always consult the Product Availability Matrix when considering the HA database options for PowerCenter repositories. One thing you might discover is that Metadata Manager (PCAE) repositories might have a different support matrix than standard PowerCenter repositories, which may drive you toward using the HA database options supported by standard PowerCenter.

Many of these requirements cascade from the shared file system base requirement, getting into deeper levels of granularity. This document, again, is intended to clear up the main confusion around what is considered high availability and what is truly required to achieve a suitable level of availability at the presentation layer within the application stack. The granularity required for the aforementioned cascading requirements is described within this document, and these requirements are visually represented in the next clustering sections.
When configured properly, like Informatica's HACOE, PowerCenter can remain up and perfectly stable for extended periods of time. In the illustrated example below, application services were rebooted from time to time to register plug-ins and change integration service variables and the like, but if you look at the domain gateway services, they have been running since the last planned maintenance event on 2/27 for a PowerCenter upgrade. The HACOE runs on RHEL 5.5 with VxSFRAC 5.1P3, and RHEL 5.6 on vSphere with VxSFCFSHA 5.1SP1.

Figure 10 - Informatica Uptime

NOTE: During these 104 days of uptime, the primary RAC DB instance drep0011 on pslxhacoe03 faulted 3 times: on 2/28/09, 3/13/09, and 4/8/09. However, Informatica stayed operational the entire time, immediately seeking metadata information from the alternate RAC DB instance drep0012 on pslxhacoe04.
Figure 11 - HACOE Finished State Architecture

NOTE: The diagram above shows a single coordination point server, which is a single point of failure, but the VMware side of the Dom_hacoe_primary domain is not a production environment. It is really only used for customer demos, performance comparisons, and certification testing. The principles of the CP Server architecture can be tested with only one CP Server, and that was fundamental to certification testing VMware with Enterprise Grid. A production VMware/Informatica Enterprise Grid environment would need at least 3 CP servers, or a cluster of 3 CP servers if using VxSFCFSHA, for the required high performance, highly available shared file system.
Environment Information
- QTY 1 - HP c7000 Blade Chassis
- HP EVA 4100 - 6TB RAW
- QTY 2 - Cisco 3560 WS-C3560G-24TS-S Ethernet Switches
- QTY 2 - HP StorageWorks 4/16 FC Switches
- QTY 4 - HP BL465c G7 CTO Blade (AMD 6172 x2, 12 cores each) - 24 cores / 64GB RAM
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- QTY 2 - HP BL685c G7 CTO Blade (AMD 6172 x4, 12 cores each) - 48 cores / 96GB RAM
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- QTY 2 - HP BLc VC Flex-10 Ethernet Module
- QTY 2 - HP BLc VC 8Gb FC 20-Port
Environment Breakdown

Of what you see above, the environment hardware is broken down as follows:

Database Tier
- QTY 2 - HP BL685c G7 CTO Blade (AMD 6172 x4, 12 cores each) - 48 cores / 96GB RAM - 2.1GHz chipset
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- VxSFRAC 5.1P3
- Oracle RAC 11gR2
- Multiple RAC Instances supporting Informatica Repositories, Sources and Targets
- Native HW environment

Application Tier
- QTY 2 - HP BL685c G7 CTO Blade (AMD 6172 x4, 12 cores each) - 48 cores / 96GB RAM - 2.1GHz chipset
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- QTY 2 - HP BL465c G7 CTO Blade (AMD 6172 x2, 12 cores each) - 24 cores / 64GB RAM - 2.1GHz chipset
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- VxSFRAC 5.1P3
- Native HW environment

NOTE: Multi-tenant items in blue text.
Virtualization Tier
- QTY 2 - HP BL465c G7 CTO Blade (AMD 6172 x2, 12 cores each) - 24 cores / 64GB RAM - 2.1GHz chipset
  o HP BLc QLogic QMH2562 8Gb FC HBA Opt
  o HP Smart Array BL465c/685c G7 FIO Controller
  o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
- VxSFCFSHA 5.1SP1
- vSphere 4.1U1

Native Software Architecture
- All x64 Architecture
- RHEL 5.5
- Veritas Storage Foundation Suite for Oracle RAC 5.1P3
- Oracle RAC 11gR2
- Informatica Platform 9.0.1 HF2
- Informatica B2B DT 9.0.1 HF1
- Informatica B2B DX HA 9.0.1 HF1

Virtualized Software Architecture
- All x64 Architecture
- VMware ESXi 4.1U1 HP OEM Specific (4.1_U1_Feb_2011_ESXi_HD_USB_SDImgeInstlr_Z7550_00031)
- vCenter Server - Windows 2008 R2 Standard x64
- vCenter Server 4.1 (VMware-vpx-all-4.1.0-258902)
- RHEL 5.6
- Veritas Storage Foundation Suite Cluster File System Enterprise HA 5.1SP1
- Oracle RAC 11gR2 used from Native HW side
- Informatica Platform 9.0.1 HF2
- Informatica B2B DT 9.0.1 HF1
- Informatica B2B DX HA 9.0.1 HF1
Real performance throughput numbers leveraging this configuration (NOTE: this example is in Gigabytes, not Gigabits):

Figure 12 - HACOE Captured I/O Throughput (Native HW)
[Chart: "PowerCenter I/O Throughput Performance in Informatica's HACOE"; y-axis: CACHE on CFS (GB), 0 to 800; x-axis: minutes, 1 to 30]

Until then, please refer to: https://communities.informatica.com/docs/DOC3533 Informatica Support Statement for Virtualization v2.1
Table 1 - Summary System Requirements

Configuration | Processor | RAM (Minimum) | RAM (Recommended) | Disk Space (Minimum) | Disk Space (Recommended)
Domain with all Data Quality, Data Services, and PowerCenter services running on one node | 4 CPU Cores | 8GB | 24GB | 20GB | 50GB
Domain with all PowerCenter services running on one node | 4 CPU Cores | 4GB | 16GB | 4GB | 20GB
Domain with all PowerCenter services running on one node except Metadata Manager Service and Reporting Service | 2 CPU Cores | 2GB | 8GB | 3GB | 10GB
Metadata Manager Service running on a separate node | 2 CPU Cores | 2GB | 8GB | 3GB | 10GB
Reporting Service running on a separate node | 1 CPU Core | 512MB | 4GB | 3GB | 8GB
Note: When you upgrade, the installer requires an additional 4GB of disk space plus the amount of disk space used by the existing infa_shared directory.
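That upgrade math can be scripted as a pre-flight check; the infa_shared path below is an example only:

```shell
#!/bin/sh
# Hypothetical pre-upgrade check: the installer needs roughly 4GB plus
# the current size of infa_shared. The default path is an example.
INFA_SHARED=${INFA_SHARED:-/infa/shared/infa_shared}

# required_kb DIR -> kilobytes the upgrade needs (size of DIR + 4GB)
required_kb() {
    used_kb=$(du -sk "$1" | awk '{print $1}')
    echo $(( used_kb + 4 * 1024 * 1024 ))
}

# Example: compare required_kb "$INFA_SHARED" against the free space
# reported by `df -k` for the target file system before launching the
# installer.
```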
Although you may select any installation configuration that meets the minimum requirements, we do offer recommendations regarding conditions for combining different product components. The grid above illustrates only typical recommended system requirements; your individual program requirements may demand further analysis of the true recommended system requirements.

For more details, see the Product Availability Matrix at http://mysupport.informatica.com

As of the Informatica Platform 9.0.1 release, changing the POSIX file descriptor parameter settings may be required. Typically in Linux this is defaulted to 1024, which you can verify by doing a "ulimit -a" from the command line of the shell using Informatica services:

pslxhacoe01.informatica.com:/home/infads # ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 278527
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 278527
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
In RHEL you can modify this by editing the /etc/security/limits.conf file and adding the following lines near the bottom:

* soft nofile 4096
* hard nofile 65535

Make sure you set the soft limit to exceed 4000, because the O/S will always default to the soft limit when a user logs in, and we need that soft limit to exceed 4000 in INFA 9.0.1.

After you log in the next time, your session parameters should look like this:

pslxhacoe01.informatica.com:/home/infads # id
uid=1013(infads) gid=1000(infaadm) groups=1000(infaadm),1001(presales),1002(rd),1004(ips),1005(gcs),1006(products),1007(infacommon),2000(infaadm2)
pslxhacoe01.informatica.com:/home/infads # ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 278527
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 278527
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
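A scripted version of the same check, suitable for an illustrative pre-install validation script:

```shell
#!/bin/sh
# Illustrative check that the soft open-files limit took effect for the
# current (service) account; 4096 matches the limits.conf example above.
SOFT=$(ulimit -Sn)
if [ "$SOFT" != "unlimited" ] && [ "$SOFT" -lt 4096 ]; then
    echo "soft nofile is $SOFT -- below the 4096 needed for INFA 9.0.1" >&2
else
    echo "soft nofile is $SOFT -- OK"
fi
```

Remember the limit is per login session, so run this as the account that actually launches the Informatica services.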
Clustering
All High Availability products, this one included, are intended to reside in high availability environments, which means clustered environments joined together by high availability clustering packages.

Examples include:
- Veritas Storage Foundation Suite
- IBM High Availability Cluster Multi-Processing Enhanced Scalability

* See the Supported Shared File Systems section for supported clusterware configurations.
The shared file system base requirement referred to in the Overview, in almost every case, implies a clustered file system (NOTE: not every clustered file system meets the base requirement of POSIX compliance), which in turn implies a cluster of nodes on a high-bandwidth network with high-bandwidth private heartbeat networks. It is also important to remember that the HA clustering software vendor will have minimum and recommended CPU, RAM, and disk requirements that are additional to the system requirements illustrated in Table 1.
Another factor to keep in mind is that the Informatica tomcat services do not restart on their own, so it is considered best practice to have not only a clustered file system, but also a cluster server product designed to work with that clustered file system (i.e., both from the same vendor at the same version and patch level). This way the tomcat services can be configured to restart under cluster server control. The important distinction here is that tomcat, a.k.a. the Service Manager, will fail over on its own from one gateway in the grid to another, but it will never restart itself on the original master gateway without being restarted manually or automatically by a cluster server manager as previously mentioned.
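The restart-under-cluster-control idea can be pictured as a tiny watchdog pass. This is a conceptual sketch only; a real deployment should rely on the cluster server's own agent (such as the VCS agent discussed later), and both the process pattern and the start command here are placeholders:

```shell
# One pass of a restart watchdog: if no process matches PATTERN, run
# START_CMD to bring the service back. A cluster server agent performs
# this monitor/restart cycle continuously, with proper fencing.
watchdog_once() {
  # usage: watchdog_once PATTERN START_CMD -> prints "running" or "restarted"
  if pgrep -f "$1" >/dev/null 2>&1; then
    echo "running"
  else
    eval "$2"          # e.g. the tomcat / Service Manager startup script
    echo "restarted"
  fi
}
```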
The table below illustrates the cluster requirements at a high level:
Table 2 - Typical HA Cluster Requirements
Attribute: File System
Best Practice: Clustered (HA) File System configured for Hardware-Based I/O Fencing.
Minimum Requirement: NAS, if the environment is intended almost entirely for relational-to-relational workloads (see the Supported Shared File Systems section for support information on NFSv3 with NAS); otherwise a Clustered (HA) File System configured for Hardware-Based I/O Fencing.
Not Recommended: Clustered File Systems not configured for I/O Fencing, NFS mounts, UNC shares, etc.
Comments: Must be a high-performance, highly available shared file system capable of supporting simultaneous read. If using NAS, at least use a separate NIC channel outside of the public channel(s) for communication between host and filer head.

Attribute: Heartbeat Network
Best Practice: Meet or exceed the requirements in the heartbeat network section.
Minimum Requirement: Meet or exceed the requirements in the heartbeat network section.
Not Recommended: Public, mixed public/private, or inconsistent with the table in the heartbeat network section.
Comments: N/A outside of clustering.

Attribute: Network Interface Cards
Best Practice: At least 2 (multi-interface) per host (full-duplex Gigabit minimum), bound together at layer 3 of the OSI model by a TCP/IP binding agent.
Minimum Requirement: At least 2 (multi-interface) per host (Gigabit minimum), bound together at layer 3 of the OSI model by a TCP/IP binding agent.
Not Recommended: 1 NIC per host of any kind, or 2 NICs per host of any kind not IP-bound.
Comments: Any high availability product must reside on truly highly available infrastructure.
Every node participating in the PowerCenter Enterprise Grid must be within the same cluster. Informatica PowerCenter Enterprise Grid with HA further requires that all necessary clients and/or protocols required to connect to source and target mappings also be located on a clustered file system within the same cluster (defined by the clusterware heartbeat network), though not necessarily the same clustered file system mount point. For example, the Oracle client is required on a clustered file system within the same server cluster as the clustered file system used for the PowerCenter installations.
The table below illustrates some minimum requirements and best practices for the described clustered file systems:
Table 3 - Cluster File System HA Requirements
Attribute: Disks/Spindles
Best Practice: SCSI-III PGR capable; else CP Server/Client Based Fencing.
Minimum Requirement: SCSI-III PGR capable; else CP Server/Client Based Fencing.
Not Recommended: SCSI-2 (includes Fast-20, Fast-40, etc.) technology of any kind and older.
Comments: Required for simultaneous read/write HA clustering software.

Attribute: Solid State Drives
Best Practice: SCSI-III PGR capable; else CP Server/Client Based Fencing.
Minimum Requirement: SCSI-III PGR capable; else CP Server/Client Based Fencing.
Not Recommended: SSD in frames not supporting SCSI-III command interface structures.
Comments: Engineering groups tell us that SSDs do not perform as well as traditional high-rotational-rate 15K RPM drives with big-cache SAN frames when it comes to random read performance, so you may wish to consult your engineering contacts.

Attribute: Heartbeat Network
Best Practice: Meet or exceed the requirements below.
Minimum Requirement: Meet the requirements below.
Not Recommended: Public, or inconsistent with the table below.

Attribute: Default Block Size (Allocation Size)
Best Practice: 8-16K (for tables, indexes, and medium-large file movement); 8192 for traditional ETL parameter, cache, and state files and other general App Tier applications.
Minimum Requirement: 4096.
Not Recommended: 1024, except for log file locations.
Comments: Consider this: Oracle 10g uses a default block size of 16K. If your file system is formatted with 1024-byte blocks (and almost all of them are), then for every Oracle fetch 16 OS I/O fetches are required, normally sequentially. See the fstyp example.

Attribute: Server Side Striping
Best Practice: Recommended in columns greater than 4 but less than 20, typically done at the disk group level. No more than 1 column per LUN used within the disk group used to make the clustered file system.
Minimum Requirement: Not required, but available in Veritas Storage Foundation Suite, JFS 2.0, and Red Hat GFS. If you don't use host-side striping, then visit the Partitioning File System Resources Comparison, as this partitioning problem will relate to your intended design.
Comments: Consider this: nearly all volumes are created with a concatenated layout, which presents only 1 I/O stream to the OS per operation. Multiple columns present the same number of I/O streams available to the OS per operation.

NOTE: Green-highlighted requirements are those that will make a significant impact on performance if best practice is followed. Blue-highlighted requirements are brand-new requirements driven by newly emerging technologies.
Figure 13 - How to check block size (default allocation size)
root@galaxy:/root
> fstyp -v /dev/vx/dsk/diskgroup/u04a_vol
vxfs
magic a501fcf5  version 6  ctime Sat Nov 11 11:54:56 2006
logstart 0  logend 0
bsize 8192  size 16384000  dsize 0  ninode 16384000  nau 0
defiextsize 0  ilbsize 0  immedlen 96  ndaddr 10
aufirst 0  emap 0  imap 0  iextop 0  istart 0
bstart 0  femap 0  fimap 0  fiextop 0  fistart 0  fbstart 0
nindir 2048  aulen 32768  auimlen 0  auemlen 1
auilen 0  aupad 0  aublocks 32768  maxtier 15
inopb 32  inopau 0  ndiripau 0  iaddrlen 1  bshift 13
inoshift 5  bmask ffffe000  boffmask 1fff  checksum ea5e5224
oltext1 11  oltext2 9476  oltsize 1  checksum2 0
free 16374721  ifree 0
efree 1 2 1 1 1 1 2 2 2 0 0 1 1 0 1 1 1 2 1 2 2 2 2 0 0 0 0 0 0 0 0 0
Figure 14 - How to check for column counts/volume (Veritas Server Side Striping)
root@galaxy:/usr
> vxstat -g diskgroup -i 3 -vs u04a_vol
                  OPERATIONS        BLOCKS          AVG TIME(ms)
TYP NAME          READ   WRITE      READ   WRITE    READ  WRITE
Thu Dec 07 20:24:58 2006
vol u04a_vol 3109182162001330650.24.5
sd diskgroup01-01 302410615946222960.21.8
sd diskgroup02-01 7913242205940.410.0
sd diskgroup03-01 0310225910.04.2
sd diskgroup04-01 0110225280.011.8
sd diskgroup05-01 0110225280.010.9
sd diskgroup06-01 61112225281.710.9
(a + b) / n = i,  ((a + b) / n) + 2 = C,  S / L = s
a = total desired read partition pipelines
b = total desired write partition pipelines
n = logical nodes (O/S partitions or other virtual/physical machine separation)
i = total number of inodes required per logical node
L = total number of LUNs required to build the Informatica shared file system
s = size of each LUN, presented via the same LUN value to each node in the cluster
S = total size requirement of the Informatica shared file system
C = total cores per logical node (NOTE: C must be greater than or equal to the core count derived by volume metric analysis in the Informatica PowerCenter Sizing Tool)
i / L should equal a whole integer (1, 2, 3, 4, etc.)
The HACOE /u01/app/infa_shared/presales CFS mount is made of 20 50GB RAID 10 LUNs with RAID 0 host-side stripes 20 ways (1 for each 50GB LUN). As long as the HACOE's needs do not exceed 10 PowerCenter pipeline partitions per node, we should continue to see a normal (linearly scaling) performance curve until the 10th partition per node is exceeded, unless some other limiting factor is preventing the scaling from happening (for example, SAN throughput, HBA queue depth, etc.). The rationale for staying under 10 is that we have one mount point for read and write RAID 10 parity striping on the SAN. 10 read + 10 write = 20 total, and we have 20 columns of I/O per node, so we should be good as long as we don't have some other limiting factor intruding into the equation.
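The formulas and the HACOE figures above can be checked with a short computation (the inputs are the document's example values: 20 x 50GB LUNs, 10 read plus 10 write pipelines, treated here as one logical node):

```shell
# Worked example of the sizing formulas: (a+b)/n = i, i+2 = C, S/L = s.
a=10; b=10          # desired read / write partition pipelines
n=1                 # logical nodes
L=20                # LUNs backing the shared file system
S=$((20 * 50))      # total shared file system size in GB

i=$(( (a + b) / n ))   # I/O pipelines (inodes) required per logical node
C=$(( i + 2 ))         # minimum cores per logical node
s=$(( S / L ))         # size of each LUN in GB

echo "i=$i C=$C s=${s}GB"          # i=20 C=22 s=50GB
[ $(( i % L )) -eq 0 ] && echo "i/L is a whole integer"
```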
What does this agent do?
Via constant clustered monitoring, it increases the availability of the PowerCenter Service Manager, which holds responsibility for managing all subordinate PowerCenter processes (repository services, web services hub, integration services, reporting services, etc.). The agent increases the availability of Service Manager in the sense that if, for any reason, Service Manager should go down on any given node, the VCS agent will automatically continue attempting to restart it for as long as you would like. PowerCenter will have already failed over Master Gateway responsibility from the Service Manager node that died to the next available gateway host in the same domain, but Informatica, by itself, does not attempt to restart Service Manager on a failed node automatically. Thus, VCS + PowerCenter yields the maximum possible availability of the most nodes in a domain.
The agent allows ordinal priority to be established in a VCS cluster so that services may be shut down and started in the proper order without manual intervention. This is extremely helpful during data center outages and similar events where hosts in a PowerCenter domain are completely down. As soon as the hosts are powered on, the entire domain can be automatically recovered, so long as the database was made HA by similar clustering capabilities.
What does this agent not do?
It does not overlap, in any way, with the HA functionality natively provided in PowerCenter.
It does not start, restart, or monitor anything but the PowerCenter Service Manager.
It does not fail over anything related to PowerCenter; that is left entirely to Informatica.
It does not recover any jobs, sessions, or services.
Below are some examples of how this VCS agent can be configured, based on how it is currently configured in Informatica's HACOE, which uses Veritas Storage Foundation Enterprise Cluster File System HA (the HA tag includes the VCS package) version 5.1 SP1.
Figure 15 - Dependency Tree for the HACOE VCS Cluster
Figure 16 - Properties for the PowerCenter Service Group in the HACOE VCS Cluster
Figure 17 - Properties for the PowerCenterSvcMgr Agent Type in the HACOE VCS Cluster
Figure 18 - Multiple Domains Managed within the Same VCS Service Group in the HACOE
Figure 19 - PowerCenter Service Manager Intentionally Killed in the HACOE on pslxhacoe01
Figure 20 - VCS Agent PowerCenterSvcMgr Monitors and Detects the Failure Event on pslxhacoe01
NOTE: Informatica PowerCenter HA has already caused the master gateway election to occur, recovering all master gateway responsibilities over to pslxhacoe02 since pslxhacoe01 is not responding. However, PowerCenter does not attempt to recover Service Manager on pslxhacoe01; that functionality has never existed in PowerCenter before. Now it is possible to achieve this recovery via VCS, as illustrated below.
Figure 21 - VCS Recovery of Service Manager on pslxhacoe01
NOTE: Notice that Service Manager recovered, and then PowerCenter subsequently recovered the subordinate services (in this domain, just the integration service on a grid).
AVAILABILITY NOTE: This VCS agent has been tested and is now available from Symantec. It will be available for VxSFCFSHA 4.1 MP4 up through the most current versions of VxSFCFSHA on Linux/Unix platforms. Contact Symantec for more details.
Table 4 - Heartbeat Network Requirements

Attribute: Bandwidth
Best Practice: Gb Full Duplex; Infiniband for Oracle RAC.
Minimum Requirement: Gb half duplex.
Not Recommended: 100Mb, 10Mb.

Attribute: Network
Best Practice: Private.
Minimum Requirement: Private.
Not Recommended: Public.

Attribute: Connection
Best Practice: 1500.
Not Recommended: <1500.

Attribute: Subnet
Attribute: Paths
mode.
TCP/IP
link-lowpri

Attribute: eeprom parameter local-mac-address
Best Practice: TRUE
Minimum Requirement: TRUE
Not Recommended: FALSE
Comments: When connecting multiple interfaces from the same host to the same switch, assignment of the same MAC address will occur if not set to TRUE. This applies to Solaris. Example stanza: PrivNIC (see Figure 22).
Figure 22 - PrivNIC Stanza Example, Veritas Storage Foundation Suite for Oracle RAC
From the HACOE environment running 11gR2 Oracle RAC with VxSFRAC 5.1 P3:
PrivNIC ora_priv (
    Critical = 1
    Device = { eth10 = 0, eth11 = 1, eth12 = 2, eth13 = 3 }
    Address@pslxhacoe03 = "10.1.45.136"
    Address@pslxhacoe04 = "10.1.45.137"
    NetMask = "255.255.255.240"
)
NOTE: pslxhacoe03 refers to host 0 in RAC; pslxhacoe04 refers to host 1. Notice that each host has an active CRS VIP that floats between each active public interface. CRS traffic rides on the private interconnect built on the private heartbeat network, but the CRS VIP provides a vital link between the public interfaces and the heartbeat network. It is the doorway between the public and the private, so Oracle knows where the block requests came from and can properly manage them for the appropriate requesting process/thread.
Attribute: Bandwidth
Best Practice: Gb Full Duplex (1000FDX).
Not Recommended: 100Mb, 10Mb.
Comments: Be careful using autonegotiation. It doesn't always negotiate at the rate you might think it should and may have to be forced.

Attribute: Connection
Best Practice: 1500.
Not Recommended: <1500.
Comments: If you're attempting to active/active team across a single VLAN spanning two switches, you can have MAC flapping. Be cautious about this.

Attribute: Subnet
Best Practice: Same subnet.

Attribute: Paths
Best Practice: 2 or more.
Not Recommended: Single path.

Attribute: IP Binding / Layer 3 Binding
Best Practice: IP Binding Protocol.
Not Recommended: IP binds in different groups, or no IP binding in use.
NOTE: There are many definitions for quorum depending on context, but this one from Wikipedia holds relevance: "A quorum in a legislative body is normally a majority of the entire membership of the body."
Clustered Storage, on the other hand, is where storage is clustered at the storage management/storage intelligence layer. There is a white paper on this topic (published by Isilon Systems) that discusses some of the details around clustered storage, which can be found at http://www.isilon.com/pdfs/white_papers/Clustered_Storage_Revolution.pdf.
This white paper is written from a NAS point of view and is fairly detailed, but it does have some gaps. For example, it doesn't mention clustered file systems as a possible shared storage solution, though it does mention NAS, DAS, and SAN.
In fact, when we discuss HA NAS throughout this document, we are discussing a type of clustered storage: what Isilon Systems describes as "2-way simple clustering." Other clustered storage techniques discussed in their white paper include Namespace Aggregation and Clustered Storage with Distributed File Systems (DFS). At this time, 2-way simple clustering is the more common implementation. However, research would need to be done to ensure Namespace Aggregation and Clustered Storage with Distributed File Systems (DFS) were supportable shared file systems for PowerCenter Enterprise Grid/HA. If that research concluded that such a solution is not currently supportable, and there was client interest in a production-level deployment of this category of clustered storage, then a POC and/or client-funded POC may be required to attain a level of support for that given solution.
NFS provides weak metadata and data cache coherencies between the client nodes.
Metadata, which in this case refers to the file name, size, modification time, etc., is allowed to go out of sync between client nodes for short periods. While NFS implementations provide options to force full synchronization, they are typically not used because of the performance penalty they impose.
This is not to suggest that the file system itself does not maintain its integrity at all times. It does. The only problem is that the client nodes running the application do not have a consistent view of the file system at all times.
This lack of strict cache coherency goes unnoticed in the normal case. Simple applications do not expect different client nodes to write to the same files and directories concurrently.
It causes problems when running applications that expect a consistent view of the file system on all client nodes at all times. This becomes very apparent when one attempts to scale out an application that spawns multiple processes reading and writing to the same set of files.
Making these applications work on NFS requires either making the application work around the weak cache coherency, or disabling the various caches. The former requires application changes; the latter imposes a significant performance penalty.
Sean Ma, Informatica Sr. Product Manager for HA and Grid, reported the following regarding NFS:
NFSv2 has several limitations in terms of locking, reliability, and inability to access 64-bit file offsets. Also, the industry has mostly migrated to NFSv3.
NFSv3 has improved file locking support, but remains a stateless protocol dependent on a separate lock manager; this can cause issues with recovery scenarios. The primary concern is the single point of failure in the NFS server. Based on discussions with NetApp and EMC, this is resolved in most NAS appliances by integrating the NFS server support into their proprietary file system. An open concern still exists for standalone NFS server processes. Currently supported with NetApp WAFL NAS and EMC Celerra NAS with some caveats; please see the Supported Shared File Systems section.
NFSv4's new design seems to have resolved the lingering issues with NFSv3's locking and recovery. However, industry adoption remains low even though OSs support it with the latest kernel patches. It is not yet supported by Informatica.
Critical: maintaining a write lock on the master pmserver lck file used to arbitrate split-brain scenarios. This requires the NFS server to support lock recovery in the case of failover.
Lustre
Lustre has gained some recent traction due to its extremely low cost and reputation for solid performance, being used in many of the world's supercomputer implementations. In theory it can cluster storage presented from SAN and NAS storage subsystems. There are a few important facts to note about Lustre before making any decision to use it with PowerCenter. Fact 1: it is not currently a certified shared file system for use with PowerCenter. Fact 2: it is not a true clustered file system, as it provides no self-contained heartbeat or HA capabilities. Fact 3: there are few to no client implementations of PowerCenter Enterprise Grid/HA on Lustre, so a frame of reference for performance, availability, and stability may not be readily available at this time.
Figure 23 - Reference Architecture: The Application Stack as it relates to PowerCenter
With so many hardware and software products in the application stack, both the probability of performance issues and the difficulty of driving to root cause increase. The granular performance and stability tuning tips provided in this pre-install checklist drill into the Cluster Volume Manager and Cluster File System layers, among the many other layers illustrated here in the visual representation of the application stack. Your system architects, DBAs, and SysAdmins will want to review this checklist, ideally before acquiring or building the intended PowerCenter environment.
State Management
The PowerCenter 8.x.x HA Option is designed to tolerate node failure. It utilizes a system of master and backup gateways to ensure that the domain continues to function if a node becomes unavailable.
With the HA Option, the PowerCenter administrator designates multiple gateway nodes. One gateway serves as the master gateway, and the others are backup gateways. PowerCenter stores this information in the domain configuration database.
The Service Manager running on the master gateway node performs domain functions such as user authentication, service address resolution, request authorization, license registration and verification, and event logging. These important functions must be protected from master gateway node failure. To make sure the master gateway node is up and running, the Service Manager on the master gateway node periodically updates the domain configuration database with timestamp information. If the timestamp does not change, Service Manager processes running on the backup gateways elect a new master using an intelligent, distributed election algorithm.
The master gateway election algorithm uses the domain configuration database as the arbitrator in determining which node becomes the master gateway. The master gateway periodically (every 8 seconds, by default) updates the database with its latest timestamp information. When a new gateway node is started, it checks the timestamp information in the domain configuration database and tries to contact the master gateway node so that it can join the domain. If the new node cannot contact the master gateway, it waits for a predefined interval of time and then rechecks the timestamp value for the master gateway node. If the timestamp value has not changed (indicating that the master gateway has failed), the new node becomes the master gateway. If the timestamp value has changed (indicating that the master gateway is up and running), the new node retries the connection until it can connect to the master gateway and join the domain, or until the current master gateway
fails, and the new node becomes the master gateway. If the master gateway fails and multiple gateway nodes are started at the same time, the first node to obtain a row lock in the database becomes the new master gateway. This intelligent election process ensures that a master gateway is always available. Domain functions can continue even if the original master gateway node fails.
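The timestamp recheck at the heart of the election can be pictured as a simple comparison. This is a toy model, not Informatica's implementation; the two arguments stand in for successive reads of the master's timestamp from the domain configuration database, taken one recheck interval apart:

```shell
# Decide a candidate gateway's next move from two timestamp reads.
election_decision() {
  # usage: election_decision FIRST_TS SECOND_TS -> "takeover" or "standby"
  if [ "$1" = "$2" ]; then
    echo "takeover"   # timestamp frozen: master presumed failed
  else
    echo "standby"    # master still updating: retry joining the domain
  fi
}

election_decision 100 100   # prints "takeover"
election_decision 100 108   # prints "standby"
```

The row lock mentioned above is what serializes this decision when several candidates reach "takeover" at the same moment.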
The PowerCenter 8 High Availability Option also guards against backup gateway and worker node failure. Every node in the domain sends a heartbeat to the master gateway at a particular interval. The heartbeat includes a list of services running on the node. If a node fails to send a heartbeat, the master gateway marks the node unavailable and reassigns its services to another node. This process ensures that PowerCenter services continue to run despite node failure.
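The master's heartbeat bookkeeping can likewise be sketched as a timeout scan; node names and times below are illustrative only, not captured from a real domain:

```shell
# Print every node whose last heartbeat is older than MAX_AGE seconds,
# i.e. the nodes the master would mark unavailable and strip of services.
unavailable_nodes() {
  # usage: unavailable_nodes NOW MAX_AGE node:last_ts [node:last_ts ...]
  now=$1; max_age=$2; shift 2
  for pair in "$@"; do
    node=${pair%%:*}      # node name before the colon
    ts=${pair##*:}        # last heartbeat time after the colon
    if [ $(( now - ts )) -gt "$max_age" ]; then
      echo "$node"
    fi
  done
}

unavailable_nodes 16 8 pslxhacoe01:2 pslxhacoe02:14   # prints "pslxhacoe01"
```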
Domain Logs: the centralized location for PowerCenter domain logs. Remember that in a grid there is one domain with one log service in operation at any given instant in time. In the event of an election (Service Manager recovery from one master gateway to another gateway in the grid, turning it into the master), the domain logs must be in a centralized (highly available) location, just like the state information under $PMRootDir.
infa_home: the location for PowerCenter binaries, typically a separate installation per host, and typically also on the clustered file system, though not centralized, for HA purposes.**
infa_shared: the location for parameter and state files for the entire Enterprise Grid; it must be a central location on a clustered file system, thus the requirement for simultaneous read/write.
**NOTE: While it is certainly an option to install Informatica binaries on local (or non-high-availability) drives, it is discouraged for installs with the HA option, since the binaries themselves would not be highly available unless stored on a highly available file system under automatic cluster control.
Figure 24 - Example file system layouts for PowerCenter Enterprise Grid/HA
Binaries
Host1 infa_home: /u01/app/infa_bin/host1/Informatica/PowerCenter
Host2 infa_home: /u01/app/infa_bin/host2/Informatica/PowerCenter
(where /u01/app/infa_bin is a clustered file system on a different disk group and/or volume than that of the CFS for infa_shared)
Host1 TNS_ADMIN: /u02/app/oracle/product/db/network/admin
Host2 TNS_ADMIN: /u02/app/oracle/product/db/network/admin
(where /u02/app/oracle is a clustered file system on a different disk group and/or volume than that of the CFS for infa_shared and infa_bin). The HACOE installed the full DB 11.2.0.1 product here instead of just the client, so that a single-instance DB could be created for an RMAN repository. [dhacoe]
NOTE: By default, Informatica uses version information when creating the home binary directory. It is recommended that you remove this version information, as it will become inapplicable as you move through service packs and upgrades. If you don't wish to do this, you can leave the versioning in the naming convention in place, and when you choose to perform your upgrade you can have the binaries located in a new location with the proper versioning in the name. However, if you have scripts in use with the previous version in the name, you'll have to update all those scripts unless they are parameter-driven. Just something to think about.
Infa_shared
Parameter and State Files (SrcFiles, TgtFiles, Cache, Storage, etc.)
In this example, test, prod, and dev are all on the same two hosts in the same CFS cluster.
Host1 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/prod/infa_shared
Host2 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/prod/infa_shared
Host1 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/test/infa_shared/${username}
Host2 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/test/infa_shared/${username}
Host1 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/dev/infa_shared/${username}
Host2 infa_shared (a.k.a. $PMRootDir): /u01/InformaticaCommon/dev/infa_shared/${username}
(where /u01/InformaticaCommon/prod, /test, or /dev is a clustered file system; assumes O/S profiles in dev and test)
Parameter and State Files (SrcFiles, TgtFiles, Cache, Storage, etc.)
In this more typical example, test, prod, and dev are all on separate hosts in separate purpose-built clusters.
Host1 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared
Host2 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared
Host3 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared/${username}
Host4 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared/${username}
Host5 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared/${username}
Host6 infa_shared (a.k.a. $PMRootDir): /u01/app/infa_shared/${username}
(assumes O/S profiles in dev and test)
Taking file system resource parallelism into account when creating your $PMRootDir
Let's say your PMRootDir is on a mount /u01/app/cfs1 and infa_shared2 is on a mount /u01/app/cfs2:
/u01/app/cfs1 is one file system, host-side striped 8 ways with 8 LUNs, and only has the following directory structure:
$PMRootDir/SrcFiles
$PMRootDir/BadFiles
$PMRootDir/Staging
$PMRootDir/WorkflowLogs
$PMRootDir/log
$PMRootDir/Temp
$PMRootDir/Backup
$PMRootDir/BWParam
$PMRootDir/LkpFiles
$PMRootDir/Cache >>>>> symbolic link to $infa_shared2/Cache
$PMRootDir/TgtFiles >>>>> symbolic link to $infa_shared2/TgtFiles
$PMRootDir/SessLogs >>>>> symbolic link to $infa_shared2/SessLogs
Then make another completely separate file system not sharing any LUNs with the other one. Call it whatever you like:
/u01/app/cfs2 is another file system, host-side striped 8 ways with 8 LUNs, and only has the following directory structure:
$infa_shared2/Cache
$infa_shared2/TgtFiles
$infa_shared2/SessLogs
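Scripted, the two-mount layout above might look like the sketch below. It uses a temporary base directory so it can be run safely anywhere; in the real layout the two bases would be the separate mounts /u01/app/cfs1 and /u01/app/cfs2, sharing no LUNs:

```shell
BASE=$(mktemp -d)
PMRootDir="$BASE/cfs1"       # stands in for /u01/app/cfs1
INFA_SHARED2="$BASE/cfs2"    # stands in for /u01/app/cfs2

# Directories that stay on the first mount.
for d in SrcFiles BadFiles Staging WorkflowLogs log Temp Backup BWParam LkpFiles; do
  mkdir -p "$PMRootDir/$d"
done

# Heavy-I/O directories live on the second mount, reached via symlinks.
for d in Cache TgtFiles SessLogs; do
  mkdir -p "$INFA_SHARED2/$d"
  ln -sfn "$INFA_SHARED2/$d" "$PMRootDir/$d"
done
```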
If you're using O/S profiles, then it might look like this:
Your baseline PMRootDir could be on a mount /u01/app/cfs1 and infa_shared2 is on a mount /u01/app/cfs2:
/u01/app/cfs1 is one file system, host-side striped 8 ways with 8 LUNs, and only has the following directory structure:
You perform a mkdir under the baseline PMRootDir for each username_n like this:
$PMRootDir/username_n >>>>> O/S profiles ownership = username_n:username_n
Thus each user has a personal $PMRootDir = baseline $PMRootDir/username_n that only they can read/write to, with the exception of root. This prevents developers from blowing away other developers' work in a shared-services ICC deployment, especially if the userid:groupid is username_n:username_n.
Within each personal $PMRootDir location the structure would look like this:
$PMRootDir/SrcFiles
$PMRootDir/BadFiles
$PMRootDir/Staging
$PMRootDir/WorkflowLogs
$PMRootDir/log
$PMRootDir/Temp
$PMRootDir/Backup
$PMRootDir/BWParam
$PMRootDir/LkpFiles
$PMRootDir/Cache >>>>> symbolic link to $infa_shared2/username_n/Cache
$PMRootDir/TgtFiles >>>>> symbolic link to $infa_shared2/username_n/TgtFiles
$PMRootDir/SessLogs >>>>> symbolic link to $infa_shared2/username_n/SessLogs
Then make another completely separate file system not sharing any LUNs with the other one. In this example:
/u01/app/cfs2 is another file system, host-side striped 8 ways with 8 LUNs, and only has the following directory structure:
$infa_shared2/username_n
$infa_shared2/username_n/Cache
$infa_shared2/username_n/TgtFiles
$infa_shared2/username_n/SessLogs
Now, is this a lot of work? It can be if you don't script it, but the performance and ease of management after setup are definitely worth it. Otherwise, without this kind of setup with O/S profiles, you're guaranteed to be restoring files from tape or DDT when users inevitably come to you saying, "some other user in my group blew away my files."
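The per-user carve-out is easy to script. The helper below is a hypothetical sketch run against a temporary directory; in production it would run as root with the chown enabled, so each directory ends up owned username:username as in the HACOE listing shown later:

```shell
make_user_root() {
  # usage: make_user_root BASE USER -> prints the created personal PMRootDir
  dir="$1/$2"
  mkdir -p "$dir"
  chmod 750 "$dir"         # owner and group only; everyone else locked out
  # chown "$2:$2" "$dir"   # production step: needs root and an existing user
  echo "$dir"
}

d=$(make_user_root "$(mktemp -d)" cmcdonal)
echo "$d"
```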
Domain Logs
Host1: /u01/app/infa_bin/domain_logs
Host2: /u01/app/infa_bin/domain_logs
(where /u01/app/infa_bin/domain_logs is a clustered file system) Typically 100MB is sufficient for domain logs only. Use Informatica's CFS sizing model to determine how your specific requirements will drive sizing for the infa_bin and infa_shared (PMRootDir) CFS mount locations. You can get access to the Informatica CFS Sizing Model through your account team.
Benefits of O/S profiles: when running a shared ICC environment, you typically have PowerCenter developers/users constantly uploading, altering, deleting, etc., the SrcFiles, TgtFiles, Cache, and so on in the infa_shared directories described above. You could organize them by group/org and create a shared file system for each, but the problem with that is that users within the same group could still alter someone else's files within that group. Best practice would be for each individual user to have their own PMRootDir location for their own SrcFiles, TgtFiles, Cache, etc.
Below is a look at how we deployed O/S profiles in the Informatica HACOE:
pslxhacoe01.informatica.com:/u01/app/infa_shared/presales # ls -alrt
total 364
drwxrwxr-x  2 infasale presales    96 Feb 13 08:52 lost+found
drwxrwxr-x  2 infasale presales    96 Feb 13 17:34 Backup
drwxrwxr-x  2 infasale presales    96 Feb 13 17:34 BWParam
drwxrwxr-x  2 infasale presales    96 Feb 13 17:34 log
drwxrwxr-x  2 infasale presales    96 Feb 13 17:34 LkpFiles
drwxrwxr-x 15 infasale presales  8192 Feb 17 13:39 MetadataManager
drwxr-xr-x  7 root     root      4096 Mar 19 13:32 ..
drwxr-xr-x 14 lkang    lkang     8192 Apr 28 12:32 lkang
drwxr-xr-x 14 pkumar   pkumar    8192 Apr 28 12:32 pkumar
drwxr-xr-x 14 glynn    glynn     8192 Apr 28 12:33 glynn
drwxr-xr-x 14 mscott   mscott    8192 Apr 28 12:33 mscott
drwxr-xr-x 14 kworrell kworrell  8192 Apr 28 12:33 kworrell
drwxr-xr-x 14 ademaio  ademaio   8192 Apr 28 12:33 ademaio
drwxr-xr-x 14 rvellala rvellala  8192 Apr 28 12:33 rvellala
drwxr-xr-x 14 rambli   rambli    8192 Apr 28 12:34 rambli
drwxr-xr-x 14 sdorcey  sdorcey   8192 Apr 29 13:41 sdorcey
drwxr-xr-x 14 mparrott mparrott  8192 Apr 29 14:07 mparrott
drwxr-xr-x 14 glall    glall     8192 Apr 29 14:07 glall
drwxr-xr-x 14 lengelha lengelha  8192 Apr 29 14:24 lengelha
drwxr-xr-x 14 awoolfol awoolfol  8192 Apr 29 14:41 awoolfol
drwxr-xr-x 14 kjaslow  kjaslow   8192 Apr 29 14:44 kjaslow
drwxr-xr-x 14 cmcdonal cmcdonal  8192 Apr 29 16:23 cmcdonal   <<< notice the individual user:group settings
-rw-r--r--  1 root     root     32768 May  4 18:10 quotas.grp <<< notice the best-practice use of disk quotas
drwxrwxr-x  2 infasale presales  8192 May  8 14:04 TgtFiles
drwxrwxr-x  2 infasale presales  8192 May 30 05:21 BadFiles
drwxrwxr-x  2 infasale presales  8192 May 30 06:23 SessLogs
drwxrwxr-x  2 infasale presales  8192 May 30 06:26 Cache
drwxr-xr-x  2 hkaplan  presales    96 Jun 18 06:59 hkaplan
drwxr-xr-x  2 dnestor  presales    96 Jun 18 06:59 dnestor
drwxr-xr-x 14 jwaite   presales  8192 Jul  3 13:41 jwaite
drwxrwxr-x  2 infasale presales  8192 Jul  3 21:42 Temp
drwxrwxr-x  2 infasale presales  8192 Jul  9 13:46 WorkflowLogs
-rw-r--r--  1 root     root     36864 Jul 12 20:28 quotas
drwxrwxr-x  3 infasale presales 16384 Jul 18 09:51 SrcFiles
drwxr-xr-x 15 ndobbins ndobbins  8192 Jul 19 13:55 ndobbins
drwxrwxr-x  2 infasale presales 16384 Jul 23 14:12 Storage
pslxhacoe01.informatica.com:/u01/app/infa_shared/presales # df -k .
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/app_ess_dg/u01_vol_ess 41872384 19828408 22043976 47% /u01/app/infa_shared/presales
Notice how the CFS mount point is singular (/u01/app/infa_shared/presales), but how we created individual PMRootDir locations for each presales user. Also notice how we created the standard $PMRootDir directories in the singular CFS mount point location, just in case we wanted to run an integration service pointed there without the use of O/S profiles. (You'll have to do this for WorkflowLogs and Storage, as those are not saved at an individual level when the integration service impersonates the individual user.)
This document will not go into all the details covered in the Install Guide or Administration Guide, as it is a document to supplement those documents, but a short list of O/S profile best practices can be addressed:
1) Ensure the environment variables for the application user account that runs PowerCenter have a umask set to 000. No other setting should be attempted!
2) Obviously, any change in environment variables means a complete restart of the entire domain, so perform that task via a scheduled downtime window.
3) Create your Storage and WorkflowLogs directories, at a minimum, at the root of the CFS/NAS mount point intended for the root PMRootDir location.
4) Create your individual user directories owned by the individual user; it is also recommended that you create a group with the same name as that individual user (see the above example from the HACOE). It is not necessary for the application user account to be able to read from or write to this directory at all. The integration service will impersonate the user running the job at runtime via a binary called pmimpprocess located in $INFA_HOME/server/bin.
pslxhacoe01:/u01/app/infa_bin/PowerCenter/pslxhacoe01/server/bin # ls -alrt pmimp*
-r-sr-sr-x 1 root root 12600 Dec 12 2007 pmimpprocess
5) Configure pmimpprocess in the $INFA_HOME/server/bin location of every node's individual installation as per the PowerCenter Administration Guide. A copy of those instructions is pasted below:
6) Create your individual O/S profiles according to each user's individual PMRootDir location. See the example below:
7) Next, using the Repository Manager client tool, log in with a user that has administrative privileges and create/edit the folder for that user to now use O/S profiles. See the example below:
8) Finally, using the admin console, log in with a user that has administrative privileges and configure the integration service to use O/S profiles. See the example below:
9) Restart the integration service just modified. Done!
Here is an excerpt from Informatica's Tech Brief on High Availability in PowerCenter: "The PowerCenter 8 High Availability Option also guards against backup gateway and worker node failure. Every node in the domain sends a heartbeat to the master gateway at a particular interval. The heartbeat includes a list of services running on the node. If a node fails to send a heartbeat, the master gateway marks the node unavailable and reassigns its services to another node. This process ensures that PowerCenter services continue to run despite node failure."
For more information on this topic, please read the Informatica white papers "How to Achieve Greater Availability in Enterprise Data Integration Systems" and "How to Ensure Data Continuity Through Resilience, Failover, and Recovery: Under the Hood of the High Availability Option Now Available Through Informatica PowerCenter 8."
Database Tier
10g Oracle RAC Active/Active Shared Everything Technology
For Informatica PowerCenter Enterprise Grid with HA, or any other HA application connecting to a database, to be considered truly highly available, the database (and all connections to and from it) must also be highly available. Oracle RAC creates some unique environment requirements that are noted below; again, this information should be consumed in concert with official Oracle product documentation.
i. If Infa session partitioning will be used, TAF should be configured primary/available as opposed to primary/primary, as multiple active nodes can cause deadlocks with volume running through partitioned sessions.
ii. If your PowerCenter version is below 8.1.1 SP2, then 8.1.1 SP2 should be applied with the JDBC workaround to make a truly HA connect string for domain-related processes.
This example is for configuring (only in a NEW domain) the JDBC connection URL for an Oracle RAC instance:
sh infasetup.sh defineDomain -dn DOMAIN_TEST -ad admin -pd admin -nn NODE_TEST -na hostname:6999 -mi 6100 -ma 6200 -ld /Informatica/server/infa_shared/log -rf /Informatica/server/tomcat/bin/nodeoptions.xml -cs "jdbc:informatica:oracle://server1:1521;ServiceName=ORCL;AlternateServers=(server2:1521,server3:1521,server4:1521);LoadBalancing=true" -du user -dp passwd -dt ORACLE -f
NOTE: Do not attempt the above command in an existing domain, as it will override all existing domain information, which will then have to be recovered.
For existing domains, you can use the following command to update each gateway node with the appropriate TAF-aware JDBC connect string. First, back up the following files on all nodes in the domain: nodemeta.xml (in $INFA_HOME/server/config) and server.xml (in $INFA_HOME/server/tomcat/conf). Then run:

sh infasetup.sh updateGatewayNode -cs "jdbc:informatica:oracle://rachost1:1521;ServiceName=ORCL;AlternateServers=(rachost2:1521;ServiceName=ORCL);LoadBalancing=true" -du user -dp password -dt oracle -dn DOMAIN_TEST
The corresponding tnsnames.ora entry for Example 2 above is:

TNS_NAME =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = server2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = server3)(PORT = 1521))
      (LOAD_BALANCE = on))
    (CONNECT_DATA = (SERVICE_NAME = ORCL)))

where ORCL = the TAF service name.
iii. When used on top of the Veritas Storage Foundation Suite, RAC requires SF RAC and I/O Fencing to prevent split-brain events.
iv. Requires disk that is SCSI-3 PGR (Persistent Group Reservation) capable.
v. The Private Interconnect is a lot more than a simple heartbeat network: Oracle RAC and CRS turn the heartbeat network infrastructure into a high-speed memory-block-movement medium called the Private Interconnect, or Cache Fusion Network. The Private Interconnect should not be confused with CRS (Cluster Ready Services). CRS is an Oracle-proprietary clustering management application riding on top of the Private Interconnect, which in turn stands on top of the heartbeat network and Cache Fusion infrastructure.
NOTE: There is only one copy of each RAC database/instance in any given Oracle RAC environment. While unlikely, this is a single point of failure within the RAC architecture, evolving from the inherent design of RAC, whose primary mission is multi-node scalability.
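The TAF-aware connect-string syntax used in the examples above follows a fixed pattern, so it can be generated rather than hand-typed. Below is a minimal sketch (not an Informatica utility; the build_rac_url helper and its defaults are hypothetical) that assembles the same DataDirect-style URL from a host list:

```python
def build_rac_url(hosts, service="ORCL", port=1521, load_balancing=True):
    """Assemble a DataDirect-style Oracle RAC connect string.

    The first host is the primary; the rest go into AlternateServers,
    mirroring the infasetup -cs examples above.
    """
    parts = [f"jdbc:informatica:oracle://{hosts[0]}:{port}",
             f"ServiceName={service}"]
    if len(hosts) > 1:
        alternates = ",".join(f"{h}:{port}" for h in hosts[1:])
        parts.append(f"AlternateServers=({alternates})")
    if load_balancing:
        parts.append("LoadBalancing=true")
    return ";".join(parts)

url = build_rac_url(["server1", "server2", "server3", "server4"])
print(url)
```

For the four-node example above, this produces the same URL passed to defineDomain, which makes it easy to keep gateway-node connect strings consistent across a domain.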
NOTE: Failover clusters provide high-availability support for an entire Microsoft SQL Server instance, in contrast to database mirroring, which provides high-availability support for a single database. Database mirroring works between failover clusters and also between a failover cluster and a non-clustered host. Typically, when mirroring is used with clustering, the principal server and mirror server both reside on clusters, with the principal server running on the failover clustered instance of one cluster and the mirror server running on the failover clustered instance of a different cluster. You can, however, establish a mirroring session in which one partner resides on the failover clustered instance of a cluster and the other partner resides on a separate, non-clustered computer.
If a cluster failover makes a principal server temporarily unavailable, client connections are disconnected from the database. After the cluster failover completes, clients can reconnect to the principal server on the same cluster, on a different cluster, or on a non-clustered computer, depending on the operating mode. When deciding how to configure database mirroring in a clustered environment, the operating mode you use for mirroring is significant.
If the node running the current principal server fails, automatic failover of the database begins within a few seconds, while the cluster is still failing over to another node. The database mirroring session fails over to the mirror server on the other cluster or non-clustered computer, and the former mirror server becomes the principal server. The new principal server rolls forward its copy of the database as quickly as possible and brings it online as the principal database. After the cluster failover completes, which typically takes several minutes, the failover clustered instance that was formerly the principal server becomes the mirror server.
DB2 ESE Non-Partitioned Database Environments:
The use of ACR in non-partitioned database environments can also lead to data-integrity issues. Assuming disk failover technology, such as IBM AIX High Availability Cluster Multiprocessing (HACMP), Microsoft Cluster Service (MSCS), or HP's ServiceGuard, is not in use, the standby database will not have the database transaction logs that existed on the primary database when it failed. Therefore, the recovery of in-doubt transactions in scenarios as described in the High Availability Disaster Recovery NEARSYNC section above can result in data-integrity problems.
NOTE: DB2 9 High Availability supports Idle Standby and Mutual Takeover, not Active/Active.

Idle Standby: In this configuration, one system is used to run a DB2 instance, and the second system is idle, or in standby mode, ready to take over the instance if there is an operating system or hardware failure involving the first system. Overall system performance is not impacted, because the standby system is idle until needed.
Mutual Takeover: In this configuration, each system is the designated backup for another system. Overall system performance can be impacted, because the backup system must do extra work following a failover: it must do its own work plus the work that was being done by the failed system.

Failover strategies can be used to fail over an instance, a database partition, or multiple logical nodes.
Storage Tier
High-availability storage solutions are prevalent in today's computing world. In fact, critical system faults are rarely driven (anymore) to root cause on a SAN or NAS frame. Certainly, disks/spindles fail with frequent regularity, but so long as the frame is hardened against such common failures, a catastrophic system-wide failure is unlikely (disaster exceptions noted). However, it is a very important distinction to understand that an HA storage solution does not equal an HA file system for Informatica (or whatever your presentation layer happens to be). Remember, in an HA environment, the file system still has to be gracefully and programmatically auto-mounted, and that graceful auto-mount process has to be highly available itself (as in a server clustering solution: VCS, VxCFS). Again, it is critical that this graceful and highly available auto-mount is designed to be interoperable with the clustered file system of choice in your environment.

In summary, an HA storage solution is a must, but it is only the starting point for the broader foundation of the presentation layer of the application stack. The shared file system presented to the presentation layer must also be highly available, mounted automatically, and hardened against host- and stack-related outage root causes. Anything less degrades the level of availability your system and users will experience.
For storage, Informatica highly recommends Tier 1 SAN in production. What is Tier 1 SAN? Tier 1, if you look to web definitions, mostly refers to the data sitting on the storage, meaning Tier 1 is mission-critical data, and most of our clients would agree the DI/DQ data touched by the Informatica Platform is indeed mission critical for their enterprise. Tier 1 categorization may vary from client to client, but so we can speak the same language in this deck, Tier 1 means five 9s of reliability single-site, 15K RPM SCSI-3 PGR drives with at least 4 physical paths to the disk, and at least two enterprise-class Fibre Channel switches zoned appropriately for high availability and throughput. From what vendor, we really don't care. Most of our clients use EMC DMX 3/4 or higher for their Tier 1 production DI/DQ Informatica Platform deployments, but HP and IBM Shark are very common as well. Informatica uses HP internally. NOTE: Solid-state drives are fine too, as long as they're in a Tier 1 frame that supports SCSI-3 PGR protocols. SATA drives are not recommended!
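To put the "five 9s" target above in perspective, an availability percentage translates directly into a yearly downtime budget. A quick back-of-the-envelope sketch (plain arithmetic, nothing vendor-specific):

```python
def downtime_minutes_per_year(availability):
    """Yearly downtime budget implied by an availability target."""
    minutes_per_year = 365.25 * 24 * 60   # average year, in minutes
    return (1.0 - availability) * minutes_per_year

for label, target in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(target):.2f} minutes/year")
```

Five 9s leaves roughly 5.26 minutes of downtime per year, which is why the drive, path, and switch redundancy listed above all matter at this tier.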
HBA adapters should be multipathed (EMC PowerPath and Veritas DMP are examples) and rated at least 4 Gb per path. Most customers use QLogic, but Emulex is almost as popular. Whatever HBA best supports your configuration (O/S, SAN, switches, CFS SW, etc.) is what you should use. Informatica has no preference, but we use QLogic mostly internally.
SAN Frame RAID Level: This can get into a philosophical and even a religious debate with some IT personnel, but what is most commonly seen is RAID 5 or RAID S. What we recommend is RAID 10. Why we recommend RAID 10 can best be described at http://www.acnc.com/04_01_10.html. Informatica is a very high-performance application, and if you want high performance on read as well as high performance on write as well as high availability, you simply can't beat RAID 10. If you have a better architecture suggestion, let us know, but RAID 10 is what we use in our High Availability Center of Excellence and several other high-performance environments internally.
Table 6: Storage Infrastructure Requirements
(columns: Storage Attribute / Best Practice / Minimum Requirement / Not Recommended / Comments)

RAID Configuration (Client tier and App/ETL tier)
- Best Practice: RAID 10 (for SSD RAID groups: RAID 5 or RAID 1)
- Minimum Requirement: RAID 1, RAID 3, RAID 5, or RAID 1 for SSD RAID groups
- Not Recommended: RAID 4
- Comments: http://www.acnc.com/04_01_00.html. NOTE: With SSDs, RAID 10 may not be a supported RAID configuration. Per EMC DMX-4 specs, RAID 1, 5, and 6 are the only supported RAID groups for homogeneous sets of SSDs.

Drive Type/Speed
- Best Practice: 15,000 RPM HDD or SSD
- Minimum Requirement: 10,000 RPM
- Not Recommended: <10,000 RPM

Connection
- Best Practice: Fibre Channel SAN (4 Gb HBAs recommended)

Device Naming
- Best Practice: Must appear as the same device on each node in the cluster.
- Minimum Requirement: Must appear as the same device on each node in the cluster.
- Not Recommended: Incongruent device names/numbers on each host.
CIFS with NAS is not supported for Enterprise Grid either, but if Windows is your only platform available, CIFS with NAS is your best option. Thus, it is highly recommended you use UNIX/Linux for your Enterprise Grid/HA deployments of PowerCenter for the time being.
Performance Tuning
Some rules of thumb are better than others; the rule of thumb below is a very good indicator of where to start when diagnosing performance issues and understanding the impact each layer of an environment can make.

Impact to Environment Performance: where do I look first, and what do I change?
- Infrastructure: 70%
  - Storage: 35%
    - Spindle Speed: 30%
    - Frame Parallelism: 30%
    - RAID Configuration: 20%
    - Dynamic Multi-Pathing HBA adapter connectivity to SAN: 20% (2 Gb/card min, 4 Gb preferred)
  - Informatica File System(s): 25%
    - File System Block Size: 40%
    - Layout (Server-Side Striped vs. Concatenated): 40%
    - File System Format (UFS, GPFS, VxFS, JFS, QFS): 20%
  - Database (Internal): 20%
    - Database File Systems (Data and Index): 35%
      - File System Block Size: 40%
      - Layout (Concatenated or Striped): 40%
      - File System Format (UFS, GPFS, VxFS, JFS, QFS): 20%
    - Memory allocation to instance: 20%
    - Database Partitioning: 20%
    - Parallelism (shared servers, parallel servers, etc.): 15%
    - Log Management: 10%
  - Network: 20%
    - Auto-negotiation, forcing 1000FDX and 10000FDX (where applicable): 40%
    - MTU frame size: 15% (has to be enabled at the switch and the host)
    - Network binding/load balancing (IPMP, IP Bind, etc.): 15%
    - Latency parameters (tcp/udp lowat, hiwat, etc.): 10%
    - Dedicated switches: 20%
- Application: 30%
  - Informatica Session Partitioning: 35%
  - SQL Tuning (correct usage of optimizers): 25%
  - Gathering DB statistics: 20%
  - Application parameters (process management, port management, run queue, etc.): 20%
NOTE: The % numbers above do not reflect expected % improvement, but a framework of the relative impact to the overall performance of the environment by category.
Even today, 32-bit-width motherboards are common. Informatica's High Availability Center of Excellence, built new in Dec 2007, has 32-bit 1000 MHz motherboards, which equates to 4 GB/sec: width x clock cycle = throughput rate. So that means 4 GB/s TOTAL max theoretical throughput at any of the illustrated choke points (regardless of each individual card's featured throughput). Considering each choke point is a shared pipe and typically unidirectional, you're looking at real throughput anywhere from 20% to 60% of the theoretical rate.
Figure 25: Motherboards: A High-Level Look at How They Work
Sometimes HW vendors don't really want to openly talk about or disclose bus speeds outside of the front-side bus ratings. Even on commodity servers as recent as Dec 2009, PCIe bus rates were limited to 100 MHz. Consider what that does to the realistic throughput value discussed previously: you get a value only 20-60% of 400 MB/sec. And that's without digging deeper into the lane discussion. Notice below how the lanes on the PCIe bus assigned to the PCIe connector are only half the lanes on the connector! These numbers match the evidence seen in the HACOE prior to the HW refresh, where HP c7000 chassis with G7 blades were installed in place of DL585 G2s. See the HACOE Current Configuration section for more details.
Figure 26: Motherboards: A Closer Look at I/O Chokepoints
PCI (Peripheral Component Interconnect) has evolved from serial to parallel to point-to-point serial (with multi-lane architecture). What can you expect from PCI performance?
Type      Bandwidth   Width (bits)
PCI       133 MB/s    32
Parallel might sound faster, but parallel architectures have their own problems: packet reassembly, electromagnetic interference, etc. The paradigm has been moving back towards high-clock-cycle, highly optimized serial connections. With PCIe, point-to-point serial connections are established in switch-like operations that work more like a network than a bus.

You'll hear me talk about "lanes" from time to time when referring to PCIe. A lane is a pair of point-to-point serial connections (one in each direction).
Going back to the previous slide (under the magnifying glass), there seems to be information contradicting this slide's information. If PCIe x16 max theoretical throughput is 16 GB/sec (1 GB/lane), how can we be limited to 400 MB/sec? Regardless of lanes and splitting your load over multiple cards plugged into multiple PCIe connector slots, you can't get past the fact that the saturation level is 100 MHz x 32 bits per clock cycle = 400 MB/sec. Here's the mathematics:

(convert to Hz, multiply by bits, convert to bytes, convert to MBytes)
100 MHz = 100 x 1,000,000 Hz; x 32 bits = 3,200,000,000 bits/sec; / 8 = 400,000,000 bytes/sec; / 1,000,000 = 400 MBytes/sec
Now that we've done this math conversion and we know the real throughput potential of the I/O of most single-physical-host modern servers (midrange and commodity), we can compare that to the real potential throughput of the network and see where the choke points could be hiding...
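The width-times-clock-cycle arithmetic above applies to any bus choke point, so it is worth capturing as a tiny helper. A minimal sketch (the function name and the decimal-megabyte convention are ours, matching the math in the text):

```python
def bus_throughput_mb_s(width_bits, clock_mhz):
    """Max theoretical bus throughput (width x clock cycle), in decimal MB/s."""
    bits_per_second = width_bits * clock_mhz * 1_000_000
    return bits_per_second / 8 / 1_000_000

# The two choke points discussed in the text:
pcie_bus = bus_throughput_mb_s(32, 100)      # 400.0 MB/s PCIe bus saturation
motherboard = bus_throughput_mb_s(32, 1000)  # 4000.0 MB/s, i.e. the 4 GB/s figure

# Real throughput on these shared pipes is typically 20-60% of theoretical,
# roughly 80 to 240 MB/s for the 100 MHz PCIe case.
real_low, real_high = 0.2 * pcie_bus, 0.6 * pcie_bus
```

Plugging in any other bus width and clock rate gives the same kind of ceiling, which is useful when sizing hosts before the network math below.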
Figure 27: Real Throughput Capabilities Compared, Network to I/O
Gb Ethernet real throughput:
  1 Gbit/s  = 1024 Mbit/s  (diminishing rate 0.5)
  10 Gbit/s = 10240 Mbit/s (diminishing rate 0.5)
Network speed descriptions are in bits, not bytes, so they are automatically 8 times slower than you would intuitively think. For example, Gigabit Ethernet would produce the following results:

- In a perfect environment, you can move 81.25 MBytes/sec, assuming 35% packet overhead and an absolutely pure negotiated Gb connection on both ends and at each switch port. So, this particular host on this particular perfect connection could only move a maximum of 4.875 Gigabytes/min, or 292 GB/hr. We have several clients that have data volume requirements well in excess of the example I just provided.
- More typical speeds are an additional 50% lower than the bullet above, due to switch load, switch cross-link, and switch uplink speeds. Keep in mind that bus and card speeds also can be the limiting speed in this chain. For example, it's common to see actual network throughput (at the physical card, due to bus speeds) limited to 50-80 MB/sec, with 50 being the more common value. Also consider multiple cards drawing on the bus's resources, drawing down the network throughput capabilities even further. Factor in the 50% diminished throughput due to switch load and trunking and you see throughput rates of 2.4 GB/min, or 146 GB/hr. Factor in potential bus limitations and you could see throughput anywhere from 20 or 30 GB/hr to 170 GB/hr, as bus bandwidth contention acts on the total theoretical network capabilities per host.
- The Informatica HACOE uses layer-2 Gb MM SX fiber channel bonding from two separate networks, producing approximately a 6 Gb uplink to the HACOE environment as a whole. This speed is then switch-broken among all the public connections. The heartbeat connections use switch ports, but use none of the available 6 Gb uplink bandwidth.
Obviously, Fast Ethernet would be 10 times slower than the examples illustrated above. This very clearly proves why full-duplex Gigabit Ethernet is the absolute minimum recommended network sizing for all public connections used for PowerCenter, whether HA/Grid is implemented or not.
I've created a little sample sheet to illustrate some of the network capacity planning points discussed above. You can leverage this information to improve or validate your design.
Assumptions:
1. Assume TCP/IP packet overhead = 35%
2. Assume throughput loss due to switch and trunking load, cross-link, and uplink = 50% (illustrated in the diminished rate #)
3. Assume bus PCI (NIC) = 32-bit @ 33 MHz clock cycle; max theoretical output = 133 MB/sec, max actual = 50 MB/sec
Figure 28: Network Throughput Use Case (Follow the Math)
Gbit/s: 1
Mbit/s: 1024
Diminishing rate: 0.5
Mbit/s (35% overhead): 665.6
MB/s (35% overhead): 83.2
GB/s: 0.08125
GB/min: 4.875
GB/hr: 292.5
MB/s (diminished): 41.6
GB/s (diminished): 0.040625
GB/min (diminished): 2.4375
GB/hr (diminished): 146.25
Bus bottleneck MB/s: 50
GB/s (bus): 0.048828125
GB/min (bus): 2.9296875
GB/hr (bus): 175.78125
Other bus loads: 4
GB/hr (shared bus): 43.9453125
NOTE: In the example illustrated above, you could expect the truth to be somewhere between the two bolded
values of 44GB/hr and 146GB/hr and again that is per host per full duplex Gigabit channel.
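The Figure 28 numbers can be reproduced with a few lines of arithmetic, using the same assumptions (1 Gbit/s counted as 1024 Mbit/s, 35% TCP/IP overhead, a 50% diminishing rate, and a 50 MB/s bus bottleneck shared by 4 loads). A minimal sketch:

```python
overhead = 0.35        # TCP/IP packet overhead (assumption 1)
diminish = 0.5         # switch/trunking loss (assumption 2)
bus_mb_s = 50          # actual NIC throughput at the bus, MB/s (assumption 3)
other_bus_loads = 4    # other cards sharing the same bus

mbit_s = 1 * 1024                      # 1 Gbit/s expressed as Mbit/s
mb_s = mbit_s * (1 - overhead) / 8     # ~83.2 MB/s after overhead
gb_hr = mb_s * 3600 / 1024             # ~292.5 GB/hr on a perfect connection
gb_hr_dim = gb_hr * diminish           # ~146.25 GB/hr after switch load
bus_gb_hr = bus_mb_s * 3600 / 1024     # ~175.78 GB/hr at the bus bottleneck
shared_gb_hr = bus_gb_hr / other_bus_loads  # ~43.95 GB/hr with bus contention

print(gb_hr, gb_hr_dim, bus_gb_hr, shared_gb_hr)
```

The computed values line up with the 292.5, 146.25, 175.78, and 43.95 GB/hr rows of Figure 28, and the last two are the bracketing values the NOTE above refers to.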
Glossary of Terms
I. BIA = Business Impact Analysis; often the genesis of the business requirement for HA and DR
II. CFS/GFS = Clustered/Global File System, presented through either SAN or NAS technology but clustered together such that every participating node in the cluster can simultaneously read and write to the same mount points at the same instant through HA clustering technology.
III. CP = Coordination Point (an alternative HW-based I/O fencing mechanism)
IV. CVM = Cluster Volume Manager
V. DASD = Direct Attached Storage Device
VI. GAB = Global Atomic messaging and Broadcast; Veritas-proprietary protocol for heartbeat network traffic
VII. Grid = cluster of smaller computing entities engineered for coordinated computing and greater horizontal scaling
VIII. HA = High Availability, defined as an environment built with all the required hardware and software ensuring there are no single points of failure.
IX. HACOE = High Availability Center of Excellence; Informatica-internal HA environment used to demonstrate best practices and proof performance between virtual and native environment foundations
X. HDD = Hard Disk Drive, usually in reference to SANs.
XI. ICC = Integration Competency Center
XII. IOV = I/O Virtualization
XIII. KB = Knowledge Base
XIV. LLT = Low Latency Transport; Veritas-proprietary protocol for heartbeat network traffic
XV. LUN = Logical Unit Number. The term has become common in storage area networks (SAN) and other enterprise storage fields. Today, LUNs are normally not entire disk drives but rather virtual partitions (or volumes) of a RAID set (e.g., portions of SAN hypervolumes).
XVI. LVM = Logical Volume Manager (usually an O/S feature to provide simplistic management of RAW volumes)
XVII. MAC Address = MAC addresses are used in the Media Access Control protocol sublayer of the OSI reference model
XVIII. NPIV = N_Port ID Virtualization
XIX. OSI = Open Systems Interconnect (model for a common network stack, physical to abstract)
XX. POSIX = Portable Operating System Interface [for Unix]
XXI. RAID = Redundant Array of Independent Disks
XXII. SAN = Storage Area Network or Storage Array Network
XXIII. SCSI = Small Computer System Interface; standards-based bus used to access local and remote host devices
XXIV. SonG = Session on Grid
XXV. SPOF = Single Point of Failure
XXVI. SSD = Solid State Drive, aka flash drives, usually in reference to SANs.
XXVII. Subnet = a range of logical network addresses within the address space that is assigned to an organization.
XXVIII. VCS = Veritas Cluster Server
XXIX. VLAN = Virtual Local Area Network
XXX. VxSFCFSHA = Veritas Storage Foundation Cluster File System High Availability (includes VCS)
XXXI. WW = World Wide, usually in reference to WW names for I/O (SAN) configuration as it relates to this document
Sources
- Microsoft SQL Server 2005 Books Online
- Veritas Storage Foundation HA Suite for Linux documentation
- IBM DB2 9 HA Guide & Administrative Planning Guide
- Informatica PowerCenter HA Tech Brief: "Achieving Greater Availability in Data Integration Systems"
- Some definitions in the Glossary of Terms are from www.wikipedia.org
- Enterprise Storage Forum: http://www.enterprisestorageforum.com/ipstorage/features/print.php/3714311
- Within the appendix we source from several Veritas Storage Foundation for Linux product guides
- Within the appendix we source from several Oracle Database 11gR2 on Linux product guides
Plug in SX SFP modules into the SFP module ports
Plug in SFP ports to feed the Gigabit Cisco 3560 switches
6500-series non-prod Informatica.com domain
Connect console cable
Power up
Initial setup may or may not be used: management VLAN (VLAN 45) and IP address
Configure VLANs for usage
Config mode: vlan 111 (public), state active; repeat for other VLANs
Config mode: interface vlan 111, then default gateway IP, then netmask, then no shutdown
Config mode: interface vlan 112, then default gateway IP, then netmask, then no shutdown
Config mode: interface vlan 800, then default gateway IP, then netmask, then no shutdown
Config mode: interface gigabit 0/1, enter, switchport, enter, switchport access vlan 451, enter, no shutdown
Create uplinks between switches (SFPs): Layer 2 link between switches
Config mode: interface gigabit 0/27, enter, switchport, enter, switchport trunk encapsulation dot1q, enter, switchport mode trunk, enter, channel-group 1 mode desirable, enter, no shutdown
Config mode: interface gigabit 0/28, enter, switchport, enter, switchport trunk encapsulation dot1q, enter, switchport mode trunk, enter, channel-group 1 mode desirable, enter, no shutdown
Create etherchannel for Layer 2
Config mode: interface port-channel 1, enter, switchport trunk encapsulation dot1q, enter, switchport mode trunk, enter, no shutdown
Create uplinks to development 6500s (SFPs): Layer 3 link between switches
Create etherchannel for Layer 3 uplink to 6500s
int port-channel 2
no switchport
ip address 10.1.251.x 255.255.255.x
show run interface port-channel 2
Config mode: interface gigabit 0/25, enter, no switchport, enter, channel-group 2 mode desirable, enter, no shutdown
Config mode: interface gigabit 0/26, enter, no switchport, enter, channel-group 2 mode desirable, enter, no shutdown
<<<NOTE>>> connecting to 10.1.251.132/30 and 10.1.251.136/30 <<< these are just the networks, analogous to 10.1.251.0/30; binary counts to a 252 netmask
Use OSPF or EIGRP dynamic routing configuration
Depends on the current-state environment; consult a network engineer for modifications
ip routing
For Infa, create a routing instance: config mode: router eigrp <autonomous system #> (10), network 10.1.45.0, 10.1.251.0
wr
1. Refer to the RHEL 5.x Install Guide for necessary details not answered in this build script.
2. Key points during the RHEL 5.x install:
a. Do not use a GRUB password (boot loader password)
b. The boot installation program starts automatically within 1 minute if no action is taken
c. The hardware auto-detect will begin from this point
d. Should see the RHEL 5.x splash screen; click NEXT
e. Select language
f. Select keyboard language
g. Enter installation number (per license key documentation)
h. Remove LINUX partitions on selected drive(s) and create the default layout (no need for advanced storage config)
i. Using Disk Druid, partition as follows
j. Do not use LVM or software RAID in Disk Druid (you'll want to delete the logical volumes created by the default O/S install startup)
Volume Group: each mount below is a HW-RAID-controlled logical volume (appears as a physical disk to RHEL).

FS Mount      GB     MB
/             6      6144
/boot         0.6    614.4
/usr          7      7168
/usr/local    6      6144
/tmp          6      6144
swap          64     65536
/var          7      7168
/home         28     28672
/opt          12     12288
Remaining: 0.152136752
k. Partition types should all be ext3, except for SWAP
l. Select RHEL Server for GRUB to load automatically each time by default, installed in the MBR (master boot record). GRUB will boot only to a boot prompt where you can then boot RHEL; AFTER 5 seconds, GRUB will time out and then boot the default O/S
m. Select NEXT after MBR is selected in the GRUB dialog box
n. GRUB in RHEL 5 is automatically SMP and hyperthreading enabled, and is backward compatible to single-core processor servers.
o. Select each NIC and click EDIT; there you can configure the IPv4 settings for each NIC and choose to activate it at boot time. You may not have this option in VMware guests, as the VMware tools would not be installed yet.
p. Configure static IP and network information based on pre-install data.
q. Time zone configuration: PST; select "synchronize time with NTP servers" before starting services.
r. An opportunity will be presented to add NTP time servers:
   i. usntp1.informatica.com
   ii. usntp2.informatica.com
   iii. usntp3.informatica.com
   iv. usntp4.informatica.com
s. Set root password
t. Now you have to select the default packages for the system:
   i. Select compatibility arch support (pkg) for backward 32-bit compatibility
   ii. Select compatibility arch development (pkg) for backward 32-bit compatibility
   iii. Select the software development package
   iv. Select the openssh and openssh-server packages
   v. DO NOT SELECT the Web Server package << this may interfere with Tomcat operations and Apache, or WLS if we end up using BEA
   vi. Select administrative tools, base, dial-up networking, legacy software support, X Windows, system tools, but NOT Java
   vii. Choose X Windows "start X automatically" for default GUI booting and loading
u. Packages will now start installing, and the system will reboot at the end of the install to complete configuration.
(Assumes native HW, or VMware tools already installed)
1. # set umask to 022 for root
2. /usr/sbin/usermod -g oinstall -G root,bin,daemon,sys,adm,disk,wheel,vrtsadm root (assumes you've already added the non-default groups)
3. # ifconfig eth2 10.1.45.17 netmask 255.255.255.128 up
4. # ifconfig eth6 10.1.45.18 netmask 255.255.255.128 up
5. # netstat -nr (to view the routing table)
6. # route add default gw 10.1.45.1 eth2
7. # route add default gw 10.1.45.1 eth6
8. When this is done, you need to update your /etc/sysconfig/network file to reflect the change. This file is used to configure your default gateway each time Linux boots:

NETWORKING=yes
HOSTNAME=pslxhacoe01
GATEWAY=10.1.45.1

It is possible to define default gateways in the NIC configuration file in the /etc/sysconfig/network-scripts directory, but you run the risk of inadvertently assigning more than one default gateway when you have more than one NIC.
9. Confirm auto-negotiate and NIC speed on each NIC
10. # ethtool eth2
11. # ethtool eth6
12. # ethtool -s eth2 speed 1000 duplex full autoneg off
13. # ethtool -s eth6 speed 1000 duplex full autoneg off
14. Now let's permanently set the duplex so the settings will not change on reboots:
15. Unlike mii-tool, ethtool settings can be permanently set as part of the interface's configuration script with the ETHTOOL_OPTS variable. In our next example, the settings will be set to 1000 Mbps, full duplex, with no chance for auto-negotiation on the next reboot:

#
# File: /etc/sysconfig/network-scripts/ifcfg-eth2
#
DEVICE=eth2
IPADDR=10.1.45.15
NETMASK=255.255.255.128
MASTER=bond0
SLAVE=yes  <<<<< all interfaces bonded are slaves; the bond is the master >>>>>
BOOTPROTO=static
ONBOOT=yes
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

#
# File: /etc/sysconfig/network-scripts/ifcfg-eth6
#
DEVICE=eth6
IPADDR=10.1.45.16
NETMASK=255.255.255.128
MASTER=bond0
SLAVE=yes  <<<<< all interfaces bonded are slaves; the bond is the master >>>>>
BOOTPROTO=static
ONBOOT=yes
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

16. Now we need to bond eth2 to eth6 by creating a file in /etc/sysconfig/network-scripts. The file needs to be called ifcfg-bond0 <<<< we're creating the channel bonding GROUP >>>>

#
# File: /etc/sysconfig/network-scripts/ifcfg-bond0
#
DEVICE=bond0
NETWORK=10.1.45.0  <<<<< we need to verify this value
IPADDR=10.1.45.x  <<<<< we need to verify this value; this should be the DNS host address
NETMASK=255.255.255.0
BOOTPROTO=static  <<<<< none may be the correct value >>>>
ONBOOT=yes
USERCTL=no
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
BONDING_OPTS=<parameters>
BONDING_OPTS=mode=4  <<<<< LACP >>>>>

17. Add the following line to /etc/modprobe.conf:

alias bond0 bonding

You may need to edit /etc/rc.local to add default gw statements such that they consistently bring up the right default gw on bond0.
18. Test the bonding interface: # tail -f /var/log/messages, and check status at /proc/net/bonding/bond0
19. Open another X window and test other channel bonding parameters while observing messages. Use /sbin/insmod bond0 <parameter=value>

pslxhacoe01.informatica.com:/root# modprobe -l bonding
/lib/modules/2.6.18-53.1.6.el5/kernel/drivers/net/bonding/bonding.ko
pslxhacoe01.informatica.com:/root# modinfo bonding
filename:    /lib/modules/2.6.18-53.1.6.el5/kernel/drivers/net/bonding/bonding.ko
author:      Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.1.2
version:     3.1.2
license:     GPL
srcversion:  6CD19765D6431C07199456E
depends:
vermagic:    2.6.18-53.1.6.el5 SMP mod_unload gcc-4.1
parm:        max_bonds: Max number of bonded devices (int)
parm:        miimon: Link check interval in milliseconds (int)
parm:        updelay: Delay before considering link up, in milliseconds (int)
parm:        downdelay: Delay before considering link down, in milliseconds (int)
parm:        use_carrier: Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:        mode: Mode of operation: 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:        primary: Primary network device to use (charp)
parm:        lacp_rate: LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
parm:        xmit_hash_policy: XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
parm:        arp_interval: arp interval in milliseconds (int)
parm:        arp_ip_target: arp targets in n.n.n.n form (array of charp)
parm:        arp_validate: validate src/dst of ARP probes: none (default), active, backup or all (charp)
module_sig:  883f350478dc94746c77dc34fe08934112867f0a090b28f1050e8e22596dbf1d4b7ebf72252a54609f6edc76ec3270c671b4bb3780ee64d9f493632
20. Still in networking, let's configure the heartbeat network and backup network connections (heartbeat first):
pslxhacoe03.informatica.com:/etc/sysconfig/network-scripts#
-rw-r--r-- 1 root root 192 Jan 5 19:17 ifcfg-eth3
-rw-r--r-- 1 root root 192 Jan 5 19:18 ifcfg-eth4
-rw-r--r-- 3 root root 192 Jan 5 19:18 ifcfg-eth7
-rw-r--r-- 3 root root 192 Jan 5 19:18 ifcfg-eth8
# Intel Corporation 82571EB Gigabit Ethernet Controller (Copper)
DEVICE=eth3
HWADDR=00:1c:c4:48:02:68
ONBOOT=yes
TYPE=Ethernet
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
BOOTPROTO=none

# Intel Corporation 82571EB Gigabit Ethernet Controller (Copper)
DEVICE=eth4
HWADDR=00:1c:c4:48:02:6b
ONBOOT=yes
TYPE=Ethernet
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
BOOTPROTO=none

# Intel Corporation 82571EB Gigabit Ethernet Controller (Copper)
DEVICE=eth7
HWADDR=00:1C:C4:48:07:00
ONBOOT=yes
TYPE=Ethernet
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
BOOTPROTO=none

# Intel Corporation 82571EB Gigabit Ethernet Controller (Copper)
DEVICE=eth8
HWADDR=00:1C:C4:48:07:03
ONBOOT=yes
TYPE=Ethernet
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
BOOTPROTO=none
When finished, ifconfig -a should look like the following:

pslxhacoe03.informatica.com:/root# ifconfig -a
bond0     Link encap:Ethernet  HWaddr 00:1C:C4:48:02:69
          inet addr:10.1.45.11  Bcast:10.1.45.127  Mask:255.255.255.128
          inet6 addr: fe80::21c:c4ff:fe48:269/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:36726 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25926 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12145847 (11.5 MiB)  TX bytes:6794060 (6.4 MiB)

eth2      Link encap:Ethernet  HWaddr 00:1C:C4:48:02:69
          inet6 addr: fe80::21c:c4ff:fe48:269/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:18566 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12954 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6137618 (5.8 MiB)  TX bytes:3389300 (3.2 MiB)
          Base address:0x9000 Memory:fdee0000-fdf00000

eth3      Link encap:Ethernet  HWaddr 00:1C:C4:48:02:68
          inet6 addr: fe80::21c:c4ff:fe48:268/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:146123 errors:0 dropped:0 overruns:0 frame:0
          TX packets:150252 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11996790 (11.4 MiB)  TX bytes:13729044 (13.0 MiB)
          Base address:0x9020 Memory:fdea0000-fdec0000

eth4      Link encap:Ethernet  HWaddr 00:1C:C4:48:02:6B
          inet6 addr: fe80::21c:c4ff:fe48:26b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:146120 errors:0 dropped:0 overruns:0 frame:0
          TX packets:150259 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11991790 (11.4 MiB)  TX bytes:13754030 (13.1 MiB)
          Base address:0xa000 Memory:fdfe0000-fe000000

eth5      Link encap:Ethernet  HWaddr 00:1C:C4:48:02:6A
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Base address:0xa020 Memory:fdfa0000-fdfc0000

eth6      Link encap:Ethernet  HWaddr 00:1C:C4:48:02:69
          inet6 addr: fe80::21c:c4ff:fe48:269/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:18160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12972 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6008229 (5.7 MiB)  TX bytes:3404760 (3.2 MiB)
          Base address:0x7000 Memory:fdce0000-fdd00000

eth7      Link encap:Ethernet  HWaddr 00:1C:C4:48:07:00
          inet6 addr: fe80::21c:c4ff:fe48:700/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:146128 errors:0 dropped:0 overruns:0 frame:0
          TX packets:150257 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11999533 (11.4 MiB)  TX bytes:13748947 (13.1 MiB)
          Base address:0x7020 Memory:fdca0000-fdcc0000

eth8      Link encap:Ethernet  HWaddr 00:1C:C4:48:07:03
          inet6 addr: fe80::21c:c4ff:fe48:703/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:146118 errors:0 dropped:0 overruns:0 frame:0
          TX packets:150259 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11996266 (11.4 MiB)  TX bytes:13735857 (13.0 MiB)
          Base address:0x8000 Memory:fdde0000-fde00000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2445 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2445 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3664774 (3.4 MiB)  TX bytes:3664774 (3.4 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
21. Enable full desktop VNC
    a. Make the vncserver service start upon boot (Administration > Services), save
    b. # vncserver
    c. Establish a VNC password
    d. Go to the user's $HOME/.vnc
    e. Edit xstartup
    f. # Uncomment the following two lines for a normal desktop:
       i. unset SESSION_MANAGER
       ii. exec /etc/X11/xinit/xinitrc
22. Enable the vsftpd service for FTP access
    a. Make the vsftpd service start upon boot (Administration > Services), save
23. Recommended RPMs to add post-build
    a. Add/Remove Packages
       i. procinfo
       ii. sysstat
       iii. iptraf
       iv. ktune
       v. dstat
       vi. rdesktop
       vii. arpwatch
       viii. tsclient
       ix. libaio-devel
       x. unixODBC-devel
       xi. rds-tools
       xii. ypserv
       xiii. portmap
       xiv. nscd
       xv. yp-tools
24. Enable disk quotas
    a. Add usrquota to /home in /etc/fstab
    b. /dev/volgroup/logvolname  /home  ext3  defaults,usrquota,grpquota  1 2
    c. # quotacheck -uvg /dev/cciss/c0d0p3
    d. # quotaon -vug /dev/cciss/c0d0p3
       pslxhacoe02.informatica.com:/home # quotaon -vug /dev/cciss/c0d0p3
       /dev/cciss/c0d0p3 [/home]: group quotas turned on
       /dev/cciss/c0d0p3 [/home]: user quotas turned on
    e. # edquota username   <<< a text editor opens for you to change the values for that user >>> values are in blocks, which we need to translate to bytes
    f. Blocks translate based on the block allocation unit size. So, if we get our desired 32K blocks, then each block represents 32K; if we want each user limited to 500 MB, we set the 2nd and 3rd columns to 160000.
       # edquota -g presales
       Disk quotas for group presales (gid 1001):
       Filesystem          blocks  soft    hard    inodes  soft  hard
       /dev/cciss/c0d0p3   32      160000  180000  7       0     0
       pslxhacoe01.informatica.com:/home/infasale # edquota -g presales
       pslxhacoe01.informatica.com:/home/infasale # edquota -g products
       pslxhacoe01.informatica.com:/home/infasale # edquota -g ips
       pslxhacoe01.informatica.com:/home/infasale # edquota -g gcs
       pslxhacoe01.informatica.com:/home/infasale # edquota -g infaadm
       pslxhacoe01.informatica.com:/home/infasale # edquota -g infaadm2
       pslxhacoe01.informatica.com:/home/infasale # edquota -g dba
       pslxhacoe01.informatica.com:/home/infasale # edquota -g oinstall
       pslxhacoe01.informatica.com:/home/infasale # edquota -g oper
       pslxhacoe01.informatica.com:/home/infasale # edquota -g rd
       pslxhacoe01.informatica.com:/home/infasale # edquota -g qaed
Run a test with a very large file that would exceed the quota for a user in one of the above groups:
pslxhacoe01.informatica.com:/home/cmcdonal/install_dr/vrts # ls -alrt
total 817384
-rwxr-xr-x  1 root     bin        704674 Apr 30  2007 getting_started.pdf
drwxr-xr-x 12 root     root         4096 May  2  2007 rhel5_i686
drwxr-xr-x 12 root     root         4096 May  2  2007 rhel5_ia64
drwxr-xr-x  7 root     root         4096 May  2  2007 .
drwxr-xr-x 12 root     root         4096 May  2  2007 rhel5_x86_64
-rwxr-xr-x  1 cmcdonal cmcdonal 835435008 Dec 13 11:41 sf_ha.4.1.40.00.rhel5.tar   <<<< tried this one
drwxr-xr-x  2 root     root         4096 Feb  5 15:17 hotfixforllt
drwxr-xr-x  5 cmcdonal cmcdonal     4096 Feb 10 12:13 ..
drwxr-xr-x  3 root     root         4096 Feb 11 17:15 hotfixforalua
pslxhacoe01.informatica.com:/home/cmcdonal/install_dr/vrts # cp -p sf_ha.4.1.40.00.rhel5.tar ~infasale
cciss/c0d0p3: warning, group block quota exceeded.
cciss/c0d0p3: write failed, group block limit reached.
cp: writing `/home/infasale/sf_ha.4.1.40.00.rhel5.tar': Disk quota exceeded   <<< looks like it's working
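As a sanity check on the block arithmetic in step 24f, here is a small helper (hypothetical, not part of the build) that converts a desired limit in MB to edquota block values for a given block allocation unit in KB, assuming one edquota block per allocation unit as described above:

```shell
# Hypothetical helper: convert a quota limit in MB to edquota "blocks",
# given the block allocation unit in KB (assumption: one edquota block
# per allocation unit, per step 24f).
quota_blocks() {
  local limit_mb=$1 block_kb=$2
  echo $(( limit_mb * 1024 / block_kb ))
}
quota_blocks 500 32   # blocks for a 500 MB limit at 32K per block
quota_blocks 500 1    # blocks if the unit is the classic 1K quota block
```

Verify the result against your filesystem's actual quota block unit (many Linux quota tools report in 1K blocks) before settling on the soft and hard values.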
25. Configure ssh on each node (see the sshd man page and /etc/ssh/sshd_config; to start: /sbin/service sshd start; place in scripted startup in /etc/rc.d/init.d)
    a. Passwordless access is done by generating key pairs
       i. Before generating RSA paired keys for the trusted connection, there is an undocumented step for the root account, required by Oracle RAC, that you should perform first. Skip to the section where you create the oinstall group account and oracle account, then return here.
       ii. Next execute:
           # /usr/sbin/usermod -g oinstall -G root,bin,daemon,sys,adm,disk,wheel,vrtsadm root
           NOTE: If you did not create a vrtsadm group already, you need to for remote VEA usage.
       iii. Generating a key pair for RSA SSH is done as follows:
       iv. # ssh-keygen -t rsa
       v. # ssh-keygen -t dsa
       vi. Accept the default location of ~/.ssh/id_rsa and enter a passphrase different than that of the root password
       vii. The public key is written to ~/.ssh/id_rsa.pub
       viii. Change permissions on ~/.ssh: # chmod 755 ~/.ssh
       ix. Copy the contents of the ~/.ssh/id_rsa.pub file of each node into ~/.ssh/authorized_keys on node 0 in the cluster
       x. scp authorized_keys to the $HOME/.ssh/ directory of each node in the cluster, like this example:
          # scp authorized_keys pslxhacoe04:/root/.ssh/
       xi. # chmod 644 ~/.ssh/authorized_keys. Note that Oracle RAC actually requires chmod 600 instead of 644, so you may want to use that so it passes the cluvfy tests for Oracle RAC.
       xii. Do the same for the oracle account, but also do the following for oracle:
            1. Ensure you are the oracle account: # whoami
            2. # ssh pslxhacoe03 date from pslxhacoe03, answer yes
            3. # ssh pslxhacoe03 date from pslxhacoe04, answer yes
            4. # ssh pslxhacoe04 date from pslxhacoe03, answer yes
            5. # ssh pslxhacoe04 date from pslxhacoe04, answer yes
            6. # exec /usr/bin/ssh-agent $SHELL
            7. # /usr/bin/ssh-add
    b. If you reinstall the O/S, be sure to back up the RSA ssh keys /etc/ssh/ssh_host*key*
    c. Using the graphical Services Configuration Tool (system-config-services), disable:
       i. telnet (pg 317, deployment guide)
       ii. rlogin
       iii. rsh must be left on for ORACLE RAC
    d. Per-user configuration in the home directory is found under ~/.ssh/config
    e. id_dsa contains the DSA private key of the user
    f. id_dsa.pub contains the DSA public key of the user
    g. id_rsa contains the RSA private key of the user
    h. id_rsa.pub contains the RSA public key of the user
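The key fan-out in steps ix and x can be scripted; a minimal sketch (node names are the HACOE hosts from the text; the loop only prints the commands so you can review them before running anything):

```shell
# Sketch: print the commands that gather each node's public key into
# node 0's authorized_keys, then push the merged file back out.
# The node list is illustrative; adjust for your cluster.
nodes="pslxhacoe03 pslxhacoe04"
for n in $nodes; do
  echo "ssh $n cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys"
done
for n in $nodes; do
  echo "scp ~/.ssh/authorized_keys ${n}:/root/.ssh/"
done
```

Pipe the output to a file, inspect it, then execute it with sh once you are satisfied.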
26. Set up root environment variables:
    PATH=/usr/sbin:/sbin:/usr/bin:/usr/lib/vxvm/bin:\
    /opt/VRTSvxfs/sbin:/opt/VRTSvcs/bin:/opt/VRTS/bin:\
    /opt/VRTSvcs/rac/bin:/opt/VRTSob/bin:$PATH; export PATH
    For the root user, do not define paths to a cluster file system in the LD_LIBRARY_PATH variable. For example, define $ORACLE_HOME/lib in LD_LIBRARY_PATH for the oracle user only. The path /opt/VRTSob/bin is optional unless you choose to install VEA.
27. export MANPATH=$MANPATH:/opt/VRTS/man
28. Add LC_ALL=C to .bash_profile for root   <<<< ensures proper terminal viewing of man pages
29. Reboot
30. NOTE: In the case of the HACOE we used VxSFRAC 5.1P3, which required an installation of VxSFCFSHA 5.1RP2 first, then lib and license modifications to morph the installation into VxSFRAC. Later versions of SFRAC do not require such complications. If you deploy a similar build, you'll want to follow the regular install guide for the first part (SFCFSHA, sfcfs_install_lin.pdf), then the SFRAC 5.1P3 documentation (sfrac_install_lin.pdf). One example of the post-SFCFSHA install steps you'll be required to take according to sfrac_install_lin.pdf (5.1 Patch 3) is rpm -ivh VRTSdbac-5.1.000.300-P3_RHEL5.x86_64.rpm on each node in the cluster. Other steps include copying/linking libraries and converting SFCFSHA keys to SFRAC ENT license keys.
1. Assuming all goes well, run the installer script to start the actual install (installsfrac, installsfcfs)
2. You'll be asked if you wish to install SFCFS or SFCFSHA; you need to choose SFCFSHA, which includes VCS
3. The installer checks for license keys, key rpms, and infrastructure rpms (installs if necessary). It is generally best to choose installation of ALL rpms, especially if you're going to use CP Server fencing
4. The installer asks for the hostnames within this cluster (pslxhacoe03 and pslxhacoe04, separated by a space)
5. The system next checks for rpms, required patches and O/S level, and for processes that could conflict with the install
6. After the rpms are installed you'll be asked to apply a license key
7. You'll be asked if you wish to enable Veritas Volume Replicator
8. You'll be asked if you wish to enable the Global Cluster Option
9. You'll be asked if you wish to configure SFCFSHA. You should do this now, but you can do it after the install by running ./installsfcfs -configure. Be prepared to provide a unique cluster name and a unique cluster ID.
10. You'll be asked to configure VCS.
11. If you configure VCS now, you'll be asked if you wish to configure heartbeat links. Best practice is to use the same Ethernet links on each host, and they must be of the same link speed!
12. Configure at least 2 heartbeat links for the cluster, at least 3 for SFRAC. Notice in the above illustration that the same Ethernet links are used between the two RAC nodes (pslxhacoe03 and pslxhacoe04). It was not possible, due to the way the HP Flex-10 Virtual Connect module provisions MAC addresses to each server profile, to have the same eth device serve the same network purpose on each node, since pslxhacoe03 and pslxhacoe04 were full-height blades with 4 10Gb LOMs and the others were half-height blades with 2 10Gb LOMs. It still works just fine as long as /etc/llthosts and /etc/llttab on each node in the cluster are set up properly. The heartbeats will work set up like this, which gives you more flexibility in designing your cluster.
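The per-node link flexibility described in step 12 lives in /etc/llttab. A sketch for one node follows; the cluster ID, node name, and MAC-tagged link names are illustrative assumptions, not the HACOE's actual values, so substitute your own:

```
# /etc/llttab on pslxhacoe03 (illustrative values)
set-node pslxhacoe03
set-cluster 101
# Two (or more) heartbeat links. The eth device names may differ per node;
# only the node IDs in /etc/llthosts must be consistent cluster-wide.
link eth2 eth-00:1C:C4:48:02:68 - ether - -
link eth6 eth-00:1C:C4:48:02:69 - ether - -
```

Each node gets its own llttab naming its own devices, which is what lets mismatched blade hardware share the same logical heartbeat network.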
13. You'll be asked to configure a Virtual IP address for the VCS cluster.
14. You'll be asked if you wish to enable/configure Symantec Security Services. Do so according to your requirements, but be warned: if you set this up in your VCS cluster and you intend to use CP Server fencing, you'll have to establish security service links between the CP Server's Symantec Security Services and the cluster's Symantec Security Services.
15. You'll be asked if you wish to configure VCS users.
16. You'll be asked to configure SMTP and SNMP, which you may wish to bypass.
17. You'll be asked if you wish to enable I/O Fencing. Say no if using CP Server fencing. Say yes if using SCSI-III PGR fencing AND you have already provisioned your coordinator point devices as SHARED.
18. After that it will immediately attempt to apply the configuration changes, creating the main.cf file and distributing it throughout the cluster in an attempt to start the fully configured cluster for the first time.
19. If you did not configure I/O fencing during the install, you may wish to set up CP Server for fencing and then run installsfcfs -fencing from /opt/VRTS/install. See the Informatica Enterprise Grid on VMware Deployment Guide for more details. You can get access to this document through your account team.
20. If you enabled I/O Fencing during the install and told it to use the SCSI-III PGR coordinator shared LUNs you presented to each host in the cluster, you will likely be asked to declare a default disk group (default dg) at the end of the install.
VxSFRAC/VxSFCFSHA Configuration Post-Install
1. You will likely want to configure Dynamic Multi-Pathing (or some other multipathing) for your HBA adapters right after the Veritas SFRAC/SFCFSHA install completes.
   i. # vxdiskadm, option 18 (Allow multipathing/Unsuppress devices from VxVM's view)
   ii. Select option 8 from the DMP menu (List currently suppressed/non-multipathed devices) for verification of the current state
   iii. Select option 5 from the DMP menu (Allow multipathing of all disks on a controller by VxVM)
   iv. Select option 8 from the DMP menu to verify changes
   v. Select option 1 from the DMP menu (Unsuppress all paths through a controller from VxVM's view)
   vi. Select option 8 from the DMP menu to verify changes
   vii. Discovering new devices (shared SAN for CFS and RAC):
        When you physically connect new disks to a host, or when you zone new fibre channel devices to a host, you can use the vxdctl enable command to rebuild the volume device node directories and to update the DMP internal database to reflect the new state of the system.
        To reconfigure the DMP database, first reboot the system to make Linux recognize the new disks, and then invoke the vxdctl enable command. See the vxdctl(1M) manual page for more information.
        You can also use the vxdisk scandisks command to scan devices in the operating system device tree and to initiate dynamic reconfiguration of multipathed disks.
        If you want VxVM to scan only for new devices that have been added to the system, and for devices that have been enabled or disabled, specify the -f option to either of the commands, as shown here:
   viii. # vxdctl enable
   ix. # vxdisk -f scandisks
   x. # vxdisk scandisks new and # vxdisk scandisks fabric may also be required as new devices are added
   xi. # vxdisk list
2. Next you'll want to activate Veritas Enterprise Administrator services on each node in the cluster.
   i. # /opt/VRTSob/bin/vxsvcctrl activate   <<< activating VEA Services
   ii. # /opt/VRTS/bin/vxsvcctrl stop
   iii. # /opt/VRTS/bin/vxsvcctrl start
3. Next, present SCSI-III PGR drives for coordinator disks if you had not already done so before the SFRAC/SFCFSHA installation, ensuring each LUN id is the same across all nodes in the cluster. You must use an odd number of coordinator disks greater than 1 (this assumes you're not using CP Server fencing). Best practice is to use 5.
4. Initialize one of the coordinator devices from the master node in the VxSFRAC/VxSFCFSHA cluster. You only need to do one, because the rest will be done in the next few steps.
5. The next step is to create the vxfencoorddg disk group as a deported disk group.
   # vxdisk list
   DEVICE        TYPE            DISK          GROUP          STATUS
   EVA4K6K0_0    auto:cdsdisk    EVA4K6K0_0    vxfencoorddg   online
   EVA4K6K0_1    auto:cdsdisk    -             -              online
   EVA4K6K0_2    auto:cdsdisk    -             -              online
   EVA4K6K0_3    auto:cdsdisk    -             -              online
   cciss/c0d0    auto:none       -             -              online invalid
   # vxdg init vxfencoorddg EVA4K6K0_0
   # vxdg -g vxfencoorddg adddisk EVA4K6K0_1
   # vxdg -g vxfencoorddg adddisk EVA4K6K0_2
   # vxdg deport vxfencoorddg
   # vxdg -t import vxfencoorddg
   # vxdg deport vxfencoorddg
   # echo "vxfencoorddg" >> /etc/vxfendg
   # more /etc/vxfendg   <<<< on every node in the cluster
   vxfencoorddg
   # /etc/init.d/vxfen start   <<<< on every node in the cluster
   Starting VxFEN:
   VxFEN: Alert: loading a compatible module binary
   Starting vxfen.. Done
   pslxhacoe03.informatica.com:/dev/vx/dmp # VCS FEN vxfenconfig NOTICE Driver will use SCSI3 compliant disks.
   pslxhacoe04.informatica.com:/opt # /etc/init.d/vxfen start
   Starting VxFEN:
   VxFEN: Alert: loading a compatible module binary
   Starting vxfen.. Done
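Since /etc/vxfendg must hold the same disk group name on every node, a quick sanity check helps. This sketch validates a file's content; it is demonstrated against a temporary file because the real check runs against /etc/vxfendg on each node:

```shell
# Sketch: confirm a vxfendg file contains exactly the expected dg name.
check_vxfendg() {
  local file=$1 expected=$2
  [ "$(cat "$file")" = "$expected" ] && echo OK || echo MISMATCH
}
tmp=$(mktemp)
echo "vxfencoorddg" > "$tmp"
check_vxfendg "$tmp" vxfencoorddg   # OK
rm -f "$tmp"
```

Run the same check on each cluster node before starting vxfen, so a typo on one node does not stall fencing startup.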
   pslxhacoe04.informatica.com:/opt # VCS FEN vxfenconfig NOTICE Driver will use SCSI3 compliant disks.
6. # hastop -all
   # init 6   <<<< each node in cluster
   After the servers come back up:
   # gabconfig -a
   GAB Port Memberships
   ===============================================================
   Port a gen 524401 membership 01   <<< GAB
   Port b gen 524403 membership 01   <<< I/O Fencing
   Port f gen 52440d membership 01   <<< Cluster File System
   Port h gen 524405 membership 01   <<< HAD
   Port v gen 524409 membership 01   <<< Cluster Volume Manager
   Port w gen 52440b membership 01   <<< vxconfigd

   Port   Function
   a      GAB
   b      I/O fencing
   d      ODM (Oracle Disk Manager)
   f      CFS (Cluster File System)
   h      VCS (VERITAS Cluster Server: high availability daemon)
   o      VCSMM driver
   v      CVM (Cluster Volume Manager)
   w      vxconfigd (module for CVM)
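The port membership check above can be scripted for repeat use. This sketch parses canned sample output for illustration; on a live node you would replace the here-string with output=$(gabconfig -a):

```shell
# Sketch: check that the GAB ports required here (a b f h v w) are present
# in `gabconfig -a` output. The sample output below is canned for illustration.
output='Port a gen 524401 membership 01
Port b gen 524403 membership 01
Port f gen 52440d membership 01
Port h gen 524405 membership 01
Port v gen 524409 membership 01
Port w gen 52440b membership 01'
for p in a b f h v w; do
  if echo "$output" | grep -q "^Port $p "; then
    echo "port $p ok"
  else
    echo "port $p MISSING"
  fi
done
```

A missing port pinpoints which layer (fencing, CFS, HAD, CVM) failed to come up after the reboot.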
7. Now you can start creating your host-side striped CFS mounts for Informatica, or for your DB if you wish to use this type of CFS for your DB. The HACOE uses VxSFRAC CFS mounts for all of its 11gR2 Oracle RAC database instances. The steps for doing this are the same and repeatable as required.
   a. Present shared LUNs for the desired CFS mount and disk group
   b. Initialize the shared LUN devices from the cluster master
   c. Create a clustered disk group using one of the initialized devices, including all other shared LUN devices you just initialized.
   d. Using all initialized shared devices in the clustered disk group, create a new clustered volume with CFS using VxFS and an 8K block size (or higher if using Solaris/AIX). Use a striped (RAID-0, host-side only) volume with 1 column of I/O per shared initialized device. In the example below we use 6 stripes for 6 LUNs.
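Step d can also be done from the command line instead of through VEA. This sketch only echoes the candidate commands for review; the disk group name, volume name, and size are hypothetical, while the 6 columns and 8K VxFS block size follow the text above:

```shell
# Sketch: build (echo) the commands for a 6-column striped clustered volume
# carrying an 8K-block VxFS file system. Names and size are illustrative.
dg=infacfsdg; vol=infacfsvol; size=200g; ncol=6
echo "vxassist -g $dg make $vol $size layout=stripe ncol=$ncol"
echo "mkfs -t vxfs -o bsize=8192 /dev/vx/rdsk/$dg/$vol"
```

Review the echoed commands against your LUN layout before running them on the cluster master.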
   e. Here you'll get a chance to declare the mount point, FS type and block size. Ensure "Add to file system table" is NOT CHECKED.
   f. Go to Mount File System Details and tell it which nodes in the cluster this CFS mount is for, and declare the name of the VCS service group to which they will belong.
   g. After you OK out of this you'll see the CFS cluster mount right away.
   h. You can do this as many times as necessary until all App Tier and DB Tier CFS mounts and local mounts are created.
1. Create the OSDBA group (typically dba), assuming you haven't added these groups already from the above steps
   a. groupadd -g 1050 dba
2. Create the OSOPER group (typically oper)
   a. groupadd -g 1052 oper
3. Verify the unprivileged user nobody exists on the system. The nobody user must own the external jobs (extjob) executable after the installation.
4. Create the Oracle Inventory group (typically oinstall)
   a. groupadd -g 1051 oinstall
5. Create the Oracle user account with oinstall as the primary group and OSDBA and OSOPER as the secondary groups.
   a. useradd -g oinstall -G dba,oper -c "Oracle Account" -d '/home/oracle' -s '/bin/ksh' -m -u 11000 oracle
6. When beginning the installation, log in as oracle and:
   a. # exec /usr/bin/ssh-agent $SHELL
   b. # /usr/bin/ssh-add
   c. export DISPLAY=hostname:0
   d. vi ~oracle/.ssh/config
      Host *
      ForwardX11 no
7. Insert the following into the .profile for the oracle user:
   if [ -t 0 ]; then
     stty intr ^C
   fi
   umask 022
   # export ORACLE_BASE=/u01/app/oracle   <<< leave commented out for now
   # export ORACLE_HOME=/u01/app/oracle/product/db   <<< leave commented out for now
   Ensure $ORACLE_HOME/bin is before /usr/X11R6/bin in the PATH environment variable.
8. Start an xterm session (ssh -X oracle@hostname)
9. Determine the shell: # echo $SHELL
10. Provision the shared LUNs intended for the OCR (Oracle Cluster Registry) and voting file system locations. 11gR2 no longer supports RAW volumes for OCR or voting. If using normal redundancy, 11gR2 will require 2 OCR and 3 voting file systems. Given this configuration, and placing Oracle RAC under VCS control, it is highly recommended to use external volume replication (in this case CVM replication) and declare external redundancy. In order to provide CVM-replication external redundancy, you'll need to present twice the number of shared LUNs as what Oracle RAC will require during the install. In other words, twice 1 OCR and 1 voting fs location: a total of 4 shared LUNs for 2 replicated cluster file systems. See below:
The above illustration is a little messy because older volumes created for internal (Oracle) replicated OCR and voting locations were never cleaned up. Looking at the two CFS mounts on the right (highlighted), notice the application cssd's dependency on them to start. That is Oracle RAC's main application start program under VCS control, and the two CFS mounts it depends on are the CVM-replicated CFS mounts for OCR and voting.
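The LUN-count arithmetic in step 10 is worth making explicit; a one-liner sketch (the counts here are the 1 OCR and 1 voting location from the text):

```shell
# With external (CVM) redundancy, present twice the LUNs Oracle asks for.
# Here: 1 OCR location + 1 voting location.
ocr=1; voting=1
echo "shared LUNs needed: $(( 2 * (ocr + voting) ))"
```

With normal (internal Oracle) redundancy instead, you would plug in 2 OCR and 3 voting locations and skip the doubling.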
The cssd application is also dependent on Oracle RAC's Private Network, which alternates between all 4 declared eth heartbeat links and can move the priv NIC to any of those working NICs on any working host.
11. The local SAN provisioned for CRS and Oracle binaries looks like the following:
12. The last of the pre-install steps is the SCAN name required by 11gR2 RAC. It requires 3 IP addresses registered in round-robin in the DNS under a cluster name or VIP name. In the HACOE we just assigned 3 IP addresses to the VxSFRAC cluster name psvcshacl03. It worked just fine like that.
13. You'll need the following bit packages:
Here is a good step-by-step link for Oracle RAC Grid and DB installations if you wish to use it:
http://dbastreet.com/blog/?p=388
1. Specific to this HACOE environment combination of 11gR2 Oracle RAC on RHEL 5.5 with VxSFRAC 5.1P3, this additional patch was required: apply Oracle patch 8649805 to integrate Oracle Clusterware with VCS. Follow the instructions in the Oracle patch documentation to apply the patch. This patch will need to be applied immediately after completing the Oracle RAC and Grid installations.
2. One thing to note in the HACOE environment: after installing VxSFCFSHA 5.1RP2 and modifying the cluster to meet SFRAC 5.1P3 requirements, the Oracle RAC and Grid installation seemed to go okay, but even after countless modifications and attempts to start an active instance across multiple nodes in the RAC cluster, RAC would only start any instance actively on one node (and always the same node, pslxhacoe03). Apparently, during the SFRAC P3 patch process the libvcsmm.so did not get copied to $GRID_HOME/lib on nodes other than the primary node in the cluster (pslxhacoe03). Be aware of this. Here is what the error looked like:
   pslxhacoe04.informatica.com:/home/oracle # srvctl start instance -d drep02 -i drep022
   PRCR-1013: Failed to start resource ora.drep02.db
   PRCR-1064: Failed to start resource ora.drep02.db on node pslxhacoe04
   ORA-12547: TNS:lost contact
   ORA-12547: TNS:lost contact
   ORA-12547: TNS:lost contact
   ORA-12547: TNS:lost contact
   CRS-2674: Start of 'ora.drep02.db' on 'pslxhacoe04' failed
   ORA-12547: TNS:lost contact
   ORA-12547: TNS:lost contact
   ORA-12547: TNS:lost contact
You'll want to create TAF services for the Oracle RAC instances (source, target and repositories, as required).
You'll want to create the CFS locations described throughout this document (especially in the App/ETL Tier Example Cluster File Systems section).
Follow the install/configuration guide for the appropriate version of the Informatica HA products (B2B DX, PowerCenter, etc.) you intend to use for the remaining pre-install and install/configure steps.