
UNCLASSIFIED


Software Cost
Estimation Metrics
Manual
Analysis based on data from the DoD Software Resource Data Report

This manual describes a method that takes software cost metrics data and creates cost estimating relationship models. Definitions of the data used in the methodology are discussed. The cost data definitions of other popular software cost estimation models are also discussed. The data collected from DoD's Software Resource Data Report are explained. The steps for preparing the data for analysis are described. The results of the data analysis are presented for different Operating Environments and Productivity Types. The manual wraps up with a look at modern estimating challenges.

UNCLASSIFIED
Distribution Statement A: Approved for public release
Contents
1 Introduction
2 Metrics Definitions
2.1 Size Measures
2.2 Source Lines of Code (SLOC)
2.2.1 SLOC Type Definitions
2.2.2 SLOC Counting Rules
2.2.2.1 Logical Lines
2.2.2.2 Physical Lines
2.2.2.3 Total Lines
2.2.2.4 Non-Commented Source Statements (NCSS)
2.3 Equivalent Size
2.3.1 Definition and Purpose in Estimating
2.3.2 Adapted SLOC Adjustment Factors
2.3.3 Total Equivalent Size
2.3.4 Volatility
2.4 Development Effort
2.4.1 Activities and Lifecycle Phases
2.4.2 Labor Categories
2.4.3 Labor Hours
2.5 Schedule
3 Cost Estimation Models
3.1 Effort Formula
3.2 Cost Models
3.2.1 COCOMO II
3.2.2 SEER-SEM
3.2.3 SLIM
3.2.4 True S
3.3 Model Comparisons
3.3.1 Size Inputs
3.3.1.1 COCOMO II
3.3.1.2 SEER-SEM
3.3.1.3 True S
3.3.1.4 SLIM
3.3.2 Lifecycles, Activities and Cost Categories
4 Software Resource Data Report (SRDR)
4.1 DCARC Repository
4.2 SRDR Reporting Frequency
4.3 SRDR Content
4.3.1 Administrative Information (SRDR Section 3.1)
4.3.2 Product and Development Description (SRDR Section 3.2)
4.3.3 Product Size Reporting (SRDR Section 3.3)
4.3.4 Resource and Schedule Reporting (SRDR Section 3.4)
4.3.5 Product Quality Reporting (SRDR Section 3.5 - Optional)
4.3.6 Data Dictionary
5 Data Assessment and Processing
5.1 Workflow
5.1.1 Gather Collected Data
5.1.2 Inspect each Data Point
5.1.3 Determine Data Quality Levels
5.1.4 Correct Missing or Questionable Data
5.1.5 Normalize Size and Effort Data
5.1.5.1 Converting to Logical SLOC
5.1.5.2 Convert Raw SLOC into Equivalent SLOC
5.1.5.3 Adjust for Missing Effort Data
5.2 Data Segmentation
5.2.1 Operating Environments (OpEnv)
5.2.2 Productivity Types (PT)
5.2.2.1 Finding the Productivity Type
6 Cost Estimating Relationship Analysis
6.1 Application Domain Decomposition
6.2 SRDR Metric Definitions
6.2.1 Software Size
6.2.2 Software Development Activities and Durations
6.3 Cost Estimating Relationships (CER)
6.3.1 Model Selection
6.3.2 Model-Based CERs Coverage
6.3.3 Software CERs by OpEnv
6.3.3.1 Ground Site (GS) Operating Environment
6.3.3.2 Ground Vehicle (GV) Operating Environment
6.3.3.3 Aerial Vehicle (AV) Operating Environment
6.3.3.4 Space Vehicle Unmanned (SVU) Operating Environment
6.3.4 Software CERs by PT Across All Environments
6.4 Productivity Benchmarks
6.4.1 Model Selection and Coverage
6.4.2 Data Transformation
6.4.3 Productivity Benchmark Statistics
6.4.4 Software Productivity Benchmark Results by Operating Environment
6.4.5 Software Productivity Benchmark Results by Productivity Type
6.4.6 Software Productivity Benchmarks by OpEnv and PT
6.5 Future Work
7 Modern Estimation Challenges
7.1 Changing Objectives, Constraints and Priorities
7.1.1 Rapid Change, Emergent Requirements, and Evolutionary Development
7.1.2 Net-centric Systems of Systems (NCSoS)
7.1.3 Model-Driven and Non-Developmental Item (NDI) Intensive Development
7.1.4 Ultrahigh Software Systems Assurance
7.1.5 Legacy Maintenance and Brownfield Development
7.1.6 Agile and Kanban Development
7.1.7 Putting It All Together at the Large Project or Enterprise Level
7.2 Estimation Approaches for Different Processes
8 Conclusions and Next Steps
9 Appendices
9.1 Acronyms
9.2 Automated Code Counting
9.3 Additional Adapted SLOC Adjustment Factors
9.3.1 Examples
9.3.1.1 Example: New Software
9.3.1.2 Example: Modified Software
9.3.1.3 Example: Upgrade to Legacy System
9.4 SRDR Data Report
9.4.1 Proposed Modifications
9.5 MIL-STD-881C WBS Mapping to Productivity Types
9.5.1 Aerial Vehicle Manned (AVM)
9.5.2 Ordnance Vehicle Unmanned (OVU)
9.5.3 Ordnance Vehicle Unmanned (OVU)
9.5.4 Maritime Vessel Manned (MVM)
9.5.5 Space Vehicle Manned/Unmanned (SVM/U) and Ground Site Fixed (GSF)
9.5.6 Ground Vehicle Manned and Unmanned (GVM/U)
9.5.7 Aerial Vehicle Unmanned (AVU) & Ground Site Fixed (GSF)
9.5.8 Maritime Vessel Unmanned (MVU) and Maritime Vessel Manned (MVM)
9.5.9 Ordnance Vehicle Unmanned (OVU)
9.5.10 Ground Site Fixed (GSF)
9.5.11 Applies to ALL Environments
9.6 Productivity (Pr) Benchmark Details
9.6.1 Normality Tests on Productivity Data
9.6.1.1 Operating Environments (all Productivity Types)
9.6.1.2 Productivity Types (all Operating Environments)
9.6.1.3 Operating Environment-Productivity Type Sets
9.6.2 Statistical Summaries on Productivity Data
9.6.2.1 Operating Environments
9.6.2.2 Productivity Types
9.6.2.3 Operating Environment-Productivity Type Sets
9.7 References

Acknowledgements
The research and production of this manual was supported by the Systems Engineering Research Center (SERC) under Contract H9823008D0171 and the US Army Contracting Command, Joint Munitions & Lethality Center, Joint Armaments Center, Picatinny Arsenal, NJ, under RFQ 663074.
Many people worked to make this manual possible. The contributing authors were:
Cheryl Jones, US Army Armament Research Development and Engineering Center (ARDEC)
John McGarry, ARDEC
Joseph Dean, Air Force Cost Analysis Agency (AFCAA)
Wilson Rosa, AFCAA
Ray Madachy, Naval Postgraduate School
Barry Boehm, University of Southern California (USC)
Brad Clark, USC
Thomas Tan, USC


1 Introduction
Estimating the cost to develop a software application is different from almost any other manufacturing process. In other manufacturing disciplines, the product is developed once and replicated many times using physical processes. Replication improves physical process productivity (duplicate machines produce more items faster), reduces learning curve effects on people, and spreads unit cost over many items.

A software application, in contrast, is a single production item: every application is unique. The only physical processes are the documentation of ideas, their translation into computer instructions, and their validation and verification. Production productivity decreases, not increases, when more people are employed to develop the software application. Savings through replication are only realized in the development processes and in the learning curve effects on the management and technical staff. Unit cost is not reduced by creating the software application over and over again.

"There is no good way to perform a software cost-benefit analysis, breakeven analysis, or make-or-buy analysis without some reasonably accurate method of estimating software costs and their sensitivity to various product, project, and environmental factors." - Barry Boehm

This manual helps analysts and decision makers develop accurate, easy and quick software cost estimates for different operating environments such as ground, shipboard, air and space. It was developed by the Air Force Cost Analysis Agency (AFCAA) in conjunction with DoD Service Cost Agencies, and assisted by the University of Southern California and the Naval Postgraduate School. The intent is to improve the quality and consistency of estimating methods across cost agencies and program offices through guidance, standardization, and knowledge sharing.

The manual consists of chapters on metric definitions (e.g., what is meant by equivalent lines of code), examples of metric definitions from commercially available cost models, the data collection and repository form, guidelines for preparing the data for analysis, analysis results, cost estimating relationships found in the data, productivity benchmarks, future cost estimation challenges, and a very large appendix.


2 Metrics Definitions
2.1 Size Measures
This chapter defines the software product size measures used in Cost Estimating Relationship (CER) analysis. The definitions in this chapter should be compared to the commercial cost model definitions in the next chapter. This will help explain why estimates may vary between the analysis results in this manual and other model results.
For estimation and productivity analysis, it is necessary to have consistent measurement definitions. Consistent definitions must be used across models to permit meaningful distinctions and useful insights for project management.

2.2 Source Lines of Code (SLOC)


An accurate size estimate is the most important input to parametric cost models. However, determining size can be challenging. Projects may be composed of new code, code adapted from other sources with or without modifications, and automatically generated or translated code.
The common measure of software size used in this manual is Source Lines of Code (SLOC). SLOC are logical source statements consisting of data declarations and executables. Different types of SLOC counts are discussed later.

2.2.1 SLOC Type Definitions


The core software size type definitions used throughout this manual are summarized in Table 1 below. These definitions apply to size estimation, data collection, and analysis. Some of the size terms have different interpretations in the different cost models, as described in Chapter 3.
Table 1 Software Size Types

New: Original software created for the first time.

Adapted: Pre-existing software that is used as-is (Reused) or changed (Modified).

Reused: Pre-existing software that is not changed, with the adaptation parameter settings:
  Design Modification % (DM) = 0%
  Code Modification % (CM) = 0%

Modified: Pre-existing software that is modified for use by making design, code and/or test changes:
  Design Modification % (DM) >= 0%
  Code Modification % (CM) > 0%

Equivalent: A relative measure of the work done to produce software compared to the code-counted size of the delivered software. It adjusts the size of adapted software relative to developing it all new.


Table 1 Software Size Types (continued)

Generated: Software created with automated source code generators. The code to include for equivalent size consists of the automated tool generated statements.

Converted: Software that is converted between languages using automated translators.

Commercial Off-The-Shelf Software (COTS): Pre-built commercially available software components. The source code is not available to application developers. It is not included for equivalent size. Other unmodified software not included in equivalent size are Government Furnished Software (GFS), libraries, operating systems and utilities.

The size types are applied at the source code file level for the appropriate system of interest. If a component, or module, has just a few lines of code changed, then the entire component is classified as Modified even though most of the lines remain unchanged. The total product size for the component will include all lines.
Open source software is handled, as with other categories of software, depending on the context of its usage. If it is not touched at all by the development team, it can be treated as a form of COTS or reused code. However, when open source is modified, it must be quantified with the adaptation parameters for modified code and be added to the equivalent size. The costs of integrating open source with other software components should be added into overall project costs.

2.2.2 SLOC Counting Rules

2.2.2.1 Logical Lines


The common measure of software size used in this manual and the cost models is Source Lines of Code (SLOC). SLOC are logical source statements consisting of data declarations and executables. Table 2 shows the SLOC definition inclusion rules for what to count. Based on the Software Engineering Institute (SEI) checklist method [Park 1992, Goethert et al. 1992], each checkmark in the "Includes" column identifies a particular statement type or attribute included in the definition, and vice versa for the "Excludes" column.


Table 2 Equivalent SLOC Rules for Development

Statement Type
  Includes: Executable; Nonexecutable declarations; Compiler directives
  Excludes: Comments and blank lines
How Produced
  Includes: Programmed New; Reused; Modified; Generated - 3GL generated statements (development); Converted
  Excludes: Generated - generator statements; 3GL generated statements (maintenance)
Origin
  Includes: New; Adapted - a previous version, build, or release
  Excludes: Unmodified COTS, GFS, library, operating system or utility

Unfortunately, not all SLOC counts are reported using a logical count type. There are other SLOC count types. These are discussed next.

2.2.2.2 Physical Lines


The Physical SLOC count type counts programming language terminators or delimiters. This count type excludes blank lines in a source code file and includes everything else.

2.2.2.3 Total Lines


The Total SLOC count type includes a count of everything, including blank lines.

2.2.2.4 Non-Commented Source Statements (NCSS)


The Non-Commented Source Statement count type only counts lines containing a programming language source statement. No blank lines or comment-only lines are counted.
To prevent confusion in reporting measures of size and in storing results in databases, the type of SLOC count should always be recorded.
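To illustrate how the count types can differ, consider a hypothetical file containing only the small Python fragment below: a comment-only line, two statement lines, and one trailing blank line (illustrative only; real counting tools apply the full SEI checklist, and the fragment is not a complete program):

    # update the running total
    if x > 0:
        count = count + 1

Under these definitions, the Total count is 4 lines, the Physical count is 3 (the blank line is excluded), the NCSS count is 2 (comment-only and blank lines are excluded), and the Logical count is 2 statements (the if statement and the assignment).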


2.3 Equivalent Size


A key element in using software size for effort estimation is the concept of equivalent size. Equivalent size is a quantification of the effort required to use previously existing code along with new code. The challenge is normalizing the effort required to work on previously existing code to the effort required to create new code. For cost estimating relationships, previously existing code does not require the same effort as developing the same amount of new code.
The guidelines in this section will help the estimator determine the total equivalent size. All of the models discussed in Chapter 3 have tools for doing this. However, for non-traditional size categories (e.g., a model may not provide inputs for auto-generated code), this manual will help the estimator calculate equivalent size outside of the tool and incorporate that size as part of the total equivalent size.

2.3.1 Definition and Purpose in Estimating


The size of reused and modified code is adjusted to be its equivalent in new code for use in estimation models. The adjusted code size is called Equivalent Source Lines of Code (ESLOC). The adjustment is based on the additional effort it takes to modify the code for inclusion in the product, taking into account the amount of design, code and testing that was changed; it is described in the next section.
In addition to newly developed software, adapted software that is modified and reused from another source and used in the product under development also contributes to the product's equivalent size. A method is used to make new and adapted code equivalent so they can be rolled up into an aggregate size estimate.
There are also different ways to produce software that complicate deriving ESLOC, including generated and converted software. All of the categories are aggregated for equivalent size. A primary source for the equivalent sizing principles in this section is Chapter 9 of [Stutzke 2005].
For usual Third Generation Language (3GL) software such as C or Java, count the logical 3GL statements. For Model Driven Development (MDD), Very High Level Languages (VHLL), or macro-based development, count the generated statements. A summary of what to include or exclude in ESLOC for estimation purposes is in the table below.


Table 3 Equivalent SLOC Rules for Development

Source                        Includes   Excludes
New                              X
Reused                           X
Modified                         X
Generated
  Generator statements                       X
  3GL generated statements       X
Converted                        X
COTS                                         X
Volatility                       X

2.3.2 Adapted SLOC Adjustment Factors


The Adaptation Adjustment Factor (AAF) is applied to the size of the adapted software to get its equivalent size. The cost models use different weighting percentages, as identified in Chapter 3.
The normal AAF is computed as:
Eq 1    AAF = (0.4 x DM) + (0.3 x CM) + (0.3 x IM)
Where
% Design Modified (DM)
The percentage of the adapted software's design which is modified in order to adapt it to the new objectives and environment. This can be a measure of design elements changed, such as UML descriptions.
% Code Modified (CM)
The percentage of the adapted software's code which is modified in order to adapt it to the new objectives and environment.
Code counting tools can be used to measure CM. See the chapter on the Unified Code Count tool in Appendix 9.2 for its capabilities, sample output and access to it.
% Integration Required (IM)
The percentage of effort required to integrate the adapted software into an overall product and to test the resulting product as compared to the normal amount of integration and test effort for software of comparable size.
Reused software has DM = CM = 0. IM is not applied to the total size of the reused software, but to the size of the other software directly interacting with it. It is frequently estimated using a percentage. Modified software has CM > 0.

2.3.3 Total Equivalent Size
Using the AAF to adjust Adapted Code size, the total equivalent size is:
Eq 2    Total Equivalent Size = New Size + (AAF x Adapted Size)
AAF assumes a linear effort relationship, but there can also be nonlinear effects. Data indicates that the AAF factor tends to underestimate modification effort [Selby 1988], [Boehm et al. 2001], [Stutzke 2005]. Two other factors used to account for these effects are Software Understanding and Programmer Unfamiliarity. These two factors and their usage are discussed in Appendix 9.3.

2.3.4 Volatility
Volatility is requirements evolution and change, but not code thrown out. To account for the added effort, volatility is expressed as an additional percentage of size to obtain the total equivalent size for estimation.
Eq 3    Total Equivalent Size = [New Size + (AAF x Adapted Size)] x (1 + Volatility)
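As an illustrative sketch of how Eq 1 through Eq 3 combine, the following Python fragment uses hypothetical sizes and adjustment percentages (not values from the SRDR data):

    def aaf(dm, cm, im):
        # Adaptation Adjustment Factor, Eq 1 (inputs expressed as fractions)
        return 0.4 * dm + 0.3 * cm + 0.3 * im

    def total_equivalent_size(new_size, adapted_size, dm, cm, im, volatility=0.0):
        # Total equivalent size, Eq 2 and Eq 3
        return (new_size + aaf(dm, cm, im) * adapted_size) * (1.0 + volatility)

    # Hypothetical: 20,000 new SLOC plus 50,000 adapted SLOC with 20% of the design
    # and 30% of the code modified, 40% integration required, and 10% volatility.
    # AAF = 0.4*0.20 + 0.3*0.30 + 0.3*0.40 = 0.29
    # Total = (20,000 + 0.29 * 50,000) * 1.10 = 37,950 ESLOC
    esloc = total_equivalent_size(20000, 50000, dm=0.20, cm=0.30, im=0.40, volatility=0.10)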

2.4 Development Effort


2.4.1 Activities and Lifecycle Phases
Software development involves much more activity than just coding. It includes the work involved in developing requirements, designs and tests. It involves documentation and reviews, configuration management, and quality assurance. It can be done using different lifecycles (see the discussion in Chapter 7.2) and different ways of organizing the work (matrix, product lines, etc.). Using the DoD Software Resource Data Report as the basis, the following work activities/phases are included or excluded for effort.
Table 4 Effort Activities and Phases

Activity
  Includes: Software Requirements Analysis; Software Architecture and Detailed Design; Software Coding and Unit Test; Software Integration and System / Software Integration; Hardware / Software Integration and Test
  Excludes: System Conceptualization; Systems Requirements Development; System Test and Evaluation; Operational Test and Evaluation; Production


Phase
  Includes: Elaboration; Construction
  Excludes: Inception; Transition

Software requirements analysis includes any prototyping activities. The excluded activities are normally supported by software personnel but are considered outside the scope of their responsibility for effort measurement. Systems Requirements Development includes equations engineering (for derived requirements) and allocation to hardware and software.
All of these activities include the effort involved in documenting, reviewing and managing the work in process. They include any prototyping and the conduct of demonstrations during the development.
Transition to operations, and operations and support activities, are not addressed by these analyses for the following reasons:
They are normally accomplished by different organizations or teams.
They are separately funded using different categories of money within the DoD.
The cost data collected by projects therefore does not include them within its scope.
From a lifecycle point of view, the activities comprising the software lifecycle are represented for new, adapted, reused, generated and COTS (Commercial Off-The-Shelf) developments. Reconciling the effort associated with the activities in the Work Breakdown Structure (WBS) across lifecycles is necessary for valid comparisons to be made between results from cost models.

2.4.2 Labor Categories


The labor categories included or excluded from effort measurement are another source of variation. The categories consist of various functional job positions on a project. Most software projects have staff fulfilling the functions of:
Project Managers
Application Analysts
Implementation Designers
Programmers
Testers
Quality Assurance personnel
Configuration Management personnel
Librarians
Database Administrators
Documentation Specialists
Training personnel
Other support staff

Adding to the complexity of measuring what is included in effort data is that staff could be full-time or part-time and charge their hours as direct or indirect labor. The issue of capturing overtime is also a confounding factor in data capture.

2.4.3 Labor Hours


Labor hours (or staff hours) are the best form of measuring software development effort. This measure can be transformed into labor weeks, labor months and labor years. For modeling purposes, when weeks, months or years are required, choose a standard and use it consistently, e.g., 152 labor hours in a labor month.
If data is reported in units other than hours, additional information is required to ensure the data is normalized. Each reporting organization may use a different number of hours in defining a labor week, month or year. For whatever unit is being reported, be sure to also record the organization's definition of hours in a week, month or year. See [Goethert et al. 1992] for a more detailed discussion.
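For example (hypothetical figures), if a contractor reports 30,400 labor hours and defines a labor month as 152 hours, the normalized effort is 30,400 / 152 = 200 labor months; the same hours reported against a 160-hour month would yield 190 labor months, which is why the reporting organization's definition must be recorded.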

2.5 Schedule
Schedule data are the start and end dates for different development phases, such as those discussed in 2.4.1. Another important aspect of schedule data is the entry (or start) and exit (or completion) criteria for each phase. The criteria could vary between projects depending on their definitions. As examples of exit or completion criteria, are the dates reported when:
Internal reviews are complete
Formal review with the customer is complete
Sign-off by the customer
All high-priority action items are closed
All action items are closed
Products of the activity/phase are placed under configuration management
Inspection of the products is signed off by QA
Management sign-off
An in-depth discussion is provided in [Goethert et al. 1992].


3 Cost Estimation Models


In Chapter 2, metric definitions were discussed for software sizing, effort and schedule. Cost estimation models widely used on DoD projects are overviewed in this section. It describes the parametric software cost estimation model formulas (the ones that have been published), size inputs, lifecycle phases, labor categories, and how they relate to the standard metrics definitions. The models include COCOMO II, SEER-SEM, SLIM, and True S. The similarities and differences of the cost model inputs (size, cost factors) and outputs (phases, activities) are identified for comparison.

3.1 Effort Formula


Parametric cost models used in avionics, space, ground, and shipboard platforms by the services are generally based on the common effort formula shown below. Size of the software is provided in a number of available units, cost factors describe the overall environment, and calibrations may take the form of coefficients adjusted for actual data or other types of factors that account for domain-specific attributes [Lum et al. 2001] [Madachy-Boehm 2008]. The total effort is calculated and then decomposed by phases or activities according to different schemes in the models.
Eq 4    Effort = A x Size^B x C
Where
Effort is in person-months
A is a calibrated constant
B is a size scale factor
C is an additional set of factors that influence effort.
The popular parametric cost models in widespread use today allow size to be expressed as lines of code, function points, object-oriented metrics and other measures. Each model has its own respective cost factors and multipliers for the effort adjustment factor (EAF), and each model specifies the B scale factor in slightly different ways (either directly or through other factors). Some models use project type or application domain to improve estimating accuracy. Others use alternative mathematical formulas to compute their estimates. A comparative analysis of the cost models is provided next, including their sizing, WBS phases and activities.

3.2 Cost Models


The models covered include COCOMO II, SEER-SEM, SLIM, and True S. They were selected because they are the most frequently used models for estimating DoD software effort, cost and schedule. A comparison of the COCOMO II, SEER-SEM and True S models for NASA projects is described in [Madachy-Boehm 2008]. A previous study at JPL analyzed the same three models with respect to some of their flight and ground projects [Lum et al. 2001]. The consensus of these studies is that any of the models can be used effectively if it is calibrated properly. Each of the models has strengths and each has weaknesses. For this reason, the studies recommend using at
least two models to estimate costs whenever it is possible, to provide added assurance that you are within an acceptable range of variation.
Other industry cost models such as SLIM, Checkpoint and Estimacs have not been as frequently used for defense applications, as they are more oriented towards business applications per [Madachy-Boehm 2008]. A previous comparative survey of software cost models can also be found in [Boehm et al. 2000b]. COCOMO II is a public domain model that USC continually updates and is implemented in several commercial tools. True S and SEER-SEM are both proprietary commercial tools with unique features that also share some aspects with COCOMO. All three have been extensively used and tailored for flight project domains. SLIM is another parametric tool that uses a different approach to effort and schedule estimation.

3.2.1 COCOMO II
The COCOMO (COnstructive COst MOdel) cost and schedule estimation model was originally published in 1981 [Boehm 1981]. COCOMO II research started in 1994, and the model continues to be updated at USC with the rest of the COCOMO model family. COCOMO II, defined in [Boehm et al. 2000], has three submodels: Applications Composition, Early Design and Post-Architecture. They can be combined in various ways to deal with different software environments. The Application Composition model is used to estimate effort and schedule on projects typically done as rapid application development. The Early Design model involves the exploration of alternative system architectures and concepts of operation. This model is based on function points (or lines of code when available) and a set of five scale factors and seven effort multipliers.
The Post-Architecture model is used when top-level design is complete and detailed information about the project is available and the software architecture is well defined. It uses Source Lines of Code and/or Function Points for the sizing parameter, adjusted for reuse and breakage; a set of 17 effort multipliers; and a set of five scale factors that determine the economies/diseconomies of scale of the software under development. This model is the most frequent mode of estimation and is used throughout this manual. The effort formula is:

PM = A x Size x Emi
B
Eq 5
Where
PMiseffortinpersonmonths
Aisaconstantderivedfromhistoricalprojectdata
SizeisinKSLOC(thousandsourcelinesofcode),orconvertedfromothersizemeasures
Bisanexponentforthediseconomyofscaledependentonadditivescaledrivers
EMiisaneffortmultiplierfortheithcostdriver.TheproductofNmultipliersisanoverall
effortadjustmentfactortothenominaleffort.
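A minimal Python sketch of Eq 5 follows. The constant A = 2.94 and the exponent form B = 0.91 + 0.01 x (sum of the five scale factors) are the published COCOMO II.2000 nominal calibration; in practice these would be replaced with locally calibrated values, and the scale factor and effort multiplier ratings below are purely illustrative:

    import math

    def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
        # Eq 5: PM = A x Size^B x product(EMi), with the exponent built from
        # the five additive scale factors (COCOMO II.2000 form)
        exponent = b + 0.01 * sum(scale_factors)
        eaf = math.prod(effort_multipliers)   # overall effort adjustment factor
        return a * ksloc ** exponent * eaf

    # Illustrative 50 KSLOC project with mid-range scale factors and nominal drivers
    pm = cocomo_ii_effort(50, scale_factors=[3.7, 3.0, 4.2, 3.3, 4.7],
                          effort_multipliers=[1.0] * 17)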
The COCOMO II effort is decomposed by lifecycle phase and activity as detailed in 3.3.2. More information on COCOMO can be found at http://csse.usc.edu/csse/research/COCOMOII/cocomo_main.html. A web-based tool for the model is at http://csse.usc.edu/tools/COCOMO.
3.2.2 SEER-SEM
SEER-SEM is a product offered by Galorath, Inc. This model is based on the original Jensen model [Jensen 1983], and has been on the market for over 15 years. The Jensen model derives from COCOMO and other models in its mathematical formulation. However, its parametric modeling equations are proprietary. Like True S, SEER-SEM estimates can be used as part of a composite modeling system for hardware/software systems. Descriptive material about the model can be found in [Galorath-Evans 2006].
The scope of the model covers all phases of the project lifecycle, from early specification through design, development, delivery and maintenance. It handles a variety of environmental and application configurations, and models different development methods and languages. Development modes covered include object-oriented, reuse, COTS, spiral, waterfall, prototype and incremental development. Languages covered are 3rd and 4th generation languages (C++, FORTRAN, COBOL, Ada, etc.), as well as application generators.
The SEER-SEM cost model allows probability levels of estimates and constraints on staffing, effort or schedule, and it builds estimates upon a knowledge base of existing projects. Estimate outputs include effort, cost, schedule, staffing, and defects. Sensitivity analysis is also provided, as is a risk analysis capability. Many sizing methods are available, including lines of code and function points. For more information, see the Galorath Inc. website at http://www.galorath.com.

3.2.3 SLIM
The SLIM model is based on work done by Putnam [Putnam 1978] using the Norden/Rayleigh manpower distribution. The central part of Putnam's model, called the software equation, is [Putnam-Myers 1992]:
Eq 6    Product = Productivity Parameter x (Effort/B)^(1/3) x Time^(4/3)
Where
Product is the new and modified software lines of code at delivery time
Productivity Parameter is a process productivity factor
Effort is man-years of work by all job classifications
B is a special skills factor that is a function of size
Time is elapsed calendar time in years
The Productivity Parameter, obtained from calibration, has values that fall in 36 quantized steps ranging from 754 to 3,524,578. The special skills factor, B, is a function of size in the range from 18,000 to 100,000 delivered SLOC that increases as the need for integration, testing, quality assurance, documentation and management skills grows.
The software equation can be rearranged to estimate total effort in man-years:
Eq 7    Effort = (Size x B^(1/3) / Productivity Parameter)^3 x (1/Time^4)
Putnam's model is used in the SLIM software tool for cost estimation and manpower scheduling [QSM 2003].
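A Python sketch of the rearranged software equation (Eq 7) is shown below; the size, Productivity Parameter, B value and schedule are hypothetical placeholders rather than calibrated SLIM values:

    def slim_effort_man_years(size_sloc, productivity_parameter, b, time_years):
        # Eq 7: Effort = (Size x B^(1/3) / Productivity Parameter)^3 x (1 / Time^4)
        return (size_sloc * b ** (1.0 / 3.0) / productivity_parameter) ** 3 / time_years ** 4

    # Illustrative only: 100,000 SLOC, Productivity Parameter 10,000, B = 3.5, 2.5-year schedule
    effort = slim_effort_man_years(100000, 10000, 3.5, 2.5)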
3.2.4 True S
True S is the updated product to the PRICE S model offered by PRICE Systems. PRICE S was originally developed at RCA for use internally on software projects such as the Apollo moon program, and was then released in 1977 as a proprietary model. It fits into a composite modeling system and can be used to estimate more than just software costs. Many of the model's central algorithms were published in [Park 1988]. For more details on the model and the modeling system, see the PRICE Systems website at http://www.pricesystems.com.
The PRICE S model consists of three submodels that enable estimating costs and schedules for the development and support of computer systems. The model covers business systems, communications, command and control, avionics, and space systems. PRICE S includes features for reengineering, code generation, spiral development, rapid development, rapid prototyping, object-oriented development, and software productivity measurement. Size inputs include SLOC, function points and/or Predictive Object Points (POPs). The True S system also provides a COCOMO II capability.
The TruePlanning estimation suite from PRICE Systems contains both the True S model and the COCOMO II cost model.

3.3 Model Comparisons


Comparisons between the models for the core metric definitions of size, activities and lifecycle phases follow.

3.3.1 Size Inputs


This section describes the major similarities and differences between the models related to software sizing. All models support size inputs for new and adapted software, and some support automatically translated or generated code. The models differ with respect to their detailed parameters for the developed categories of software, per the table below.
Table 5 Comparison of Model Size Inputs

New Software
  COCOMO II: New Size
  SEER-SEM: New Size
  True S: New Size; New Size Non-executable

Modified Software
  COCOMO II: Adapted Size; % Design Modified (DM); % Code Modified (CM); % Integration Required (IM); Assessment and Assimilation (AA); Software Understanding (SU); Programmer Unfamiliarity (UNFM)
  SEER-SEM: Pre-exists Size (1); Deleted Size; Redesign Required %; Reimplementation Required %; Retest Required %
  True S: Adapted Size; Adapted Size Non-executable; Amount of Modification; % of Design Adapted; % of Code Adapted; % of Test Adapted; Deleted Size; Code Removal Complexity

Reused Software
  COCOMO II: Reused Size; % Integration Required (IM); Assessment and Assimilation (AA)
  SEER-SEM: Pre-exists Size (1)(2); Deleted Size; Redesign Required %; Reimplementation Required %; Retest Required %
  True S: Reused Size (2); Reused Size Non-executable; % of Design Adapted; % of Code Adapted; % of Test Adapted; Deleted Size; Code Removal Complexity

Generated Code
  True S: Auto Generated Code Size; Auto Generated Size Non-executable

Automatically Translated
  COCOMO II: Adapted SLOC; Automatic Translation Productivity; % of Code Reengineered
  True S: Auto Translated Code Size; Auto Translated Size Non-executable

Deleted Code

Volatility
  COCOMO II: Requirements Evolution and Volatility (REVL)
  SEER-SEM: Requirements Volatility (Change) (3)

(1) Specified separately for Designed for Reuse and Not Designed for Reuse
(2) Reused is not consistent with the AFCAA definition if DM or CM > 0
(3) Not a size input but a multiplicative cost driver

The primary unit of software size in the effort models is thousands of source lines of code (KSLOC). KSLOC can be converted from other size measures, and additional size units can be used directly in the models as described next. User-defined proxy sizes can be developed for any of the models.

3.3.1.1 COCOMO II
The COCOMO II size model is based on SLOC or function points converted to SLOC, and can be calibrated and used with other software size units. Examples include use cases, use case points, object points, physical lines, and others. Alternative size measures can be converted to lines of code and used directly in the model, or the model can be independently calibrated to different measures.

3.3.1.2 SEER-SEM
Several sizing units can be used alone or in combination. SEER can use SLOC, function points and custom proxies. COTS elements are sized with Features and Quick Size. SEER allows proxies
as a flexible way to estimate software size. Any countable artifact can be established as a measure. Custom proxies can be used with other size measures in a project. Available predefined proxies that come with SEER include Web Site Development, Mark II Function Points, Function Points (for direct IFPUG standard function points) and Object-Oriented Sizing.
SEER converts all size data into internal size units, also called effort units. Sizing in SEER-SEM can be based on function points, source lines of code, or user-defined metrics. Users can combine or select a single metric for any project element or for the entire project. COTS WBS elements also have specific size inputs defined either by Features, Object Sizing, or Quick Size, which describe the functionality being integrated.
New Lines of Code are the original lines created for the first time from scratch.
Pre-Existing software is that which is modified to fit into a new system. There are two categories of pre-existing software:
Pre-existing, Designed for Reuse
Pre-existing, Not Designed for Reuse
Both categories of pre-existing code then have the following subcategories:
Pre-existing lines of code, which is the number of lines from a previous system
Lines to be Deleted, which are those lines deleted from a previous system
Redesign Required is the percentage of existing code that must be redesigned to meet new system requirements.
Reimplementation Required is the percentage of existing code that must be reimplemented, physically recoded, or re-entered into the system, such as code that will be translated into another language.
Retest Required is the percentage of existing code that must be retested to ensure that it is functioning properly in the new system.
SEER then uses different proportional weights with these parameters in its AAF equation according to:
Eq 8    Pre-existing Effective Size = (0.4 x A) + (0.25 x B) + (0.35 x C)
Where
A is the percentage of code redesign required
B is the percentage of code reimplementation required
C is the percentage of code retest required
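As a hypothetical illustration of Eq 8 (the percentages and the pre-existing size below are made up; the weighted fraction is applied to the pre-existing size in the same way the AAF of Eq 1 is applied to adapted size):

    def seer_effective_fraction(redesign, reimplementation, retest):
        # Eq 8 weighting, with the percentages expressed as fractions
        return 0.4 * redesign + 0.25 * reimplementation + 0.35 * retest

    # 30% redesign, 20% reimplementation, 50% retest of 40,000 pre-existing SLOC:
    # fraction = 0.4*0.30 + 0.25*0.20 + 0.35*0.50 = 0.345
    effective_size = seer_effective_fraction(0.30, 0.20, 0.50) * 40000   # = 13,800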
SEER also has the capability to take alternative size inputs:

Function-Point Based Sizing
External Input (EI)
External Output (EO)
Internal Logical File (ILF)
External Interface Files (EIF)
External Inquiry (EQ)
Internal Functions (IF) - any functions that are neither data nor transactions
Proxies
Web Site Development
Mark II Function Points
Function Points (direct)
Object-Oriented Sizing
COTS Elements
Quick Size
Application Type Parameter
Functionality Required Parameter
Features
Number of Features Used
Unique Functions
Data Tables Referenced
Data Tables Configured

3.3.1.3 True S
The True S software cost model size measures may be expressed in different size units including Source Lines of Code (SLOC), function points, Predictive Object Points (POPs) or Use Case Conversion Points (UCCPs). True S also differentiates executable from non-executable software sizes. Functional Size describes software size in terms of the functional requirements that you expect a Software COTS component to satisfy. The True S software cost model size definitions for all of the size units are listed below.
Adapted Code Size
This describes the amount of existing code that must be changed, deleted, or adapted for use in the new software project. When the value is zero (0.00), the value for New Code Size or Reused Code Size must be greater than zero.
Adapted Size Non-executable
This value represents the percentage of the adapted code size that is non-executable (such as data statements, type declarations, and other non-procedural statements). Typical values for fourth generation languages range from 5.00 percent to 30.00 percent. When a value cannot be obtained by any other means, the suggested nominal value for non-executable code is 15.00 percent.
Amount for Modification
This represents the percent of the component functionality that you plan to modify, if any. The Amount for Modification value (like Glue Code Size) affects the effort calculated for the Software Design, Code and Unit Test, Perform Software Integration and Test, and Perform Software Qualification Test activities.
Auto Gen Size Non-executable
This value represents the percentage of the Auto Generated Code Size that is non-executable (such as data statements, type declarations, and other non-procedural statements). Typical values for fourth generation languages range from 5.00 percent to 30.00 percent. If a value cannot be obtained by any other means, the suggested nominal value for non-executable code is 15.00 percent.
Auto Generated Code Size
This value describes the amount of code generated by an automated design tool for inclusion in this component.
Auto Trans Size Non-executable
This value represents the percentage of the Auto Translated Code Size that is non-executable (such as data statements, type declarations, and other non-procedural statements). Typical values for fourth generation languages range from 5.00 percent to 30.00 percent. If a value cannot be obtained by any other means, the suggested nominal value for non-executable code is 15.00 percent.
Auto Translated Code Size
This value describes the amount of code translated from one programming language to another by using an automated translation tool (for inclusion in this component).
Auto Translation Tool Efficiency
This value represents the percentage of code translation that is actually accomplished by the tool. More efficient auto translation tools require more time to configure the tool to translate. Less efficient tools require more time for code and unit test on code that is not translated.
Code Removal Complexity
This value describes the difficulty of deleting code from the adapted code. Two things need to be considered when deleting code from an application or component: the amount of functionality being removed and how tightly or loosely this functionality is coupled with the rest of the system. Even if a large amount of functionality is being removed, if it is accessed through a single point rather than from many points, the complexity of the integration will be reduced.
Deleted Code Size
This describes the amount of pre-existing code that you plan to remove from the adapted code during the software project. The Deleted Code Size value represents code that is included in Adapted Code Size; therefore, it must be less than, or equal to, the Adapted Code Size value.
Equivalent Source Lines of Code
The ESLOC (Equivalent Source Lines of Code) value describes the magnitude of a selected cost object in Equivalent Source Lines of Code size units. True S does not use ESLOC in routine model calculations, but provides an ESLOC value for any selected cost object. Different organizations use different formulas to calculate ESLOC.
The True S calculation for ESLOC is:
Eq 9    ESLOC = New Code + (0.7 x Adapted Code) + (0.1 x Reused Code)
To calculate ESLOC for a Software COTS, True S first converts Functional Size and Glue Code Size inputs to SLOC using a default set of conversion rates. New Code includes Glue Code Size and Functional Size when the value of Amount for Modification is greater than or equal to 25%. Adapted Code includes Functional Size when the value of Amount for Modification is less than 25% and greater than zero. Reused Code includes Functional Size when the value of Amount for Modification equals zero.
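The following Python sketch shows the Eq 9 roll-up together with the 25 percent Amount-for-Modification rule described above; the conversion of Functional Size and Glue Code Size to SLOC is assumed to have already been done (the tool's default conversion rates are not reproduced here):

    def true_s_esloc(new_code, adapted_code, reused_code):
        # Eq 9 roll-up
        return new_code + 0.7 * adapted_code + 0.1 * reused_code

    def cots_esloc(functional_size_sloc, glue_code_sloc, amount_for_modification):
        # Assign COTS Functional Size to New / Adapted / Reused per the 25% rule
        new, adapted, reused = glue_code_sloc, 0.0, 0.0
        if amount_for_modification >= 0.25:
            new += functional_size_sloc
        elif amount_for_modification > 0.0:
            adapted = functional_size_sloc
        else:
            reused = functional_size_sloc
        return true_s_esloc(new, adapted, reused)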
Functional Size
This value describes software size in terms of the functional requirements that you expect a Software COTS component to satisfy. When you select Functional Size as the unit of measure (Size Units value) to describe a Software COTS component, the Functional Size value represents a conceptual-level size that is based on the functional categories of the software (such as Mathematical, Data Processing, or Operating System). A measure of Functional Size can also be specified using Source Lines of Code, Function Points, Predictive Object Points or Use Case Conversion Points if one of these is the Size Unit selected.
Glue Code Size
This value represents the amount of Glue Code that will be written. Glue Code holds the system together, provides interfaces between Software COTS components, interprets return codes, and translates data into the proper format. Also, Glue Code may be required to compensate for inadequacies or errors in the COTS component selected to deliver desired functionality.
New Code Size
This value describes the amount of entirely new code that does not reuse any design, code, or test artifacts. When the value is zero (0.00), the value must be greater than zero for Reused Code Size or Adapted Code Size.
New Size Non-executable
This value describes the percentage of the New Code Size that is non-executable (such as data statements, type declarations, and other non-procedural statements). Typical values for fourth generation languages range from 5.00 percent to 30.00 percent. If a value cannot be obtained by any other means, the suggested nominal value for non-executable code is 15.00 percent.
Percent of Code Adapted
This represents the percentage of the adapted code that must change to enable the adapted code to function and meet the software project requirements.
Percent of Design Adapted
This represents the percentage of the existing (adapted code) design that must change to enable the adapted code to function and meet the software project requirements. This value describes the planned redesign of adapted code. Redesign includes architectural design changes, detailed design changes, and any necessary reverse engineering.
Percent of Test Adapted
This represents the percentage of the adapted code test artifacts that must change. Test plans and other artifacts must change to ensure that software that contains adapted code meets the performance specifications of the Software Component cost object.
Reused Code Size
This value describes the amount of pre-existing, functional code that requires no design or implementation changes to function in the new software project. When the value is zero (0.00), the value must be greater than zero for New Code Size or Adapted Code Size.
Reused Size Non-executable
This value represents the percentage of the Reused Code Size that is non-executable (such as data statements, type declarations, and other non-procedural statements). Typical values for fourth generation languages range from 5.00 percent to 30.00 percent. If a value cannot be obtained by any other means, the suggested nominal value for non-executable code is 15.00 percent.

3.3.1.4 SLIM
SLIM uses effective system size composed of new and modified code. Deleted code is not considered in the model. If there is reused code, then the Productivity Index (PI) factor may be adjusted to add in time and effort for regression testing and integration of the reused code.
SLIM provides different sizing techniques including:
Sizing by history
Total system mapping
Sizing by decomposition
Sizing by module
Function point sizing
Alternative sizes to SLOC, such as use cases or requirements, can be used in Total System Mapping. The user defines the method and the quantitative mapping factor.

3.3.2 Lifecycles, Activities and Cost Categories


COCOMO II allows effort and schedule to be allocated to either a waterfall or MBASE lifecycle. MBASE is a modern iterative and incremental lifecycle model like the Rational Unified Process (RUP) or the Incremental Commitment Model (ICM). The phases include: (1) Inception, (2) Elaboration, (3) Construction, and (4) Transition.
True S uses the nine DoD-STD-2167A development phases: (1) Concept, (2) System Requirements, (3) Software Requirements, (4) Preliminary Design, (5) Detailed Design, (6) Code / Unit Test, (7) Integration & Test, (8) Hardware / Software Integration, and (9) Field Test.
In SEER-SEM the standard lifecycle activities include: (1) System Concept, (2) System Requirements Design, (3) Software Requirements Analysis, (4) Preliminary Design, (5) Detailed Design, (6) Code and Unit Test, (7) Component Integration and Testing, (8) Program Test, (9) Systems Integration through OT&E & Installation, and (10) Operation Support. Activities may be defined differently across development organizations and mapped to SEER-SEM's designations.
In SLIM the lifecycle maps to four general phases of software development. The default phases are: (1) Concept Definition, (2) Requirements and Design, (3) Construct and Test, and (4) Perfective Maintenance. The phase names, activity descriptions and deliverables can be changed in SLIM. The main build phase initially computed by SLIM includes the detailed design through system test phases, but the model has the option to include the requirements and design phase, including software requirements and preliminary design, and a feasibility study phase to encompass system requirements and design.
The phases covered in the models are summarized in Table 6.


Table 6 Lifecycle Phase Coverage

COCOMO II:
  Inception
  Elaboration
  Construction
  Transition

SEER-SEM:
  System Concept
  System Requirements Design
  Software Requirements Analysis
  Preliminary Design
  Detailed Design
  Code / Unit Test
  Component Integration and Testing
  Program Test
  System Integration Through OT&E and Installation
  Operation Support

True S:
  Concept
  System Requirements
  Software Requirements
  Preliminary Design
  Detailed Design
  Code / Unit Test
  Integration and Test
  Hardware / Software Integration
  Field Test
  System Integration and Test
  Maintenance

SLIM:
  Concept Definition
  Requirements and Design
  Construction and Test
  Perfective Maintenance

The work activities estimated in the respective tools are in Table 7.

Table 7 Work Activities Coverage

Model     | Activities
COCOMO II | Management; Environment / CM; Requirements; Design; Implementation; Assessment; Deployment
SEER-SEM  | Management; Software Requirements; Design; Code; Data Programming; Test; CM; QA
True S    | Design; Programming; Data; SEPGM; QA; CFM
SLIM      | WBS sub-elements of the phases: Concept Definition; Requirements and Design; Construct and Test; Perfective Maintenance

The categories of labor covered in the estimation models and tools are listed in Table 8.

Table 8 Labor Categories Covered

Model     | Categories
COCOMO II | Software Engineering Labor*
SEER-SEM  | Software Engineering Labor*; Purchases
True S    | Software Engineering Labor*; Purchased Good; Purchased Service; Other Cost
SLIM      | Software Engineering Labor

* Project Management (including contracts), Analysts, Designers, Programmers, Testers, CM, QA, and Documentation


4 Software Resource Data Report (SRDR)

The Software Resources Data Report (SRDR) is used to obtain both the estimated and actual characteristics of new software developments or upgrades. Both the Government program office and, after contract award, the software contractor submit this report. For contractors, this report constitutes a contract data deliverable that formalizes the reporting of software metric and resource data. All contractors developing or producing any software development element with a projected software effort greater than $20M (then-year dollars) on major contracts and subcontracts within ACAT I and ACAT IA programs, regardless of contract type, must submit SRDRs. The data collection and reporting applies to developments and upgrades whether performed under a commercial contract or internally by a government Central Design Activity (CDA) under the terms of a Memorandum of Understanding (MOU).

4.1 DCARC Repository

The Defense Cost and Resource Center (DCARC), part of OSD Cost Assessment and Program Evaluation (CAPE), was established in 1998 to assist in the re-engineering of the CSDR process. Its primary role is to collect current and historical Major Defense Acquisition Program (MDAP) cost and software resource data in a joint-service environment and make those data available to authorized Government analysts for estimating the cost of ongoing and future government programs, particularly DoD weapon systems.
The DCARC website [1] is the authoritative source of information associated with the Cost and Software Data Reporting (CSDR) system, including but not limited to policy and guidance, training materials, and data. CSDRs are DoD's only systematic mechanism for capturing completed development and production contract actuals that provide the visibility and consistency needed to develop credible cost estimates. Since credible cost estimates enable realistic budgets, executable contracts and program stability, CSDRs are an invaluable resource to the DoD cost analysis community and the entire DoD acquisition community.
The DCARC's Defense Automated Cost Information Management System (DACIMS) is the database for access to current and historical cost and software resource data needed to develop independent, substantiated estimates. DACIMS is a secure web site that allows DoD government cost estimators and analysts to browse through almost 30,000 CCDRs, SRDRs and associated documents via the Internet. It is the largest repository of DoD cost information.

[1] http://dcarc.cape.osd.mil/CSDR/CSDROverview.aspx


4.2 SRDR Reporting Frequency

The SRDR Final Developer Report contains measurement data as described in the contractor's SRDR Data Dictionary. The data reflects the scope relevant to the reporting event, Table 9. Both estimates (DD Form 2630-1, -2) and actual results (DD Form 2630-3) of software (SW) development efforts are reported for new or upgrade projects.
- SRDR submissions for the contract complete event shall reflect the entire software development project.
- When the development project is divided into multiple product builds, each representing production-level software delivered to the government, the submission should reflect each product build.
- SRDR submissions for completion of a product build shall reflect the size, effort, and schedule of that product build.

Table 9 SRDR Reporting Events

Event                                  | Report Due | Who Provides              | Scope of Report
Pre-Contract (180 days prior to award) | Initial    | Government Program Office | Estimates of the entire completed project. Measures should reflect cumulative grand totals.
Contract award                         | Initial    | Contractor                | Estimates of the entire project at the level of detail agreed upon. Measures should reflect cumulative grand totals.
At start of each build                 | Initial    | Contractor                | Estimates for completion for the build only.
Estimates corrections                  | Initial    | Contractor                | Corrections to the submitted estimates.
At end of each build                   | Final      | Contractor                | Actuals for the build only.
Contract completion                    | Final      | Contractor                | Actuals for the entire project. Measures should reflect cumulative grand totals.
Actuals corrections                    | Final      | Contractor                | Corrections to the submitted actuals.
It is important to understand the submission criteria. SRDR records are a mixture of complete contracts and individual builds within a contract, and there are initial and final reports along with corrections. Mixing contract data and build data, mixing initial and final results, or not using the latest corrected version will produce inconclusive, if not incorrect, results.
The report consists of two pages, see Chapter 9.4. The fields in each page are listed below.


4.3 SRDR Content

4.3.1 Administrative Information (SRDR Section 3.1)
- Security Classification
- Major Program
- Program Name
- Phase / Milestone
- Reporting Organization Type (Prime, Subcontractor, Government)
- Name / Address
- Reporting Organization
- Division
- Approved Plan Number
- Customer (Direct-Reporting Subcontractor Use Only)
- Contract Type
- WBS Element Code
- WBS Reporting Element
- Type Action
  - Contract No
  - Latest Modification
  - Solicitation No
  - Common Reference Name
  - Task Order / Delivery Order / Lot No
- Period of Performance
  - Start Date (YYYYMMDD)
  - End Date (YYYYMMDD)
- Appropriation (RDT&E, Procurement, O&M)
- Submission Number
- Resubmission Number
- Report As Of (YYYYMMDD)
- Date Prepared (YYYYMMDD)
- Point of Contact
  - Name (Last, First, Middle Initial)
  - Department
  - Telephone Number (include Area Code)
  - Email
- Development Organization
- Software Process Maturity
- Lead Evaluator
- Certification Date
- Evaluator Affiliation
- Precedents (List up to five similar systems by the same organization or team.)
- SRDR Data Dictionary Filename
- Comments (on Report Context and Development Organization)

4.3.2 Product and Development Description (SRDR Section 3.2)

- Functional Description. A brief description of its function.
- Software Development Characterization
  - Application Type
  - Primary and Secondary Programming Language
  - Percent of Overall Product Size. Approximate percentage (up to 100%) of the product size that is of this application type.
  - Actual Development Process. Enter the name of the development process followed for the development of the system.
  - Software Development Method(s). Identify the software development method or methods used to design and develop the software product.
  - Upgrade or New Development. Indicate whether the primary development was new software or an upgrade.
  - Software Reuse. Identify by name and briefly describe software products reused from prior development efforts (e.g. source code, software designs, requirements documentation, etc.).
- COTS / GOTS Applications Used
  - Name. List the names of the applications or products that constitute part of the final delivered product, whether they are COTS, GOTS, or open source products.
  - Integration Effort (Optional). If requested by the CWIPT, the SRDR report shall contain the actual effort required to integrate each COTS / GOTS application identified in Section 3.2.4.1.
- Staffing
  - Peak Staff. The actual peak team size, measured in full-time equivalent (FTE) staff.
  - Peak Staff Date. Enter the date when the actual peak staffing occurred.
  - Hours per Staff-Month. Enter the number of direct labor hours per staff-month.
- Personnel Experience in Domain. Stratify the project staff domain experience by experience level and specify the percentage of project staff at each experience level identified. Sample Format 3 identifies five levels:
  - Very Highly Experienced (12 or more years)
  - Highly Experienced (6 to 12 years)
  - Nominally Experienced (3 to 6 years)
  - Low Experience (1 to 3 years)
  - Inexperienced / Entry Level (less than a year)

4.3.3 Product Size Reporting (SRDR Section 3.3)
- Number of Software Requirements. Provide the actual number of software requirements.
  - Total Requirements. Enter the actual number of total requirements satisfied by the developed software product at the completion of the increment or project.
  - New Requirements. Of the total actual number of requirements reported, identify how many are new requirements.
- Number of External Interface Requirements. Provide the number of external interface requirements, as specified below, not under project control that the developed system satisfies.
  - Total External Interface Requirements. Enter the actual number of total external interface requirements satisfied by the developed software product at the completion of the increment or project.
  - New External Interface Requirements. Of the total number of external interface requirements reported, identify how many are new external interface requirements.
- Requirements Volatility. Indicate the amount of requirements volatility encountered during development as a percentage of requirements that changed since the Software Requirements Review.
- Software Size
  - Delivered Size. Capture the delivered size of the product developed, not including any code that was needed to assist development but was not delivered (such as temporary stubs, test scaffolding, or debug statements). Additionally, the code shall be partitioned (exhaustive with no overlaps) into appropriate development categories. A common set of software development categories is new, reused with modification, reused without modification, carryover code, deleted code, and auto-generated code.
  - Reused Code With Modification. When code is included that was reused with modification, provide an assessment of the amount of redesign, recode, and retest required to implement the modified or reused code.
  - Reused Code Without Modification. Code reused without modification is code that has no design or code modifications. However, there may be an amount of retest required. The percentage of retest should be reported with the retest factors described above.
  - Carryover Code. The report shall distinguish between code developed in previous increments that is carried forward into the current increment and code added as part of the effort on the current increment.
  - Deleted Code. Include the amount of delivered code that was created and subsequently deleted from the final delivered code.
  - Auto-Generated Code. If the developed software contains auto-generated source code, report an auto-generated code sizing partition as part of the set of development categories.
  - Subcontractor-Developed Code.

- Counting Convention. Identify the counting convention used to count software size.
- Size Reporting by Programming Language (Optional).
- Standardized Code Counting (Optional). If requested, the contractor shall use a publicly available and documented code counting tool, such as the University of Southern California Code Count tool, to obtain a set of standardized code counts that reflect logical size. These results shall be used to report software sizing.

4.3.4 Resource and Schedule Reporting (SRDR Section 3.4)

The Final Developer Report shall contain actual schedules and actual total effort for each software development activity.

- Effort. The units of measure for software development effort shall be staff-hours. Effort shall be partitioned into discrete software development activities.
- WBS Mapping.
- Subcontractor Development Effort. The effort data in the SRDR report shall be separated into a minimum of two discrete categories and reported separately: Prime Contractor Only and All Other Subcontractors.
- Schedule. For each software development activity reported, provide the actual start and end dates for that activity.

4.3.5 Product Quality Reporting (SRDR Section 3.5 - Optional)

Quality should be quantified operationally (through failure rate and defect discovery rate). However, other methods may be used if appropriately explained in the associated SRDR Data Dictionary.
- Number of Defects Discovered. Report an estimated number of defects discovered during integration and qualification testing. If available, list the expected defect discovery counts by priority, e.g. 1, 2, 3, 4, 5. Provide a description of the priority levels if used.
- Number of Defects Removed. Report an estimated number of defects removed during integration and qualification testing. If available, list the defect removal counts by priority.

4.3.6 Data Dictionary
The SRDR Data Dictionary contains, at a minimum, the following information in addition to the specific requirements identified in Sections 3.1 through 3.5:
- Experience Levels. Provide the contractor's specific definition (i.e., the number of years of experience) for personnel experience levels reported in the SRDR report.
- Software Size Definitions. Provide the contractor's specific internal rules used to count software code size.
- Software Size Categories. For each software size category identified (i.e., New, Modified, Unmodified, etc.), provide the contractor's specific rules and/or tools used for classifying code into each category.
- Peak Staffing. Provide a definition that describes what activities were included in peak staffing.
- Requirements Count (Internal). Provide the contractor's specific rules and/or tools used to count requirements.
- Requirements Count (External). Provide the contractor's specific rules and/or tools used to count external interface requirements.
- Requirements Volatility. Provide the contractor's internal definitions used for classifying requirements volatility.
- Software Development Activities. Provide the contractor's internal definitions of labor categories and activities included in the SRDR report's software activity.
- Product Quality Reporting. Provide the contractor's internal definitions for product quality metrics being reported and specific rules and/or tools used to count the metrics.


5 Data Assessment and Processing

This chapter discusses transforming the SRDR data into useful information for creating Cost Estimating Relationships (CERs) and for providing productivity benchmarks for use in management oversight.
The Software Resources Data Report (SRDR) has data quality issues that are not uncommon in other datasets. These issues present many challenges when attempting to create CERs and productivity benchmarks. The list below shows the challenges encountered when working with this data:
- Inadequate information on modified code (only size provided)
- Inadequate information on size change or growth
- Size measured inconsistently
- Inadequate information on average staffing or peak staffing
- Inadequate information on personnel experience
- Inaccurate effort data in multi-build components
- Missing effort data
- Replicated duration (start and end dates) across components
- Inadequate information on schedule compression
- Missing schedule data
- No quality data
The remedy for some of these challenges is to normalize the data to the definitions discussed in Chapter 2. Other techniques are required to fill in missing data, either by consulting other sources or by using statistical techniques to fill in missing values in a table. What is needed is a process to make the data usable.

5.1 Workflow
The data assessment and processing workflow has six steps. This workflow was used in the analysis of the SRDR data. Each of these steps is described in detail below.
1. Gather the data that has been collected.
2. Review and inspect each data point.
3. Determine a quantitative quality level based on the data inspection.
4. Correct missing or questionable data. Several things can be done about this; data that cannot be repaired is excluded from the analysis.
5. Normalize the data to a common unit of measure or scope of what is covered by the data.
6. Finally, segment the data by Operating Environment and Software Domain.

5.1.1 Gather Collected Data

Historical data is stored in a variety of formats. Often there is data in a record that is not relevant for cost estimation analysis. All too often, there is not enough data to support a thorough analysis.

The data has to be transformed from different formats into a common data format that supports the analysis objectives. A common data format for cost estimation analysis would differ from one for analysis of requirements growth, defect discovery / removal, or process improvement return on investment, to name a few.
The common data format for cost estimation analysis requires detail information on:
- Amount of workload (expressed as a functional measure or a product measure)
- Development and support effort
- Project or build duration
Additional contextual data is needed to provide information on what the data represents, e.g.,
- Organization that developed the software
- What the application does
- Where the software fits into the system (is it all of the software, a build, a configuration item, or a small software unit)
The common data format used in analyzing SRDR data included additional information beyond what was found in the SRDR report.

5.1.2 Inspect each Data Point

As the gathered data is being transformed into the common data format, inspect the data for completeness, integrity, and reasonableness. The first activity is to examine the project context information.
Project Context
- Are all of the data available to fill the common data format fields?
- How would this software component be characterized?
- What does this component do?
- Were there any extenuating circumstances concerning development, e.g. management change, large requirements change, stop / restart of work?
- Is the Data Dictionary for that record available as a standalone file?
- Is there any additional information that can be consulted about the data during analysis, such as:
  - Acquisition Strategy
  - Acquisition Support Plan (ASP)
  - Contract Plan
  - Cost Analysis Requirements Document (CARD)
  - Capability Description Document (CDD)
  - Software Requirements Specification (SRS)
  - Work Breakdown Structure (WBS)
  - Earned Value Management System data (EVMS)
Next, the size, effort, schedule and productivity data are examined.

Size Data
- Does the size data look sound?
- Is the size part of a multi-build release?
- Was all code auto-generated?
- Was code rewritten after auto-generation?
- Was a portion of a legacy system included in the sizing data?
- How much software was adapted (modified)?
- How much software was reused (no changes)?
- Is there effort and schedule data for each software activity?
- Is there repeating size data?
Effort Data
- What labor was included in the reported hours?
  - Engineering labor
  - Management labor
  - Support labor: CM, QA, Process Improvement, Safety, Security, Development Environment support
- What labor was reported in the Other activity?
- Was Requirements effort reported for all builds?
- Were there continuous integration activities across all builds?
Schedule Data
- Was schedule compression mentioned on the project?
- Were there parallel multiple builds (same start and end dates)?
Productivity Screening
- Is a quick productivity check reasonably close to software with similar functionality?
- Is this record an outlier in a scatter plot with other similar data?

5.1.3 Determine Data Quality Levels
From the inspection process, assign the record a data quality rating. The criteria in Table 10 can be used to determine rating values.

Table 10 Data Quality Rating Scale

Attribute | Value | Condition
Size | 1.0 | if size data present
     | 0   | if no size data
Size Count Type (providing size data is present) | 1.0 | if size is Logical SLOC
     | 0.7 | if size is Non-Commented Source Statements
     | 0.5 | if size is Physical Lines (Comment and Source Statements)
     | 0.4 | if size is Total Lines (all lines in file: blank, comment, source)
     | 0   | if no size data
ESLOC Parameters | 1.0 | if modification parameters provided for Auto-gen, Modified & Reused
     | 0.5 | if New SLOC only and no size data for Auto-gen, Modified or Reused
     | 0   | if no modification parameters provided for either Modified, Auto-gen, or Reused SLOC counts
CSCI-level Data | 1.0 | if Total Size is 5,000 < Size < 250,000
     | 0   | if Total Size < 5,000 or Size > 250,000
Effort | 1.0 | if effort reported for all phases
     | 0.5 | if effort is reported as a total
     | 0   | if effort is missing for a phase
Schedule | 1.0 | if duration reported for all phases
     | 0.5 | if duration is reported as a total
     | 0   | if duration is missing for a phase
Productivity | 1.0 | if record is in the expected value range
     | 0.5 | if record is within 1 standard deviation from the mean
     | 0   | if record is a clear outlier

As each record is rated by the criteria above, an overall quality level is assigned by:

Eq 10:  Quality Level = (Size + Size Count Type + ESLOC Parameters + CSCI-level + Effort + Schedule + Productivity) / 7

The quality level is a quick indicator of the degree of issues found in the record. As the record is corrected through supplemental information, the rating is revised. Because the range of the quality level scale is between 0 and 1.0, it could be used as a weight during analysis.
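Because Eq 10 is a simple average of the seven Table 10 ratings, it can be computed directly. The following is a minimal sketch (not part of the manual's tooling) applied to one hypothetical record; the rating values shown are illustrative.

# Minimal sketch, assuming a record rated against Table 10 (illustrative values only).
ratings = {
    "size": 1.0,              # size data present
    "size_count_type": 0.7,   # size is Non-Commented Source Statements
    "esloc_parameters": 0.5,  # New SLOC only, no adaptation parameters
    "csci_level": 1.0,        # 5,000 < total size < 250,000
    "effort": 0.5,            # effort reported only as a total
    "schedule": 1.0,          # duration reported for all phases
    "productivity": 1.0,      # within the expected value range
}

quality_level = sum(ratings.values()) / len(ratings)  # Eq 10: average of the 7 ratings
print(round(quality_level, 2))  # 0.81 for this illustrative record

A record scoring near 1.0 needs little rework; a low score points to the attributes that require the corrective actions described in the next section.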

5.1.4 Correct Missing or Questionable Data
The quality level makes clear which records need additional work. There are several approaches available for resolving missing or questionable data. These are listed in a recommended order:
1. Consult the accompanying Data Dictionary discussed in Chapter 4.3.6.
2. Consult any supplemental information on the project that is available, e.g., ASP, CARD, CDD, EVMS, SRS, WBS, etc.
3. Schedule follow-up meetings with the SRDR data contributor. Data quality issues that were fixed in the past by the SRDR contributor include:
   - Revised missing size, effort and duration data
   - Obtained Adaptation Adjustment Factor (AAF) parameters
   - Confirmed productivity type and environment
   - Confirmed CSCI level of reporting
   - Asked about problems with high / low or long / short size, effort and duration data
As a result of inspecting the data and attempting to correct the issues found, no bad data or outliers are excluded from the analysis on arbitrary grounds. However, data issues that cannot be resolved are excluded from analysis.

5.1.5 Normalize Size and Effort Data

Normalizing data means putting each type of data on a common basis. For example, if SLOC was measured by different criteria, all SLOC counts are converted into a common count method. If effort data covers different lifecycle phases, all effort data is converted to cover the same phases. Normalization reduces noise in the data; left uncorrected, that noise poses a significant threat to statistical validity.

5.1.5.1 Converting to Logical SLOC

In the SRDR data, the SLOC were counted using different methods:
- Total Count: a line in a file, e.g. carriage returns, including blanks and comment lines
- Non-Commented Source Statements (NCSS) Count: a line in a file that is not a blank or comment line
- Logical Count: as defined earlier in Chapter 2.2.2.1
For analysis, the definition of a source line of code needs to be as consistent as possible to eliminate noise in the data. A logical source line of code has been selected as the baseline SLOC definition.

If a source line of code count was defined as either Total or NCSS, these counts were converted to a Logical SLOC count. An experiment was run using the UCC tool, described in Appendix 9.2, on public domain software applications and additional contributions from USC-CSSE Affiliates. Total, NCSS and Logical counts were taken from the program files. Six programming languages were sampled:
- Ada
- C#
- C/C++
- Java
- PERL
- PHP
The total number of data points was 40. The results of this experiment are described next.
NCSS Line Count Conversion to Logical
The size counts for NCSS and Logical were analyzed for their relationship. Two analyses were conducted, one for all of the size data and another for the lower 80% of the size data. The two relationships are expressed as follows (the intercept was constrained to zero [2]):

Eq 11:  All Sizes:  Logical SLOC count = 0.44 x NCSS count
Eq 12:  Lower 80%:  Logical SLOC count = 0.66 x NCSS count

The statistics for these relationships are in Table 11 and a scatter plot in Figure 1.

Table 11 NCSS-Logical Relationship Statistics

Statistics                    | All Sizes   | Lower 80%
Coefficient                   | 0.44        | 0.66
Total number of observations  | 40          | 32
Min - Max Range (KSLOC)       | 2.3 - 1,690 | 2.3 - 149
Adjusted R2                   | 0.86        | 0.95
Standard Error                | 0.03        | 0.03
Lower 95% Confidence Interval | 0.38        | 0.60
Upper 95% Confidence Interval | 0.55        | 0.71
T-Statistic                   | 15.35       | 23.73

[2] When modeling this relationship, an overhead amount (as represented by an intercept value) does not make sense, i.e., there is no overhead if there are zero lines to be converted. Incidentally, when the regression was run on all sizes without the zero constraint, the constant had a T-statistic of 1.90 and a P-level of 0.70.


Figure 1 NCSS to Logical SLOC Plot (left panel: all counts; right panel: lower 80% of counts)

Total Line Count Conversion to Logical

As with the NCSS counts, the Total and Logical counts were analyzed for their relationship. Two analyses were conducted, one for all of the size data and another for the lower 80% of the size data. The two relationships are expressed as follows (the intercept was constrained to zero):

Eq 13:  All Sizes:  Logical SLOC count = 0.29 x Total count
Eq 14:  Lower 80%:  Logical SLOC count = 0.34 x Total count

The statistics for these relationships are in Table 12 and a scatter plot in Figure 2.

Table 12 Total-Logical Relationship Statistics

Statistics                    | All Sizes   | Lower 80%
Coefficient                   | 0.29        | 0.34
Total number of observations  | 40          | 32
Min - Max Range (KSLOC)       | 3.5 - 2,249 | 3.5 - 265
Adjusted R2                   | 0.95        | 0.85
Standard Error                | 0.01        | 0.03
Lower 90% Confidence Interval | 0.27        | 0.29
Upper 90% Confidence Interval | 0.31        | 0.39
T-Statistic                   | 27.12       | 13.00


Figure 2 Total to Logical SLOC Plot (left panel: all counts; right panel: lower 80% of counts)

Conclusion
The 80% solution was used in this analysis. The 80% conversion factors appear to be more reasonable than the 100% factors. A future version of this manual will explore the relationships for NCSS and Total counts to Logical counts for each of the six programming languages.
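As an illustration of how these conversion factors might be applied during normalization, the following minimal sketch (assuming the lower-80% factors of Eq 12 and Eq 14) converts raw NCSS or Total counts to approximate logical SLOC. The function and dictionary names are illustrative, not part of any SRDR tool.

# Minimal sketch: normalize raw counts to logical SLOC using the lower-80% factors.
CONVERSION_TO_LOGICAL = {
    "logical": 1.00,  # already the baseline definition
    "ncss": 0.66,     # Eq 12, lower 80% of the sample
    "total": 0.34,    # Eq 14, lower 80% of the sample
}

def to_logical_sloc(count: float, count_type: str) -> float:
    """Convert a raw SLOC count to an approximate logical SLOC count."""
    return count * CONVERSION_TO_LOGICAL[count_type.lower()]

print(to_logical_sloc(120_000, "NCSS"))   # 79,200 approximate logical SLOC
print(to_logical_sloc(120_000, "Total"))  # 40,800 approximate logical SLOC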

5.1.5.2 Convert Raw SLOC into Equivalent SLOC

Equivalent Size is a method used to make new and adapted code equivalent so they can be rolled up into an aggregate size estimate (discussed in Chapter 2.3.2). This adjustment is called Equivalent Source Lines of Code (ESLOC):

Eq 15:  ESLOC = New SLOC + (AAF_M x Modified SLOC) + (AAF_R x Reused SLOC) + (AAF_AG x Auto-Generated SLOC)

Where: AAF_i = (0.4 x DM) + (0.3 x CM) + (0.3 x IM)

The SRDR data did not include the parameters for DM, CM and IM. Independent data collection of similar data was conducted. Based on the data collected and the grouping of the data by Operating Environment (Chapter 5.2.1) and Productivity Type (Chapter 5.2.2), guidelines for filling in missing data were derived from the data that had the adaptation parameters, Table 13.
As shown in the equation above, there are four types of code: New, Auto-Generated, Reused, and Modified (see Chapter 2.2.1). The DM, CM and IM parameters are not required for each type:
- New code does not require any adaptation parameters; nothing has been modified.

- Auto-Generated code does not require the DM or CM adaptation parameters. However, it does require testing (IM). If Auto-Generated code does require modification, then it becomes Modified code and the adaptation factors for Modified code apply.
- Reused code does not require the DM or CM adaptation parameters either. It also requires testing (IM). If Reused code does require modification, then it becomes Modified code and the adaptation factors for Modified code apply.
- Modified code requires the three parameters, DM, CM and IM, representing modifications to the modified code's design, code, and integration testing.
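The sketch below illustrates Eq 15 and the AAF computation for the four code types. The DM, CM and IM values in the example echo the SCP rows of Table 13 and are illustrative defaults only; project-specific values should be used when available.

# Minimal sketch of the Eq 15 equivalent-size computation (illustrative parameter values).
def aaf(dm: float, cm: float, im: float) -> float:
    """Adaptation Adjustment Factor: AAF = 0.4*DM + 0.3*CM + 0.3*IM."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

def esloc(new, modified, reused, auto_gen, mod_factors, reuse_im, autogen_im):
    """Eq 15: ESLOC = New + AAF_M*Modified + AAF_R*Reused + AAF_AG*Auto-Generated."""
    aaf_m = aaf(*mod_factors)           # modified code uses DM, CM and IM
    aaf_r = aaf(0.0, 0.0, reuse_im)     # reused code: only IM applies
    aaf_ag = aaf(0.0, 0.0, autogen_im)  # auto-generated code: only IM applies
    return new + aaf_m * modified + aaf_r * reused + aaf_ag * auto_gen

# Example: 20 KSLOC new, 10 KSLOC modified (DM=0.26, CM=0.33, IM=0.66),
# 30 KSLOC reused (IM=0.51), 5 KSLOC auto-generated (IM=0.13).
print(esloc(20_000, 10_000, 30_000, 5_000,
            mod_factors=(0.26, 0.33, 0.66), reuse_im=0.51, autogen_im=0.13))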
Table 13 shows DM, CM and IM for different productivity types. The table shows the code type, the number of records used to derive the adaptation parameters, and, for each parameter, the mean value with its 95% confidence interval and the median value. The adaptation adjustment factor (AAF) is shown in the last column. This factor is the portion of adapted code that will be counted as equivalent SLOC. Unfortunately there was not enough data to support reporting for all productivity types.
Table 13 Adapted Code Parameters

PT  | Code Type | #  | DM mean ±CI (median) | CM mean ±CI (median) | IM mean ±CI (median) | AAF
SCP | Auto-Gen  | 0  | -                    | -                    | 0.00 ±0.00 (0.00)    | 0.00
SCP | Reused    | 18 | -                    | -                    | 0.51 ±0.21 (0.42)    | 0.15
SCP | Modified  | 7  | 0.26 ±0.22 (0.25)    | 0.33 ±0.20 (0.50)    | 0.66 ±0.40 (1.00)    | 0.40
RTE | Auto-Gen  | 0  | -                    | -                    | 0.00 ±0.00 (0.00)    | 0.00
RTE | Reused    | 8  | -                    | -                    | 0.17 ±0.23 (0.10)    | 0.05
RTE | Modified  | 14 | 0.13 ±0.10 (0.05)    | 0.30 ±0.19 (0.10)    | 0.95 ±0.07 (1.00)    | 0.85
MP  | Auto-Gen  | 1  | -                    | -                    | 0.13 ±0.00 (0.13)    | 0.04
MP  | Reused    | 12 | -                    | -                    | 0.36 ±0.20 (0.33)    | 0.11
MP  | Modified  | 21 | 0.75 ±0.12 (1.00)    | 0.89 ±0.12 (1.00)    | 0.95 ±0.07 (1.00)    | 0.85
SYS | Auto-Gen  | 12 | -                    | -                    | 0.37 ±0.25 (0.13)    | 0.11
SYS | Reused    | 6  | -                    | -                    | 0.56 ±0.40 (0.50)    | 0.17
SYS | Modified  | 14 | 0.22 ±0.19 (0.03)    | 0.34 ±0.20 (0.17)    | 0.68 ±0.18 (0.58)    | 0.39
SCI | Auto-Gen  | 7  | -                    | -                    | 0.12 ±0.20 (0.10)    | 0.04
SCI | Reused    | 15 | -                    | -                    | 0.46 ±0.20 (0.33)    | 0.14
SCI | Modified  | 10 | 0.34 ±0.30 (0.17)    | 0.53 ±0.30 (0.41)    | 0.78 ±0.18 (0.88)    | 0.53
IIS | Auto-Gen  | 2  | -                    | -                    | 0.33 ±0.00 (0.33)    | 0.10
IIS | Reused    | 7  | -                    | -                    | 0.45 ±0.24 (0.33)    | 0.14
IIS | Modified  | 4  | 1.00 ±0.00 (1.00)    | 0.81 ±0.60 (1.00)    | 0.90 ±0.32 (1.00)    | 0.91

General observations and usage guidelines are:
- The more real-time the nature of the software, the less the design is modified, i.e. Intelligence and Information Systems (IIS) have a DM of 100% whereas Sensor Control and Signal Processing (SCP) has a DM of 26%.
- The same is generally true for CM. The real-time nature appears to influence how much code is modified.
- IM is usually higher than either DM or CM. If the software being estimated requires more reliability or is more complex, a higher value for IM should be used.
While the mean value is provided for DM, CM and IM, compare the mean to the median. The difference is an indication of skewing in the data. This should also influence your decision on which values to choose within the 95% confidence interval.
A future version of this manual will process more data and expand the adapted code parameter table to additional productivity types. It will also analyze these parameters across operating environments.

5.1.5.3 Adjust for Missing Effort Data

Guidelines for adjusting for missing effort data are shown in Table 14. As these were developed, consideration was given to the productivity type (PT). Average effort percentages were derived for each Productivity Type using the analysis dataset (~300 records). Any missing effort data was adjusted using the appropriate effort percentage and productivity type. Data missing more than two phases of effort was not used in the analysis. This analysis is based on research by [Tan 2012].
Table 14 Average Activity Effort Percentages Based On Complete Data

Productivity Type | Requirements | Arch & Design | Code & Unit Test | Integration & QT
IIS | 11.56% | 27.82% | 35.63% | 24.99%
MP  | 20.56% | 15.75% | 28.89% | 34.80%
PLN | 16.22% | 12.27% | 50.78% | 20.73%
RTE | 15.47% | 26.65% | 26.71% | 31.17%
SCI |  7.38% | 39.90% | 32.05% | 20.67%
SSP | 10.80% | 45.20% | 20.34% | 23.66%
SYS | 17.61% | 21.10% | 28.75% | 32.54%
VC  | 18.47% | 23.60% | 31.32% | 26.61%
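A minimal sketch of one way these percentages might be applied is shown below; it imputes a single missing activity for a record by scaling the reported hours with the RTE row of Table 14. The function name and procedure are illustrative assumptions, not the manual's exact adjustment algorithm.

# Minimal sketch: impute one missing activity's effort using Table 14 percentages (RTE row).
RTE_PCT = {"req": 0.1547, "arch_design": 0.2665, "code_ut": 0.2671, "int_qt": 0.3117}

def fill_missing_phase(reported: dict) -> dict:
    """Scale up reported phase hours to estimate a single missing phase."""
    missing = [k for k in RTE_PCT if k not in reported]
    if len(missing) != 1:
        return dict(reported)                        # nothing to do, or too much missing
    covered_pct = sum(RTE_PCT[k] for k in reported)
    total = sum(reported.values()) / covered_pct     # implied total effort
    filled = dict(reported)
    filled[missing[0]] = total * RTE_PCT[missing[0]]
    return filled

# Record missing Integration & Qualification Test hours:
print(fill_missing_phase({"req": 1_500, "arch_design": 2_600, "code_ut": 2_600}))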

A future version of this manual will process more data and expand the average effort percentages table to additional productivity types. Additionally, analysis of schedule duration for the different activities will be conducted.

5.2 Data Segmentation

Data segmentation can be challenging because Cost and Schedule Estimating Relationships (CERs and SERs) are different for different types of software. Factors such as application complexity; impact of loss due to reliability; autonomous modes of operation; constraints on timing, storage, and power; security requirements; and complex interfaces influence the cost and time to develop applications. Parametric cost models have a number of adjustable parameters that attempt to account for these factors. Many of these parameters, however, are unknown until contract award.

Instead of developing CERs and SERs with many parameters, the approach taken by this project is based on grouping similar software applications together. These groups are called Application Domains. Application Domains implement a combination of hardware and software components to achieve the intended functionality. However, because Application Domains tend to represent an entire subsystem, e.g. Communications, the approach taken was to use a generic description of software domains called productivity types (PT). The operating environment for each PT is considered as well. Both the operating environment and the domain are considered in this analysis to produce the productivity types.

5.2.1 Operating Environments (OpEnv)

Operating Environments have similar systems, similar products, similar operational characteristics, and similar requirements:
- High-speed vehicle versus stationary
- Battery operated versus ground power
- Unrecoverable platform versus readily accessible
- Limited, non-upgradeable computing processor capacity versus racks of processors
- Fixed internal and external memory capacity versus expandable capacity

There are 11 operating environments:

Table 15 Operating Environments

Operating Environment (OpEnv)          | Examples
Ground Site (GS) - Fixed (GSF)         | Command Post, Ground Operations Center, Ground Terminal, Test Facilities
Ground Site (GS) - Mobile (GSM)        | Intelligence gathering stations mounted on vehicles, Mobile missile launcher
Ground Vehicle (GV) - Manned (GVM)     | Tanks, Howitzers, Personnel carrier
Ground Vehicle (GV) - Unmanned (GVU)   | Robotic vehicles
Maritime Vessel (MV) - Manned (MVM)    | Aircraft carriers, destroyers, supply ships, submarines
Maritime Vessel (MV) - Unmanned (MVU)  | Mine hunting systems, Towed sonar array
Aerial Vehicle (AV) - Manned (AVM)     | Fixed-wing aircraft, Helicopters
Aerial Vehicle (AV) - Unmanned (AVU)   | Remotely piloted air vehicles
Space Vehicle (SV) - Manned (SVM)      | Passenger vehicle, Cargo vehicle, Space station
Space Vehicle (SV) - Unmanned (SVU)    | Orbiting satellites (weather, communications), Exploratory space vehicles
Ordnance Vehicle (OV) - Unmanned (OVU) | Air-to-air missiles, Air-to-ground missiles, Smart bombs, Strategic missiles

The operating environments can be aggregated into six high-level environments. This is useful
when there is not enough data for each of the 11 environments in Table 15:

1. Ground Site (GS)
2. Ground Vehicle (GV)
3. Maritime Vessel (MV)
4. Aerial Vehicle (AV)
5. Space Vehicle (SV)
6. Ordnance Vehicle (OV)

5.2.2 Productivity Types (PT)

Productivity types are groups of application productivities that are characterized by the following:
- Required software reliability
- Database size, if there is a large data processing and storage component to the software application
- Product complexity
- Integration complexity
- Real-time operating requirements
- Platform volatility, target system volatility
- Special display requirements
- Development re-hosting
- Quality assurance requirements
- Security requirements
- Assurance requirements
- Required testing level

There are 14 productivity types:


Table 16 Productivity Types

Sensor Control and Signal Processing (SCP): Software that requires timing-dependent device coding to enhance, transform, filter, convert, or compress data signals. Ex.: Beam steering controller, sensor receiver / transmitter control, sensor signal processing, sensor receiver / transmitter test. Ex. of sensors: antennas, lasers, radar, sonar, acoustic, electromagnetic.

Vehicle Control (VC): Hardware & software necessary for the control of vehicle primary and secondary mechanical devices and surfaces. Ex: Digital Flight Control, Operational Flight Programs, Fly-By-Wire Flight Control System, Flight Software, Executive.

Vehicle Payload (VP): Hardware & software which controls and monitors vehicle payloads and provides communications to other vehicle subsystems and payloads. Ex: Weapons delivery and control, Fire Control, Airborne Electronic Attack subsystem controller, Stores and Self-Defense program, Mine Warfare Mission Package.

Real Time Embedded (RTE): Real-time data processing unit responsible for directing and processing sensor input / output. Ex: Devices such as Radio, Navigation, Guidance, Identification, Communication, Controls And Displays, Data Links, Safety, Target Data Extractor, Digital Measurement Receiver, Sensor Analysis, Flight Termination, Surveillance, Electronic Countermeasures, Terrain Awareness And Warning, Telemetry, Remote Control.

Mission Processing (MP): Vehicle onboard master data processing unit(s) responsible for coordinating and directing the major mission systems. Ex.: Mission Computer Processing, Avionics, Data Formatting, Air Vehicle Software, Launcher Software, Tactical Data Systems, Data Control And Distribution, Mission Processing, Emergency Systems, Launch and Recovery System, Environmental Control System, Anchoring, Mooring and Towing.

Process Control (PC): Software that manages the planning, scheduling and execution of a system based on inputs, generally sensor driven.

System Software (SYS): Layers of software that sit between the computing platform and applications. Ex: Health Management, Link 16, Information Assurance, Framework, Operating System Augmentation, Middleware, Operating Systems.

Planning Software (PLN): Provides the capability to maximize the use of the platform. The system supports all the mission requirements of the platform and may have the capability to program onboard platform systems with routing, targeting, performance, map, and Intel data.

Scientific Software (SCI): Non real time software that involves significant computations and scientific analysis. Ex: Environment Simulations, Offline Data Analysis, Vehicle Control Simulators.

Training Software (TRN): Hardware and software that are used for educational and training purposes. Ex: Onboard or Deliverable Training Equipment & Software, Computer-Based Training.
Telecommunications (TEL): The transmission of information, e.g. voice, data, commands, images, and video across different mediums and distances. Primarily software systems that control or manage transmitters, receivers and communications channels. Ex: switches, routers, integrated circuits, multiplexing, encryption, broadcasting, protocols, transfer modes, etc.

Software Tools (TOOL): Software that is used for analysis, design, construction, or testing of computer programs. Ex: Integrated collection of tools for most development phases of the life cycle, e.g. Rational development environment.

Test Software (TST): Hardware & software necessary to operate and maintain systems and subsystems which are not consumed during the testing phase and are not allocated to a specific phase of testing. Ex: Onboard or Deliverable Test Equipment & Software.

Intelligence & Information Software (IIS): An assembly of software applications that allows a properly designated authority to exercise control over the accomplishment of the mission. Humans manage a dynamic situation and respond to user input in real time to facilitate coordination and cooperation. Ex: Battle Management, Mission Control. Also, software that manipulates, transports and stores information. Ex: Database, Data Distribution, Information Processing, Internet, Entertainment, Enterprise Services*, Enterprise Information**.

* Enterprise Services (subtype of IIS): HW & SW needed for developing functionality or software services that are unassociated, loosely coupled units of functionality. Examples are: Enterprise service management (monitoring, fault management), Machine-to-machine messaging, Service discovery, People and device discovery, Metadata discovery, Mediation, Service security, Content discovery and delivery, Federated search, Enterprise catalog service, Data source integration, Enterprise content delivery network (caching specification, distributed caching, forward staging), Session management, Audio & video over internet protocol, Text collaboration (chat, instant messaging), Collaboration (white boarding & annotation), Application broadcasting and sharing, Virtual spaces, Identity management (people and device discovery), User profiling and customization.

** Enterprise Information (subtype of IIS): HW & SW needed for assessing and tailoring COTS software applications or modules that can be attributed to a specific software service or bundle of services. Examples of enterprise information systems include but are not limited to: Enterprise resource planning, Enterprise data warehouse, Data mart, Operational data store. Examples of business / functional areas include but are not limited to: General ledger, Accounts payable, Revenue and accounts receivable, Funds control and budgetary accounting, Cost management, Financial reporting, Real property inventory and management.

5.2.2.1 Finding the Productivity Type
It can be challenging to determine which productivity type should be used to estimate the cost and schedule of an application (that part of the hardware-software complex which comprises a domain). The productivity types are by design generic. By using a work breakdown structure (WBS), the environment and domain are used to determine the productivity type.
Using the WBS from MIL-STD-881C, a mapping is created from environment to Productivity Type (PT), Table 17. Starting with the environment, traverse the WBS to the lowest level where the domain is represented. Each domain is associated with a Productivity Type (PT). In real-world WBSs, the traverse from environment to PT will most likely not involve the same number of levels. However, the 881C WBS provides the context for selecting the PT, which should be transferable to other WBSs.
Two examples for finding the productivity type, using the 881C Aerial Vehicle Manned (AVM) and Space Vehicle Unmanned (SVU) WBS elements, are provided below. The highest-level WBS element represents the environment. In the AVM environment there are the Avionics subsystem, Fire Control sub-subsystem, and the sensor, navigation, air data, display, bombing computer and safety domains. Each domain has an associated productivity type.

Table 17 Aerial Vehicle Manned to PT Example

Environment | Subsystem | Sub-subsystem             | Domain                           | PT
AVM         | Avionics  | Fire Control              | Search, target, tracking sensors | SCP
AVM         | Avionics  | Fire Control              | Self-contained navigation        | RTE
AVM         | Avionics  | Fire Control              | Self-contained air data systems  | RTE
AVM         | Avionics  | Fire Control              | Displays, scopes, or sights      | RTE
AVM         | Avionics  | Fire Control              | Bombing computer                 | MP
AVM         | Avionics  | Fire Control              | Safety devices                   | RTE
AVM         | Avionics  | Data Display and Controls | Multi-function display           | RTE
AVM         | Avionics  | Data Display and Controls | Control display units            | RTE
AVM         | Avionics  | Data Display and Controls | Display processors               | MP
AVM         | Avionics  | Data Display and Controls | On-board mission planning        | TRN

For a space system, the highest-level 881C WBS element is the Space Vehicle Unmanned (SVU). The two subsystems are Bus and Payload. The domains for Bus address controlling the vehicle. The domains for Payload address controlling the onboard equipment. Each domain has an associated productivity type, Table 18.


Table 18 Space Vehicle Unmanned to PT Example

Environment | Subsystem | Domain                                 | PT
SVU         | Bus       | Structures & Mechanisms (SMS)          | VC
SVU         | Bus       | Thermal Control (TCS)                  | VC
SVU         | Bus       | Electrical Power (EPS)                 | VC
SVU         | Bus       | Attitude Control (ACS)                 | VC
SVU         | Bus       | Propulsion                             | VC
SVU         | Bus       | Telemetry, Tracking, & Command (TT&C)  | RTE
SVU         | Bus       | Bus Flight Software                    | VC
SVU         | Payload   | Thermal Control                        | RTE
SVU         | Payload   | Electrical Power                       | RTE
SVU         | Payload   | Pointing, Command, & Control Interface | VP
SVU         | Payload   | Payload Antenna                        | SCP
SVU         | Payload   | Payload Signal Electronics             | SCP
SVU         | Payload   | Optical Assembly                       | SCP
SVU         | Payload   | Sensor                                 | SCP
SVU         | Payload   | Payload Flight Software                | VP

The full table for the MIL-STD-881C WBS mapping to Productivity Types is available in Appendix 9.5.
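In practice this mapping can be captured as a simple lookup. The sketch below is a hypothetical helper, populated with only a few of the Table 17 and Table 18 rows; the full mapping in Appendix 9.5 would be the authoritative source.

# Minimal sketch: (environment, domain) -> productivity type, mirroring Tables 17 and 18.
DOMAIN_TO_PT = {
    ("AVM", "Search, target, tracking sensors"): "SCP",
    ("AVM", "Bombing computer"): "MP",
    ("AVM", "Multi-function display"): "RTE",
    ("SVU", "Bus Flight Software"): "VC",
    ("SVU", "Payload Signal Electronics"): "SCP",
    ("SVU", "Payload Flight Software"): "VP",
}

def productivity_type(environment: str, domain: str) -> str:
    """Return the PT for an environment/domain pair, if it is in the lookup."""
    return DOMAIN_TO_PT.get((environment, domain), "unmapped - consult Appendix 9.5")

print(productivity_type("SVU", "Payload Flight Software"))  # VP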


6 Cost Estimating Relationship Analysis

This chapter discusses using the assessed and processed SRDR data to create Cost Estimating Relationships (CERs). These relationships are different for different types of software. Factors such as application complexity, impact of loss due to reliability, autonomous modes of operation, constraints on timing, storage and power, security requirements, and complex interfaces influence the cost and time to develop applications. Parametric cost models have a number of adjustable parameters that attempt to account for these factors.

6.1 Application Domain Decomposition

Instead of developing CERs and SERs with many parameters, this chapter describes an analysis approach based on grouping similar software applications together. These groups are called Application Domains. Application Domains implement a combination of hardware and software components to achieve the intended functionality. Instead of using a domain name such as Communications, a better approach is to use a generic software Productivity Type (PT). Consideration also needs to be given to the operating environment that the domain operates within. Both the operating environment and PT are considered in this analysis to produce CERs.
Domain analysis of the SRDR database is presented in the next sections and provides guidance in developing estimates in the respective domains. Cost and schedule estimating relationships are expressed in different forms. In this manual, they are expressed as a ratio, commonly called Productivity, and as a simple math equation called a Model.

6.2 SRDR Metric Definitions

The SRDR was discussed in Chapter 4. Chapter 5 discussed the metrics for measuring size, effort and schedule.

6.2.1 Software Size

The SRDR data contained a mixture of different code count types. The data in Chapter 5.1.5.1 was used to convert all counts to the logical count type.
For pre-existing code (Auto-Generated, Modified and Reused), if the adaptation parameters were not provided with the data, the guidelines in Chapter 5.1.5.2 were used.

6.2.2 Software Development Activities and Durations

Software CERs have a breadth and a depth. The breadth is the number of lifecycle activities covered and the depth is the type of labor counted in or across each activity. The activity data in the SRDR is reported following the [ISO 12207] processes for software development. Table 19 shows the 12207 processes and the ones covered by SRDR data. This is the breadth of the CERs reported in this manual.


Table 19 ISO/IEC 12207 Development Activities

ISO/IEC 12207 Process          | Activities in SRDR data
System requirements analysis   |
System architectural design    |
Software requirements analysis | X
Software architectural design  | X
Software detailed design       | X
Software coding and testing    | X
Software integration           | X
Software qualification testing | X
System integration             |
System qualification testing   |
Software installation          |
Software acceptance support    |

Table 20 shows the different labor categories in the SRDR data. Not all of the records had all of the categories. However, the Software Engineering and Assessment categories were reported in each record. Table 14 in Chapter 5.1.5.3 provides a distribution of effort across these activities.
Table 20 SRDR Labor Categories

Category             | SRDR Labor Categories
Management           | Engineering Management; Business Management
Software Engineering | Software Requirements Analysis; Architecture and Detailed Design; Coding and Unit Testing; Test and Integration
Assessment           | Qualification Testing; Development Test Evaluation Support
Support              | Software Configuration Management; Software Quality Assurance; Configuration Audit; Development Environment Support; Tools Support; Documentation; Data Preparation; Process Management; Metrics; Training; IT Support / Data Center

When comparing results of the CER analysis with other available CER data, it is important to keep in mind the breadth and depth of activities covered. They should be as similar as possible.


6.3 Cost Estimating Relationships (CER)

6.3.1 Model Selection
A common issue in modeling software engineering cost data using the model form below, Eq 16, is whether there are economies or diseconomies of scale in the data, i.e., whether proportionally less effort is required as the software size increases (economy of scale) or proportionally more effort is required as size increases (diseconomy of scale). The scaling influence is found in the exponent, B. An estimated value of B < 1.0 indicates an economy of scale. An estimated value of B > 1.0 indicates a diseconomy of scale.

Eq 16:  Effort = A x (KESLOC)^B
[Banker-Kemerer 1989] provides a survey of reasons for economies and diseconomies of scale. Their paper attributes economies of scale to:
- Software development tools that increase productivity
- Specialized personnel that are highly productive
- Fixed overhead that does not increase directly with project size, thereby producing economies of scale in larger projects
Diseconomies of scale are attributed to:
- Increasing communication paths between project team members
- Larger systems having more complex interface problems
- Increasing the number of people increasing the chance of personality conflicts
- Overhead activities increasing at a faster-than-linear rate as project size increases
The results of their research argue for both economies and diseconomies of scale. The economies of scale were observed on small projects and diseconomies of scale were observed on large projects. They present a model, Most Productive Scale Size (MPSS), which finds the break point between small and large projects. The MPSS model is organization dependent.
Our analysis found that diseconomy of scale was difficult to detect on smaller projects (less than 50 KESLOC) but was not always absent (this may have been due to differences in where the cost was allocated by the different data submitters). This, we believe, was due to the presence of fixed start-up costs and management overhead activities, e.g. required reporting by the Government. The conclusion is that the amount of fixed start-up costs and overhead activities on smaller projects has a masking effect on direct labor, skewing the B exponent to a value < 1.0 (the larger projects had B exponent values > 1.0).
Our approach was to test the initial equation, Eq 16. If the initial equation shows a B exponent < 1.0, we examine whether fixed start-up costs and overhead activities were influencing results using a Non-Linear Model (NLM):

Eq 17:  Effort (PM) = C + (A x KESLOC^B)

or

Eq 18:  Effort (PM) = C + (KESLOC^B)

Where:
- Effort is in Person-Months
- C is the fixed start-up and overhead activity costs in Person-Months
- B is a scaling factor expressing the degree of the diseconomy of scale

If the NLM shows a B exponent > 1.0, then the NLM is chosen. This model unmasks the influence of fixed start-up costs in a separate variable from the diseconomies of scale present in the data.
A statistic that is not available for NLMs is R2, the Coefficient of Determination, which is used to describe how well a regression fits a set of data. This is because regression analysis cannot be used to derive the NLM; iterative search techniques are used instead. When an NLM is displayed, the R2 is shown with the marker ***.
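For readers reproducing this kind of fit, the sketch below shows one way the Eq 17 coefficients can be estimated with an iterative search. It assumes Python with NumPy and SciPy and uses made-up size/effort pairs rather than SRDR data; it is an illustration, not the analysis procedure used for the CERs in this chapter.

# Minimal sketch: fit Effort = C + A * KESLOC^B with an iterative search (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def nlm(kesloc, c, a, b):
    return c + a * np.power(kesloc, b)

kesloc = np.array([12.0, 25.0, 40.0, 66.0, 90.0, 150.0])      # illustrative sizes (KESLOC)
pm     = np.array([60.0, 120.0, 210.0, 380.0, 520.0, 900.0])  # illustrative person-months

params, _ = curve_fit(nlm, kesloc, pm, p0=[10.0, 2.0, 1.1], maxfev=10_000)
c, a, b = params
print(f"Effort = {c:.2f} + {a:.2f} * KESLOC^{b:.2f}")  # B > 1 would indicate a diseconomy of scale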

6.3.2 Model-Based CERs Coverage

The coverage of model-based CERs by operating environment and productivity type, discussed in Chapters 5.2.1 and 5.2.2 respectively, is shown in Table 21. The operating environments are the table columns. The productivity types are the rows. Not all productivity types and environments were covered due to a lack of enough data in each group (at least 5 records are required).
A populated cell in Table 21 denotes a CER; the number in the cell is the number of records used to create the CER. The ALL column and row denote all operating environments or all productivity types.


Table 21 CER Coverage

PT   | Ground Site | Ground Vehicle | Maritime Vessel | Aerial Vehicle | Space Vehicle | Ordnance | ALL
SCP  |             | 13             |                 | 8              |               |          | 36
VC   |             |                |                 |                |               |          |
VP   |             |                |                 |                | 16            |          | 16
RTE  |             | 22             |                 | 9              |               |          | 52
MP   | 6           |                |                 | 31             |               |          | 48
PC   |             |                |                 |                |               |          |
SYS  | 28          |                |                 |                |               |          | 60
PLN  |             |                |                 |                |               |          |
SCI  | 24          |                |                 |                |               |          | 39
TRN  |             |                |                 |                |               |          |
TEL  |             |                |                 |                |               |          |
TOOL |             |                |                 |                |               |          |
TST  |             |                |                 |                |               |          |
IIS  | 23          |                |                 |                |               |          | 37
ALL  |             |                |                 |                |               |          |

6.3.3 Software CERs by OpEnv

6.3.3.1 Ground Site (GS) Operating Environment

Mission Processing
Eq 19:  PM_GSF-MP = 3.20 + (KESLOC^1.19)
  Number of observations: 6
  Adjusted R2: *** [3]
  Maximum Absolute Deviation: 0.24
  PRED(30): 0.83
  Minimum KESLOC Value: 15
  Maximum KESLOC Value: 91

[3] R2 is not available for NLMs.

System Software
Eq 20:  PM_GSF-SYS = 20.86 + (2.35 x KESLOC^1.12)
  Number of observations: 28
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.19
  PRED(30): 0.82
  Minimum KESLOC Value: 5
  Maximum KESLOC Value: 215

Scientific Systems
Eq 21:  PM_GSF-SCI = 34.26 + (KESLOC^1.29)
  Number of observations: 24
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.37
  PRED(30): 0.56
  Minimum KESLOC Value: 5
  Maximum KESLOC Value: 171

Intelligence and Information Systems
Eq 22:  PM_GSF-IIS = 30.83 + (1.38 x KESLOC^1.13)
  Number of observations: 23
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.16
  PRED(30): 0.91
  Minimum KESLOC Value: 15
  Maximum KESLOC Value: 180
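To illustrate how these CERs are applied, the sketch below simply restates Eq 19 through Eq 22 as functions and evaluates them for a 50 KESLOC component, a size that lies within each CER's calibrated range shown above. It introduces no coefficients beyond those equations.

# Minimal sketch: evaluate the Ground Site CERs (Eq 19 - Eq 22) for one candidate size.
def pm_gsf_mp(kesloc):   return 3.20 + kesloc ** 1.19          # Eq 19
def pm_gsf_sys(kesloc):  return 20.86 + 2.35 * kesloc ** 1.12  # Eq 20
def pm_gsf_sci(kesloc):  return 34.26 + kesloc ** 1.29         # Eq 21
def pm_gsf_iis(kesloc):  return 30.83 + 1.38 * kesloc ** 1.13  # Eq 22

for name, cer in [("MP", pm_gsf_mp), ("SYS", pm_gsf_sys), ("SCI", pm_gsf_sci), ("IIS", pm_gsf_iis)]:
    print(name, round(cer(50.0), 1), "person-months")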

6.3.3.2 Ground Vehicle (GV) Operating Environment

Sensor Control and Signal Processing
Eq 23:  PM_GV-SCP = 135.5 + (KESLOC^1.60)
  Number of observations: 13
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.39
  PRED(30): 0.31
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 76

Real Time Embedded
Eq 24:  PM_GV-RTE = 84.42 + (KESLOC^1.45)
  Number of observations: 22
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.24
  PRED(30): 0.73
  Minimum KESLOC Value: 9
  Maximum KESLOC Value: 89

6.3.3.3 Aerial Vehicle (AV) Operating Environment

Sensor Control and Signal Processing (SCP)
Eq 25:  PM_AVM-SCP = 115.8 + (KESLOC^1.61)
  Number of observations: 8
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.27
  PRED(30): 0.62
  Minimum KESLOC Value: 6
  Maximum KESLOC Value: 162

Real Time Embedded (RTE)
Eq 26:  PM_AVM-RTE = 5.61 x (KESLOC^1.16)
  Number of observations: 9
  Adjusted R2: 0.89
  Maximum Absolute Deviation: 0.50
  PRED(30): 0.33
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 167

Mission Processing (MP)
Eq 27:  PM_AVM-MP = 3.1 x (KESLOC^1.43)
  Number of observations: 31
  Adjusted R2: 0.88
  Maximum Absolute Deviation: 0.50
  PRED(30): 0.59
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 207

6.3.3.4 Space Vehicle Unmanned (SVU) Operating Environment
Vehicle Payload
Eq 28:  PM_SV-VP = 3.15 x (KESLOC^1.38)
  Number of observations: 16
  Adjusted R2: 0.86
  Maximum Absolute Deviation: 0.27
  PRED(30): 0.50
  Minimum KESLOC Value: 5
  Maximum KESLOC Value: 120

6.3.4 Software CERs by PT Across All Environments

The following environments were included in this analysis:
- Ground Sites
- Ground Vehicles
- Maritime Vessels
- Aerial Vehicles
- Ordnance Vehicles
- Space Vehicles
Sensor Control and Signal Processing
Eq 29:  PM_All-SCP = 74.37 + (KESLOC^1.71)
  Number of observations: 36
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.69
  PRED(30): 0.31
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 162

Vehicle Payload
Eq 30:  PM_All-VP = 3.15 + (KESLOC^1.38)
  Number of observations: 16
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.27
  PRED(30): 0.50
  Minimum KESLOC Value: 5
  Maximum KESLOC Value: 120

Real Time Embedded
Eq 31:  PM_All-RTE = 34.32 + (KESLOC^1.52)
  Number of observations: 52
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.61
  PRED(30): 0.46
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 167

Mission Processing
Eq 32:  PM_All-MP = 3.48 x (KESLOC^1.17)
  Number of observations: 48
  Adjusted R2: 0.88
  Maximum Absolute Deviation: 0.49
  PRED(30): 0.58
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 207

System Software
Eq 33:  PM_All-SYS = 16.01 + (KESLOC^1.37)
  Number of observations: 60
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.37
  PRED(30): 0.53
  Minimum KESLOC Value: 2
  Maximum KESLOC Value: 215

Scientific Software
Eq 34:  PM_All-SCI = 21.09 + (KESLOC^1.36)
  Number of observations: 39
  Adjusted R2: ***
  Maximum Absolute Deviation: 0.65
  PRED(30): 0.18
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 171

Intelligence and Information Systems
Eq 35:  PM_All-IIS = 1.27 x (KESLOC^1.18)
  Number of observations: 37
  Adjusted R2: 0.90
  Maximum Absolute Deviation: 0.35
  PRED(30): 0.65
  Minimum KESLOC Value: 1
  Maximum KESLOC Value: 180

A future version of this manual will show CER scatter plots and 95% confidence intervals, as well as expand the number of CERs for productivity types and operating environments.


6.4 Productivity Benchmarks


6.4.1 Model Selection and Coverage
Software productivity refers to the ability of an organization to generate outputs using the resources that it currently has as inputs. Inputs typically include facilities, people, experience, processes, equipment, and tools. Outputs generated include software applications and the documentation used to describe them.
Eq 36 Productivity = Outputs / Inputs = KESLOC / PM
The metric used to express software productivity is Thousands of Equivalent Source Lines of Code (KESLOC) per Person-Month (PM) of effort. While many other measures exist, KESLOC/PM will be used because most of the data collected by the DoD on past projects is captured using these two measures. While controversy exists over whether or not KESLOC/PM is a good measure, consistent use of this metric (see Metrics Definitions) provides for meaningful comparisons of productivity.
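As a minimal sketch of Eq 36 with illustrative numbers (not SRDR data), the ratio is computed below in both KESLOC per PM and ESLOC per PM; the benchmark tables later in this section report values in the hundreds, which is consistent with an ESLOC-per-PM scale.

    def productivity_kesloc_per_pm(kesloc: float, person_months: float) -> float:
        """Output/input ratio from Eq 36: KESLOC delivered per person-month of effort."""
        return kesloc / person_months

    pr = productivity_kesloc_per_pm(kesloc=120.0, person_months=600.0)
    print(pr)           # 0.2 KESLOC/PM
    print(pr * 1000.0)  # 200 ESLOC/PM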
Table 22 Productivity Benchmark Coverage
GSF MVM AVM SVU OVU ALL
SCP 13 7 6 38
VC
VP
RTE 23 6 9 53
MP 6 7 31 47
PC
SYS 28 23 60
PLN
SCI 23 15 39
TRN
TEL
TOOL
TST
IIS 23 35
ALL 116 67 50 6 16

The numbers in Table 22 are the number of records analyzed. The ALL column and row are not necessarily the sum of the corresponding columns or rows. A minimum of five projects was required to derive a productivity benchmark.

6.4.2 Data Transformation
An Anderson-Darling test of the productivity data revealed a non-normal distribution. A histogram with a gamma distribution overlay visually shows this phenomenon (left plot in Figure 3). A Box-Cox transformation of the productivity data showed that if the data were transformed to log values, the distribution was much closer to a normal distribution (right plot in Figure 3).
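A hedged sketch of this kind of distribution check is shown below. The sample is randomly generated gamma data standing in for the SRDR productivity values (shape and scale chosen near the gamma fit reported in Figure 3); the Anderson-Darling statistic is computed for the raw values and for their logs.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    prod = rng.gamma(shape=2.0, scale=100.0, size=272)  # illustrative, not SRDR data

    for label, sample in [("raw", prod), ("log", np.log(prod))]:
        result = stats.anderson(sample, dist="norm")
        # result.critical_values[2] is the 5% critical value for the normality test
        print(label, round(result.statistic, 2), result.critical_values[2])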

[Figure 3 shows side-by-side histograms of Productivity and Ln(Productivity), each with a gamma distribution overlay (N = 272). The raw productivity values are strongly skewed (gamma shape 2.005, scale 100.8), while the log-transformed values are approximately normal (gamma shape 35.95, scale 0.1402).]

Figure 3 Productivity Data Distribution

This observation required testing each data group's distribution for normality. Some groups required different types of transformation and some did not require any transformation at all. The results of the distribution testing are provided in Appendix 9.6.1, Normality Tests on Productivity Data.
Table 24, Table 25, and Table 26 below show the productivity benchmark results. Results shown in italics indicate the analysis was performed on transformed data. This is important to note because statistics of data dispersion are only valid on normal distributions, i.e., they only apply in the transformed number space. However, dispersion statistics in the transformed number space do not provide much insight when converted back into linear number space, e.g., dispersion statistics in log number space are much closer together, and conversion back into linear number space results in a false measure of dispersion. Therefore the results in these tables are reported in linear number space.
The mean value of the transformed data is valid in linear number space and can be compared to other mean values. The dispersion statistics for the transformed data are, strictly speaking, only an indicator of dispersion. The standard deviation, on which the dispersion statistics rely, was derived manually in linear number space.
The transformations performed on each data set and the statistical summaries are provided in Appendices 9.6.1 and 9.6.2.
6.4.3 Productivity Benchmark Statistics
The tables of productivity results have a number of columns that are defined in Table 23.
Table 23 Productivity Statistics
Column Label Description
N Number of records
Min KESLOC Minimum value in thousands of equivalent source lines of code
Max KESLOC Maximum value in thousands of equivalent source lines of code
LCI Lower Confidence Interval is an estimate of an interval below the sample
mean within which the population mean is estimated to lie
Mean Estimated sample value representing the population central value; equal to
the sum of the values divided by the number of values , i.e., arithmetic mean
UCI Upper Confidence Interval is an estimate of an interval above the sample
mean within which the population mean is estimated to lie
Std Dev Standard Deviation is a measure of dispersion about the mean
CV Coefficient of Variation shows the extent of variability in relation to the mean of
the sample. It is defined as the ratio of the standard deviation to the mean.
Q1 Numerical value for the lower 25% of ranked data (1st Quartile), i.e., the value
half way between the lowest value and the median in a set of ranked values
Median Numerical value separating the higher half of a sample from the lower half, i.e.,
the middle value in a set of ranked values
Q3 Numerical value for the lower 75% of ranked data (3rd Quartile), i.e. the value
half way between the median and the highest value in a set of ranked values

6.4.4 Software Productivity Benchmark Results by Operating Environment


Table 24 shows the mean and median productivity across operating environments (OpEnv), discussed in Chapter 5.2.1. To be included in the table, there had to be five or more records in an environment group. The rows are sorted on the mean productivity from lowest to highest.
Table24ProductivityBenchmarksbyOperatingEnvironment
Min Max Std
OpEnv N KESLOC KESLOC LCI Mean UCI Dev CV Q1 Median Q3
SVU 6 5 134 50 85 120 44 52% 52 76 125
OVU 16 0.3 189 88 133 177 90 68% 51 120 226
AVM 50 0.32 208 108 133 158 91 68% 81 129 184
MVM 67 2 54 159 203 247 184 91% 112 193 717
GSF 116 0.61 215 182 205 227 123 60% 118 229 301

Note: Results shown in italics indicate the analysis was performed on transformed data. See the discussion in Section 6.4.2.
Using the median values in Table 24, Figure 4 shows a comparison of the productivities across operating environments.


[Figure 4 is a boxplot of median productivities (Pr) by operating environment (AVM, GSF, MVM, OVU, SVU), plotted on a scale of 0 to 800.]

Figure 4 OpEnv Median Productivities Boxplot

6.4.5 Software Productivity Benchmarks Results by Productivity Type


Table 25 shows the mean and median productivity across Productivity Types (PT), discussed in Chapter 5.2.2. To be included in the table, there had to be five or more records in a productivity type group. The rows are sorted on the mean productivity from lowest to highest.
Table 25 Productivity Benchmarks by Productivity Type
PT  N  Min KESLOC  Max KESLOC  LCI  Mean  UCI  Std Dev  CV  Q1  Median  Q3
SCP 38 0.35 162 44 50 56 19 39% 33 53 67
RTE 53 1 167 100 120 140 75 62% 83 124 168
MP 47 1 45 135 167 199 113 68% 124 159 236
SYS 60 2 215 205 225 245 78 35% 172 219 274
SCI 39 0.32 171 200 237 275 120 51% 106 248 313
IIS 35 3 150 342 397 453 167 42% 305 363 541

Note: Results shown in italics indicate the analysis was performed on transformed data. See the discussion in Section 6.4.2.
Using the median values in Table 25, Figure 5 shows a comparison of the productivities across productivity types.


[Figure 5 is a boxplot of median productivities (Pr) by productivity type (IIS, MP, RTE, SCI, SCP, SYS), plotted on a scale of 100 to 900.]

Figure 5 PT Median Productivities Boxplot

6.4.6 Software Productivity Benchmarks by OpEnv and PT
Table 26 shows the mean and median productivity by operating environment (OpEnv) and productivity type (PT). To be included in the table, there had to be five or more records in a group. The rows are sorted on the mean productivity from lowest to highest within each OpEnv grouping.
Table 26 Productivity Benchmarks by Operating Environment and Productivity Type
OpEnv  PT  N  Min KESLOC  Max KESLOC  LCI  Mean  UCI  Std Dev  CV  Q1  Median  Q3
AVM SCP 8 6 162 47 56 65 13 23% 46 58 68
AVM RTE 9 1.5 167 69 122 174 80 66% 81 89 237
AVM MP 31 1.25 207 123 154 186 90 58% 122 141 188
GSF SCP 13 0.61 76 49 58 67 17 29% 44 60 67
GSF RTE 23 9.4 89 110 129 147 45 35% 99 130 149
GSF MP 6 14.5 91 120 162 203 52 32% 126 158 199
GSF SYS 28 5.1 215 217 240 264 64 26% 199 245 274
GSF SCI 23 4.5 171 230 271 312 99 37% 201 274 313
GSF IIS 23 15.2 180 341 376 410 85 23% 305 359 436
MVM SCP 7 5.99 24 25 39 53 19 50% 24 33 46
MVM RTE 6 2.21 38 -13 113 239 158 139% 62 110 247
MVM SCI 15 2 54 119 185 251 131 71% 55 170 327
MVM MP 7 4.931 13 150 189 228 52 28% 136 217 241
MVM SYS 23 2.05 47 199 234 269 86 37% 184 226 324
OVU RTE 11 1.2 116 96 141 410 76 54% 56 127 192

Note: Results shown in italics indicate the analysis was performed on transformed data. See the discussion in Section 6.4.2.

6.5 Future Work


Productivity is not only influenced by the operating environment and productivity type but also by application size. The larger the application being developed, the larger the number of overhead activities required to coordinate the development. In general, productivity decreases as size increases, as discussed previously in Section 6.3.1.
For this reason, within an environment and PT, different productivities should be broken out for different size groups:
0-25 KESLOC
26-50 KESLOC
51-100 KESLOC
100+ KESLOC

A future version of this manual will use additional data to examine productivity changes within
an operating environment and productivity type.


7 Modern Estimation Challenges


Several trends will present significant challenges for the sizing and cost estimation of 21st century software systems. Prominent among these trends are:
Rapid change, emergent requirements, and evolutionary development
Net-centric systems of systems
Model-Driven and Non-Developmental Item (NDI)-intensive systems
Ultrahigh software system assurance
Legacy maintenance and Brownfield development
Agile and Kanban development
This chapter summarizes each trend and elaborates on its challenges for software sizing and cost estimation.

7.1 Changing Objectives, Constraints and Priorities


7.1.1 Rapid Change, Emergent Requirements, and Evolutionary Development
21st century software systems will encounter increasingly rapid change in their objectives, constraints, and priorities. This change is driven by increasingly rapid changes in their competitive threats, technology, organizations, leadership priorities, and environments. It is thus increasingly infeasible to provide precise size and cost estimates if the system's requirements are emergent rather than pre-specifiable. This has led to increasing use of strategies such as incremental and evolutionary development, and to experiences with associated new sizing and costing phenomena such as the Incremental Development Productivity Decline. It also implies that measuring the system's size by counting the number of source lines of code (SLOC) in the delivered system may be an underestimate, as a good deal of software may be developed and deleted before delivery due to changing priorities.
There are three primary options for handling these sizing and estimation challenges. The first is to improve the ability to estimate requirements volatility during development via improved data collection and analysis, such as the use of code counters able to count the number of SLOC added, modified, and deleted during development [Nguyen 2010]. If such data is unavailable, the best one can do is to estimate ranges of requirements volatility. For uniformity, Table 27 presents a recommended set of Requirements Volatility (RVOL) ranges over the development period for rating levels of 1 (Very Low) to 5 (Very High), such as in the DoD SRDR form [DCARC 2005].
Table 27 Recommended RVOL Rating Levels
Rating Level RVOL Range RVOL Average
1. Very Low 0-6% 3%
2. Low 6-12% 9%
3. Nominal 12-24% 18%
4. High 24-48% 36%
5. Very High >48% 72%
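Table 27 can be used directly as a lookup. A minimal sketch follows; treating each upper bound as inclusive is an assumption about how to handle the range boundaries.

    def rvol_rating(rvol_percent: float) -> str:
        """Map an observed requirements volatility percentage to a Table 27 rating level."""
        for upper, level in [(6, "1. Very Low"), (12, "2. Low"),
                             (24, "3. Nominal"), (48, "4. High")]:
            if rvol_percent <= upper:
                return level
        return "5. Very High"

    print(rvol_rating(12.0))  # "2. Low"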

For incremental and evolutionary development projects, the second option is to treat the earlier increments as reused software and to apply reuse factors to them (such as the percent of the design, code, and integration modified, perhaps adjusted for degree of software understandability and programmer unfamiliarity [Boehm et al. 2000]). This can be done either uniformly across the set of previous increments, or by having these factors vary by previous increment or by subsystem. This will produce an equivalent SLOC (ESLOC) size for the effect of modifying the previous increments, to be added to the size of the new increment in estimating effort for the new increment. In tracking the size of the overall system, it is important to remember that these ESLOC are not actual lines of code to be included in the size of the next release.
The third option is to include an Incremental Development Productivity Decline (IDPD) factor, or perhaps multiple factors varying by increment or subsystem. Unlike hardware, where unit costs tend to decrease with added production volume, the unit costs of later software increments tend to increase, due to previous-increment breakage and usage feedback, and due to increased integration and test effort. Thus, using hardware-driven or traditional software-driven estimation methods for later increments will lead to underestimates and overruns in both cost and schedule.
A relevant example was a large defense software system that had the following characteristics:
5 builds, 7 years, $100M
Build 1 productivity over 300 SLOC/person-month
Build 5 productivity under 150 SLOC/person-month
Including Build 1-4 breakage, integration, and rework
318% change in requirements across all builds
A factor-of-2 decrease in productivity across four new builds corresponds to an average build-to-build IDPD factor of 19%. A recent quantitative IDPD analysis of a smaller software system yielded an IDPD of 14%, with significant variations from increment to increment [Tan et al. 2009]. Similar IDPD phenomena have been found for large commercial software, such as the multi-year slippages in the delivery of Microsoft's Word for Windows [Gill-Iansiti 1994] and Windows Vista, and for large agile development projects that assumed a zero IDPD factor [Elssamadisy-Schalliol 2002].
Based on experience with similar projects, the following impact causes and ranges per increment are conservatively stated in Table 28:
Table28IDPDEffortDrivers
Less effort due to more experienced personnel, assuming reasonable
initial experience level
Variation depending on personnel turnover rates 5-20%
More effort due to code base growth
Breakage, maintenance of full code base 20-40%
Diseconomies of scale in development, integration 10-25%
Requirements volatility, user requests 10-25%

In the best case, there would be 20% more effort (from the ranges above: -20% + 20% + 10% + 10%); compounded over the three build-to-build steps of a 4-build system, the IDPD would be about 6% per build.
In the worst case, there would be 85% more effort (from the ranges above: -5% + 40% + 25% + 25%); for a 4-build system, the IDPD would be about 23% per build.
In any case, with a fixed staff size, there would be either a schedule increase or incomplete builds. The difference between 6% and 23% may not look too serious, but the cumulative effect on schedule across a number of builds is very serious.
A simplified illustrative model relating productivity decline to the number of builds needed to reach 4M ESLOC across 4 builds follows. Assume that the two-year Build 1 production of 1M SLOC can be developed at 200 SLOC/PM. This means it will need 208 developers (5,000 PM / 24 months). Assume a constant staff size of 208 for all builds. The analysis shown in Figure 6 shows the impact on the amount of software delivered per build and the resulting effect on the overall delivery schedule as a function of the IDPD factor. Many incremental development cost estimates assume an IDPD of zero, and an on-time delivery of 4M SLOC in 4 builds. However, as the IDPD factor increases and the staffing level remains constant, the productivity decline per build stretches the schedule out to twice as long for an IDPD of 20%.
Thus, it is important to understand the IDPD factor and its influence when doing incremental or evolutionary development. Ongoing research indicates that the magnitude of the IDPD factor may vary by type of application (infrastructure software having higher IDPDs since it tends to be tightly coupled and touches everything; applications software having lower IDPDs if it is architected to be loosely coupled), or by recency of the build (older builds may be more stable). Further data collection and analysis would be very helpful in improving the understanding of the IDPD factor.
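A minimal sketch of this illustrative build-stretch model follows. The target size, Build 1 output, and IDPD values are the assumptions stated above; the loop simply reduces each successive build's output by the IDPD factor, holding staff and build duration constant.

    def builds_to_reach(target_ksloc=4000.0, build1_ksloc=1000.0, idpd=0.20, max_builds=12):
        """Number of constant-staff builds needed to deliver target_ksloc under an IDPD factor."""
        delivered, per_build, builds = 0.0, build1_ksloc, 0
        while delivered < target_ksloc and builds < max_builds:
            delivered += per_build
            per_build *= (1.0 - idpd)  # the next build delivers less for the same staff and time
            builds += 1
        return builds, delivered

    for idpd in (0.0, 0.10, 0.15, 0.20):
        n, total = builds_to_reach(idpd=idpd)
        print(f"IDPD {idpd:.0%}: {n} builds to deliver {total:,.0f} KSLOC")
    # 0% -> 4 builds; 20% -> 8 builds, i.e., roughly twice the schedule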

[Figure 6 plots cumulative KSLOC delivered per build (builds 1 through 8) for per-build productivity declines of 0%, 10%, 15%, and 20%.]

Figure 6 Effects of IDPD on Number of Builds to Achieve 4M SLOC

7.1.2 Net-centric Systems of Systems (NCSoS)
If one is developing software components for use in a NCSoS, changes in the interfaces between the component systems and independently evolving NCSoS-internal or NCSoS-external systems will add further effort. The amount of effort may vary by the tightness of the coupling among the systems; the complexity, dynamism, and compatibility of purpose of the independently evolving systems; and the degree of control that the NCSoS protagonist has over the various component systems. The latter ranges from Directed SoS (strong control), through Acknowledged (partial control) and Collaborative (shared interests) SoSs, to Virtual SoSs (no guarantees) [USD(AT&L) 2008].
For estimation, one option is to use requirements volatility as a way to assess increased effort. Another is to use existing models such as COSYSMO [Valerdi 2008] to estimate the added coordination effort across the NCSoS [Lane 2009]. A third approach is to have separate models for estimating the systems engineering, the NCSoS component systems development, and the NCSoS component systems integration to estimate the added effort [Lane-Boehm 2007].

7.1.3 Model-Driven and Non-Developmental Item (NDI)-Intensive Development


Model-driven development and Non-Developmental Item (NDI)-intensive development are two approaches that enable large portions of software-intensive systems to be generated from model directives or provided by NDIs such as Commercial Off-The-Shelf (COTS) components, open source components, and purchased services such as Cloud services. Figure 7 shows recent trends in the growth of COTS-Based Applications (CBAs) [Yang et al. 2005] and services-intensive systems [Koolmanojwong-Boehm 2010] in the area of web-based e-services.

[Figure 7 shows two trend charts of the percentage of USC e-services projects: the growth of COTS-based applications from 1997 through 2002, and the growth of services-intensive projects from 19% (Sp07) through 35%, 50%, and 57% (Fa09/Sp10).]

Figure 7 COTS and Services-Intensive Systems Growth in USC E-Services Projects

Such applications are highly cost-effective, but present several sizing and cost estimation challenges:
Model directives generate source code in Java, C++, or other third generation languages, but unless the generated SLOC are going to be used for system maintenance, their size as counted by code counters should not be used for development or maintenance cost estimation.

Counting model directives is possible for some types of model-driven development, but presents significant challenges for others (e.g., GUI builders).
Except for customer-furnished or open source software that is expected to be modified, the size of NDI components should not be used for estimating.
A significant challenge is to find appropriately effective size measures for such NDI components. One approach is to use the number and complexity of their interfaces with each other or with the software being developed. Another is to count the amount of glue code SLOC being developed to integrate the NDI components, with the proviso that such glue code tends to be about 3 times as expensive per SLOC as regularly developed code [Basili-Boehm 2001]; a sketch of this weighting appears after this list. A similar approach is to use the interface elements of function points for sizing [Galorath-Evans 2006].
A further challenge is that much of the effort in using NDI is expended in assessing candidate NDI components and in tailoring them to the given application. Some initial guidelines for estimating such effort are provided in the COCOTS model [Abts 2004].
Another challenge is that the effects of COTS and Cloud services evolution are generally underestimated during software maintenance. COTS products generally provide significant new releases on the average of about every 10 months, and generally become unsupported after three new releases. With Cloud services, one does not have the option to decline new releases, and updates occur more frequently. One way to estimate this source of effort is to consider it as a form of requirements volatility.
Another serious concern is that functional size measures such as function points, use cases, or requirements will be highly unreliable until it is known how much of the functionality is going to be provided by NDI components or Cloud services.
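As a hedged illustration of the glue-code approach mentioned above (an assumption-based sketch, not a calibrated model from this manual), the rough 3x cost factor can be reflected by weighting glue-code SLOC when forming an equivalent-size input for estimation:

    GLUE_CODE_WEIGHT = 3.0  # assumption based on the rough factor cited from [Basili-Boehm 2001]

    def ndi_equivalent_ksloc(new_ksloc: float, glue_ksloc: float) -> float:
        """Equivalent size with glue code weighted at roughly 3x regularly developed code."""
        return new_ksloc + GLUE_CODE_WEIGHT * glue_ksloc

    print(ndi_equivalent_ksloc(new_ksloc=30.0, glue_ksloc=5.0))  # 45.0 KESLOC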

7.1.4 Ultrahigh Software Systems Assurance


The increasing criticality of software to the safety of transportation vehicles, medical equipment, and financial resources; to the security of private or confidential information; and to the assurance of 24/7 Internet, web, or Cloud services will require larger investments in the development and certification of software than are made for most current software-intensive systems.
While it is widely held that ultrahigh-assurance software will substantially raise software project cost, different models vary in estimating the added cost. For example, [Bisignani-Reed 1988] estimates that engineering highly secure software will increase costs by a factor of 8; the 1990s Softcost-R model estimates a factor of 3.43 [Reifer 2002]; the SEER model uses a similar value of 3.47 [Galorath-Evans 2006].
A recent experimental extension of the COCOMO II model called COSECMO used the 7 Evaluation Assurance Levels (EALs) in the ISO Standard Common Criteria for Information Technology Security Evaluation (CC) [ISO 1999], and quoted prices for certifying various EAL security levels, to provide an initial estimation model in this context [Colbert-Boehm 2008]. Its added-effort estimates were a function of both EAL level and software size: its multipliers for a 5,000-SLOC secure system were 1.50 for EAL 4 and 8.8 for EAL 7.
A further sizing challenge for ultrahigh-assurance software is that it requires more functionality for such functions as security audit, communication, cryptographic support, data protection,
etc. These may be furnished by NDI components or may need to be developed for special systems.

7.1.5 Legacy Maintenance and Brownfield Development


Fewer and fewer software-intensive systems have the luxury of starting with a clean sheet of paper or whiteboard on which to create a new Greenfield system. Most software-intensive systems are already in maintenance; [Booch 2009] estimates that there are roughly 200 billion SLOC in service worldwide. Also, most new applications need to consider continuity of service from the legacy system(s) they are replacing. Many such applications involving incremental development have failed because there was no way to separate out the incremental legacy system capabilities that were being replaced. Thus, such applications need to use a Brownfield development approach that concurrently architects the new version and its increments, while re-engineering the legacy software to accommodate the incremental phase-in of the new capabilities [Hopkins-Jenkins 2008; Lewis et al. 2008; Boehm 2009].
Traditional software maintenance sizing models have determined an equivalent SLOC size by multiplying the size of the legacy system by its Annual Change Traffic (ACT) fraction: (% of SLOC added + % of SLOC modified) / 100. The resulting equivalent size is used to determine a nominal cost of a year of maintenance, which is then adjusted by maintenance-oriented effort multipliers. These are generally similar or identical to those for development, except for some, such as required reliability and degree of documentation, in which larger development investments will yield relative maintenance savings. Some models such as SEER [Galorath-Evans 2006] include further maintenance parameters such as personnel and environment differences. An excellent summary of software maintenance estimation is in [Stutzke 2005].
However, as legacy systems become larger and larger (a full-up BMW contains roughly 100 million SLOC [Broy 2010]), the ACT approach becomes less stable. The difference between an ACT of 1% and an ACT of 2% when applied to 100 million SLOC is 1 million SLOC. A recent revision of the COCOMO II software maintenance model sizes a new release as ESLOC = 2 x (Modified SLOC) + Added SLOC + 0.5 x (Deleted SLOC). The coefficients are rounded values determined from the analysis of data from 24 maintenance activities [Nguyen 2010], in which the modified, added, and deleted SLOC were obtained from a code counting tool. This model can also be used to estimate the equivalent size of re-engineering legacy software in Brownfield software development. At first, the estimates of legacy SLOC modified, added, and deleted will be very rough, and can be refined as the design of the maintenance modifications or Brownfield re-engineering is determined.
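A minimal sketch of the two sizing approaches just described follows, with illustrative counts rather than data from any particular system; the release-sizing coefficients (2, 1, 0.5) are the rounded values quoted above from [Nguyen 2010].

    def act_equivalent_sloc(legacy_sloc, pct_added, pct_modified):
        """Annual Change Traffic sizing: legacy size x (% added + % modified) / 100."""
        return legacy_sloc * (pct_added + pct_modified) / 100.0

    def release_equivalent_sloc(modified, added, deleted):
        """Release sizing: ESLOC = 2 x Modified + Added + 0.5 x Deleted."""
        return 2.0 * modified + added + 0.5 * deleted

    print(act_equivalent_sloc(100_000_000, pct_added=1.0, pct_modified=1.0))       # 2,000,000
    print(release_equivalent_sloc(modified=40_000, added=60_000, deleted=10_000))  # 145,000.0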

7.1.6 Agile and Kanban Development
The difficulties of software maintenance estimation can often be mitigated by using workflow management techniques such as Kanban [Anderson 2010]. In Kanban, individual maintenance upgrades are given Kanban cards (kanban is the Japanese word for card; the approach originated with the Toyota Production System). Workflow management is accomplished by limiting the number of cards introduced into the development process, and pulling the cards into the next stage of development (design, code, test, release) when open capacity is available (each stage has a limit on the number of cards it can be processing at a given time). Any buildups of upgrade queues waiting to be pulled forward are given management attention to find and fix bottleneck root causes or to rebalance the manpower devoted to each stage of development. A key Kanban principle is to minimize work in progress.
An advantage of Kanban is that if upgrade requests are relatively small and uniform, there is no need to estimate their required effort; they are pulled through the stages as capacity is available, and if the capacities of the stages are well tuned to the traffic, work gets done on schedule. However, if too large an upgrade is introduced into the system, it is likely to introduce delays as it progresses through the stages. Thus, some form of estimation is necessary to determine right-size upgrade units, but it does not have to be precise as long as the workflow management pulls the upgrade through the stages. For familiar systems, performers will be able to right-size the units. For Kanban in less familiar systems, and for sizing builds in agile methods such as Scrum, group consensus techniques such as Planning Poker [Cohn 2005] or Wideband Delphi [Boehm 1981] can generally serve this purpose.
The key point here is to recognize that estimation of knowledge work can never be perfect, and to create development approaches that compensate for variations in estimation accuracy. Kanban is one such approach; another is the agile methods approach of timeboxing or schedule-as-independent-variable (SAIV), in which maintenance upgrades or incremental development features are prioritized, and the increment is architected to enable dropping of features to meet a fixed delivery date (with Kanban, prioritization occurs in determining which of a backlog of desired upgrade features gets the next card). Such prioritization is a form of value-based software engineering, in that the higher-priority features can be flowed more rapidly through Kanban stages [Anderson 2010], or in general given more attention in defect detection and removal via value-based inspections or testing [Boehm-Lee 2005; Li-Boehm 2010]. Another important point is that the ability to compensate for rough estimates does not mean that data on project performance does not need to be collected and analyzed. Such data is even more important as a sound source of continuous improvement and change adaptability efforts.

7.1.7 Putting It All Together at the Large-Project or Enterprise Level


The biggest challenge of all is that the six challenges above need to be addressed concurrently. Suboptimizing on individual-project agility runs the risks of easiest-first lock-in to unscalable or unsecurable systems, or of producing numerous incompatible stovepipe applications. Suboptimizing on security assurance and certification runs the risks of missing early-adopter market windows, of being slow to respond to competitive threats, or of creating inflexible, user-unfriendly systems.
One key strategy for addressing such estimation and performance challenges is to recognize that large systems and enterprises are composed of subsystems that have different need priorities and can be handled by different estimation and performance approaches. Real-time, safety-critical control systems and security kernels need high assurance, but are relatively stable. GUIs need rapid adaptability to change, but with GUI-builder systems, can largely compensate for lower assurance levels via rapid fixes. A key point here is that for most enterprises and large systems, there is no one-size-fits-all method of sizing, estimating, and performing.

7.2 Estimation Approaches for Different Processes


This implies a need for guidance on what kind of process to use for what kind of system or subsystem, and on what kinds of sizing and estimation capabilities fit what kinds of processes. A start toward such guidance is provided in Tables 3.3 and 3.4 of [Boehm-Lane 2010].
Figure 8 summarizes the traditional single-step waterfall process plus several forms of incremental development, each of which meets different competitive challenges and is best served by different cost estimation approaches. The time phasing of each form is expressed in terms of the increment 1, 2, 3, ... content with respect to the Rational Unified Process (RUP) phases of Inception (I), Elaboration (E), Construction (C), and Transition (T).

Figure 8 Summary of Different Processes

The Single Step model is the traditional waterfall model, in which the requirements are pre-specified, and the system is developed to the requirements in a single increment. Single-increment parametric estimation models, complemented by expert judgment, are best for this process.
The Pre-specified Sequential incremental development model is not evolutionary. It simply splits up the development in order to field an early Initial Operational Capability, followed by several Pre-Planned Product Improvements (P3Is). When requirements are pre-specifiable and stable, it enables a strong, predictable process. When requirements are emergent and/or rapidly changing, it often requires very expensive rework when it needs to undo architectural commitments. Cost estimation can be performed by sequential application of single-step parametric models plus the use of an IDPD factor, or by parametric model extensions supporting the estimation of increments, including options for increment overlap and breakage of existing increments, such as the COCOMO II Incremental Development Model (COINCOMO) extension described in Appendix B of [Boehm et al. 2000].
The Evolutionary Sequential model rapidly develops an initial operational capability and upgrades it based on operational experience. Pure agile software development fits this model: if something is wrong, it will be fixed in 30 days in the next release. Rapid fielding also fits this model for larger or hardware-software systems. Its strength is getting quick-response capabilities into the field. For pure agile, it can fall prey to an easiest-first set of architectural commitments which break when, for example, it tries to add security or scalability as a new feature in a later increment. For rapid fielding, it may be expensive to keep the development team together while waiting for usage feedback, but it may be worth it. For small agile projects, group consensus techniques such as Planning Poker are best; for larger projects, parametric models with an IDPD factor are best.
Evolutionary Overlapped covers the special case of deferring the next increment until critical enablers such as desired new technology, anticipated new commercial product capabilities, or needed funding become available or mature enough to be added.
Evolutionary Concurrent has the systems engineers handling the change traffic and re-baselining the plans and specifications for the next increment, while keeping the development stabilized for the current increment. Its example and pros and cons are provided in Table 29.


Table 29 Situation-Dependent Processes and Estimation Approaches

Type: Single Step
  Examples: Stable, high-assurance systems
  Pros: Pre-specifiable full-capability requirements
  Cons: Emergent requirements or rapid change
  Cost Estimation: Single-increment parametric estimation models
Type: Pre-specified Sequential
  Examples: Platform base plus PPPIs
  Pros: Pre-specifiable full-capability requirements
  Cons: Emergent requirements or rapid change
  Cost Estimation: COINCOMO, or repeated single-increment parametric model estimation with IDPD
Type: Evolutionary Sequential
  Examples: Small: Agile; Large: Evolutionary Development
  Pros: Adaptability to change
  Cons: Easiest-first; late, costly breakage
  Cost Estimation: Small: Planning-poker-type; Large: Parametric with IDPD and Requirements Volatility
Type: Evolutionary Overlapped
  Examples: COTS-intensive systems
  Pros: Immaturity risk avoidance
  Cons: Delay may be noncompetitive
  Cost Estimation: Parametric with IDPD and Requirements Volatility
Type: Evolutionary Concurrent
  Examples: Mainstream product lines; Systems of systems
  Pros: High assurance with rapid change
  Cons: Highly coupled systems with very rapid change
  Cost Estimation: COINCOMO with IDPD for development; COSYSMO for re-baselining

All cost estimation approaches also include an expert-judgment cross-check.
Table 30 provides criteria for deciding which of the five classes of incremental and evolutionary acquisition (EvA) defined in Table 29 to use, plus the choice of non-incremental, single-step development.
The Single Step to Full Capability process exemplified by the traditional waterfall or sequential Vee model is appropriate if the product's requirements are pre-specifiable and have a low probability of significant change, and if there is no value in or opportunity to deliver a partial product capability. A good example would be the hardware portion of a geosynchronous satellite.
The Pre-specified Sequential process is best if the product's requirements are pre-specifiable and have a low probability of significant change, and if waiting for the full system to be developed incurs a loss of important and deliverable incremental mission capabilities. A good example would be a well-understood and well-prioritized sequence of software upgrades to a programmable radio.
The Evolutionary Sequential process is best when there is a need to get operational feedback on a quick-response capability before defining and developing the next increment's content. Agile methods fit into this category, as do systems undergoing rapid competitive change.
The Evolutionary Overlapped process is best when one does not need to wait for operational feedback, but may need to wait for next-increment enablers such as technology maturity,
external system capabilities, or needed resources. A good example is the need to wait for a mature release of an anticipated commercial product. The Evolutionary Concurrent process is best when the enablers are available, but there is a great deal of change traffic to be handled that would destabilize the team developing the current increment. Examples may be new competitive threats, emergent user capability needs, external system interface changes, technology matured on other programs, or COTS upgrades.
Table 30 Process Model Decision Table

Type: Single Step
  Stable pre-specifiable requirements? Yes. OK to wait for full system to be developed? Yes.
Type: Pre-specified Sequential
  Stable pre-specifiable requirements? Yes. OK to wait for full system to be developed? No.
Type: Evolutionary Sequential
  Stable pre-specifiable requirements? No. OK to wait for full system? No. Need to wait for next-increment priorities? Yes.
Type: Evolutionary Overlapped
  Stable pre-specifiable requirements? No. OK to wait for full system? No. Need to wait for next-increment priorities? No. Need to wait for next-increment enablers? Yes.
Type: Evolutionary Concurrent
  Stable pre-specifiable requirements? No. OK to wait for full system? No. Need to wait for next-increment priorities? No. Need to wait for next-increment enablers? No.

Example enablers: technology maturity; external system capabilities; needed resources.
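A minimal sketch of the Table 30 decision logic follows, with the four header questions as boolean inputs; this is a convenience illustration of the table, not an additional method.

    def process_model(stable_reqs: bool, ok_to_wait_full: bool,
                      wait_for_priorities: bool, wait_for_enablers: bool) -> str:
        """Return the process model type selected by the Table 30 criteria."""
        if stable_reqs:
            return "Single Step" if ok_to_wait_full else "Pre-specified Sequential"
        if wait_for_priorities:
            return "Evolutionary Sequential"
        return "Evolutionary Overlapped" if wait_for_enablers else "Evolutionary Concurrent"

    print(process_model(False, False, False, True))  # Evolutionary Overlapped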


8 Conclusions and Next Steps


There are two conclusions that can be drawn from the results presented in Chapter 6, Cost Estimating Relationship Analysis:
1. The results do not provide enough certainty to be useful, in which case the reader is advised to be cautious of any productivity metrics.
2. The results have validity, but more investigation is needed into the variability of the data, and analysis of additional data is needed as well.

This manual is still a work in progress. There are more SRDR records to be analyzed. With more data, the results presented here would be expanded and refined. The list below presents an overview of the next steps:
Expand the SLOC count type conversions for each of the six programming languages.
Expand the adapted code parameter table to additional productivity types and operating environments.
Expand the average effort percentages table, Table 14, to additional productivity types.
Conduct analysis of schedule duration for the different activities.
Expand the number of CERs for each productivity type and operating environment.
Increase the coverage of the productivity benchmarks for each operating environment and productivity type.
Segment the productivity benchmarks by software size groups.
There are two additional chapters that should be in this manual:
1. Software Code Growth
This is a very important topic, as it will impact the sensitivities of a CER-based estimate.
2. Software Maintenance CERs and Productivity Benchmarks
With the super-large base of DoD software, maintenance cost estimation is another important topic.


9 Appendices
9.1 Acronyms
4GL FourthGenerationLanguage
AAF AdaptationAdjustmentFactor:Itisusedwithadaptedsoftwareto
produceanequivalentsize.ItincludestheeffectsofDesign
Modified(DM),CodeModified(CM),andIntegrationModified
(IM).
AAM AdaptationAdjustmentMultiplier
ACAT AcquisitionCategory
ACEIT AutomatedCostEstimatingIntegratedTools
ACWP ActualCostofWorkPerformed
AMS AcquisitionManagementSystem
ASP AcquisitionSupportPlan
AV AerialVehicle
AVM AerialVehicleManned,e.g.,Fixedwingaircraft,Helicopters
AVU AerialVehicleUnmanned,e.g.,Remotelypilotedairvehicles
BCWP BudgetedCostofWorkPerformed
BCWS BudgetedCostofWorkScheduled
BFP BasicFeaturePoint
C/SCSC Costs/ScheduleControlSystemCriteria
CAPE CostAssessmentandProgramEvaluation(anOSDorganization)
CARD CostAnalysisRequirementsDocument
CDA CentralDesignAuthority
CDD CapabilityDescriptionDocument
CDR CriticalDesignReview
CDRL ContractDataRequirementsList
CER CostEstimatingRelationship
CM CodeModifiedPercentage
CMM CapabilityMaturityModel

CO ContractingOfficer
COCOMO COnstructiveCOstMOdel
COCOTS COnstructiveCOTS
COTS CommercialofftheShelf
CPM CriticalPathMethod
CSC ComputerSoftwareComponent
CSCI ComputerSoftwareConfigurationItem
CSDR CostandSoftwareDataReport
CSU ComputerSoftwareUnit
DACIMS DefenseAutomatedCostInformationManagementSystem
DCARC DefenseCostandResourceCenter
DDE DynamicDataExchange
DM DesignModifiedPercentage
DoD DepartmentofDefense
EA EvolutionaryAcquisition
EI ExternalInputs
EIF ExternalInterfaces
EO ExternalOutputs
EQ ExternalInquiries
EVMS EarnedValueManagementSystem
FAACEH FAACostEstimatingHandbook
FAAPH FAAPricingHandbook
FAQ FrequentlyAskedQuestions
FCA FunctionalConfigurationAudit
FPA FunctionPointAnalysis
FPC FunctionPointCount
FPH FAAPricingHandbook
GAO U.S.GeneralAccountingOffice
GS GroundSite
GSF GroundSiteFixed,e.g.,CommandPost,GroundOperations
Center,GroundTerminal,TestFaculties

GSM GroundSiteMobile,e.g.,Intelligencegatheringstationsmounted
onvehicles,Mobilemissilelauncher
GUI GraphicalUserInterface
GV GroundVehicle
GVM GroundVehicleManned,e.g.,Tanks,Howitzers,Personnelcarrier
GVU GroundVehicleUnmanned,e.g.,Robotvehicles
HOL HigherOrderLanguage
HWCI HardwareConfigurationitem
IDPD IncrementalDevelopmentProductivityDecline
ICE IndependentCostEstimate
IEEE InstituteofElectricalandElectronicsEngineers
IFPUG InternationalFunctionPointUsersGroup
IIS IntelligenceandInformationSoftware
ILF InternalFiles
IM IntegrationModifiedPercentage
IRS InterfaceRequirementSpecification
IS InformationSystem
KDSI ThousandsofDeliveredSourceInstructions
LCC LifeCycleCost
MDAP MajorDefenseAcquisitionProgram
MOU MemorandumofUnderstanding
MPSS MostProductiveScaleSize
MTTD MeanTimeToDetect
MP MissionProcessing
MSLOC Millionsofsourcelinesofcode
MV MaritimeVessel
MVM MaritimeVesselManned,e.g.,Aircraftcarriers,destroyers,supply
ships,submarines
MVU MaritimeVesselUnmanned,e.g.,Minehuntingsystems,Towed
sonararray
NASA NationalAeronauticsandSpaceAdministration
NCCA NavalCenterforCostAnalysis
NDI NonDevelopmentItem
NLM NonLinearModel
NRaD UnitedStatesNavysNavalCommand,Control,Surveillance
Center,RDT&EDivision,SoftwareEngineeringProcessOffice
OO ObjectOriented
OpEnv OperatingEnvironment
OSD OfficeoftheSecretaryofDefense
OV OrdinanceVehicle
OVU OrdinanceVehicleUnmanned,e.g.,Airtoairmissiles,Airto
groundmissiles,Smartbombs,Strategicmissiles
PCA PhysicalConfigurationAudit
PERT ProgramEvaluationandReviewTechnique
PC ProcessControl
PLN Planningsoftware
Pr Productivity
PT ProductivityType
RTE RealTimeEmbedded
RUP RationalUnifiedProcess
SAIV ScheduleAsanIndependentVariable
SCI Scientificsoftware
SCP SensorControlandSignalProcessing(SCP)
SDD SoftwareDesignDocument
SDP SoftwareDevelopmentPlan
SDR SoftwareDesignReview
SEER AtoolsuiteproducedbyGalorath
SEERSEM SystemEvaluationandEstimationofResourcesSoftware
EstimatingModel
SEI SoftwareEngineeringInstitute
SER ScheduleEstimatingRelationship
SLIM AtoolsuiteproducedbyQuantitativeSoftwareManagement
SLIM SoftwareLifeCycleModel
SLOC SourceLinesofCode
SRDR SoftwareResourceDataReport
SRR SystemsRequirementsReview
SRS SoftwareRequirementsSpecification
SSCAG SpaceSystemsCostAnalysisGroup
SSR SoftwareSpecificationReview
SSS SystemSegmentSpecification
SU SoftwareUnderstanding
SV SpaceVehicle
SVM SpaceVehicleManned,e.g.,Passengervehicle,Cargovehicle,
Spacestation
SVU SpaceVehicleUnmanned,e.g.,Orbitingsatellites(weather,
communications),Exploratoryspacevehicles
SYS SystemSoftware
TEL Telecommunicationssoftware
TOOL SoftwareTools
TST Testsoftware
TRN TrainingSoftware
UCC UniversalCodeCounter
UNFM ProgrammerUnfamiliarity
USC UniversityofSouthernCalifornia
VC VehicleControl
VP VehiclePayload
WBS WorkBreakdownStructure


9.2 Automated Code Counting


Unified Code Count is a source code counting and differencing tool. It allows the user to count, compare, and collect both physical and logical differentials between two versions of the source code of a software product. The differencing capabilities allow users to count the number of added/new, deleted, modified, and unmodified physical and logical source lines of code (SLOC) of the current version in comparison with the previous version. With the counting capabilities, users can generate the physical, non-commented source statement, and logical SLOC counts, and other sizing information, such as comment and keyword counts, of the target program. The tool can be compiled using a C/C++ supported compiler. It is run by providing files of file names to be counted or by providing the directories where the files reside. It can be downloaded free from the USC Center for Systems and Software Engineering4.

Figure 9 Unified Code Count Summary Output Example

The example in Figure 9 shows the summary SLOC count with a total of 3,375 logical SLOC consisting of 619 data declarations and 2,127 executable instructions. This is the software size to be used for estimation, measurement of actuals, and model calibration.

9.3 Additional Adapted SLOC Adjustment Factors


Software Understanding
Software Understanding (SU) measures how understandable the software to be modified is. The SU increment is expressed quantitatively as a percentage. SU is determined by taking an average of its ratings on structure, application clarity, and self-descriptiveness using Table 31 below.

4 http://csse.usc.edu/research/CODECOUNT/

Table 31 Rating Scale for Software Understanding

Structure
  Very Low: Very low cohesion, high coupling, spaghetti code.
  Low: Moderately low cohesion, high coupling.
  Nominal: Reasonably well-structured; some weak areas.
  High: High cohesion, low coupling.
  Very High: Strong modularity, information hiding in data / control structures.
Application Clarity
  Very Low: No match between program and application world-views.
  Low: Some correlation between program and application.
  Nominal: Moderate correlation between program and application.
  High: Good correlation between program and application.
  Very High: Clear match between program and application world-views.
Self-Descriptiveness
  Very Low: Obscure code; documentation missing, obscure or obsolete.
  Low: Some code commentary and headers; some useful documentation.
  Nominal: Moderate level of code commentary, headers, documentation.
  High: Good code commentary and headers; useful documentation; some weak areas.
  Very High: Self-descriptive code; documentation up-to-date, well-organized, with design rationale.
SU Increment to ESLOC
  Very Low: 50; Low: 40; Nominal: 30; High: 20; Very High: 10

Programmer Unfamiliarity
Unfamiliarity (UNFM) quantifies how unfamiliar the person modifying the software is with it. The UNFM factor is applied multiplicatively to SU to account for familiarity. For example, a person who developed the adapted software and is intimate with it does not have to undertake the understanding effort. See Table 32 below.
Table32RatingScaleforProgrammerUnfamiliarity
UNFM Increment to ESLOC Level of Unfamiliarity
0.0 Completely familiar
0.2 Mostly familiar
0.4 Somewhat familiar
0.6 Considerably unfamiliar
0.8 Mostly unfamiliar
1.0 Completely unfamiliar

The nonlinear effects of SU and UNFM are added to the linear approximation given by AAF (discussed in Chapter 2.3.2) to compute ESLOC. A higher-fidelity adapted code adjustment factor is given by the Adaptation Adjustment Multiplier (AAM):
Eq 37 AAM = AAF x (1 + 0.02 x SU x UNFM) (when AAF <= 50%)
Eq 38 AAM = AAF + (SU x UNFM) (when AAF > 50%)
The new total equivalent size for software composed of new and adapted software is:
Eq 39 Total Equivalent Size = New Size + (AAM x Adapted Size)
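A minimal sketch of Eq 37 through Eq 39 follows. AAF and SU are handled on the percentage scale and converted to a multiplier at the end; the DM, CM, IM, SU, and UNFM values in the usage line are illustrative assumptions, not recommended settings.

    def aaf_percent(dm_pct, cm_pct, im_pct):
        """Adaptation Adjustment Factor in percent: 0.4 x DM + 0.3 x CM + 0.3 x IM."""
        return 0.4 * dm_pct + 0.3 * cm_pct + 0.3 * im_pct

    def aam(dm_pct, cm_pct, im_pct, su_pct, unfm):
        """Adaptation Adjustment Multiplier (as a fraction) per Eq 37 / Eq 38."""
        aaf = aaf_percent(dm_pct, cm_pct, im_pct)
        if aaf <= 50.0:
            return aaf * (1.0 + 0.02 * su_pct * unfm) / 100.0  # Eq 37
        return (aaf + su_pct * unfm) / 100.0                   # Eq 38

    def total_equivalent_size(new_ksloc, adapted_ksloc, **adaptation):
        """Eq 39: Total Equivalent Size = New Size + AAM x Adapted Size."""
        return new_ksloc + aam(**adaptation) * adapted_ksloc

    print(total_equivalent_size(10.0, 50.0, dm_pct=10, cm_pct=20, im_pct=30,
                                su_pct=30, unfm=0.4))  # about 21.8 KESLOC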

9.3.1 Examples

9.3.1.1 Example: New Software


A system is to be developed all new. There is no legacy software or other reusable software used. The only size input for estimation is New, and there are no adaptation parameters involved.

9.3.1.2 Example: Modified Software


This example estimates the equivalent size associated with writing a new user interface to work with an existing application. Assume the size of the new interface is 20 KSLOC. For it to work, we must change the existing application to accommodate a new Application Program Interface (API). If the adapted (existing application) size is 100 KSLOC, the equivalent size of the adapted code is computed as:
Eq 40 Equivalent Size = [0.4 x (5% DM) + 0.3 x (10% CM) + 0.3 x (10% IM)] x 100 KSLOC = 8 KSLOC
Further assume that we are dealing with poorly written spaghetti code and that we are totally unfamiliar with it. We would then rate SU as 50% and UNFM as 1.0. As a result, we would have to increase our estimate to reflect this learning curve.
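To illustrate that adjustment (a sketch of the computation, not a prescribed result): with AAF = 8%, SU = 50, and UNFM = 1.0, Eq 37 gives AAM = 0.08 x (1 + 0.02 x 50 x 1.0) = 0.16, so the equivalent size of the adapted code roughly doubles from 8 KSLOC to 16 KSLOC, and the total equivalent size becomes 20 KSLOC new + 16 KSLOC adapted = 36 KSLOC.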

9.3.1.3 Example: Upgrade to Legacy System


In this example there is a very large existing legacy system undergoing periodic upgrades. The size of the legacy system is so large that the equivalent size used to estimate the incremental update is very sensitive to the adaptation parameters. The size metrics for the increment are:
New code: 75 KSLOC
Modified code: 20 KSLOC
Legacy code: 3 MSLOC

Care must be taken when assigning the adaptation parameters for the legacy code to compute its equivalent size. For example, the difference between the small values of AAF = 0% and AAF = 5% is a tripling of the equivalent size for the upgrade. Some regression testing of untouched legacy code is inevitable, and the factor for % Integration Required should be investigated carefully in terms of the relative effort involved. This might be done by quantifying the number of regression tests performed and their manual intensity compared to the tests of new functionality. If the % Integration Required for the legacy code is 1%, then the adaptation factor for it would be:
Eq 41 AAM = [0.4 x (0% DM) + 0.3 x (0% CM) + 0.3 x (1% IM)] = 0.003
The total ESLOC for new, modified, and reused (legacy) code, assuming the AAF for the modified code is 25%, is:
Eq 42 ESLOC = 75 + (20 x 0.25) + (3 MSLOC x 0.003) = 75 + 5 + 9 = 89 KSLOC
In this case, building on top of the legacy baseline was 9/89, or about 10%, of the work.

9.4 SRDR Data Report


The SRDR data report has several versions reflecting its evolution over past years: 2003, 2004, 2007, and 2011. The report does not require a specific format for data submission. However, there is a recommended format submitters may use. This format looks like a data form and consists of two pages. See Figure 10 and Figure 11 below.


Figure 10 SRDR Page 1 (top)


Figure 10 SRDR Page 1 (bottom)


Figure 11 SRDR Page 2

9.4.1 Proposed Modifications
In 2010, the modifications in Table 33 were proposed to the DCARC. Most of the recommendations were incorporated, to some degree, in the 2011 SRDR instructions. It will take several years before data appears with the additional data items.
The relevance of this section is to demonstrate that DCARC will evolve the SRDR to meet future information needs. SRDR users need to be proactive with the DCARC to express their needs for different types of data. The recommendations in Table 33 were accompanied by examples of data analysis highlighting the shortfalls in the current data collection.
Table33RecommendedSRDRModifications
Current 2007 SRDR Proposed Modifications Rationale
Application Types (3.7.1 Reorganize around Operating Reduce duplication
17) Environments and Application Structure productivity analysis
Domains domains, database planning
Add Mission Criticality (add guidance
reliability and complexity in a Account for productivity
single rating scale) variations
Revisit detailed definitions of the
Application Domains
Amount of New (>25%), Add DM, CM, IM, SU, & UNFM Improve derivation of
Modified (<25% mod) factors for modified code equivalent SLOC for use in
Code Incorporate Galorath-like calibration and estimation
questionnaire Excludes COTS; more
Add IM for reused code accurate for generated
Definitions for code types code
Count at the level it will be Includes the code base for
maintained evolutionary acquisition
Deleted Code Report deleted code counts Deleting code does take
effort
Software and External Add anticipated requirements CARD realism
Interface Requirements volatility to 2630-1, 2 Traceability
Use percentage of requirements Improve calibration and
change as volatility input (SRR estimation accuracy
baseline)
Personnel Experience & Add to 2630-1 CARD realism
Turnover Expand years of experience rating Traceability
scale to 12 years Improve calibration and
estimation accuracy
Project- or CSCI-level Specify the level of data reporting Apples-to-Apples comparison
data Improved data analysis

All Other Direct Break into: Improve calibration and
Software Engineering Management functions estimation accuracy for
Development Effort Configuration / Environment different functions
(4.7): functions
Project Management Assessment functions
IV&V Organization functions (e.g. user &
Configuration maintainer documentation,
Management measurement, training, process
Quality Control improvement, etc.)
Problem Resolution
Library Management
Process Improvement
Measurement
Training
Documentation
Data Conversion
Customer-run
Acceptance Test
Software Delivery,
Installation &
Deployment
Product Quality: Are there better measures, e.g.: There is limited quality
Mean Time To critical Total number of priority 1, 2, 3, 4, & information
Defect (MTTD) 5 defects discovered If it is not going to be
Analogy with Similar Total number of priority 1, 2, 3, 4, & reported, why put it on the
Systems 5 defects removed form?


9.5 MIL-STD-881C WBS Mapping to Productivity Types


The Work Breakdown Structures were adapted from MIL-STD-881C to assist in determining the correct Productivity Type (PT). Each system from 881C is listed with the associated one or more Metrics Manual Operating Environments.
Within the environments, look through the subsystems to find one that matches the component being estimated. Each subsystem or sub-subsystem has a matching PT.
Use the PT to look up the associated productivity-based CER and model-based CER/SER.

9.5.1 Aerial Vehicle Manned (AVM)


Source: MIL-STD-881C Appendix A: Aircraft Systems

Env SubSystem Sub-Subsystem Domain PT


Flight Control Subsystem VC

Auxiliary Power Subsystem VC

Hydraulic Subsystem VC

Electrical Subsystem VC

Crew Station Subsystem VC


AVM Air Vehicle
Environmental Control Subsystem VC

Fuel Subsystem VC

Landing Gear VC

Rotor Group VC

Drive System VC

Intercoms RTE

Radio System(S) RTE


Communication / Identification
Identification Equipment (IFF) RTE

Data Links RTE

Radar SCP
AVM Avionics
Radio SCP

Other Essential Nav Equipment RTE


Navigation / Guidance
Radar Altimeter SCP

Direction Finding Set RTE

Doppler Compass SCP

AVM Avionics Mission Computer / Processing MP


Env SubSystem Sub-Subsystem Domain PT


Search, Target, Tracking Sensors SCP

Self-Contained Navigation RTE

Self-Contained Air Data Systems RTE


Fire Control
Displays, Scopes, Or Sights RTE

Bombing Computer MP

Safety Devices RTE

Multi-Function Displays RTE

Control Display Units RTE


Data Display and Controls
Display Processors MP

On-Board Mission Planning TRN

Ferret And Search Receivers SCP

Warning Devices SCP

AVM Avionics Electronic Countermeasures SCP

Survivability Jamming Transmitters SCP

Chaff SCP

Infra-Red Jammers SCP

Terrain-Following Radar SCP

Photographic Sensors SCP

Electronic Sensors SCP

Infrared Sensors SCP

Search Receivers SCP


Reconnaissance
Recorders SCP

Warning Devices SCP

Magazines RTE

Data Link RTE


Env SubSystem Sub-Subsystem Domain PT


Flight Control Computers MP

Signal Processors SCP

Data Formatting MP

Interfaces To Other Systems MP


Automatic Flight Control
Pressure Transducers SCP
AVM Avionics
Rate Gyros SCP

Accelerometers SCP

Motion Sensors SCP

Health Monitoring System SYS

Stores Management MP

9.5.2 Ordnance Vehicle Unmanned (OVU)

Source: MIL-STD-881C Appendix C: Missile Systems

Env SubSystem Sub-Subsystem Domain PT


Seeker Assembly SCP
Guidance
Guidance Software RTE

Sensor Assembly SCP


Navigation
Navigation Software RTE

Target Defeat Mechanism RTE

Target Detection Device SCP


Payload
OVU Air Vehicle Fuze SCP

Payload-specific software VP

Primary Power VC

Power and Distribution Power Conditioning Electronics VC

Power and distribution software VC

Antenna Assembly SCP


Communications
Communications software RTE


Env SubSystem Sub-Subsystem Domain PT


Motor Engine VC

Thrust Vector Actuation VC

Attitude Control System VC

Fuel / Oxidizer Liquid


VC
OVU Air Vehicle Propulsion Subsystem Management

Arm / Fire Device VC

Flight Termination/Mission
RTE
Termination

Propulsion software VC

Controls Controls software VC

Reentry System VC

Post boost System VC

OVU Air Vehicle On Board Test Equipment TST

On Board Training Equipment TRN

Auxiliary Equipment SYS

Air Vehicle Software MP

Encasement
OVU Encasement Device Software MP
Device

Surveillance, Identification, and


SCP
Tracking Sensors

Launch & Guidance Control RTE


Command &
OVU
Launch Communications RTE

Launcher Equipment RTE

Auxiliary Equipment SYS

9.5.3 Ordnance Vehicle Unmanned (OVU)
Source: MIL-STD-881C Appendix D: Ordnance Systems

Env SubSystem Sub-Subsystem Domain PT


Seeker Assembly SCP
Guidance
Guidance Software RTE
OVU Munition
Sensor Assembly SCP
Navigation
Navigation Software RTE

Target Defeat Mechanism RTE

Target Detection Device SCP


OVU Munition Payload
Fuze SCP

Payload software VP

Primary Power VC

OVU Munition Power and Distribution Power Conditioning Electronics VC

Power and distribution software VC

Antenna Assembly SCP


OVU Munition Communications
Communications software RTE

Motor Engine VC

Fuel / Oxidizer Liquid Management VC

Arm / Fire Device VC


OVU Munition Propulsion Subsystem
Thrust Vector Actuation VC

Flight Termination/Mission Termination RTE

Propulsion software VC

Controls Controls software VC

On Board Test Equipment TST

Munition On Board Training Equipment TRN


OVU
Auxiliary Equipment SYS

Munition Software MP

Launch System Fire Control RTE

9.5.4 Maritime Vessel Manned (MVM)
Source: MIL-STD-881C Appendix E: Sea Systems

Env SubSystem Sub-Subsystem Domain PT


Sensing and data RTE

Navigation equipment RTE

Command, Communication & Interior communication RTE


MVM Ship
Surveillance
Gun fire control system RTE

Non-electronic & electronic


RTE
countermeasure

Missile fire control systems RTE

Antisubmarine warfare fire control


RTE
and torpedo fire control systems

Radar systems RTE

Radio communication systems RTE

Electronic navigation systems RTE


Command, Communication &
MVM Ship Space vehicle electronic tracking
Surveillance RTE
systems

Sonar systems RTE

Electronic tactical data systems MP

Fiber optic plant BIS

Inter / intranet BIS

Entertainment systems BIS

9.5.5 Space Vehicle Manned / Unmanned (SVM/U) and Ground Site Fixed (GSF)
Source: MIL-STD-881C Appendix F: Space Systems

Env | SubSystem | Sub-Subsystem | Domain | PT
SVM/U | Bus | | Structures & Mechanisms (SMS) | VC
SVM/U | Bus | | Thermal Control (TCS) | VC
SVM/U | Bus | | Electrical Power (EPS) | VC
SVM/U | Bus | | Attitude Control (ACS) | VC
SVM/U | Bus | | Propulsion | VC
SVM/U | Bus | | Telemetry, Tracking, & Command (TT&C) | RTE
SVM/U | Bus | | Bus Flight Software | MP
SVM/U | Payload | | Thermal Control | RTE
SVM/U | Payload | | Electrical Power | RTE
SVM/U | Payload | | Pointing, Command, & Control Interface | VP
SVM/U | Payload | | Payload Antenna | SCP
SVM/U | Payload | | Payload Signal Electronics | SCP
SVM/U | Payload | | Optical Assembly | SCP
SVM/U | Payload | | Sensor | SCP
SVM/U | Payload | | Payload Flight Software | VP
GSF | Ground Operations & Processing Center | | Mission Management | BIS
GSF | Ground Operations & Processing Center | | Command and Control | C&C
GSF | Ground Operations & Processing Center | | Mission Data Processing | BIS
GSF | Ground Operations & Processing Center | | Mission Data Analysis | BIS
GSF | Ground Operations & Processing Center | | Collection Management | BIS
GSF | Ground Operations & Processing Center | | Infrastructure & Framework | SYS
GSF | Ground Terminal / Gateway | Ground Terminal Software | Application Specific Integrated Circuit | SCP
GSF | Ground Terminal / Gateway | Ground Terminal Software | Field Programmable Gate Array | SCP

9.5.6 Ground Vehicle Manned and Unmanned (GVM/U)
Source: MIL-STD-881C Appendix G: Surface Vehicle Systems

Env | SubSystem | Sub-Subsystem | Domain | PT
GVM/U | Primary Vehicle | | System Survivability |
GVM/U | Primary Vehicle | | Turret Assembly | RTE
GVM/U | Primary Vehicle | | Suspension / Steering | SCP
GVM/U | Primary Vehicle | Vehicle Electronics | Computers And Other Devices For Command And Control | VC
GVM/U | Primary Vehicle | Vehicle Electronics | Data Control And Distribution | MP
GVM/U | Primary Vehicle | Vehicle Electronics | Controls And Displays | BIS
GVM/U | Primary Vehicle | Vehicle Electronics | Power Distribution And Management | RTE
GVM/U | Primary Vehicle | Vehicle Electronics | Health Management Systems | RTE
GVM/U | Primary Vehicle | Power Package / Drive Train | Controls And Instrumentation | VC
GVM/U | Primary Vehicle | Power Package / Drive Train | Power Transmission, Final Drivers, And Power Takeoffs | VC
GVM/U | Primary Vehicle | Power Package / Drive Train | Brakes And Steering When Integral To Power Transmission | VC
GVM/U | Primary Vehicle | Power Package / Drive Train | Hybrid Electric Drive Systems | VC
GVM/U | Primary Vehicle | Power Package / Drive Train | Energy Storage Systems | VC
GVM/U | Primary Vehicle | Fire Control | Radars And Other Sensors | SCP
GVM/U | Primary Vehicle | Fire Control | Controls And Displays | RTE
GVM/U | Primary Vehicle | Fire Control | Sights Or Scopes | RTE
GVM/U | Primary Vehicle | Fire Control | Range Finders, Gun Drives And Stabilization Systems | RTE
GVM/U | Primary Vehicle | Armament | Main Gun And Secondary Guns | VP
GVM/U | Primary Vehicle | Armament | Missile Launchers | VP
GVM/U | Primary Vehicle | Armament | Non-Lethal Weapons | VP
GVM/U | Primary Vehicle | Armament | Other Offensive Weapon Systems | VP
GVM/U | Primary Vehicle | | Automatic Ammunition Handling | MP
GVM/U | Primary Vehicle | | Navigation and Remote Piloting | RTE
GVM/U | Primary Vehicle | | Communications | RTE
GVU | Remote Control System (UGV specific) | | Ground Control Systems | RTE
GVU | Remote Control System (UGV specific) | | Command and Control Subsystem | C&C
GVU | Remote Control System (UGV specific) | | Remote Control System Software | RTE

9.5.7 Aerial Vehicle Unmanned (AVU) & Ground Site Fixed (GSF)
Source: MIL-STD-881C Appendix H: Unmanned Air Vehicle Systems

Env | SubSystem | Sub-Subsystem | Domain | PT
AVU | Air Vehicle | Vehicle Subsystems | Propulsion | VC
AVU | Air Vehicle | Vehicle Subsystems | Flight Control Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Auxiliary Power Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Hydraulic Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Electrical Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Environmental Control Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Fuel Subsystem | VC
AVU | Air Vehicle | Vehicle Subsystems | Landing Gear | VC
AVU | Air Vehicle | Vehicle Subsystems | Rotor Group | VC
AVU | Air Vehicle | Vehicle Subsystems | Drive System | VC
AVU | Air Vehicle | Avionics | Communication / Identification | RTE
AVU | Air Vehicle | Avionics | Navigation / Guidance | RTE
AVU | Air Vehicle | Avionics | Automatic Flight Control | VC
AVU | Air Vehicle | Avionics | Health Monitoring System | SYS
AVU | Air Vehicle | Avionics | Stores Management | VP
AVU | Air Vehicle | Avionics | Mission Processing | MP
AVU | Air Vehicle | Avionics | Fire Control | RTE
AVU | Payload | | Survivability Payload | VP
AVU | Payload | | Reconnaissance Payload | VP
AVU | Payload | | Electronic Warfare Payload | VP
AVU | Payload | | Armament / Weapons Delivery | VP
GSF | Ground / Host Segment | | Ground Control Systems | C&C
GSF | Ground / Host Segment | | Command and Control Subsystem | RTE
GSF | Ground / Host Segment | | Launch and Recovery Equipment | RTE

9.5.8 Maritime Vessel Unmanned (MVU) and Maritime Vessel Manned (MVM)
Source: MIL-STD-881C Appendix I: Unmanned Maritime Vessel Systems

Env | SubSystem | Sub-Subsystem | Domain | PT
MVU | Maritime Vehicle | Energy Storage / Conversion | Energy Storage And Conversion Monitoring And Control System | VC
MVU | Maritime Vehicle | Electrical Power | Electric Power Monitoring And Control System | VC
MVU | Maritime Vehicle | Command and Control | Mission Control | RTE
MVU | Maritime Vehicle | Command and Control | Navigation | RTE
MVU | Maritime Vehicle | Command and Control | Guidance And Control | RTE
MVU | Maritime Vehicle | Command and Control | Health Status Monitoring | SYS
MVU | Maritime Vehicle | Command and Control | Rendezvous, Homing And Docking Systems | SYS
MVU | Maritime Vehicle | Command and Control | Fire Control | RTE
MVU | Maritime Vehicle | Command and Control | Surveillance | RTE
MVU | Maritime Vehicle | Command and Control | Communications / Identification | RTE
MVU | Maritime Vehicle | Ship Control Systems | Hovering And Depth Control | VC
MVU | Maritime Vehicle | Ship Control Systems | Ballast And Trim | VC
MVU | Maritime Vehicle | Ship Control Systems | Maneuvering System | VC
MVU | Maritime Vehicle | Auxiliary Systems | Emergency Systems | MP
MVU | Maritime Vehicle | Auxiliary Systems | Launch And Recovery System | MP
MVU | Maritime Vehicle | Auxiliary Systems | Environmental Control System | MP
MVU | Maritime Vehicle | Auxiliary Systems | Anchoring, Mooring And Towing | MP
MVU | Maritime Vehicle | Auxiliary Systems | Miscellaneous Fluid Systems | MP
MVU | Maritime Vehicle | Payload | Survivability Payload | VP
MVU | Maritime Vehicle | Payload | Intelligence, Surveillance Reconnaissance Payload | VP
MVU | Maritime Vehicle | Payload | Armament / Weapons Delivery Payload | VP
MVU | Maritime Vehicle | Payload | Mission Payload | VP
MVM | Shipboard Segment | | Shipboard UM Command and Control Subsystem | C&C
MVM | Shipboard Segment | | Shipboard Communication Subsystem | RTE
MVM | Shipboard Segment | | Shipboard Power Subsystem | VC
MVM | Shipboard Segment | | Launch and Recovery Equipment | RTE

9.5.9 Ordnance Vehicle Unmanned (OVU)
Source: MIL-STD-881C Appendix J: Launch Vehicles

Env | SubSystem | Sub-Subsystem | Domain | PT
OVU | Launch Vehicle | Stage(s) | Propulsion System | VC
OVU | Launch Vehicle | Stage(s) | Reaction Control System | VC
OVU | Launch Vehicle | Stage(s) | Recovery System | VC
OVU | Launch Vehicle | Stage(s) | Environmental Control System | RTE
OVU | Launch Vehicle | Stage(s) | Stage Peculiar Avionics | RTE
OVU | Launch Vehicle | Avionics | Guidance Navigation and Control | RTE
OVU | Launch Vehicle | Avionics | Power | VC
OVU | Launch Vehicle | Avionics | Data Acquisition and Telemetry | RTE
OVU | Launch Vehicle | Avionics | Range Tracking & Safety (Airborne) | RTE
OVU | Launch Vehicle | Avionics | Flight Software | VC
OVU | Launch Vehicle | Flight Operations | Telemetry processing | RTE
OVU | Launch Vehicle | Flight Operations | Real-time mission control Communications | RTE
OVU | Launch Vehicle | Flight Operations | Data reduction and analysis | BIS

9.5.10 Ground Site Fixed (GSF)
Source: MIL-STD-881C Appendix K: Automated Information Systems

Env | SubSystem | Sub-Subsystem | Domain | PT
GSF | Custom Software | | Application Subsystem Software CSCI | Variable
GSF | Enterprise Service Element | Software COTS / GOTS | Component identification | BIS
GSF | Enterprise Service Element | Software COTS / GOTS | Assessment and Selection | BIS
GSF | Enterprise Service Element | Software COTS / GOTS | Prototyping | BIS
GSF | Enterprise Service Element | Software COTS / GOTS | Glue code development | BIS
GSF | Enterprise Service Element | Software COTS / GOTS | Tailoring and configuration | BIS
GSF | Enterprise Information System | Business Software COTS / GOTS | Component identification | BIS
GSF | Enterprise Information System | Business Software COTS / GOTS | Assessment and Selection | BIS
GSF | Enterprise Information System | Business Software COTS / GOTS | Prototyping | BIS
GSF | Enterprise Information System | Business Software COTS / GOTS | Glue code development | BIS
GSF | Enterprise Information System | Business Software COTS / GOTS | Tailoring and configuration | BIS
9.5.11 Applies to ALL Environments


Source: MIL-STD-881C Appendix L: Common Elements

Env | SubSystem | Sub-Subsystem | Domain | PT
CE | System Integration Lab (SIL) | | SIL Software - SIL Operations | TST
CE | System Integration Lab (SIL) | | SIL Software - Simulation | SCI
CE | Test and Evaluation Support | | Test Software | STS
CE | Automated Test Equipment | | Equipment Software | TST
CE | Training | | Equipment | TRN
CE | Training | | Simulators | SCI
CE | Training | | Computer Based-Application | BIS
CE | Training | | Computer Based-Web | BIS
CE | Support Equipment | | Software | BIS
CE | Test and Measurement Equipment | | Equipment Software | TST
CE | Data Migration | | Software Utilities | BIS
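When these mappings are used to drive automated estimates, they can be captured as a simple lookup from software WBS element to productivity type. The sketch below is illustrative only: the dictionary name, the parenthetical context added to a few element names, and the use of Python are assumptions, and only a handful of the common-element rows above are shown.

```python
# Hypothetical lookup from a WBS software element to its productivity type (PT).
# Entries are a small illustrative subset of the common-elements table above.
WBS_ELEMENT_TO_PT = {
    "SIL Software - SIL Operations": "TST",
    "SIL Software - Simulation": "SCI",
    "Equipment Software (Automated Test Equipment)": "TST",
    "Computer Based-Application (Training)": "BIS",
    "Software Utilities (Data Migration)": "BIS",
}

def productivity_type(element_name: str) -> str:
    """Return the PT code for a WBS element, or 'UNKNOWN' if it is not mapped."""
    return WBS_ELEMENT_TO_PT.get(element_name, "UNKNOWN")

print(productivity_type("SIL Software - Simulation"))  # -> SCI
```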


9.6 Productivity (Pr) Benchmark Details


9.6.1 Normality Tests on Productivity Data
Before analyzing the data for central tendencies, each data set with five (5) or more records was checked for normality using the Anderson-Darling Normality Test. To reject the hypothesis of normality, the Anderson-Darling statistic, A², must exceed the critical value of 0.787 for the case where the mean and variance are unknown. Additionally, if the P-value is less than the level of significance, 0.05 in this case, the hypothesis of normality is rejected.
In plain English, a data set is treated as normally distributed if:
- A² is less than 0.787
- The P-value is greater than the level of significance (0.05)
- In the event of a tie, the distribution is inspected visually with a normal curve overlay
For each data set that failed the normality test, a Box-Cox transform was used to determine the function, Ft, required to transform the data into a more normal-like distribution, thereby improving the validity of the mean and median measurements. A minimal code sketch of this screening appears below.
Table 34, Table 35 and Table 36 show the results of the Anderson-Darling test on productivity data grouped by Operating Environment (OpEnv), Productivity Type (PT), and OpEnv-PT pairs. The table columns are:
- Group name
- N: number of records
- A²: Anderson-Darling test statistic
- P: P-value
- Ft: function for transforming the data to a more normal-like distribution, if required
To be included in the analysis, a group had to contain five (5) or more records.
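The screening just described can be sketched in a few lines. This is an illustrative sketch only, not the tooling used to produce the manual: the sample values are invented, and because scipy.stats.anderson reports the A² statistic without an exact P-value, the sketch applies only the 0.787 critical-value rule and lets Box-Cox choose the transform.

```python
# Minimal sketch of the normality screening described above (illustrative data).
import numpy as np
from scipy import stats

A2_CRITICAL = 0.787  # critical value cited above (mean and variance unknown)

def screen_group(productivities):
    """Apply the Anderson-Darling check; Box-Cox transform groups that fail it."""
    x = np.asarray(productivities, dtype=float)
    if x.size < 5:                        # groups with fewer than 5 records are excluded
        return None
    a2 = stats.anderson(x, dist='norm').statistic
    if a2 < A2_CRITICAL:                  # normality not rejected
        return {'A2': round(a2, 3), 'Ft': 'Not Required', 'data': x}
    # Box-Cox requires positive values; productivity values are positive.
    transformed, lam = stats.boxcox(x)
    return {'A2': round(a2, 3), 'Ft': f'Box-Cox(lambda={lam:.2f})', 'data': transformed}

print(screen_group([146.4, 80.5, 129.3, 183.9, 389.6, 9.2, 95.1]))
```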

9.6.1.1 Operating Environments (all Productivity Types)

Table 34 OpEnv Productivity Normality Tests
# | Operating Environment (OpEnv) | N | A² | P | Ft
1 | Aerial Vehicle Manned (AVM) | 50 | 1.27 | 0.005 | X^0.5
2 | Ground Site Fixed (GSF) | 116 | 0.95 | 0.016 | X^0.5
3 | Maritime Vessel Manned (MVM) | 69 | 2.45 | 0.005 | X^0.5
4 | Ordnance Vehicle Unmanned (OVU) | 16 | 0.42 | 0.285 | Not Required
5 | Space Vehicle Unmanned (SVU) | 6 | 0.22 | 0.702 | Not Required

9.6.1.2 Productivity Types (all Operating Environments)
Table 35 PT Productivity Normality Tests
# | Productivity Type (PT) | N | A² | P | Ft
1 | Intel and Information Processing (IIS) | 35 | 0.53 | 0.162 | Loge
2 | Mission Processing (MP) | 47 | 0.59 | 0.117 | Loge
3 | Real-Time Embedded (RTE) | 53 | 0.17 | 0.927 | Loge
4 | Scientific Systems (SCI) | 39 | 0.76 | 0.044 | X^1.5
5 | Sensor Control and Signal Processing (SCP) | 38 | 0.62 | 0.100 | Not Required
6 | System Software (SYS) | 60 | 0.30 | 0.566 | Not Required

9.6.1.3 Operating Environment - Productivity Type Sets

To be included in the analysis, an OpEnv-PT pair had to contain five (5) or more records. This caused some operating environments and productivity types to drop out of consideration.

Table 36 OpEnv - PT Normality Tests
# | OpEnv - PT | N | A² | P | Ft
1 | AVM-MP | 31 | 1.89 | 0.005 | Loge
2 | AVM-RTE | 9 | 0.832 | 0.019 | Loge
3 | AVM-SCP | 8 | 0.227 | 0.725 | Not Required
4 | GSF-IIS | 23 | 0.44 | 0.274 | Not Required
5 | GSF-MP | 6 | 0.23 | 0.661 | Not Required
6 | GSF-RTE | 23 | 0.41 | 0.310 | Not Required
7 | GSF-SCI | 23 | 0.95 | 0.013 | X^2
8 | GSF-SCP | 13 | 0.86 | 0.020 | X^2
9 | GSF-SYS | 28 | 0.30 | 0.571 | Not Required
10 | MVM-MP | 8 | 0.44 | 0.200 | Not Required
11 | MVM-RTE | 6 | 0.58 | 0.074 | Loge
12 | MVM-SCI | 15 | 0.54 | 0.141 | Not Required
13 | MVM-SCP | 7 | 0.43 | 0.217 | Not Required
14 | MVM-SYS | 28 | 0.53 | 0.159 | Not Required
15 | OVU-RTE | 11 | 0.27 | 0.593 | Not Required

9.6.2 Statistical Summaries on Productivity Data


The following sections show statistical summaries of the non-transformed and, where required, the transformed productivity data. The transformation function, Ft, is indicated in the label of each transformed summary (for example, SqRt(Pr) or Ln(Pr)). A minimal sketch of how such summary statistics can be reproduced is shown below.
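This is an illustrative sketch under stated assumptions: the sample values are invented, and only the confidence interval for the mean is computed, whereas the summaries that follow also report intervals for the median and standard deviation.

```python
# Illustrative only: descriptive statistics like those in the summaries below.
import numpy as np
from scipy import stats

def describe(values):
    """Descriptive statistics and a 95% t-based confidence interval for the mean."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    lo, hi = stats.t.interval(0.95, n - 1, loc=mean, scale=sd / np.sqrt(n))
    return {'N': n, 'Mean': round(mean, 2), 'StDev': round(sd, 2),
            'Min': x.min(), 'Q1': q1, 'Median': med, 'Q3': q3, 'Max': x.max(),
            '95% CI Mean': (round(lo, 2), round(hi, 2))}

print(describe([146.4, 80.5, 129.3, 183.9, 389.6, 9.2, 95.1, 151.3]))
```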

9.6.2.1 Operating Environments
Summary for AVM-Pr (non-transformed): A-Squared = 1.27, P-Value < 0.005, N = 50; Mean = 146.43, StDev = 89.92, Variance = 8085.32, Skewness = 1.02934, Kurtosis = 0.66251; Minimum = 9.24, 1st Quartile = 80.46, Median = 129.30, 3rd Quartile = 183.86, Maximum = 389.56; 95% CI for Mean = 120.88 to 171.99, for Median = 102.23 to 151.27, for StDev = 75.11 to 112.05.
Summary for AVM-SqRt(Pr) (transformed with Ft): A-Squared = 0.28, P-Value = 0.628, N = 50; Mean = 11.544, StDev = 3.666, Variance = 13.437, Skewness = 0.274914, Kurtosis = -0.067340; Minimum = 3.039, 1st Quartile = 8.970, Median = 11.370, 3rd Quartile = 13.559, Maximum = 19.737; 95% CI for Mean = 10.502 to 12.586, for Median = 10.111 to 12.299, for StDev = 3.062 to 4.568.

Summary for GSF-Pr (non-transformed): A-Squared = 0.95, P-Value = 0.016, N = 116; Mean = 222.90, StDev = 121.40, Variance = 14736.90, Skewness = 0.417306, Kurtosis = -0.326034; Minimum = 26.82, 1st Quartile = 117.51, Median = 228.76, 3rd Quartile = 301.22, Maximum = 580.72; 95% CI for Mean = 200.57 to 245.22, for Median = 182.26 to 261.01, for StDev = 107.53 to 139.40.
Summary for GSF-SqRt(Pr) (transformed with Ft): A-Squared = 0.96, P-Value = 0.015, N = 116; Mean = 14.309, StDev = 4.279, Variance = 18.313, Skewness = -0.152076, Kurtosis = -0.747428; Minimum = 5.179, 1st Quartile = 10.839, Median = 15.123, 3rd Quartile = 17.356, Maximum = 24.098; 95% CI for Mean = 13.522 to 15.096, for Median = 13.500 to 16.156, for StDev = 3.791 to 4.914.

Summary for MVM-Pr (non-transformed): A-Squared = 2.56, P-Value < 0.005, N = 67; Mean = 235.38, StDev = 181.14, Variance = 32811.65, Skewness = 1.30544, Kurtosis = 1.38414; Minimum = 20.24, 1st Quartile = 111.73, Median = 193.34, 3rd Quartile = 324.33, Maximum = 716.86; 95% CI for Mean = 191.20 to 279.57, for Median = 169.14 to 233.40, for StDev = 154.82 to 218.33.
Summary for MVM-SqRt(Pr) (transformed with Ft): A-Squared = 0.64, P-Value = 0.092, N = 67; Mean = 14.251, StDev = 5.725, Variance = 32.779, Skewness = 0.413780, Kurtosis = -0.138233; Minimum = 4.499, 1st Quartile = 10.570, Median = 13.905, 3rd Quartile = 18.009, Maximum = 26.774; 95% CI for Mean = 12.855 to 15.648, for Median = 13.005 to 15.277, for StDev = 4.893 to 6.901.

Summary for OVU-Pr (non-transformed): A-Squared = 0.42, P-Value = 0.285, N = 16; Mean = 132.50, StDev = 90.30, Variance = 8154.88, Skewness = 0.23432, Kurtosis = -1.36368; Minimum = 9.95, 1st Quartile = 50.78, Median = 119.94, 3rd Quartile = 225.61, Maximum = 277.35; 95% CI for Mean = 84.38 to 180.62, for Median = 52.74 to 202.87, for StDev = 66.71 to 139.76. Transformation with Ft: Not Required.

Summary for SVU-Pr (non-transformed): A-Squared = 0.22, P-Value = 0.702, N = 6; Mean = 84.888, StDev = 43.829, Variance = 1921.005, Skewness = 0.509669, Kurtosis = 0.074706; Minimum = 27.802, 1st Quartile = 52.411, Median = 76.020, 3rd Quartile = 125.189, Maximum = 152.927; 95% CI for Mean = 38.892 to 130.884, for Median = 39.521 to 139.719, for StDev = 27.359 to 107.496. Transformation with Ft: Not Required.
9.6.2.2 Productivity Types
Summary for IIS-Pr (non-transformed): A-Squared = 1.37, P-Value < 0.005, N = 35; Mean = 425.04, StDev = 164.59, Variance = 27089.34, Skewness = 0.983520, Kurtosis = 0.356784; Minimum = 169.29, 1st Quartile = 304.45, Median = 362.63, 3rd Quartile = 541.11, Maximum = 874.81; 95% CI for Mean = 368.50 to 481.58, for Median = 330.50 to 437.97, for StDev = 133.13 to 215.64.
Summary for IIS-Ln(Pr) (transformed with Ft): A-Squared = 0.53, P-Value = 0.162, N = 35; Mean = 5.9846, StDev = 0.3697, Variance = 0.1367, Skewness = 0.198585, Kurtosis = -0.194392; Minimum = 5.1316, 1st Quartile = 5.7185, Median = 5.8934, 3rd Quartile = 6.2936, Maximum = 6.7740; 95% CI for Mean = 5.8576 to 6.1116, for Median = 5.8006 to 6.0821, for StDev = 0.2991 to 0.4844.

Summary for MP-Pr (non-transformed): A-Squared = 2.65, P-Value < 0.005, N = 47; Mean = 189.07, StDev = 110.41, Variance = 12189.41, Skewness = 2.6731, Kurtosis = 10.5868; Minimum = 34.44, 1st Quartile = 124.37, Median = 159.32, 3rd Quartile = 236.43, Maximum = 716.86; 95% CI for Mean = 156.66 to 221.49, for Median = 138.37 to 183.91, for StDev = 91.74 to 138.67.
Summary for MP-Ln(Pr) (transformed with Ft): A-Squared = 0.59, P-Value = 0.117, N = 47; Mean = 5.1174, StDev = 0.4955, Variance = 0.2456, Skewness = 0.03841, Kurtosis = 2.20613; Minimum = 3.5392, 1st Quartile = 4.8233, Median = 5.0709, 3rd Quartile = 5.4656, Maximum = 6.5749; 95% CI for Mean = 4.9719 to 5.2628, for Median = 4.9299 to 5.2144, for StDev = 0.4118 to 0.6224.

Summary for RTE-Pr (non-transformed): A-Squared = 1.26, P-Value < 0.005, N = 53; Mean = 136.14, StDev = 72.66, Variance = 5280.14, Skewness = 1.67782, Kurtosis = 4.85448; Minimum = 33.06, 1st Quartile = 82.73, Median = 124.33, 3rd Quartile = 167.84, Maximum = 443.01; 95% CI for Mean = 116.11 to 156.17, for Median = 107.71 to 142.35, for StDev = 60.99 to 89.91.
Summary for RTE-Ln(Pr) (transformed with Ft): A-Squared = 0.17, P-Value = 0.927, N = 53; Mean = 4.7877, StDev = 0.5102, Variance = 0.2603, Skewness = -0.0835250, Kurtosis = 0.0854313; Minimum = 3.4984, 1st Quartile = 4.4155, Median = 4.8229, 3rd Quartile = 5.1230, Maximum = 6.0936; 95% CI for Mean = 4.6471 to 4.9283, for Median = 4.6793 to 4.9583, for StDev = 0.4282 to 0.6312.

Summary for SCI-Pr (non-transformed): A-Squared = 0.90, P-Value = 0.019, N = 39; Mean = 221.03, StDev = 119.00, Variance = 14161.76, Skewness = -0.21790, Kurtosis = -1.15300; Minimum = 9.24, 1st Quartile = 106.28, Median = 247.68, 3rd Quartile = 313.11, Maximum = 431.11; 95% CI for Mean = 182.45 to 259.60, for Median = 167.52 to 280.60, for StDev = 97.25 to 153.37.
Summary for SCI-(Pr^1.5) (transformed with Ft): A-Squared = 0.76, P-Value = 0.044, N = 39; Mean = 3656.3, StDev = 2530.5, Variance = 6403504.9, Skewness = 0.155112, Kurtosis = -0.989185; Minimum = 28.1, 1st Quartile = 1095.6, Median = 3898.0, 3rd Quartile = 5540.5, Maximum = 8951.1; 95% CI for Mean = 2836.0 to 4476.6, for Median = 2170.0 to 4700.3, for StDev = 2068.1 to 3261.3.

Summary for SCP-Pr (non-transformed): A-Squared = 0.62, P-Value = 0.100, N = 38; Mean = 49.810, StDev = 19.422, Variance = 377.224, Skewness = -0.17597, Kurtosis = -1.13890; Minimum = 9.951, 1st Quartile = 32.964, Median = 52.608, 3rd Quartile = 66.526, Maximum = 80.043; 95% CI for Mean = 43.426 to 56.194, for Median = 37.728 to 60.386, for StDev = 15.834 to 25.127. Transformation with Ft: Not Required.

Summary for SYS-Pr (non-transformed): A-Squared = 0.30, P-Value = 0.566, N = 60; Mean = 224.74, StDev = 78.35, Variance = 6139.38, Skewness = 0.368573, Kurtosis = -0.131695; Minimum = 60.61, 1st Quartile = 171.46, Median = 219.15, 3rd Quartile = 274.11, Maximum = 421.14; 95% CI for Mean = 204.50 to 244.98, for Median = 193.16 to 245.78, for StDev = 66.42 to 95.57. Transformation with Ft: Not Required.
9.6.2.3 Operating Environment - Productivity Type Sets
Summary for AVM_MP-Pr (non-transformed): A-Squared = 1.90, P-Value < 0.005, N = 31; Mean = 173.50, StDev = 88.12, Variance = 7764.96, Skewness = 1.20981, Kurtosis = 0.75188; Minimum = 34.44, 1st Quartile = 121.76, Median = 140.93, 3rd Quartile = 188.41, Maximum = 389.56; 95% CI for Mean = 141.18 to 205.82, for Median = 125.59 to 171.30, for StDev = 70.42 to 117.79.
Summary for AVM_MP-ln(Pr) (transformed with Ft): A-Squared = 0.72, P-Value = 0.055, N = 31; Mean = 5.0396, StDev = 0.4988, Variance = 0.2488, Skewness = -0.32485, Kurtosis = 1.75426; Minimum = 3.5392, 1st Quartile = 4.8021, Median = 4.9483, 3rd Quartile = 5.2386, Maximum = 5.9650; 95% CI for Mean = 4.8567 to 5.2226, for Median = 4.8330 to 5.1434, for StDev = 0.3986 to 0.6667.

Summary for AVM_RTE-Pr (non-transformed): A-Squared = 0.83, P-Value = 0.019, N = 9; Mean = 139.98, StDev = 77.96, Variance = 6077.32, Skewness = 0.57445, Kurtosis = -1.82660; Minimum = 56.96, 1st Quartile = 81.31, Median = 89.04, 3rd Quartile = 237.23, Maximum = 242.86; 95% CI for Mean = 80.05 to 199.90, for Median = 80.13 to 239.66, for StDev = 52.66 to 149.35.
Summary for AVM_RTE-ln(Pr) (transformed with Ft): A-Squared = 0.60, P-Value = 0.079, N = 9; Mean = 4.8021, StDev = 0.5595, Variance = 0.3131, Skewness = 0.26890, Kurtosis = -1.74694; Minimum = 4.0424, 1st Quartile = 4.3979, Median = 4.4890, 3rd Quartile = 5.4689, Maximum = 5.4925; 95% CI for Mean = 4.3720 to 5.2322, for Median = 4.3834 to 5.4791, for StDev = 0.3779 to 1.0719.

Summary for AVM_SCP-Pr (non-transformed): A-Squared = 0.23, P-Value = 0.725, N = 8; Mean = 56.078, StDev = 12.840, Variance = 164.866, Skewness = -0.679094, Kurtosis = -0.347898; Minimum = 33.322, 1st Quartile = 46.146, Median = 57.835, 3rd Quartile = 67.806, Maximum = 70.440; 95% CI for Mean = 45.344 to 66.813, for Median = 44.133 to 68.559, for StDev = 8.489 to 26.133. Transformation with Ft: Not Required.

Summary for GSF_IIS-Pr (non-transformed): A-Squared = 0.44, P-Value = 0.274, N = 23; Mean = 375.51, StDev = 85.30, Variance = 7276.65, Skewness = 0.714674, Kurtosis = 0.262814; Minimum = 236.41, 1st Quartile = 304.45, Median = 358.54, 3rd Quartile = 435.94, Maximum = 580.72; 95% CI for Mean = 338.62 to 412.40, for Median = 319.32 to 425.02, for StDev = 65.97 to 120.73. Transformation with Ft: Not Required.

Summary for GSF_MP-Pr (non-transformed): A-Squared = 0.23, P-Value = 0.661, N = 6; Mean = 161.69, StDev = 51.55, Variance = 2657.58, Skewness = 0.28077, Kurtosis = 1.21448; Minimum = 87.27, 1st Quartile = 125.49, Median = 158.30, 3rd Quartile = 199.44, Maximum = 243.16; 95% CI for Mean = 107.59 to 215.79, for Median = 105.47 to 222.34, for StDev = 32.18 to 126.44. Transformation with Ft: Not Required.

Summary for GSF_RTE-Pr (non-transformed): A-Squared = 0.41, P-Value = 0.310, N = 23; Mean = 128.79, StDev = 45.19, Variance = 2042.14, Skewness = 0.681898, Kurtosis = 0.726621; Minimum = 51.22, 1st Quartile = 99.29, Median = 130.33, 3rd Quartile = 148.53, Maximum = 239.18; 95% CI for Mean = 109.25 to 148.33, for Median = 106.88 to 142.16, for StDev = 34.95 to 63.96. Transformation with Ft: Not Required.

Summary for GSF_SCI-Pr (non-transformed): A-Squared = 0.95, P-Value = 0.013, N = 23; Mean = 253.60, StDev = 97.70, Variance = 9545.01, Skewness = -0.676131, Kurtosis = -0.289559; Minimum = 60.70, 1st Quartile = 200.99, Median = 274.21, 3rd Quartile = 313.11, Maximum = 410.23; 95% CI for Mean = 211.35 to 295.85, for Median = 246.89 to 307.19, for StDev = 75.56 to 138.28.
Summary for GSF_SCI-(Pr^2) (transformed with Ft): A-Squared = 0.48, P-Value = 0.210, N = 23; Mean = 73444, StDev = 44766, Variance = 2003968683, Skewness = 0.152910, Kurtosis = -0.156978; Minimum = 3685, 1st Quartile = 40398, Median = 75189, 3rd Quartile = 98039, Maximum = 168289; 95% CI for Mean = 54086 to 92802, for Median = 60953 to 94388, for StDev = 34622 to 63359.

Summary for GSF_SCP-Pr (non-transformed): A-Squared = 0.86, P-Value = 0.020, N = 13; Mean = 56.024, StDev = 16.603, Variance = 275.653, Skewness = -0.857414, Kurtosis = -0.221174; Minimum = 26.819, 1st Quartile = 43.952, Median = 60.347, 3rd Quartile = 66.740, Maximum = 80.043; 95% CI for Mean = 45.991 to 66.057, for Median = 48.128 to 66.582, for StDev = 11.906 to 27.407.
Summary for GSF_SCP-(Pr^2) (transformed with Ft): A-Squared = 0.53, P-Value = 0.140, N = 13; Mean = 3393.2, StDev = 1678.3, Variance = 2816765.6, Skewness = -0.333019, Kurtosis = -0.159418; Minimum = 719.3, 1st Quartile = 2059.7, Median = 3641.7, 3rd Quartile = 4454.4, Maximum = 6406.9; 95% CI for Mean = 2379.0 to 4407.4, for Median = 2426.8 to 4433.3, for StDev = 1203.5 to 2770.5.

Summary for GSF_SYS-Pr (non-transformed): A-Squared = 0.30, P-Value = 0.571, N = 28; Mean = 240.20, StDev = 63.60, Variance = 4045.48, Skewness = 0.56326, Kurtosis = 1.35086; Minimum = 115.23, 1st Quartile = 199.19, Median = 245.29, 3rd Quartile = 274.11, Maximum = 421.14; 95% CI for Mean = 215.54 to 264.87, for Median = 213.01 to 265.11, for StDev = 50.29 to 86.57. Transformation with Ft: Not Required.

Summary for MVM_MP-Pr (non-transformed): A-Squared = 0.44, P-Value = 0.200, N = 7; Mean = 188.89, StDev = 52.13, Variance = 2717.31, Skewness = -0.35267, Kurtosis = -2.10327; Minimum = 121.86, 1st Quartile = 127.32, Median = 202.04, 3rd Quartile = 236.43, Maximum = 242.48; 95% CI for Mean = 140.68 to 237.10, for Median = 125.86 to 238.04, for StDev = 33.59 to 114.79. Transformation with Ft: Not Required.

Summary for MVM_RTE-Pr (non-transformed): A-Squared = 0.58, P-Value = 0.074, N = 6; Mean = 158.30, StDev = 149.51, Variance = 22354.48, Skewness = 1.78797, Kurtosis = 3.44193; Minimum = 33.06, 1st Quartile = 61.90, Median = 110.07, 3rd Quartile = 247.29, Maximum = 443.01; 95% CI for Mean = 1.39 to 315.20, for Median = 46.79 to 349.81, for StDev = 93.33 to 366.70.
Summary for MVM_RTE-Ln(Pr) (transformed with Ft): A-Squared = 0.18, P-Value = 0.853, N = 6; Mean = 4.7282, StDev = 0.8974, Variance = 0.8053, Skewness = 0.269630, Kurtosis = 0.103773; Minimum = 3.4984, 1st Quartile = 4.0770, Median = 4.6517, 3rd Quartile = 5.4266, Maximum = 6.0936; 95% CI for Mean = 3.7865 to 5.6700, for Median = 3.7739 to 5.7760, for StDev = 0.5601 to 2.2009.

Summary for MVM_SCI-Pr (non-transformed): A-Squared = 0.54, P-Value = 0.141, N = 15; Mean = 185.20, StDev = 130.83, Variance = 17117.26, Skewness = 0.51734, Kurtosis = -1.05039; Minimum = 36.11, 1st Quartile = 54.51, Median = 169.46, 3rd Quartile = 326.65, Maximum = 431.11; 95% CI for Mean = 112.75 to 257.65, for Median = 56.10 to 296.19, for StDev = 95.79 to 206.34. Transformation with Ft: Not Required.

Summary for MVM_SCP-Pr (non-transformed): A-Squared = 0.43, P-Value = 0.217, N = 7; Mean = 38.896, StDev = 19.265, Variance = 371.132, Skewness = 1.48446, Kurtosis = 2.51764; Minimum = 20.243, 1st Quartile = 23.505, Median = 33.071, 3rd Quartile = 45.978, Maximum = 77.186; 95% CI for Mean = 21.079 to 56.713, for Median = 22.635 to 54.300, for StDev = 12.414 to 42.422. Transformation with Ft: Not Required.

Summary for MVM_SYS-Pr (non-transformed): A-Squared = 0.53, P-Value = 0.159, N = 21; Mean = 234.35, StDev = 85.89, Variance = 7376.37, Skewness = 0.515429, Kurtosis = -0.673920; Minimum = 103.63, 1st Quartile = 177.76, Median = 202.15, 3rd Quartile = 303.39, Maximum = 393.13; 95% CI for Mean = 195.26 to 273.45, for Median = 188.54 to 278.50, for StDev = 65.71 to 124.03. Transformation with Ft: Not Required.

Summary for OVU_RTE-Pr (non-transformed): A-Squared = 0.27, P-Value = 0.593, N = 11; Mean = 141.07, StDev = 75.88, Variance = 5758.12, Skewness = 0.429912, Kurtosis = -0.693482; Minimum = 49.82, 1st Quartile = 55.68, Median = 126.72, 3rd Quartile = 192.35, Maximum = 277.35; 95% CI for Mean = 90.09 to 192.05, for Median = 55.51 to 196.20, for StDev = 53.02 to 133.17. Transformation with Ft: Not Required.
