
Monitoring

1
Contents

■ Monitoring for Contingency Plans


◆Identify Contingency Triggers
■ Monitoring cost and schedule against
function
■ Indicators of quality problems
■ Indicators of planning problems
■ Progress Reports

2
Risk Monitoring

■ Continuous iterative process


■ Performed regularly
◆Compare reality with plan
■ Related to project management
monitoring
◆Difference in emphasis

3
Project Management
Monitoring
■ Protection of envelope
■ Solve immediate problems using short
term tactics
■ Few viable options due to task
interdependencies
■ Inappropriate tactics make things worse

4
Risk Management Monitoring

■ Early problem detection


- Protect project strategy
■ Identify and track triggers
■ Data depends on viable project plans, by and
large; review the plans regularly

5
Identifying Triggers for
Contingency Plans
■ Top-level manifestation
◆'Project is 6 months late' is observed too late
for recovery action
■ Need hazard symptoms
◆Indicate the hazard is imminent
◆Increased probability of occurrence
◆May derive from low-level hazards
◆Triggers are critical levels of symptoms

6
Symptomless Hazards

■ Fire
■ Earthquake
■ Sudden Illness
■ Special Contingency Plans
■ Crisis Management

7
Hazards, Symptoms, Triggers

■ Top level hazard:


■ Detectable earlier hazard:
■ Activity plan
◆Identifies the problem
◆Defines the symptoms to be monitored

8
Loss of Skilled Personnel
■ Symptoms
- High Personnel Turnover Rate
- Profile of Staff Skills Changes
- Proportion of Highly Skilled too low
■ When
- Change-over points (milestones)
- Monthly Reviews
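A minimal sketch of turning these symptoms into triggers at a monthly review; the staffing figures and both trigger thresholds are hypothetical values chosen only for illustration.

```python
# Hypothetical monthly staffing snapshot; all figures are illustrative.
headcount_start = 520      # staff at the start of the month
leavers = 85               # staff who left during the month
skilled = 190              # highly skilled staff at the end of the month
headcount_end = headcount_start - leavers

# Symptom 1: personnel turnover rate for the month
turnover_rate = leavers / headcount_start

# Symptom 2: proportion of highly skilled staff
skilled_proportion = skilled / headcount_end

# Triggers are critical levels of the symptoms (thresholds are assumptions).
TURNOVER_TRIGGER = 0.15    # invoke contingency plan if monthly turnover > 15%
SKILLED_TRIGGER = 0.40     # ... or if skilled staff fall below 40% of total

if turnover_rate > TURNOVER_TRIGGER:
    print(f"Trigger: turnover {turnover_rate:.0%} exceeds {TURNOVER_TRIGGER:.0%}")
if skilled_proportion < SKILLED_TRIGGER:
    print(f"Trigger: skilled proportion {skilled_proportion:.0%} below {SKILLED_TRIGGER:.0%}")
```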

9
Monitored by: Software Personnel Plot

[Plot: personnel against weeks through the project, showing Total staff, Skilled staff and Staff losses]

10
Techniques for Assessing Project
Progress

11
Monitoring Costs and
Schedules
■ Two key axes of the project envelope
■ Normal project management helps to
monitor these risks
■ Three Techniques
- Earned Value Analysis
- Cost to Complete
- Phase Completion

12
Earned Value Analysis

■ Identify critical resources to be tracked


■ Allocate resource budgets to system
components and processes
■ When satisfactorily completed, compare
budgeted to actual

Cost Performance Index (CPI) = Σ_i(AA_i) / Σ_i(BA_i)
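A minimal sketch of the CPI calculation as given above, assuming AA_i is the actual cost and BA_i the budgeted cost of each satisfactorily completed item; the component names and figures below are invented for illustration.

```python
# Hypothetical earned-value data: budgeted (BA) and actual (AA) cost, in
# person-days, for each component or process completed so far.
completed = {
    "requirements":        {"budgeted": 40, "actual": 38},
    "preliminary_design":  {"budgeted": 60, "actual": 71},
    "module_parser":       {"budgeted": 25, "actual": 30},
}

total_budgeted = sum(c["budgeted"] for c in completed.values())
total_actual = sum(c["actual"] for c in completed.values())

# CPI as on the slide: total actual cost over total budgeted cost of the
# completed work (under this form, a value above 1 signals a cost overrun).
cpi = total_actual / total_budgeted
print(f"CPI = {total_actual}/{total_budgeted} = {cpi:.2f}")
```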

13
Cost to complete

%comp = Σ_(all phases) (%pc * %tep) / 100

%remain = 100 - %comp
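A small sketch of the cost-to-complete calculation, assuming %pc is the percentage of a phase that is complete and %tep the percentage of total project effort allocated to that phase (the slide does not expand the abbreviations); the phase figures are invented.

```python
# Hypothetical phase data; the meaning of %pc and %tep and all the
# figures below are assumptions for illustration.
phases = [
    # (phase, %pc: percent of phase complete, %tep: percent of total effort)
    ("preliminary design", 100, 15),
    ("detailed design",     80, 25),
    ("code and unit test",  30, 35),
    ("integration test",     0, 15),
    ("system test",          0, 10),
]

pct_complete = sum(pc * tep for _, pc, tep in phases) / 100
pct_remaining = 100 - pct_complete
print(f"%comp = {pct_complete:.1f}, %remain = {pct_remaining:.1f}")
```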

14
Monitoring Completion

■ Needs good phase end definition and measurement


◆ Preliminary Design
■ % of requirements through preliminary design review

◆ Detailed Design
■ % of modules through detailed design review

◆ Code and Unit Test


■ % of modules released to integration

◆ Integration Test
■ % of modules integrated

◆ System Test
■ % of requirements tested

15
Warnings

■ Is past effort typical of future effort?

■ Are the hardest components still
outstanding?
- May need to use weights
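One way to act on this warning is sketched below: completion is weighted by estimated effort, so that hard components still outstanding are not hidden by a raw module count. The module names, effort figures and completion status are invented.

```python
# Hypothetical module list: estimated effort (person-days) acts as the weight.
modules = [
    # (module, estimated effort, completed?)
    ("ui_forms",     5, True),
    ("report_gen",   8, True),
    ("db_layer",    12, False),
    ("scheduler",   20, False),   # hardest component, still outstanding
]

raw_pct = 100 * sum(done for _, _, done in modules) / len(modules)
weighted_pct = 100 * sum(effort for _, effort, done in modules if done) \
                   / sum(effort for _, effort, _ in modules)

print(f"Unweighted: {raw_pct:.0f}% of modules complete")
print(f"Effort-weighted: {weighted_pct:.0f}% of the work complete")
```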

16
Risk to Quality

17
Failure Cause Analysis

[Fault tree: 'Reliability Goal not achieved', caused by 'Incomplete System Testing' and 'Poor Quality Software'; the latter caused by 'Inappropriate Staff' and 'Poor Software Design']

18
Indicators of Poor Design
■ Strong Coupling
◆Many links between modules
■ Weak cohesion
◆Modules performing many functions
■ Reliability/Performance Critical
◆Modules used in many calling paths
■ Must identify measures that indicate
problems
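A small sketch of one such measure: computing module fan-in and fan-out from a call graph and flagging unusually coupled modules. The call graph and the flagging threshold are assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical call graph: (caller, callee) pairs, invented for illustration.
calls = [
    ("ModuleA", "ModuleC"), ("ModuleB", "ModuleC"), ("ModuleD", "ModuleC"),
    ("ModuleC", "DataStore1"), ("ModuleA", "DataStore1"),
    ("ModuleC", "ModuleD"),
]

fan_out = defaultdict(int)   # links out of a module
fan_in = defaultdict(int)    # links into a module
for caller, callee in calls:
    fan_out[caller] += 1
    fan_in[callee] += 1

# Flag modules whose total coupling looks high (threshold is an assumption).
THRESHOLD = 3
for module in set(fan_in) | set(fan_out):
    if fan_in[module] + fan_out[module] >= THRESHOLD:
        print(f"{module}: fan-in={fan_in[module]}, fan-out={fan_out[module]}")
```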

19
Module Fan-in and Fan-Out

[Diagram: graph linking Module A, Module B, Module C, Module D and Data Store 1, illustrating module fan-in and fan-out]

20
Quality Conclusions

■ Do an FCA for those aspects important


to your project
■ Identify the data needed and how to
analyse it
■ Make people aware of the targets

21
Techniques for Identifying
Potential (General) Problems
■ Phase-based monitoring
◆Monitoring against plans
■ Anomaly Detection
◆Modules/Documents with unusual
characteristics
◆Use scatterplots and quality control charts
■ Trend Analysis
◆Plot characteristics over time
◆Look for unexpected patterns
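A minimal sketch of the anomaly-detection idea using a simple control-chart rule on defect density; the module data and the two-standard-deviation limit are assumptions for illustration.

```python
from statistics import mean, stdev

# Hypothetical (module, size in LoC, defects found) records.
modules = [
    ("m1", 120, 2), ("m2", 300, 5), ("m3", 450, 7),
    ("m4", 200, 3), ("m5", 150, 12),   # m5 looks anomalous
    ("m6", 500, 8), ("m7", 250, 4),
]

densities = [defects / size for _, size, defects in modules]
centre, spread = mean(densities), stdev(densities)

# Simple control-chart rule: flag anything more than 2 standard deviations
# from the mean defect density (the limit itself is an assumption).
for (name, size, defects), d in zip(modules, densities):
    if abs(d - centre) > 2 * spread:
        print(f"{name}: {d:.3f} defects/LoC is unusual (mean {centre:.3f})")
```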

22
Scatterplot of Size against
Defects
[Scatterplot: Defects against Size (LoC)]

23
Normalised Test Effort against
Faults
[Scatterplot: Test Effort / Total Design Effort against Number of Faults]

24
Trend Analysis
■ Time Based
◆Monitor individual objects over time
◆Assess deviation from expected trends
■ Intercheckpoint Monitoring
◆Build up of partial products towards a
checkpoint
■ Inspections performed to date
■ cumulative modules through design, code, unit test
■ system test runs per week
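A short sketch of intercheckpoint monitoring: accumulate weekly counts of partial products and watch the cumulative curve for unexpected patterns. The weekly counts, the planned total and the flattening rule are invented for illustration.

```python
from itertools import accumulate

# Hypothetical weekly counts of modules passing unit test (invented figures).
weekly_through_unit_test = [0, 2, 3, 5, 4, 6, 1, 0, 0]

# Cumulative build-up towards the integration checkpoint.
cumulative = list(accumulate(weekly_through_unit_test))
print(cumulative)   # [0, 2, 5, 10, 14, 20, 21, 21, 21]

# A curve that flattens well short of the planned total is the kind of
# unexpected pattern trend analysis is meant to expose.
PLANNED_TOTAL = 40          # assumption
if cumulative[-1] < PLANNED_TOTAL * 0.75 and weekly_through_unit_test[-2:] == [0, 0]:
    print("Warning: progress has flattened well short of the planned total")
```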

25
Fault rates

[Two plots: cumulative faults against week through the project (weeks 0-30), showing contrasting fault-rate trends]

26
Fault Detection against
Fault Removal
[Plot: Faults Arrived and Faults Cleared against week]
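A short sketch of comparing fault detection with fault removal: the gap between arrivals and clearances is the open-fault backlog, and a steadily growing backlog means removal is not keeping pace with detection. The weekly figures are invented.

```python
# Hypothetical faults arriving and being cleared each week (invented figures).
arrived = [5, 8, 12, 15, 14, 10]
cleared = [2, 4, 6, 7, 8, 8]

backlog = 0
for week, (a, c) in enumerate(zip(arrived, cleared), start=1):
    backlog += a - c
    print(f"week {week}: arrived {a}, cleared {c}, open faults {backlog}")
# Here the backlog grows every week: removal is falling behind detection.
```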

27
Management Hazards

■ Many project hazards arise from poor


project management
■ Analyse plans for weakness/problems
■ Anomaly detection for networks
■ Review plans for:
◆uncertainties
◆optimistic assumptions

28
Project Plans

■ Each Task
- Optimistic duration estimate
- Most likely duration estimate
- Pessimistic duration estimate

[Diagram: small activity network linking tasks T1, T2, T3 and T4]
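The slide lists three duration estimates per task but not a combining rule; the usual PERT formulae, expected duration (O + 4M + P)/6 and standard deviation (P - O)/6, are one common choice and are sketched below with invented task data.

```python
# Three-point estimates per task (optimistic, most likely, pessimistic),
# in days; the task names and figures are invented for illustration.
tasks = {
    "T1": (3, 5, 10),
    "T2": (2, 4, 6),
    "T3": (4, 6, 14),
    "T4": (1, 2, 3),
}

for name, (opt, likely, pess) in tasks.items():
    expected = (opt + 4 * likely + pess) / 6   # PERT expected duration
    std_dev = (pess - opt) / 6                 # PERT spread estimate
    print(f"{name}: expected {expected:.1f} days, std dev {std_dev:.1f}")
```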

29
Network Review
■ Review ‘nearly’ critical paths
◆What would make them critical?
■ Review network structure
◆fan-in to and fan-out from nodes
◆large values may indicate
■ plan bottlenecks
■ incorrect dependency analysis
■ Watch out for our own mistakes

30
Looking for uncertainties
■ Plans can hide
◆misplaced assumptions
◆lack of knowledge
■ Look out for large unanalysed ‘blobs’
◆areas of the plan with a large number of uncertainties
◆areas of the plan with large allocations of
effort/elapsed time and little description
◆estimates made 'by remainder'
■ e.g. hardware costs were detailed, so the remainder
must be software

31
Example Chart

When events are uncertain, make them a precondition
of as little of the plan as possible

32
Resource Problems

■ Network Computed by Tool?


- Time Scheduled
◆ Every activity starts ASAP
◆ Resource level demands fluctuate widely with unrealistic
peaks
◆ Parallel working excessive
◆ Danger of task interactions
- Resource Limited Scheduling
◆ Float is used up; fewer resources needed, lower peaks,
overall times maintained
◆ Paths become near-critical with little float, so more hazardous

33
Risk Reporting
■ Report to
◆Senior Management
◆Customer
■ Initially use ‘Top 10’ reporting
◆Batch if more than 10
■ Know what the risks are and what you are doing
about them
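A minimal sketch of 'Top 10' reporting, assuming risks are ranked by an exposure score (probability times consequence); the hazards, the scores and the ranking metric itself are illustrative assumptions.

```python
# Hypothetical risk register entries: (hazard, probability, consequence cost).
risks = [
    ("loss of team leader",     0.3, 90),
    ("requirements churn",      0.6, 70),
    ("test environment late",   0.4, 40),
    ("key subcontractor slips", 0.2, 80),
]

# Rank by exposure and report the top 10; batch the rest if there are more.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for rank, (hazard, p, c) in enumerate(ranked[:10], start=1):
    print(f"{rank}. {hazard} (exposure {p * c:.0f})")
```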

34
A Risk Report

■ All major hazards and associated risks


■ In addition to standard risk information
◆Length of time active
◆New items
◆Slow moving items
◆Successes

35
Example Risk Report
Rank | Prev Rank | On List | Item | Consq | Progress
1 | 4 | 2 | Replacement for team leader | Lack of expertise, delay in code production, less reliable in operation | Chosen replacement made unavailable by other project
2 | 6 | 2 | | |
3 | 2 | 5 | | |
4 | 3 | 6 | | |
5 | 5 | 3 | | |
36
