Abstract
There exists a commonly held, if rarely spoken, belief that the usefulness of any predictive
model is directly related to its accuracy as measured against the real world. In a deliberately
contentious manner, this belief is tested and found wanting against several examples of real-world
processes. There is a point at which sufficient, rather than maximum, accuracy is optimum. An
approach is described which encourages discernment of that optimum accuracy by all involved in
the process.
Introduction
Since the dawn of the engineering profession, engineers have been developing methods for
predictively evaluating proposed designs and codifying lessons learned during the realisation of
those designs. With time and experience, the formulation of these methods has improved
steadily: from the flawed methods used to design the Tay Rail Bridge, which collapsed with the
loss of 75 lives in 1879 (figure 1), to methods suitable for the design of large, highly
optimised structures like the Boeing Dash 80, which debuted in 1954 (figure 2). This is an
astonishing achievement in 75 years - a single lifetime.
The Dash 80 is a four-jet-engined passenger aircraft and was designed in an era of manual
calculation. It reflects well on all involved. None of the design staff would have had access to
symbolic spreadsheets, finite element analysis, multi-body system software or any of the other
tools the author takes for granted. The tools available today would have seemed like some sort
of utopian vision to the Dash 80 design team, and yet today it is easy to believe that those
tools generate more work than they save.
Why would that be?
What is Accuracy?
Before embarking on a discussion about accuracy and its merits, it may be helpful to tender a
definition for this purpose:
"Accuracy is the absence of a numerical difference between predicted behaviour and measured
behaviour"
Accuracy is not a "yes/no" quantity; rather, there is a varying degree of difference between
predicted (calculated) behaviour and measured behaviour. Such a "difference" is commonly
referred to as "error". This definition neatly sidesteps two other difficulties:
is the measured data what actually happens in the absence of measurement?
is the measured data what actually happens during service?
For example, the mass-loading effect of accelerometers may introduce inaccuracies at high
frequencies and could mean that the system of interest behaves differently when being
measured to when not. The accuracy of controlled measurements in discerning the behaviour of
the system when in normal uncontrolled use is another matter entirely and is a topic for another
day. Both topics are far from trivial.
Accuracy as a quantity is defined entirely in terms of numbers; it is not about judgements or
subjectivity or other aspects of evaluation which may not be performed repeatably and
independently of the performer.
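As a concrete illustration (not from the original text), accuracy under this definition can be reduced to a single repeatable number, such as the root-mean-square error between a predicted and a measured signal. The signals below are invented for the example:

```python
# Hypothetical illustration: accuracy as the (numerically measured)
# absence of difference between predicted and measured behaviour.
import math

def rms_error(predicted, measured):
    """Root-mean-square difference between two equal-length signals."""
    assert len(predicted) == len(measured)
    return math.sqrt(
        sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(predicted)
    )

# Made-up predicted vs measured lateral acceleration samples (m/s^2)
predicted = [0.0, 1.2, 2.4, 3.1, 3.5]
measured  = [0.1, 1.1, 2.6, 3.0, 3.7]
print(rms_error(predicted, measured))  # about 0.148 m/s^2
```

Because the metric involves no judgement, two independent analysts computing it from the same data must obtain the same value, which is exactly the repeatability the definition demands.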
What is Usefulness?
Similarly, a definition of usefulness is helpful:
"Usefulness is the degree to which predictions are able to be used advantageously in the design
process."
"The degree to which" implies a sliding scale and not simply a "go/no-go" gate for evaluation.
The knowledge that there may always be something more useful is an instrument of torture for
many engineers. "To be used advantageously" is a pivotal phrase; predictions which may not be
used offer no advantage.
An example of a non-useful prediction is one that arrives after conceptual or tooling
commitments have been made which, had it been available before such commitments, would
have changed the decision. Similarly, a prediction which is so coarse as not to distinguish the
consequences of one decision from another has no use.
Real-World Examples
1 - Axle Tyre Size Difference for Mid-Engined Sports Car
For initial vehicle package work it is crucial that the major masses are placed correctly to deliver
some key aspects of vehicle dynamic behaviour. Poor early placement of the major masses relative
to the wheels (or vice versa) may force late changes, such as the use of different size tyres
front and rear to restore high-speed control to within typical driver limits.
Fitting two sizes in turn halves the purchase volume for each tyre size (thereby reducing
economies of scale and increasing costs) and adds logistical difficulties to the assembly
process. Early sourcing
decisions ("Can we use the same size tyres front and rear on our mid-engined sports car?")
need answers in which confidence can be placed. A late reversal of that decision will have
significant implications for the programme costing. Even if it proves inevitable given the design
constraints, early recognition of the fact costs less than late.
1a - Simplest Possible Representation
An important role of the tyres is to control the vehicle while manoeuvring. A spectrum of
predictive methods exist for modelling the lateral behaviour of vehicles in the ground plane. The
simplest useful one is the single track model with two degrees-of-freedom - yaw and lateral
motion. It produces solutions so fast it is, for all practical purposes, instant to use. It requires
a very small number of parameters to describe the vehicle.
The shortcoming of the two degree-of-freedom model is principally its inability to capture the
interaction of roll and yaw behaviour. It is frequently the case that such models do not represent
tyre characteristics in detail but this is a choice made at the time of the formulation of the model
rather than a fundamental shortcoming. The two degree-of-freedom model cannot hope to
capture issues such as steering feel.
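The two degree-of-freedom single track model described above can be sketched in a few lines. This is a generic textbook formulation with invented vehicle parameters, not data from the original text:

```python
# Minimal linear single-track ("bicycle") model sketch: two degrees of
# freedom, lateral velocity v and yaw rate r, with linear tyres.
# All parameter values are illustrative only.
def simulate_step_steer(delta=0.02, u=30.0, t_end=3.0, dt=0.001):
    m, Iz = 1400.0, 2000.0   # vehicle mass (kg), yaw inertia (kg m^2)
    a, b = 1.1, 1.4          # CG to front / rear axle distances (m)
    Cf, Cr = 80e3, 90e3      # axle cornering stiffnesses (N/rad)
    v = r = 0.0              # lateral velocity (m/s), yaw rate (rad/s)
    t = 0.0
    while t < t_end:
        alpha_f = delta - (v + a * r) / u   # front axle slip angle (rad)
        alpha_r = -(v - b * r) / u          # rear axle slip angle (rad)
        Fyf, Fyr = Cf * alpha_f, Cr * alpha_r
        v += dt * ((Fyf + Fyr) / m - u * r)  # lateral equation of motion
        r += dt * ((a * Fyf - b * Fyr) / Iz) # yaw equation of motion
        t += dt
    return v, r  # near steady-state response to the step steer input

v_ss, r_ss = simulate_step_steer()
print(v_ss, r_ss)
```

The whole vehicle is characterised by six numbers, which is why the model can be exercised before almost anything about the design has been fixed.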
1b - Most Elaborate Possible Representation
At the other end of the spectrum is a fully defined multi-body system model. It has each
suspension link, intricate mathematical descriptions of dampers and each elastomer, quite
possibly flexible descriptions of metallic elements in the form of embedded finite element
models, further in-depth mathematical representations of the tyre behaviour, detailed
characterisation of the frictional behaviour of each joint and faithful representation of on-board
control systems (e.g. for ABS, ESP). It may also have a representation of the driver as a control
system.
1c - Optimum?
The most elaborate model may take quite some time to prepare. While the task of data entry is
greatly eased by improvements in software tools and templates, of which the Prodrive Modular
Modelling System is one example, the task of obtaining the data to feed the model remains. The
run time is not as quick as that of the simplest model, but the difference in elapsed time for
the predictions is negligible compared with the data-gathering time.
Moreover, not only does the information take time to gather, but the design process must have
run for a certain amount of time in order to have defined those items.
It is this fact which is the key to discerning the appropriate level of accuracy for calculations.
The length of time required to define the design for a fully detailed multi-body system model will
significantly lessen the usefulness of the predictive work to answer the question. However, the
basic two degree-of-freedom model is incapable of the required level of resolution to discern the
answer reliably.
Both are inadequate - the basic model is insufficiently accurate and therefore not useful, while
the elaborate model answers the question too late - particularly if the company has no previous
experience of manufacturing this type of vehicle. This latter scenario is likely to become more
common as the vehicle market fragments further.
Somewhere between the two extremes lies a model of sufficient complexity to answer the
question with confidence. A simplified multi-body system model or a more elaborate classical
model with reasonable tyre representation will be most appropriate; the choice between them
will depend on the availability of the tools, pre-existing templates and so on.
However, although the escalating scale of complexity and data definition are relatively easy to
comprehend, there is a difficulty with vehicle dynamics work generally. The objective responses
of the vehicle as measured are difficult to relate directly to the emotions induced in the driver as
a result of either good or bad dynamic behaviour. Thus the difference between good and bad, or
acceptable and unacceptable, is poorly understood and therefore difficult to discern in terms of
appropriate levels of accuracy.
2 - Chassis Component Integrity
The consequences of suspension component failure are serious enough that a great deal of
attention is given to their design. Unlike engine components, which see a fairly predictable
series of (nevertheless quite violent) events during their service life, suspension components
see extreme differences between individual users, and their loading environment is essentially
random.
The nature of current vehicle design is such that there are several components which are
subject to widely varying loads in many directions simultaneously - for example, suspension
knuckles for strut-type suspensions.
This loading environment - described as "non-proportional multi-axial" - results in a complicated
and time-varying stress distribution throughout the component. The cyclic loading results in the
development of fatigue "micro-cracks" which will coalesce into a "macro-crack" (engineering
crack) given enough reversals of loading at a high enough amplitude. Further repeated loading
will probably result in growth of the crack until failure - unless the structure is such that load is
removed from the area of interest as the structural compliance increases with crack length.
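The link between repeated load reversals and eventual failure is commonly quantified with an S-N (stress-life) curve and a linear damage sum. The following sketch uses the standard Basquin relation and Palmgren-Miner rule, which are generic techniques rather than the author's specific method; all material constants and the service load mix are hypothetical:

```python
# Illustrative sketch: cumulative fatigue damage under blocks of
# constant-amplitude loading, via a Basquin S-N curve and the
# Palmgren-Miner linear damage rule. Constants are invented.
def cycles_to_failure(stress_amplitude, sigma_f=900.0, b=-0.1):
    """Basquin relation sigma_a = sigma_f * (2N)^b, solved for N (cycles)."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

def miner_damage(load_blocks):
    """load_blocks: list of (stress_amplitude_MPa, applied_cycles).
    Failure is predicted when the damage sum reaches 1.0."""
    return sum(n / cycles_to_failure(s) for s, n in load_blocks)

# Made-up service mix: a few severe cycles dominate the damage even
# though mild cycles are far more numerous.
blocks = [(300.0, 1e4), (200.0, 1e5), (100.0, 1e6)]
print(miner_damage(blocks))  # roughly 0.4, i.e. well short of failure
```

Note how steeply damage falls with amplitude: the million mild cycles contribute almost nothing, which is why capturing the rare extreme events in the load history matters so much.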
2a - Simplest Possible Representation
Classical calculations, as used by the Dash 80 design team, can be used to calculate stress
states in fairly simple structures but start to become unacceptably inaccurate when the
component has a particularly "organic" form - i.e. blended transitions and free-form surfaces
which defy explicit description in terms of cylinders, cubes and so on.
2b - Most Elaborate Possible Representation
There are currently several predictive software tools for computing this fatigue behaviour of metals.
Most start with a description of the stress response to loading in the form of a finite element
model. Some use a combined multi-body system and FE approach to distribute loads, some use
a fully finite element approach to model the interaction of the vehicle with its environment. They
all combine this with a material model of some description to infer the behaviour of the
component from the behaviour of test specimens of the same material under well controlled
loading conditions. These methods are usefully accurate if appropriate load-time histories are
available. Acquiring those histories is an elaborate exercise in itself, since attaching delicate
instrumentation which will remain functional under the (by definition) most arduous loading
conditions is not trivial. The volume of data generated is large and can become cumbersome.
The time taken to predict whether or not critical components will survive durability sign-off
testing is significant.
2c - Optimum?
For minor changes, and certainly to rank priorities, straightforward finite element analysis can
quickly predict stress responses to "design" events - these are events which are notional
extreme service events but which may well exceed maximum loads recorded in service by real
users. This is significantly more rapid than the full fatigue analysis (probably by a factor of
around 100 in elapsed time for the calculation alone).
The response of metals to stresses is well defined and so accuracy can be well discerned and
defined. The difficulty is with understanding the inputs to the analysis - i.e. the load time
histories - rather than understanding the outputs.
Thus, the "quick and dirty" approach can lead to almost as much uncertainty as if no analysis
had been performed at all, since it uses design load conditions defined by a single number - for
example "3 g Bump". It seems curious to search for convergence to within a few percent on FE
model outputs when the input is so poorly defined.
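The imbalance can be made concrete with invented but plausible numbers: suppose mesh refinement has driven the FE discretisation error down to a few percent, while the "3 g Bump" figure may differ from real service loads by tens of percent. For a linear stress response the input uncertainty dominates by an order of magnitude:

```python
# Hypothetical illustration of the point above: converging the FE mesh
# to a few percent buys little when the design load itself is uncertain
# by tens of percent. All numbers are invented.
def stress(load_g, mesh_error_frac):
    """Linear stress response with a multiplicative discretisation error."""
    stress_per_g = 100.0  # made-up sensitivity: MPa of peak stress per g
    return load_g * stress_per_g * (1.0 + mesh_error_frac)

nominal = stress(3.0, 0.0)                               # 300 MPa nominal
mesh_band = stress(3.0, 0.03) - stress(3.0, -0.03)       # +/-3 % mesh error
load_band = stress(3.0 * 1.3, 0.0) - stress(3.0 * 0.7, 0.0)  # +/-30 % load error
print(mesh_band, load_band)  # the load uncertainty band is 10x wider
```

Effort spent tightening the narrow band while the wide one is ignored is effort misallocated, which is the "curious" behaviour the text describes.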
Is There a Better Way?
Even this cursory glance reveals that discerning the optimum accuracy for predictive analysis is
far from simple.
For most predictive analysis work, there are essentially two schools of thought; this is perhaps
an exaggeration to the point of parody but is suggested nevertheless.
The first is the use of a single comprehensive method, of sufficient accuracy to capture any and
all of the phenomena which will be of interest during the lifetime of the model.
The second is to use several methods, the complexity of each of which is optimised to the
immediate task at hand.
Proponents of the first method point to the immediacy of any analysis task once the model is
complete, the high consistency of data between all analysis tasks, and the very high level of
accuracy.
Proponents of the second method point to the extremely rapid availability of results from a
standing start in the absence of historical data.
A third possible approach is to use methods of evolving complexity and to perform analysis at
each stage of evolution, searching for convergence of decision criteria as predicted. The siren
voice of added complexity must be resisted as a habit. The question should always be "If I add
this to the analysis, what decisions will it change once the results are known?"
If the answer is none, extra complexity should surely be rejected?
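The third approach can be expressed as a simple control loop: evaluate models of increasing complexity and stop as soon as the decision they support stops changing. The models and decision rule below are purely illustrative placeholders:

```python
# Sketch of the "evolving complexity" approach: run progressively more
# elaborate models and stop once the supported decision converges.
# The toy models and pass/fail rule here are invented for illustration.
def converged_decision(models, decide, inputs):
    """models: callables ordered simplest first; decide: maps a model
    prediction to a discrete decision. Returns (decision, models_used)."""
    last = None
    for i, model in enumerate(models):
        decision = decide(model(inputs))
        if decision == last:       # two successive complexity levels agree
            return decision, i + 1
        last = decision
    return last, len(models)       # never converged: report best effort

# Toy example: three "models" of rising fidelity predict a design margin
models = [lambda x: x * 0.8, lambda x: x * 0.95, lambda x: x * 0.97]
decide = lambda margin: "pass" if margin >= 1.0 else "fail"
print(converged_decision(models, decide, 1.2))  # -> ('pass', 3)
```

The loop embodies the question in the text: each added level of complexity is admitted only if it could still change the decision, and the process halts the moment it no longer does.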
Conclusions
A simultaneous demand for increased quality and reduced development time is putting very
high pressure on predictive methods.
An ill-thought-out approach can cost a great deal while contributing little to important
decisions during the process.
There is no simple approach to discerning optimum complexity for a given predictive
task save for clear thinking at the beginning of each piece of work about what decisions
it must support.
Figures
Figure 1 - Tay Rail Bridge
Figure 2 - Boeing Dash 80