
ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS IN ENERGY

SYSTEMS ANALYSIS
Enrico Sciubba
University of Roma I, Italy
Keywords: Optimization of energy systems, Design optimization, Synthesis optimization,
Artificial intelligence, Process Synthesis, Inverse Design, Expert Systems, Intelligent Health
Control.
Contents
1. Introduction
2. Is there a "universal" design paradigm?
3. Application of the Universal Design Procedure to Process Synthesis
4. "Design" and "Optimization"
5. Process Optimization
6. Computer-aided Synthesis and Design tools
7. Application of the Universal Design Procedure to the Design of Components
8. Expert Assistants for Process Diagnostics and Prognostics
9. Conclusions
Related Chapters
Glossary
Bibliography
Biographical Sketch
Summary
This chapter is an introduction to the field of artificial intelligence (AI) applications to the
design and monitoring of energy systems, and serves as a compendium for the related
chapters that follow under this topic. After a brief discussion of the characteristics that make
AI useful for engineering applications, a concise definition of terms and concepts is given.
The presentation style has been tailored to provide readers with a general introduction to AI
topics, without burdening them with excessive formalism. Since our goal is to describe
engineering applications to thermal design, the applicative side has been emphasized throughout.

1. Introduction
This chapter describes in detail the general activities connected with the application of a
powerful set of the so-called AI-procedures to the selection, synthesis, design and control of
energy systems. Since the field is very broad, we shall restrict our treatment and apply only
a sub-set of AI, called Expert Systems (ES), to the above tasks; other tools, like Neural
Networks (NN) and Fuzzy Logic (FL), are treated only marginally. The general principle is to
implement a computer-assisted procedure that possesses, in a form that will be discussed in
detail for each implementation, some of the "intelligence" of the human designer. The process
is in principle quite simple, and it is based on the premise that for each "design" task (the type
and structure of these tasks shall be also discussed in detail) there exists a set of general
guidelines, derived from engineering experience formalized and catalogued in the form of
either design manuals or textbooks or otherwise published and accepted design procedures.
An ES is therefore a computer code that mimics not so much the human reasoning, but rather
the way this reasoning can be (and has been) organized at the present stage of technology. The
first point to argue is clearly that there is indeed one general design protocol for all types of
design problems: this is crucial to our thesis, and is in fact the justification for the search for
AI-based "Design Assistants". Once the existence of such a protocol has been established, it is
a simple matter to show that the two fundamental design tasks encountered by an engineer,
namely the direct (simulation) and inverse (design) problem, can be considered embedded
into a single meta-procedure. The individual chapters under this Topic discuss the application
of this meta-procedure to the synthesis of a process, to the design and/or choice of
components, and to the development of intelligent monitoring and control systems.

2. Is there a "Universal" Design Paradigm?


To answer the fundamental question whether it is possible to construct a "universal" design
paradigm that can describe every conceivable act of design, it is necessary to examine first
what a design task consists of. A thorough analysis of the existing procedures shows that an
essential feature of most design tasks is that they are posed as ill-structured problems. An ill-
structured problem is one that:
1. cannot be described solely in terms of numerical variables;
2. possesses goals that cannot be specified in terms of a well-defined
objective function; and
3. admits of no algorithmic solution.
Ill-structured problems are also called ill-defined or ill-posed, and their most striking feature
is that their solutions are unpredictable, in the sense that the environment in which the
solution is to be sought for has a strong influence on the existence, uniqueness and type of
solution. In the search for a solution, an engineer relies on judgment, experience, heuristics,
intuition and analogy rather than on specific knowledge of solution procedures applicable per
se. The question whether ill-structured problems can be solved by some kind of structured
engineering reasoning has been answered in the affirmative long ago, in direct and indirect
ways. It will be shown in AI in Process Design that what goes under the name of "design
activity" is in reality a concatenation of several complex actions, only some of which fall
within the responsibilities of the engineer. Moreover, if we compile a detailed list of the tasks
that constitute a "design procedure", it becomes apparent that, with few exceptions, most of
the activities are common to every design task, as if they were logical building blocks of the
design procedure: this reinforces our intuitive idea that there must be a universal underlying
paradigm in the solution of every engineering task. Though there is no logically complete
proof that this is indeed the case, heuristic evidence abounds.
2.1. The "Universal Design Procedure": a possible Flowchart
In the real world, a design project includes both technical and non-technical tasks: often, the
physical "design" (if we take this word in its restrictive meaning of "quantitative sizing of
units and systems") is a minor activity in the general project perspective. Several other
activities are of importance before the "sizing", during (i.e., concurrently with) it and after its
conclusion, and it is their coordinated sequence that constitutes the actual "design" task and is
accordingly managed in its totality. Extrapolating from design manuals and textbooks, a
"Universal Design Procedure" can be identified and is presented in Figure 1. This flowchart
shall be discussed in greater detail in AI in Process Design, where the single activities shall be
examined in detail. We discuss here only those that are more closely related to "design" in the
layman meaning of the word.
Figure 1: Block scheme of a possible "Universal Design Procedure"
2.1.1 Definition of Needs and Objectives
The purpose of this task is to formulate an explicit and complete statement of the reasons that
justify the investment of resources in a specific project. The "needs" as well as the
"objectives" of a project are not necessarily formulated in terms of economic convenience:
often, social equity requirements, social opportunity, macro-economics or even political
interests are valid criteria to include in the description of needs and objectives. Though this
phase is usually reserved to Management, there is a growing tendency to allow for
some "input from below", i.e., from the more technical portion of the organization. If present,
the R&D Department can also co-operate in the definition of needs and objectives.
2.1.2 Preliminary Estimate of the Design Costs
This task is usually performed by a specific division, responsible for technological methods
and production scheduling, on the basis of detailed specific information provided by the
technical and commercial support structure. Errors made in this phase can be very costly:
since the specific design problems are usually unknown in detail at this stage, care must be
exercised both to limit the risk of cost overruns (that would produce a net loss at the end of
the project), and to avoid overestimating the costs (and thus preliminarily reject a lucrative
project).
2.1.3 Feasibility Study
This is a well-codified engineering activity, aimed at determining whether all conditions that
make the project feasible are met at the time of its foreseen realization and in the operative
conditions under which the project will be undertaken. To be feasible, a project must be:
technically possible
operationally reliable
industrially sustainable
economically advantageous
legally acceptable
2.1.4 Final Design
This is what is usually called "design": a very complex activity rich in interdisciplinary
details, in which the item, unit or system to be built is completely designed to specifications,
by means of a concerted co-operation of process-, structural-, industrial-, mechanical-,
chemical-, material-, environmental- and control engineers, possibly assisted by field experts
for specific problems.
2.1.5 Construction
This task is directly supervised by specialist mechanical-, chemical-, structural-, or civil
engineers, who have specific knowledge and experience in the "construction" field (where by
construction we mean all of the possible production technologies that lead from the raw
materials to the end products).
2.1.6 Testing and Customers Acceptance
This task is performed by Quality Assurance (QA) specialists, with the assistance of design
engineers. Usually, two teams work jointly on this activity: that of the constructor and that of
the customer. If required, a third independent party can co-ordinate the activities of the first
two ("Arbitrate").
2.1.7 Modifications and Improvements
This is one of the activities of the R&D Division, but requests for modifications are mostly
originated by the Process Engineers responsible for the maintenance and operation of the
hardware. This activity is very useful for proving the technological production line, but it is
potentially wasteful if too many unjustified modifications are proposed, because the resources
that must be allocated to the analysis of each unsuccessful process modification may become
excessive.
3. Application of the Universal Design Procedure to Process Synthesis
3.1 Formulation and Position of a Process Engineering Design task
A synthesis design task is quite different from what normally goes under the name of
"design". The difference can be reduced to the concepts of "direct" and "inverse" design
problems. In a "direct" design problem, the structure of the process is assigned a priori: task
of the designer is that of selecting or/and sizing the components and "optimize" the overall
plant performance and cost effectiveness. In an inverse problem, the goal is prescribed in
terms of the expected product, an expected global performance indicator and the expected
cost effectiveness, and the task of the designer is first that of selecting a convenient (the most
convenient) process flowchart, and then of executing a further "optimization" on it by
performing a direct design exercise. Clearly, inverse problems are more difficult. What is not
often pointed out, however, is that their difficulty is not related to the additional work to
perform, but almost exclusively to the nature of this additional work. To synthesize a process
means to devise its structure, and this is a highly non-quantitative task that cannot be
performed algorithmically (even the so-called deterministic synthesis methods like simulated
annealing require a non-deterministic decision on the first trial structure). Historically,
engineers have relied on experience and technical common sense in deciding about the most
convenient process layout, and neither of these "mental tools" is amenable to expression as a
set of formulae. Here is where our Universal Design Procedure can be put to
work: we can try to express the single "actions" that constitute the general engineering design
task in the form of "rules" or "propositions", and apply the tools of propositional calculus to
translate them into "Artificial Intelligence" procedures. Since this leads to the implementation
of paradigms that are very much different from the quantitative procedures we are accustomed
to, a more detailed discussion is in order. First of all, we must accept that, however accurate
the problem formulation has been, the result is an ill-posed design problem. More correctly,
such problems are usually incompletely, or fuzzily, specified. Thus, the first subtask is that of
defuzzifying the problem position, which can be achieved in three steps:
1. First, examine the (assumedly fuzzy) set of input data. Determine the
actual inputs, their quantitative availability, their respective chemical and
physical properties, and how and at what cost they are supplied at the
boundary of our design control volume;
2. Then, perform a similar examination on the desired outputs. To minimize
the risk of over-specifying the problem, it is useful to operate a distinction
between mandatory and accessory goals: mandatory goals are "musts", and
they are included by force in the general set of criteria for success of the
system we are about to design; accessory goals are "wants": while they
may be entirely absent from the final solution, their presence in one of the
proposed process layouts can be seen as a "bonus point" that makes that
layout more desirable for the final customer. We must keep in mind that
the selection is dynamic, and that an accessory goal may become a
mandatory one as the design activity proceeds;
3. Analyze the constraints. Interpret them: it is usually safer to pose all weak
constraints in their strong form first, and relax them only after a solution
has been found.
After these three steps have been sequentially performed, the problem formulation will be
found to be somewhat different from the original one: in any case, it is now in such a form
that the general design procedure may be applied to it.
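Purely as an illustration, the outcome of these three defuzzification steps can be pictured as a small data structure. The Python sketch below is an assumption of one possible encoding (all names, such as DesignSpec, Goal and Constraint, are hypothetical), not a description of any existing tool.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Stream:                        # an input crossing the design control volume
        name: str
        availability: float              # e.g. kg/s or MW available at the boundary
        unit_cost: float                 # supply cost at the boundary

    @dataclass
    class Goal:
        description: str
        mandatory: bool                  # True = "must", False = accessory "want"

    @dataclass
    class Constraint:
        description: str
        weak: bool                       # weak constraints are posed in strong form first
        satisfied_by: Callable[[dict], bool]

    @dataclass
    class DesignSpec:
        inputs: List[Stream] = field(default_factory=list)
        goals: List[Goal] = field(default_factory=list)
        constraints: List[Constraint] = field(default_factory=list)

        def active_constraints(self, relax_weak: bool = False) -> List[Constraint]:
            # Step 3: keep every constraint in strong form until a solution exists.
            return [c for c in self.constraints if not (relax_weak and c.weak)]

        def promote(self, goal: Goal) -> None:
            # The selection is dynamic: an accessory goal may become mandatory later.
            goal.mandatory = True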
3.2 Towards a General Process Synthesis Paradigm
Strongly connected with the idea of "design" are the ideas of "innovation" and of "creativity".
This is not a naive statement: it is actually the principle on which to develop a non-
quantitative process synthesis paradigm. At the "Synthesis" stage, all options must be
explored, and only the clearly unfeasible ones discarded: creativity rather than exactness
must be the leading principle. A general paradigm ought to contain some or all of the
following guidelines:
1. Consider all possible processes that may lead from the available inputs to
the specified output. Rank them from the simplest to the most complex. At
this stage, no alternative ought to be discarded on the grounds of
commercial prejudices (components difficult to find, or too expensive) or
technical biases (mature technology, non-standard solutions). However,
processes that have been proven faulty in the past under similar conditions,
or require components off scale by more than one order of magnitude, or
that require extensive input- or output treatments, may be legitimately
discriminated against (eliminated) in this phase.
2. Perform a detailed conceptual analysis of the more promising
configurations. These configurations may be chosen on the basis of
expert's advice, engineering intuition, customer's preference: it is
important that a clear ranking is assigned to each source, so that decisions
may be traced back later.
3. If in the course of this analysis new configurations are discovered, add
them to the existing list and examine them in turn. By such a selective
pruning, the list ought to be reduced to relatively few alternatives (the
number depends on the resources dedicated to the project, but it is to be
expected that, for most processes at the current technological level, no more
than 10 alternatives will have survived at this point).
4. Screen the resulting list carefully, applying the constraints identified in the
problem formulation phase. If necessary, introduce new constraints based
on experience or common sense (always clearly identifying them so that
the path to the solution may be later retraced). The purpose of this step is to
reduce the number of surviving alternatives so that a quantitative
calculation may be performed on each one of them. Again, the number of
process configurations comprised in this final list is determined by the
complexity of the subsequent simulation and by the availability of
computer resources and tools (software).
5. Perform a simplified simulation of each alternative. Neglect conceptually
secondary items, like pressure loss in pipes, heat loss to surroundings, etc.
Compute all required performance indices, and generate (or estimate) a
gross preliminary sizing for the major components, and especially of non-
standard equipment.
6. Refine or re-formulate the objective function that assesses the absolute or
relative performance of each configuration. Rank the alternatives
according to the values attained by this objective function, calculated on
the basis of the approximate simulations previously performed.
7. Select the few alternatives for which this objective function clearly attains
its "near-optimum" range. Perform a sensitivity study on each of them, at
design and off-design points as specified by the operational characteristics
of the problem position. If possible, perform a Life-Cycle analysis.
8. Finally, choose two or three of the "best" surviving configurations and
discuss them in depth with the final customer and possibly with some
independent field expert. If necessary, repeat the simulations adding the
previously neglected second-order effects.
9. Proceed with final design and sizing of the chosen configuration.
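Steps 4 to 7 above amount to a screen-simulate-rank pipeline. The following minimal Python sketch shows one way such a pipeline could be organized; all names (simulate_roughly, objective, and so on) are hypothetical placeholders, and the simplified simulation of each alternative is assumed to be supplied elsewhere.

    def screen_and_rank(alternatives, constraints, simulate_roughly, objective, keep=3):
        # Step 4: screen the list against the constraints identified earlier.
        survivors = [a for a in alternatives if all(c(a) for c in constraints)]
        # Step 5: simplified simulation of each surviving alternative.
        results = [(a, simulate_roughly(a)) for a in survivors]
        # Steps 6-7: rank by the objective function and keep the near-optimal few.
        ranked = sorted(results, key=lambda pair: objective(pair[1]), reverse=True)
        return ranked[:keep]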
4. "Design" and "Optimization"
The term optimization is often misused in the field of engineering. In a disturbingly large
number of technical reports and archival publications there is confusion about what is being
optimized with respect to what, or what was to be kept constant. What is even more
disappointing for engineering purposes, operation and maintenance issues are frequently
neglected or grossly underestimated, and any solution obtained via a purely mathematical
procedure is presented as the solution to the given design-and-optimization problem. Thus,
neighboring "quasi-optima" are disregarded, that in real applications often represent the most
convenient solution. One of the possible causes for this is surely the sharp separation
maintained in textbooks between the two concepts: it appears that the main goal of the design
activity is to generate some working solution, and the purpose of optimization is to intervene
on the final solution to "improve" it. This is a wrong and dangerous misconception: nobody
has ever tackled a design task without an at least implicit, self-posed constraint of "optimality" of
the final outcome. Design and optimization are both essential steps in any design activity, and
they cannot be separately performed without incurring the risk of producing the wrong
answer to the question posed in the design problem formulation. With this concept in mind,
we can now better understand the quite different, "systemic" approach that will be proposed
here below.
It is definitely useful to review the terms in which a design-and-optimization problem is
formulated. A detailed discussion of direct and inverse design & optimization problems is
offered in Design and Synthesis Optimization of Energy Systems, to which the reader is
referred.
5. Process Optimization
5.1 The Classical Viewpoint
Referring the reader to the definitions and to the considerations introduced in Section 4 above,
we shall try to describe in some detail what we mean here by "Process Optimization". To
begin with, let us state that the "Synthesis Optimization" as described here is different from
the "classical" optimization methods previously discussed in Optimization Methods for
Energy Systems, Operation Optimization of Energy Systems and Design and Synthesis
Optimization of Energy Systems. We are interested here in procedural questions, because it is
the procedure that the ES must mimic: to define the problem in practical terms, we need
therefore to create a "logical flow chart" of a synthesis optimization procedure. This task is
easily performed if we start by examining the list of the questions a process engineer is faced
with:
a. Is there only one physical process that can produce the required output?
b. For each transformation that composes a process, is there only one piece of
equipment capable of performing it?
c. How can components be connected, under the given constraints, to obtain
the best results in terms of the overall objective function?
d. What is the most convenient size of each component?
e. What is the most convenient technological level of each component?
f. What is the most convenient physical location of each component in the
plant layout?
g. What is the most convenient value of each one of the operating
parameters?
It is important to realize that these questions do not generally admit of univocal answers: this
means that however well formulated and "exactly" solved, an optimization problem possesses
an intrinsic qualitative essence that cannot be captured by a purely deterministic approach.
We shall return to this point in AI in Process Design. To remain within the realm of the
classical viewpoint, let us recall that there are two levels of optimization for thermal systems:
a structural level at which one seeks the "best" possible process configuration that achieves
the specified goals under the given constraints, and an operational level (called "design and
operation" in Operation Optimization of Energy Systems and Design and Synthesis
Optimization of Energy Systems) at which one tries to find the most convenient combination
of design parameters so that a given process configuration attains "best" results. These two
procedures are logically at different levels (the structural being at a meta-level with respect to
the operational), and as a consequence they can be run in sequence, possibly under an iterative
protocol. This is in fact what Simulated Annealing (SA), Genetic Algorithms (GA) or
"Supertargeting" Methods (notably, Pinch) do: they first examine an initial configuration and
compute the optimal values for its operating parameters; then introduce a series of structural
modifications and repeat the operational optimization, and so on, until a global optimum is
reached. Such protocols are straightforward, but suffer from two main limitations: they are
computationally very taxing (because they consist of a series of nested optimization loops,
one for each configuration); and -more important for our considerations here- they require by
necessity the scanning of the entire process tree. In fact, once the first configuration has been
optimized, no clue can "automatically" be found to guide the designer in his search of some
structural modifications to produce a second configuration. This decision is in fact invariably
taken on the basis of the designer's experience and of the phenomenological description
("model", see Modeling and Simulation Methods) he has of the process. While the
fast-advancing hardware technology can alleviate the first limitation, there
is a conceptual problem at the basis of the second one, because structural and operational
optimization may share the same objective function, but their domain of application (the
decision variables), the system boundaries and the solution space are different. For an
operational optimization, the decision variables are the technical and economical system
parameters, the system boundary is the physical boundary of the given configuration (which is
fixed), and the solution space contains all of the possible sets of independent technical and
economical design parameters. For the structural optimization, the decision variables are the
same, but the system boundary may vary from one configuration to another, and the solution
space consists of the set of all feasible configurations.
At present, virtually all optimization procedures available in the literature are of the
operational type: structural optimization is left to the ingenuity of the process engineer, and no
codification is offered for it, the only remarkable exceptions being the above mentioned SA,
GA and Pinch methods. These "classical" (operational) optimization procedures invariably
consist of the following steps:
1. Problem formulation: definition of the system to be optimized, of its
boundaries, of the relevant variables, of the constraints;
2. Formulation of the objective function;
3. Problem formalisation: writing of a (generally ad hoc) numerical code to
implement the optimization procedure; some standard codes are available
in the most recent numerical optimization libraries;
4. Problem solution: application of the code to the specific design problem;
5. Critical review: analysis of the results and possibly reformulation of the
objective function;
6. Iteration of steps 1 to 5 (one pass through these steps is sketched below).
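Purely as an illustration of a single pass through steps 1 to 5, the fragment below minimizes an invented two-parameter "thermal" cost with scipy.optimize.minimize; the cost correlation, the bounds and the starting point are all assumptions made for the example and do not come from this chapter.

    from scipy.optimize import minimize

    # Steps 1-2: toy problem - choose a heat-exchanger area A [m2] and an approach
    # temperature dT [K] so that an (invented) capital-plus-energy cost is minimized.
    def total_cost(x):
        area, dT = x
        capital = 500.0 * area ** 0.8        # invented capital-cost correlation
        energy = 2.0e5 / (area * dT)         # invented penalty for poor heat recovery
        return capital + energy

    # Step 3: problem formalisation - here the constraints reduce to simple bounds.
    bounds = [(5.0, 500.0),                  # admissible area range
              (2.0, 50.0)]                   # admissible approach temperature range

    # Step 4: problem solution.
    result = minimize(total_cost, x0=[50.0, 10.0], bounds=bounds)

    # Step 5: critical review of the results before a possible reformulation.
    print(result.x, result.fun)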
A technique often used to reduce the required amount of computational resources is that of
performing a certain number of sub-optimizations: in this approach, certain portions of the
plant or process are considered in isolation, and proper additional boundary conditions are
posed to take into account the interchange with the remaining portions. This procedure rests
on the assumption that the performance of the sub-system is substantially the same both when
considered in isolation and when inserted in the "context" of the overall process: provided the
additional boundary conditions are congruently chosen, this condition can always be satisfied
by a proper iterative procedure. Unfortunately, the "optimal" operating point of a sub-system
may be substantially different if its outputs are considered per se, or if the same outputs are to
be considered as "internal" fluxes to a larger system: therefore, this technique must be used
with care. We shall not dwell further on this topic, and refer the interested reader to specialized
treatises on general optimization procedures, or to specific textbooks that deal with thermal
systems optimization.
5.2 Some Additional Remarks on the Optimization of Thermal Systems
If we are interested in Process Synthesis, the "classical" approach to optimization, in spite of
its invaluable practical usefulness, is to be regarded as a limited and in a theoretical sense
obsolete one. The justification of such a strong statement is the following: while classical
optimization methods can still be employed as a refining tool after a process structure has
been already identified, their purely deterministic approach makes them incapable of
capturing the meaning, the goals and even the concept of "global optimization", that must
perforce include structural and operational considerations. One must recall that, no matter
how apparently exact the formulation is, and how accurate the computational approach is, an
optimization problem does generally admit of more than one solution, and often these
solutions show qualitative (structural) rather than quantitative differences. This fact has very
important consequences on the rational design of an optimization code, but has been routinely
overlooked in the literature. Furthermore, it is undeniable (but equally not acknowledged in
the engineering literature) that, when using their experience and knowledge to solve an
optimization problem, process engineers heavily rely on qualitative much more than on
quantitative thinking. This observation, coupled with the large extent of the solution space,
leads to the conclusion that optimization per se ought to be performed as a fuzzy process, one
in which both the goals (the objective function, the "thinking") and the means (the
optimization procedure, the software) are not susceptible to "exact" quantification. This ought
to be a strict guideline when looking for a set of feasible solutions. A detailed discussion of
the general questions that constitute "the optimization problem" at large (as posed above, in
Section 5.1) is presented in AI in Process Design, and conclusively shows that a qualitative
approach is of the utmost importance.
5.3 Optimization Criteria
In the course of the synthesis, it is necessary to provide the Expert System (ES) with
guidelines that can direct its search for the "optimal" process. There are instances in which
this can be made at a conceptual level, but in the vast majority of cases the objective function
must be recursively computed on each one of the process instances suggested by the ES. Some
examples of conceptual rules are:
When transferring heat, assign a ranking to the possible source/sink
couplings, considering more convenient those that perform the heat
transfer across the lowest possible temperature difference;
In mechanical as well as in fluid dynamic processes, rank irreversibilities
by means of their foreseen entropy generation;
In "building up" a process, before looking for an external heat source,
explore feasible heat recovery possibilities inside of the process;
Similarly, when looking for a power input, consider first whether there are
available power sources inside of the process;
In process design or selection, rank the candidate configurations according
to the foreseen (or already known) pollution class.
All processes "invented" by the ES must then be "optimized" with respect to a proper
objective function. The most common are:
Process efficiency
Marginal production cost (sum of the capital, energy, operation,
decommissioning and environmental costs over the plant's life-cycle)
Socio-economical "costs" (a fuzzy issue!)
Marginal exergetic cost (in a "thermo-economic" sense)
Extended Exergy cost.
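The first of the conceptual rules listed above (prefer source/sink couplings with the smallest temperature difference) is easy to illustrate in code. The Python fragment below is only a toy: the streams and temperatures are invented, and the ranking criterion is the plain temperature difference.

    # Hot sources and cold sinks given as (name, temperature in K); values are invented.
    hot_sources = [("flue gas", 650.0), ("condensate", 370.0)]
    cold_sinks = [("boiler feedwater", 350.0), ("district heating return", 330.0)]

    # Rank all feasible couplings (hot stream warmer than cold stream) by their
    # temperature difference: the smaller the difference, the better the ranking.
    couplings = [(hot, cold, t_hot - t_cold)
                 for (hot, t_hot) in hot_sources
                 for (cold, t_cold) in cold_sinks
                 if t_hot > t_cold]
    couplings.sort(key=lambda triple: triple[2])

    for hot, cold, dT in couplings:
        print(f"{hot} -> {cold}: dT = {dT:.0f} K")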
6. Computer-aided synthesis and design tools
In the last decades we have witnessed a hitherto unseen development of the computational
resources (both hardware and software). Starting from 1936, when Konrad Zuse began
building the Z1, his first mechanical programmable computer, in Berlin, Germany, all of the
attributes of every computing device of interest to us have been growing at an exponential
rate: number of operations per unit time (FLOPS), storage size (RAM), input/output devices,
reliability and portability, user friendliness have reached levels today that were almost
unthinkable fifteen years ago, and -in spite of some notorious flops caused more by
commercial greed than by technical errors- progress is continuous. This led of course to the
development of specific engineering applications: computer tools are now available for
process simulation, for the design (sizing) of components and structures, for process
monitoring and control, etc. Given the proliferation of "design related" software, the relative
scarcity of "Process Synthesis Software" is even more striking. One of the reasons is of course
that process synthesis is a difficult task, for which no general procedures had been discovered
until recently: but another equally important reason is the lack of appreciation displayed by
designers and managers alike towards a software system that seems to "replace" the engineer,
by apparently performing his "creative" task. We hope to convincingly show in AI in Energy
Systems: Scope and Definitions, AI in Component Design and AI in Process Design that these
fears are generated by lack of specific knowledge on the actual performance of these
"intelligent" codes: but first, let us discuss the "deterministic" approach to process synthesis,
whose limitations are in a sense also responsible for the lack of acceptance of this kind of
tools.
6.1 Deterministic Methods for Process Synthesis
There are four main deterministic procedures for synthesizing the process
structure: they are called respectively the Connectivity Matrix ("CM"), the Simulated
Annealing ("SA"), the Genetic Algorithms ("GA") and the "Target" or "Pinch" methods
("PM"). Since SA, GA and PM are discussed in Modeling and Simulation Methods and
Design and Synthesis Optimization of Energy Systems, we shall briefly describe only the CM
method here.
6.1.1 The Connectivity Matrix Method
This method is a direct application of graph theory to process design. Since it is discussed at
some length in Design and Synthesis Optimization of Energy Systems, we only list here its
main steps:
a. Create a logical process scheme.
b. Construct the connectivity matrix "CM" for the logical process scheme.
Notice that CM is the matrix representation of the logical process scheme
(Figure 2).
Figure 2: A Process and its Connectivity Matrix
c. "Translate" each operation listed in CM into a series of physical
transformations and devise one elementary sub-process scheme for each
transformation. Introducing these sub-process schemes into each one of
the applicable columns of CM corresponds to augmenting the matrix
column-wise.
d. Substitute for each transformation in every sub-process the component that
performs it.
The resulting matrix is the connectivity matrix of the "real" process P. A proper quantitative
simulation of P can now be performed to obtain the "optimal" set of operational parameters.
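To make step (b) concrete, the sketch below encodes a small, invented logical process scheme as a 0/1 connectivity matrix using numpy; it is an illustration of the idea only, not a reproduction of the process of Figure 2.

    import numpy as np

    # Invented logical process scheme: compressor -> combustor -> expander, with the
    # expander also returning shaft work to the compressor.
    operations = ["compressor", "combustor", "expander"]
    index = {name: i for i, name in enumerate(operations)}

    CM = np.zeros((len(operations), len(operations)), dtype=int)
    links = [("compressor", "combustor"),    # compressed air to the combustor
             ("combustor", "expander"),      # hot gas to the expander
             ("expander", "compressor")]     # shaft work back to the compressor

    for src, dst in links:
        CM[index[src], index[dst]] = 1       # CM[i, j] = 1 if unit i feeds unit j

    print(CM)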
It is apparent that this method is a direct translation of the "mental scheme" a process engineer
applies to a design task (Section 3.2 above). It is also clear that the procedure is entirely
deterministic. Unfortunately, it is also clear that the method is strongly biased by the choices
made in points (a) and (c) above. Choosing a process scheme in fact sets a major structural
constraint on the resulting process configuration: and this step is entirely left to the
"experience" of the designer. Similarly, splitting a process into sub-processes can be done in
more than one way, and selecting the one or the other corresponds to biasing the entire
procedure. It is in a sense illuminating that a similar strong bias affects the other deterministic
"synthesis" procedures, like SA, GA and Pinch methods. In spite of its limitations, this
method has been reported here because it has many similarities with the AI methods that will
be discussed later.
6.2 Process Synthesis based on AI Methods
6.2.1 Expert Systems for Design
In the preceding sections it was argued that all process design calculations could be carried
out by properly implemented automatic routines. Process design is a highly labor intensive
and highly interdisciplinary task and is therefore also very expensive in monetary terms, so
that there is a strong incentive to reduce its labor intensity. The only task that has not yet
been automated is the conceptual one: the choice of the type and of the characteristics of the
process itself. It is indeed apparent that the methods described in the previous section can be
at best be seen as helpful guidelines for the process engineer. In the following two sections
we introduce two different but clearly connected topics that shall be extensively discussed in
AI in Process Design:
1. The possibility of constructing an automatic procedure capable of
autonomously choosing the most convenient process configuration for a
given set of design goals;
2. The possibility of the actual implementation of this automatic procedure in
a code.
As stated above, we will actually make use only of a subset of AI techniques, called Expert
Systems, whose goal is to reproduce the engineer's decisional path and to proceed from the
design data and constraints to possible process configurations.
Expert Systems are based on relational languages that use the symbolism of formal
propositional logic. They draw inferences from a number of facts stored in a particular
database, properly called a knowledge base. These facts can be design data, design rules,
physical or logical constraints, etc. Each ES manipulates this knowledge in its own way,
according to a logical procedure contained in its inference engine.
Only a few Process Synthesizers have been implemented to date as verifiable working codes;
some of them are described in the References. We can nevertheless state with certainty that an
ES can be constructed to perform the following operations:
1. To acquire in machine-readable form a series of inputs representing the
state of the universe the ES will have to deal with: design data, type and
state of the environment, component specifications, state-of-the-art of a
certain technology, etc.;
2. To be interactively instructed by the user about both logical and physical
design goals, i.e. about some physical, logical or numerical properties the
solution must possess;
3. To manipulate the data contained in its knowledge-base according to a
predetermined set of rules contained in its inference engine, and extract
guidelines on how to proceed in different situations;
4. To call other numerical procedures if necessary. Of interest here is the fact
that an ES can call any process calculation code, like those discussed in
Modeling and Simulation Methods and Design and Off-Design Simulation
of Complex Systems.
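A minimal sketch of the rule-based machinery underlying items 1 to 3 is given below: a knowledge base of facts, a few IF-THEN rules, and a forward-chaining inference loop. The facts and rules are invented for illustration and do not belong to any published Process Synthesizer.

    # Knowledge base: facts the ES has acquired from the user or from data files.
    facts = {"fuel available: natural gas",
             "required output: electricity",
             "cooling water available"}

    # Rules: (set of IF-premises, THEN-conclusion) - a toy fragment of a knowledge base.
    rules = [
        ({"fuel available: natural gas", "required output: electricity"},
         "candidate process: gas turbine cycle"),
        ({"candidate process: gas turbine cycle", "cooling water available"},
         "candidate process: combined cycle"),
    ]

    # Inference engine (forward chaining): fire rules until no new fact is asserted.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(f for f in facts if f.startswith("candidate")))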
6.2.2 General Knowledge Representation for Design Applications
Let us assume that the decision has been made to construct an ES for a certain design activity.
What are the necessary steps that lead the engineer from design inception (definition of need,
general engineering concept, see Section 2) to the generation and technical description of one
or more final designs? The more general the conceptual description of the design procedure
becomes (i.e. the higher is the level of abstraction at which we attempt to describe the
procedure), the higher becomes the danger of overgeneralization. It is therefore left to the
reader to fill in the gaps that may exist between the general method described here and any
particular application.
The process of generating a design plan can be decomposed into three phases, roughly
identifiable as problem specification, functional analysis and design plan generation. We shall
present here an outline of the activities of each phase, followed by a brief discussion (a more
thorough discussion is provided in AI in Process Design). In the first phase, we try to identify
and define the (physical and logical) parameters that constitute the design goal, those that
constitute the data, and those that constitute the constraints. In the second phase, we
investigate the functional relations that exist among the problem parameters and how they
must be accounted for during design. The third phase consists of a mapping of the logical
design algorithm onto a relational procedure that can be implemented in an application using
one of the available AI languages and tools: a graphical representation of the process is
presented in Figure 3.
Figure 3: An example of the mapping of a logical design flowchart (a), onto a relational
procedure (b)
Phase 1 is the problem specification phase and is divided into two sub-phases: problem
identification and position (at best performed on the basis of an itemized task list), and
identification of "always-relevant" parameters, in which we try to reduce the size of the
problem by assigning a "logical rank" to process parameters.
The Functional Analysis phase aims at identifying and describing the relations between the
given specifications and the possibly-relevant parameters (these functional relations are the
usual design relations: equality, constraint, polynomial dependence, etc., and constitute a sort
of unstructured skeleton of functional links between different parameters).
Phase 3 is the Design Plan Generation, and consists of the implementation of the skeleton
plan produced in Phase 2 into a design plan.
A design plan does not necessarily generate a unique solution: in most cases, it will actually
produce more than one feasible design. If needed, "optimization" procedures can be devised
to choose, based on some criterion, the "best" one among different configurations proposed by
the ES (but, see Section 5 above). Notice that, if we wish to perform an optimization after
having devised several different process layouts, additional information is needed to execute
this extra task. This is always the case, in fact, when the only task we require of the ES is that
of creating a list of feasible alternative process layouts: if the logical patterns implemented in
the code are correct, each one of the configurations proposed by the ES is feasible, but none is
explicitly optimized at the time of its generation. Clearly, a far superior approach would be
that of including some "logical" (qualitative) optimization in the process synthesis procedure:
at the present state of the art, this is only possible for relatively simple and well-known
processes.
6.2.3 Example of Automatic Process Design
The general problem that a designer can be faced with in the field of energy systems is that of
choosing the most appropriate configuration to extract assigned amounts of exergy from a
given resource base under some environmental as well as configuration constraints. This
apparently very complex problem turns out to have a relatively simple qualitative solution,
but its complexity makes a quantitative solution difficult. This example is discussed here only
to investigate whether it in principle admits a solution, i.e., whether it is possible to devise the
logical flowchart of an "intelligent" procedure: a completely worked-out application is
described in AI in Process Design.
The proper position of the problem requires the following knowledge to be acquired:
A) Design data:
1 - The required exergy rate(s) to be "produced" must be specified both as
to their type (electrical, mechanical and thermal) and their amounts (MW
installed);
2 - Some information on the type of environment: type and amount of
available resources, amount of water available, any environmental
characteristic foreseen to influence the choice of the process, all legal
and/or technical constraints known to apply at the site under consideration,
a set of negative instances, i.e. of processes that cannot be accepted as a
solution;
B) Functional and operational characteristics of all the components that may be used in the
process: the procedure must possess its own library of units and sub-units that are normally
employed in energy systems. Such a list constitutes the Knowledge Base of the ES about the
"tools" and the "building blocks" the code can use to produce the required output starting
from the prescribed input. For more details, see AI in Component Design and AI in Process
Design. Here, we can say that a component is identified by three main attributes: a set of input
and output streams, a set of "design parameters" that uniquely identify its operation, and a set
of mathematical rules that allow a complete specification of both the inlet and outlet streams
given a certain proper subset of the design parameters (this subset need not be unique).
A specific component is, in AI terms, an instance of a class. Such a class can be divided into
subclasses according to "typical" peculiar characteristic of each subclass. For example, if the
power range is chosen as the discriminating criterion, the class "Heat Exchangers" can be
divided, say, into four subclasses: "less than 1 kW", "between 1 and 100 kW", "between 100
kW and 10 MW", and "above 10 MW". All subclasses exhibit all properties of the class which
they belong to (they are said to inherit them) but can display different values (or ranges) of
the design parameters. Each subclass can be further divided (where feasible) into three types:
"high tech", "state-of-the-art" ("standard"), and "low tech". Continuing with the above
example, since for heat exchangers the technological level is measured by the maximum
admissible temperature, each sub-class can be divided, say, into two further subclasses: "Tmax
≤ 500 K" and "Tmax > 500 K". Notice that the division into sub-classes is by no means unique: thus,
heat exchangers may with an equally valid approach be divided, regardless of the amount of
power they exchange, into "Higher than Tambient", "Lower than Tambient" units, or on the basis of
the type of fluid they process, or of their pressure range, etc.
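The class/subclass organization just described maps naturally onto object-oriented code. The sketch below uses hypothetical Python classes and thresholds purely to illustrate the inheritance mechanism; it is not an actual component library.

    class HeatExchanger:                        # the class: every subclass inherits these
        design_parameters = ("area", "U", "T_max")

    class SmallHeatExchanger(HeatExchanger):    # subclass "less than 1 kW"
        power_range_kW = (0.0, 1.0)

    class LargeHeatExchanger(HeatExchanger):    # subclass "between 100 kW and 10 MW"
        power_range_kW = (100.0, 10_000.0)

    class HighTechLargeHX(LargeHeatExchanger):  # further division by technological level
        T_max = 900.0                           # "high tech": higher admissible temperature

    class StandardLargeHX(LargeHeatExchanger):  # "state-of-the-art" admissible temperature
        T_max = 500.0

    # All subclasses expose the attributes of the class they belong to (inheritance),
    # but display different values (or ranges) of the design parameters.
    print(HighTechLargeHX.design_parameters, HighTechLargeHX.power_range_kW, HighTechLargeHX.T_max)
    print(StandardLargeHX.design_parameters, StandardLargeHX.power_range_kW, StandardLargeHX.T_max)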
The AI procedure is intended to mimic the thinking patterns of the process engineer. It is not
difficult to induce the structure of these patterns: either a solution comes immediately to mind,
or it must be sought after. We can express this formally by saying that there are two logical
possibilities:
1. The problem admits of a solution which is somehow "standard" in the
mind of the engineer;
2. The problem does not admit of an "immediate" solution, and a logical
chain of technical considerations must be deduced to reach a solution.
The first possibility is not of interest for the considerations that we will develop in this
section, but is important for developing the concept of "ES-memory": it is clear in fact that
this immediateness of the solution depends on both the experience of the engineer in the
specific field and on his capacity of "recalling" from his human memory the necessary
technical associations which point to that particular solution.
The point of interest of the second possibility is of course the "chain of technical
considerations": while it is clear that the chain must end with the generation of a process
layout, it is not immediately clear where it starts, i.e., what will be its first (logical) link. At
the risk of overgeneralizing again, a little reflection shows that the chain works backwards,
considering the design goal as an effect and trying to find its cause(s). This procedure is
reported here using the language of relational logic (symbolic or object oriented language, see
AI in Process Design). The similarity between the machine activities as described here below
and the "human" thinking patterns is intentional and is reflected in the selection of words, but
nevertheless the substantial identity of the "human" and "artificial" Designers procedures is
striking.
Given the design goal, the engineer (for an ES, the Inference Engine) examines (for an ES,
scans) the component library to seek the components that would meet the goal. The first
component satisfying the goal is placed in the first slot of the process sketch (for an ES,
working memory) and labeled as the last component of a possible but still to be identified
process P1. If n components are found, there will be n possible "process sketches" (for an ES,
"process trees") on which the procedure has to work. Then, for each Pi (i = 1,...n), the inputs
required by the component just found are taken to be the new design goal, and the library is
scanned again, seeking components whose outputs match the goal. The procedure is
repeated until:
a) All the required inputs are available from the class "environment" and
the configuration developed so far meets all specified constraints. In this
case, a process has been found, and is displayed in a proper form (if more
processes have been found, all of them are displayed);
b) At a certain level in the process tree no match can be found (under the
specified constraints) for the required inputs. The procedure is then
aborted.
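A minimal sketch of this backward search is given below: starting from the design goal, the component library is scanned for a unit whose outputs match the goal, that unit's inputs become the new goals, and the recursion stops with case (a) or case (b) above. The library entries and the environment are invented for illustration.

    # Component library: name -> (outputs it produces, inputs it requires); invented entries.
    library = {
        "steam turbine": ({"electric power"}, {"live steam"}),
        "boiler": ({"live steam"}, {"fuel", "feedwater"}),
        "feedwater pump": ({"feedwater"}, {"raw water"}),
    }
    environment = {"fuel", "raw water"}          # streams freely available from the "environment"

    def synthesize(goals, chain=()):
        # Return one feasible component chain meeting all goals, or None (case b).
        pending = [g for g in goals if g not in environment]
        if not pending:                          # case (a): all inputs come from the environment
            return list(chain)
        goal = pending[0]
        for name, (outputs, inputs) in library.items():
            if goal in outputs and name not in chain:
                result = synthesize(set(pending[1:]) | inputs, chain + (name,))
                if result is not None:
                    return result
        return None                              # case (b): no match, this branch is aborted

    print(synthesize({"electric power"}))        # -> ['steam turbine', 'boiler', 'feedwater pump']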
If the procedure identifies a mismatch, it tries to work around it by keeping track of all
mismatches and checking at each step whether one or more "by-products" or "secondary
streams" accumulated in the assemblage of the (virtual!) plant can be used to force
the match.
Cost considerations can of course be explicitly included in the code, which has a much higher
capability of executing concurrent calculations than the human mind. In addition, a "relative
cost" can be attached to each of the constructed processes, which is a function of the number
of components and of the number and amount of external resources used. So, for instance, a
steam power plant with an externally fuelled feedwater heater train will have a higher "cost"
than the same process with regenerative feedwater heating (notice that this "cost" may be
expressed in non-monetary units). A prototype code has indeed been developed along these
guidelines, and some of the results of its testing are reported in AI in Process Design.

7. Application of the Universal Design Procedure to the Design of Components


A common strategy adopted for the design of components is the so-called Propose-Critique-
Modify method, which can be synthetically described in the following four steps (a sketch of
the corresponding loop is given after the list):
1. Given a design goal, propose a solution. If no proposal, exit.
2. Verify the proposal. If it is verified, exit with success.
3. If unsuccessful, critique the proposal to identify the reason for failure. If no
useful interpretation is found, exit with failure.
4. Otherwise, modify the proposal and go to step 2.
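The four steps translate almost literally into a loop. The sketch below assumes hypothetical helper functions (propose, verify, critique, modify) supplied by the user and fixes only the control flow of the strategy.

    def propose_critique_modify(goal, propose, verify, critique, modify, max_iter=20):
        # Generic Propose-Critique-Modify loop; the four callables encapsulate the
        # component knowledge (for instance, a "Components List" for propose).
        proposal = propose(goal)                 # step 1: propose a solution
        if proposal is None:
            return None                          # no proposal: exit
        for _ in range(max_iter):
            if verify(proposal, goal):           # step 2: verify the proposal
                return proposal                  # verified: exit with success
            reason = critique(proposal, goal)    # step 3: identify the reason for failure
            if reason is None:
                return None                      # no useful interpretation: exit with failure
            proposal = modify(proposal, reason)  # step 4: modify and go back to step 2
        return None

With propose drawing candidates from a components list and modify adjusting, say, a size or a material, the loop reproduces the four steps just listed.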
Steps 1 and 2 together correspond to the selection of a component from a given "Components
List". Steps 3 and 4 correspond to the action an engineer takes when he realizes that his first
choice has failed. The strict correspondence between "human" and "computer" procedures is
again not accidental: computer procedures are, after all, designed by humans! However, on the
basis of this initial success we may investigate whether the general frame of the Universal
Design Procedure outlined in Section 2 above can be adopted to encapsulate Component
Design activities.
Modern component design consists of the following steps:
1. Evaluation of the design goals;
2. Definition of the tasks that must be performed in order to achieve the goal;
3. Definition of the constraints;
4. Definition of the inputs and outputs;
5. Selection of the process technology to be used;
It is easily seen that these steps fall under the umbrella of the general design procedure
outlined in Figure 1: further considerations are presented in AI in Component Design.
8. Expert Assistants for Process Diagnostics and Prognostics
The "monitoring" of a process is a complex task, and has never been really "automated" until
very recently. In several industrial production lines, as well as in several energy conversion
systems, "monitoring systems" are installed that perform a logically secondary function: they
acquire data from the process in real time, possibly process them according to a fixed
paradigm, and display the results onto an (human) Operator Workstation. Such systems are
NOT intelligent: they do not make any decisions, nor do they provide any second-level information to
the operator. The advantages of having an ES perform some logical manipulation on the raw
data are apparent:
1. In case of a sudden fault, the ES can provide a list of possible
causes, giving the Plant Manager the possibility of targeting the
maintenance to one or two units;
2. In case of a creeping fault, the ES can alert the operator and even assist
him in a possible re-scheduling of the maintenance so that the needed
repair/replacement can be done in the course of a programmed shutdown;
3. For a given forecast operating curve, the ES can dynamically re-schedule
the maintenance interventions so that the global output is maximized over
a certain period.
These actions on the part of the ES correspond to (intelligent) Diagnostic, Prognostic and
Process Management capabilities respectively. In spite of their apparent complexity,
Intelligent Process Manager Expert Assistants are not difficult to implement: the central idea
is always that of reproducing the logical path that a human manager would follow. There are
of course some problematic technical issues, which will be discussed in AI in Energy
Systems: Scope and Definitions, but they can be solved with state-of-the-art intelligent shells.
A possible problem is that these ES usually need an on-line simulator to interrogate when the
performance of the plant is outside of pre-defined bounds: such a simulator must obviously
perform in almost-real-time, and this is in some instances a difficult task per se.
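As a toy illustration of the diagnostic and prognostic capabilities above, the sketch below compares a monitored performance index with the value predicted by the (assumed available) on-line simulator and distinguishes a sudden deviation from a slow drift; all thresholds and readings are invented.

    def classify_state(measured, simulated, history, sudden_tol=0.05, creep_tol=0.01):
        # Compare the monitored index with the on-line simulator prediction; 'history'
        # holds the relative deviations observed in the previous scans.
        deviation = abs(measured - simulated) / simulated
        history.append(deviation)
        if deviation > sudden_tol:
            return "sudden fault: list probable causes, target maintenance now"
        if len(history) >= 5 and all(d > creep_tol for d in history[-5:]):
            return "creeping fault: re-schedule the repair for the next planned shutdown"
        return "normal operation"

    history = []
    for measured in [0.882, 0.876, 0.872, 0.868, 0.864, 0.860]:   # invented efficiency readings
        print(classify_state(measured, simulated=0.885, history=history))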
9. Conclusions
The main conclusions of our discussions above may be thus summarized:
i. Process Synthesis is by its own nature a non-deterministic task. In
addition, it is also logically non-linear. This leads to the important
consequence that not all relationships between its activities can be
described in an algorithmic form. Therefore, all deterministic methods are
bound to fail unless they are complemented by some sort of non-
algorithmic reasoning on the part of the human expert.
ii. Expert Systems appear to be the technique of choice to tackle both
synthesis and structural optimization of a process: it must be stressed that
these two activities are in reality tightly linked, and that it just does not
make sense to "design" a process without some "optimization criteria" in
mind.
iii. Component Design is also a strongly non-deterministic task. Its
procedures, though, appear to follow the guidelines of the Universal
Design Procedure derived for Process Design: therefore, it is likely that
ES will perform well in this field as well.
iv. Intelligent Diagnostic and Prognostic Expert Assistants can be developed
for all processes for which a sufficient Knowledge Base exists and whose
causal fault chains, however complex they may be, are known. Load-
curve scheduling appears to be a logical consequence of the successful
implementation of Diagnostic and Prognostic ES.
As a general conclusion, we think that this and the other chapters of this Topic conclusively
show that ES have many possible applications in Engineering Design, from its inception
(choice of the initial Process structure) to its exact definition (choice and design of
Components and Units), to its operation (Diagnostics, Prognostics and Load Programming).
Though there may be at present a sort of psychological resistance on the part of design
engineers and plant managers, who fear they "cannot control" the outcome, there is no doubt
that ES will find more and more extensive application in the field of engineering in the near
future.

Related Chapters

Glossary
Abstraction : Every data set can be represented by two levels of specification: a
functional one, which describes what the objects do (their function), and a
practical one, which describes the actual implementation of that function
(the many elemental tasks that the object performs to accomplish its
function). The logical action of separating the functional from the
practical specification level, and of taking only the former to represent the
object, is called abstraction.
Aggregation level : The logical level at which components or units of a process are
described. At aggregation level 0 each single component is treated as a
black box; 1 is the level at which functional groupings are considered, and
so on. There is always a maximum aggregation level at which the entire
process is treated as a single black box.
Algorithm : A finite set of clear and unambiguous elementary instructions that
accomplish a particular task.
Always-relevant parameters : In a design task, all those parameters that are known to be, or
are thought to be, important in the identification of the properties or the operational
behavior of the system to be designed.
Analogy : A particular type of logical link between two objects whose attributes
may not be the same or be in the same "order", but which can be mapped
onto each other at some higher level: this means that there exists at least
one set of rules capable of representing the relationships among
corresponding relevant attributes of both objects.
AND-tree : A decision tree that possesses only AND nodes.
AND/OR tree : A decision tree that includes at least one AND/OR node.
Approximate reasoning : Logical deduction based on approximate knowledge, that is, on a
knowledge base (KB) expressed by non-exact facts. An approximate KB
ought not to be confused with a fuzzy KB: a characteristic of approximate
statements is that they can always somehow (albeit at times in an awkward
format) be expanded to give origin to "exact" expressions.
Artificial Intelligence : The portion of Computer Science that investigates symbolic and
non-algorithmic reasoning, and attempts to represent knowledge in such a form
that it can be employed to generate instances of machine inference.
Attribute : A property of an object. Said to be characteristic if it is the attribute that
establishes whether the object belongs to a class.
Automatic operation of a plant : The implementation of a series of monitoring and control
devices connected to a central unit that coordinates the flux of information
from and to the processes, and manages them according to either a
predetermined plan ("non-intelligent" or rigid control) or to a general
schedule that attempts to meet some general goals ("intelligent" or flexible
control).
Backward chaining (BC) : A form of reasoning that attempts to validate (or confirm) a fact
(called goal) by scanning the knowledge base (KB) to determine whether the
rules it contains univocally generate the logical possibility of the goal. In
practice, this means that if a fact q is asserted by the KB, every rule that
contains q as the THEN predicate is assumed to be true, and therefore p,
the predicate of its IF, is also true.
Backtracking : When a scanning procedure reaches a "dead branch" (a node N(i,j) which does not
possess suitable successors) in the tree it is exploring, it must resume the search from ("backtrack
to") some previous node N(h,k). If h = i - 1, k = j (that is, N(h,k) is N(i,j)'s predecessor), the
backtracking is called chronological; otherwise, it is procedure-driven (for instance, breadth search
resumes from N(i,j-1), depth search from N(i-1,j), etc.). Backtracking can also be directed by a
logical-connection list, and in this case it is called dependency-directed.
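A minimal sketch of chronological backtracking in a depth-first tree scan (Python; the tree, node
labels, and goal below are hypothetical): when a dead branch is reached, the recursion simply
unwinds to the predecessor and tries its next successor.

    TREE = {                   # hypothetical successor lists
        "root": ["A", "B"],
        "A":    ["A1"],        # A1 will turn out to be a dead branch
        "B":    ["B1", "B2"],
    }
    GOAL = "B2"

    def dfs(node, path):
        path = path + [node]
        if node == GOAL:
            return path
        for succ in TREE.get(node, []):   # no successors -> dead branch
            found = dfs(succ, path)
            if found:
                return found
        return None                       # backtrack to the caller

    print(dfs("root", []))   # ['root', 'B', 'B2']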
Belief : A particular form of approximate knowledge, in which a fact is asserted
not with absolute or quantifiable certainty, but with a vague and non-
quantifiable proposition: "I believe that f is true." Belief can be quantified
by assigning a computable degree of unlikeness to any counter-fact that
would disprove the believed fact.
Blackboard system : A particular form of hierarchically organized expert system (ES), consisting of
a certain number of local ES ("sources"), each handling a portion of the knowledge base and
communicating its conclusions to a "blackboard," which represents the "latest state of affairs" of the
global inferential activity. A dedicated inference engine controls the flow of information and
resolves contradictions between different sources.
Certainty : An event is said to be certain when the probability of its negation is unconditionally
equal to zero. The majority of engineering data are not absolutely certain: this can be accounted for
by attaching to the corresponding propositions a certainty factor 0 ≤ fc ≤ 1, and applying the rules of
approximate reasoning.
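One common convention for propagating certainty factors (borrowed from early medical expert
systems and given here only as a hedged Python sketch, not as the rule set prescribed in this
chapter) takes the minimum over ANDed premises and scales it by the rule's own factor:

    def rule_certainty(premise_cfs, rule_cf):
        # certainty of the conclusion = min over the premises, scaled by the rule
        return min(premise_cfs) * rule_cf

    # Hypothetical data: two facts with cf = 0.9 and 0.7, a rule with cf = 0.8
    print(rule_certainty([0.9, 0.7], 0.8))   # approximately 0.56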
Class : A collection of objects, based on some common attribute of its
components, or on some peculiar relationship between them. Notice that
there may be more than one characteristic attribute and multiple characteristic
relationships.
Clause : The predicate of a logical connective (IF and THEN) in a rule.
Connection matrix (CM) : A process-dependent structured data table containing information about
the connectivity of a process. The CM is in fact a matrix representation of the connectivity graph of
the process.
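For illustration only, a connection matrix for a hypothetical three-component layout (boiler,
turbine, condenser) can be written as a small Python table; entry [i][j] = 1 means that component i
sends a flux to component j.

    components = ["boiler", "turbine", "condenser"]
    CM = [
        [0, 1, 0],   # boiler feeds the turbine
        [0, 0, 1],   # turbine feeds the condenser
        [0, 0, 0],   # condenser has no outgoing flux in this toy layout
    ]
    # Recover the connectivity graph as a list of directed edges:
    edges = [(components[i], components[j])
             for i in range(3) for j in range(3) if CM[i][j]]
    print(edges)   # [('boiler', 'turbine'), ('turbine', 'condenser')]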
Connectivity graph : A graph consisting of points (nodes), each representing an object (or event),
connected by lines that represent the interconnections between pairs of nodes. These interactions
may be logical or physical, and may possess an implicit "direction" (that is, be equivalent to an
arrow) or be direction-indifferent. Sometimes, each connection bears a "cost label," which identifies
the price (monetary, energetic, or functional) that one incurs if that route is taken.
Constraint : Any restriction posed by technological, environmental, legal, or practical
requirements on the configuration of a system. Constraints may limit the
values of some process parameter, or exclude some system configuration.
Criteria for success : A set of attributes of the object to be designed that can be assigned quantitative
values, so that the probability that a particular design will be successful can be assessed.
Data : A collection of facts, relationships, and propositions about a portion of
the Universe.
Data type : A collection of objects and operations to be performed on these objects.
Decision tree : A graphical representation of the set of possible choices for a given
situation. The "tree" can be thought of as a single trunk giving origin to
several branches, each branch having twigs, each twig bearing twiglets, ...
until the leaves (or fruits) are reached.
Declarative knowledge : Knowledge of facts and relationships expressed in propositional form:
"fact A is an object B having the relationship p with the object C". Also called knowing what.
Deduction : A logic procedure that marches forward from the causes to their effects.
Deep knowledge : Substantial knowledge of a problem (which includes a complete
understanding of the premises, a "feel" for the outcome, and a vast
experience of application cases) that implies a qualitative and quantitative
comprehension of the logical chain of reasoning and of the underlying
physical phenomena. An individual possessing deep knowledge about a
field is called an expert in that field.
Degree of membership : The degree to which a value belongs to a fuzzy set. For example, a level
measurement may be 40% positive large and 20% positive medium.
Deterministic programming : Numerical methods stemming from the quantitative modeling of a
portion of the Universe, and the discretization of the model equations. They usually assume that at
least one solution exists for the problem under consideration, and generate solutions that are
strongly model-dependent.
Direct design problem : A problem that requires the engineer to find the "best" solution (usually,
highest possible efficiency and/or minimum production cost) to attain a specified design objective
(the required output) for a given process configuration.
Domain expert : An expert in a specific field related to the domain of an expert system
(ES), whose knowledge must be imparted to the ES.
Embedding : "Containing", in a logical sense. A procedure p is said to embed another
procedure q if every time p is performed, q is too, but not vice-versa.
Notice that p is necessarily at a higher logical level than q.
Encapsulation : Concealing the details of the implementation of an object. For example,
a black box approach encapsulates the actual internal structure of the
component it represents, and projects to the outside user only the function
performed by that component.
Facts : Expressions that describe particular situations. The expressions can be
logical (propositions), symbolic ("p = q"), numerical, or probabilistic.
Feasibility study : One of the subtasks of any design activity. It consists of creating a
preliminary concept configuration for the process, approximately sizing
the main equipment, and executing a preliminary technological and cost
analysis.
Feedback : In the context of this Theme, this word has two meanings. A control
system is said to possess a feedback mechanism when its output (that is,
the controlling action) depends on the value assumed by the controlled
quantity downstream of the control. A process is said to have feedback fluxes if, seen from the
point of view of its main product, the process diagram is not linear, because certain components
receive inflows from downstream (the P&I displays some "feedback loops"). See also "logical
loop."
Forward chaining (FC) : A form of reasoning that attempts to deduce a hitherto "unknown" fact
(called goal) by scanning the knowledge base (KB) to determine whether the data it contains may
generate the logical possibility of the goal. In practice, if a fact p asserted by the KB is the
"IF"-predicate of a rule, then we can infer that q, the "THEN"-predicate of that rule, is also true.
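A matching forward-chaining sketch (Python, using the same hypothetical rule format as in the
backward-chaining example above): every rule whose IF-part is satisfied by the current facts is
fired, and the loop stops when nothing new can be asserted.

    RULES = [
        ({"exhaust_temp_high"}, "fuel_flow_high"),
        ({"fuel_flow_high", "load_low"}, "combustor_fouling"),
    ]
    facts = {"exhaust_temp_high", "load_low"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)     # assert the THEN-predicate
                changed = True

    print("combustor_fouling" in facts)   # True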
Frame : A collection of semantic network nodes and links. A frame is an object
denoted by its name, endowed with some slots, each slot having more
facets: slots and facets may store values, attributes, or relations.
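As a hedged illustration (not a prescribed data structure), a frame can be mimicked in Python by a
nested dictionary whose slots carry facets such as a value and a unit; the component name and slot
names below are hypothetical.

    heat_exchanger = {
        "name": "HX-1",
        "slots": {
            "duty":      {"value": 2.5e6, "unit": "W"},
            "hot_inlet": {"value": 540.0, "unit": "K"},
            "is_a":      {"value": "heat exchanger"},   # a relation facet
        },
    }
    print(heat_exchanger["slots"]["duty"]["value"])   # 2500000.0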
Fuzzy knowledge : A synonym for vague knowledge. Vagueness may be introduced into a
problem by incomplete or incorrect data, incorrectly expressed
relationships, or the use of an inappropriate model.
Fuzzy search : A tree-scanning procedure in which the searching criterion is "fuzzified."
Fuzzification : The process of decomposing a correlated set of knowledge chunks
(usually available in an uncertain or vague fashion) into one or more
qualitative groups called fuzzy sets.
Fuzzy set : A set admitting a non-binary degree of membership. In normal ("crisp")
set theory, an object either is or is not a member of a certain set. In fuzzy
set theory (FST), a degree of membership 0 < dm < 1 can be attached to
each object to express the likelihood of its belonging to a given set: dm is
clearly a particular kind of relationship between a set and its objects.
"Normal" language expressions ("very likely," "highly improbable,"
"almost certain," "more likely than") can be easily quantified by a fuzzy
approach. Therefore, FST is well suited to handling vague knowledge.
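Purely as a sketch (Python, with hypothetical breakpoints), the fuzzy set "temperature is HIGH"
might be given a triangular membership function returning the degree of membership dm for any
crisp temperature value:

    def membership_high(t_kelvin):
        # triangular membership function with assumed breakpoints 500/600/700 K
        if t_kelvin <= 500.0 or t_kelvin >= 700.0:
            return 0.0
        if t_kelvin <= 600.0:
            return (t_kelvin - 500.0) / 100.0
        return (700.0 - t_kelvin) / 100.0

    print(membership_high(560.0))   # 0.6, i.e. the reading is "fairly high"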
Heuristics : A set of rules, often approximate and vague, dictated by intuition,
experience, and judgment. Can also be seen as a consistent but incomplete
set of propositions about a chunk of specific knowledge, for which an
exact description is either unavailable or impossible. Proper application of
heuristics, especially in the early stages of a search process, drastically
limits the solution space.
Hierarchical refinement (HR) : Top-down procedure by which a design plan is implemented in
descending levels of abstraction: first the general plan layout, then a detailed preliminary plan, then
a general activity schedule, then a detailed activity schedule, and so on. HR is, though, a more
general concept that has useful applications in several fields, for example in process synthesis.
Ill-posed : A problem that is not well posed. Also called Ill-structured.
Induction : A logic procedure that marches backwards from the effects to their
causes.
Inference : The derivation of conclusions from premises. Logically expressed in
propositional calculus by "IF [(a AND b) OR c] THEN d" rules.
Inference Engine (IE) : A consistent and ordered set of rules embodied in a shell-like program that
establishes, controls, and instantiates the problem-solving strategy of an expert system.
Instance of a class : A specific object of the class, identified by a quantitative or qualitative
value assigned to one of its characteristic attributes.
Inverse design problem : A problem that requires the engineer to design (i.e., to define and compute
the physical configuration of) a device or a plant given the design objective (the required output)
and an objective function (usually, efficiency or unit production cost).
Knowledge : A qualitative and/or quantitative description of a specific portion of the
Universe, in the form of a collection of data (i.e., of certain sets of facts,
relationships, rules, forecasts, estimates, and numerical expressions).
Knowledge acquisition : The collection and ordering of knowledge. Requires some knowledge at a
higher level than the problem being considered (meta-knowledge).
Knowledge base (KB) : All of the knowledge collected by the knowledge acquisition activity.
Knowledge decomposition (KD) : The subdivision of the knowledge base into elemental information
bits, that is, into the largest possible set of the smallest possible data. KD forms the conceptual basis
of semantic networks.
Knowledge engineer : A person dedicated to the eliciting, gathering, and collecting of knowledge
(typically from domain experts), and to its organization into a knowledge base.
Knowledge representation : Organization of the "raw" knowledge base ("as collected" knowledge)
into a coherent, comprehensive, and workable knowledge base.
Logical loop : A chain of formal propositions p → q → r → ... → z whose last term z is
the premise for the first one p. To avoid being a tautology, a logical loop
must be implemented on instances of propositions, that is, on facts. In this
case, it means that there is a feedback mechanism, driven by the output of
the last fact (component, flux) z, which influences the input of the first
fact (component, flux) p.
Logical system : A system consisting of symbols ("signs," "words") and of a complete and
coherent set of rules ("grammar," "syntax," "semantic") that describe the
correct use of the symbols.
Memory : The ability of an expert system to "remember" facts not originally
contained in its knowledge base, that were constructed, deduced, or
induced in the course of the inference process.
Macro structuring : The process of collecting, under a single expert system, solution procedures for
different problems. Usually, the macrostructure consists of a macro-IE (inference engine) that drives
several specific IE, one for each problem (as in blackboard systems, for example).
Membership function : The curve attached to a fuzzy set, which maps an input (or output) value
onto a corresponding value for the degree of membership.
Meta structuring : The process of devising a single higher-level procedure capable of
solving distinct, but logically similar, problems. The meta-procedure is
likely to be simpler than each of its instances.
Modular simulator : A process simulator that is not dependent on the structure of the process, but
can be applied to whatever structure one can build using the (modular) components and fluxes
contained in its library.
Neural network : A set of nodes (neurons) arranged in layers (conventionally, from left to
right), where each neuron of each layer is connected to all of the neurons
of the previous ("upper") and following ("lower") layer, but not to the
neurons of its own layer. Neurons communicate messages to each other,
and the firing of a message depends on the state of the neuron at the
particular time of firing. If the leftmost level experiences some
environmental change (it is given some input), the information propagates
until it reaches the rightmost level: the modification of the states of this last
layer, taken globally, represents the answer of the neural network to the
initial stimulus.
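A minimal sketch of the left-to-right propagation described above (Python; the layer sizes, weights,
and activation function are arbitrary choices, not values taken from this chapter):

    import math

    def layer(inputs, weights, biases):
        # one output neuron per row of 'weights'; tanh is an assumed activation
        return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    x = [0.2, -0.5]                               # environmental change (input)
    hidden = layer(x, [[0.4, 0.1], [-0.3, 0.8]], [0.0, 0.1])
    output = layer(hidden, [[1.0, -1.0]], [0.0])  # rightmost layer = the answer
    print(output)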
Objective function : The formalization (not necessarily in numerical form) of the criteria for
optimality of a solution.
Object : A computational structure that contains data, data structures, and related
procedures. This means that an object can operate on itself, because it
contains both the data and the operational information about how to
process them.
Optimization : The mathematical procedure that leads to the quantitative minimization or
maximization of a certain expression of the state parameters of the process under consideration,
called the procedure's objective function. The set containing the values of the state parameters that
extremize the objective function is called the optimal solution set. In qualitative terms, optimization
is the construction of the "best possible" solution to a given problem.
OR-tree : A decision tree that possesses only OR nodes.
Paradigm : An underlying principle of organization that constitutes the conceptual
basis for a programming language or a solution procedure.
Possibly-relevant parameters : State parameters of a process that are not essential to its description,
but may become essential as a consequence of external circumstances (imposition of constraints,
activation of a correlation with other relevant parameters, and so on).
Predicate calculus : A pseudo-language consisting of logical predicates (facts describing
objects and their relationships) joined by a small and fixed set of
connectives (AND, OR, IF...THEN, EQUAL TO, LARGER THAN, FOR
ALL...THEN) that establish mutual correlations between predicates. The
only form discussed in this article is the first order predicate calculus, in
which objects may assume quantitative values, but predicates cannot.
Primitive problem : The form in which an engineering problem is first formulated. It is usually
ill-structured (that is, vague, incomplete, with a non-precisely formulated objective function, and
with unpredictable knowledge of its solution).
Probability of success : Qualitative or quantitative likelihood that a product is produced within the
schedule and budget, meets the objectives, and corresponds to the criteria for success preset for it.
Procedural knowledge : Knowledge of the inference procedure to apply to a certain data set to reach
certain conclusions. Also called knowing how.
Procedure : A set of subtasks with a predetermined interconnection, whose correct
execution leads to the fulfillment of a task or project.
Process : A series of transformations that modify the physical state of some
specified amount of matter.
Process simulator : A computer code that deterministically computes the
instantaneous state of a system on the basis of the local properties of the
working media and of the operating conditions of its components.
Propositional calculus : A pseudo-language consisting of logical propositions (assertions about
objects and their relationships) joined by the four fundamental connectives (AND, OR, IF...THEN,
EQUAL TO), which establish mutual correlations between these propositions. Propositional
calculus is not quantifiable: its goal is to derive logically necessary conclusions from a set of
premises.
Property : An attribute of an object.
Prototype : A working instance of a computational code, usually less complex than
the finished product, but having all of its essential features (specifically
the inference engine). Generally, the prototype version has the same
problem-solving depth as the final version, but much less breadth (that is,
it can handle a much smaller solution space).
Qualitative reasoning : A form of reasoning used in process modeling. It is based on non-
quantitative concepts, like analogy, similarity, asymptotic behavior, interactions, order-of-magnitude
values, relationships, structure, and functionality.
Recursive algorithm : An algorithm A that calls on itself during its own execution. Recursion can
be direct, if the link is of the type "call (A)", or indirect, if the link is of the type "call (B)", and B
contains a "call (A)" statement.
Relational programming : Non-numerical methods resulting from the qualitative modeling of a
portion of the Universe, and the conceptual representation of the context in which the problem
arises. They do not assume a priori that a solution exists, and generate models for the solutions
rather than the solutions themselves.
Relationships : Connections between objects.
Rules : Logical constructs of the form "IF p THEN q", where both p and q are
propositions or instances of propositions.
Search methods : Numerical or qualitative techniques that scan a certain number of
alternatives to find the most desirable one. The search can be performed
on layered structures (decision trees), in which case the search criterion
may be local ("among the successors of a node, proceed to the one with
the most convenient value of a certain state parameter") or global ("choose
the path composed of the nodes and branches such that the overall value
of a certain function of both a state and a path parameter is the most
convenient"). Global searches are obviously more difficult, and special
methods have been devised to limit the number of nodes explored by the
scanning procedure.
Semantic network : A set of nodes and their connections: each node represents an object, and each
connection expresses a formal relationship between objects, chosen from a list of "allowable"
relationships.
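As an illustrative sketch only (Python, with hypothetical node and relationship names), a semantic
network can be held as a list of (node, relationship, node) triples drawn from a fixed set of allowable
relationships:

    ALLOWED = {"is_a", "part_of", "feeds"}
    NETWORK = [
        ("turbine",   "is_a",    "expander"),
        ("turbine",   "part_of", "gas_turbine_plant"),
        ("combustor", "feeds",   "turbine"),
    ]
    assert all(rel in ALLOWED for _, rel, _ in NETWORK)
    # Which components feed the turbine?
    print([a for a, rel, b in NETWORK if rel == "feeds" and b == "turbine"])   # ['combustor']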
Shells : A domain-independent framework of rules connected to form a very
general inference engine with specific problem solving capabilities. A
shell is used by implementing its "reasoning" on a set of specific (i.e.,
domain- and problem-oriented) knowledge added modularly to the
knowledge base of the shell.
Symbolic reasoning (SR) : The application of manipulation rules (syntax) to a set of pre-defined
symbols. SR follows the rules of predicate calculus.
Taxonomy : A systematic set of rules used for classification purposes. Given a
database composed of various chunks of knowledge and raw data,
knowing its taxonomy means, for example, being able to partition the
database into distinct but correlated subsets, in each of which a different
inference engine may be applied.
Thermal System : A collection of orderly interconnected devices/components whose goal is
that of performing an energy conversion, usually from chemical and/or
thermal into mechanical and/or electrical and/or thermal.
Thermo-Economics : A second-law-based cost optimization technique. The costs (in monetary
units) are computed with the aid of entropic and exergetic considerations.
Weighted search : A search process in which each branch of the decision tree carries a
"weight" or "penalty" function:the objective of the search is to find the
path(s) for which the sum of the weights on the n branches constituting the
path(s) is minimal.
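A small hedged sketch of a weighted search over a decision tree (Python; node names and branch
weights are hypothetical): all root-to-leaf paths are enumerated and the one with the minimal sum
of branch weights is kept.

    TREE = {                     # node -> list of (successor, branch weight)
        "root": [("A", 4.0), ("B", 2.0)],
        "A":    [("A1", 1.0)],
        "B":    [("B1", 5.0), ("B2", 1.5)],
    }

    def best_path(node, cost=0.0, path=("root",)):
        successors = TREE.get(node, [])
        if not successors:                 # a leaf: this path is complete
            return cost, path
        return min(best_path(s, cost + w, path + (s,)) for s, w in successors)

    print(best_path("root"))   # (3.5, ('root', 'B', 'B2'))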
Well-posed : In a strict mathematical sense, a problem is said to be well-posed if it has
one unique solution, and this solution depends continuously on the
relevant data of the problem (boundary and initial conditions, values of the
coefficients, and so on).
Well-structured : A problem that can be described in terms of numerical variables,
possesses a univocally and well-defined objective function, and admits of
an algorithmic routine for its solution.

Bibliography

A.Bejan, G.Tsatsaronis, M.J.Moran (1996): Thermal Design and Optimization, J.Wiley & Sons [A fundamental
work, complete and clearly written, covering the engineering side of the design of Thermal Systems]

M.De Marco et al.(1993): COLOMBO: an expert system for process design of thermal powerplants, ASME-AES
vol.1/10 [The first completely documented Intelligent Process Synthesizer]

R.A.Gaggioli, S. Qian, D.A. Sama (1989): A Common-Sense Second Law Approach for Improving Process
Efficiencies, Proc. TAIES 1989, Beijing, International Academic Publishers-Pergamon Press [An invaluable
source of "procedural thinking", useful in the development of Expert procedures]

M.Green (Ed.) (1992): Knowledge Aided Design, Academic Press, 257 p. [A collection of specific topics in
direct AI applications to design. For specialists]

T.Gundersen, L.Naess (1988): The synthesis of cost optimal Heat Exchanger Networks: an industrial review of the
state of the art, Comp. Chem. Eng., vol.12, no.6 [A much valued reference: though outdated, it is a very useful
review paper on HEN Design techniques]

E.C.Hohmann, F.J.Lockhart (1976): Optimum Heat Exchanger Network synthesis, Proc. AIChE Meet., Atlantic
City [The original work (after Hohmann's Doctoral Thesis) that laid the foundations of modern HEN Design]

A.S.Kott, J.H. May, C.C. Hwang (1989): An autonomous Artificial Designer for thermal energy systems, ASME
Trans., Oct. 1989 [One of the first Process Synthesizers]

B.Linnhoff (1997): Introduction to Pinch Analysis, in "Developments in the Design of Thermal Systems",
R.Boehm (Ed.), Cambridge U.Press [A mature description of Pinch and Supertargeting methods]

M.Maiorano, E.Sciubba (2000): HENEA: an exergy-based Expert System for the synthesis and optimization of heat
exchanger networks, Int. J. Appl. Thermod., vol.3, no.2 [The first ES-based HEN Design procedure]

N.Metropolis et al. (1953): Equation of state calculations by fast computing machines, Jnl. of Chemical Physics,
vol. 21 [The original description of the sampling algorithm on which the Simulated Annealing method is based]

M.J.Moran (1997): Second Law applications in Thermal System Design, in "Developments in the Design of
Thermal Systems", R.Boehm (Ed.), Cambridge U.Press [Another valuable source of "expert human thinking" in
the Design of Thermal Systems]

A.Newell (1990): Unified theories of cognition, Harvard Univ. Press [An epistemological explanation of the
basis of AI techniques, including (but not limited to) Expert Systems]

L.T.Ngaw (1998): HEN synthesis using HENCALC and Second Law insight, M.S. Thesis, U. Mass. at Lowell
[An extraordinary example of the extremely sophisticated level at which exergy thinking can be used in HEN
Design]

B.Paoletti, E.Sciubba (1997): Artificial Intelligence Applications in the Design of Thermal Systems, in
"Developments in the Design of Thermal Systems", R.Boehm (Ed.), Cambridge U.Press [One of the first
systematic analyses of the field of AI applications to Thermal Design]

P.Y.Papalambros, D.J.Wilde (1988): Principles of optimal design: Modeling and Computation, Cambr. U. Press [A
discussion on the concept and practice of Engineering Optimization]

G.V.Reklaitis, A. Ravindran, K.M. Ragsdell (1983): Engineering Optimization, J.Wiley [A sort of
handbook on Engineering Optimization, with many examples of applications]

D.A.Sama (1995): The Use of the Second Law of Thermodynamics in Process Design, JERT, vol. 117, no.9
[Another valuable source of "expert thinking" for Process Engineers]

E.Sciubba (1998): Toward Automatic Process Simulators: Part II - An Expert System for Process Synthesis, J.
Eng. for GT and Power, vol.120, no.1 [Contains a systematic discussion of a second-generation Expert Process
Synthesizer]

R.D.Sriram (1997): Intelligent Systems for Engineering, Springer Verlag [A fundamental reference for AI
practitioners. Many techniques explained in detail, many worked-out examples]

W.F.Stoecker (1980): Design of Thermal Systems, McGraw Hill [An old, but still valid textbook. To be used as a
source of "expert advice" for the Design of Thermal Systems]

Biographical Sketch

Enrico Sciubba (born July 11, 1949) is a Professor in the Department of Mechanical and Aeronautical
Engineering of the University of Roma 1 "La Sapienza", in Roma, Italy. He received an M.Eng. degree in
Mechanical Engineering from the University of Roma in 1972. After working for two years (1973-75) as a Research
Engineer in the Research & Development Division of BMW, Munich (Germany), he returned to the University
of Roma as a Senior Researcher (1975-1978). He then enrolled in the Graduate School of Mechanical
Engineering, majoring in Thermal and Fluid Sciences, at Rutgers University, Piscataway, NJ, USA, where he
was granted a Ph.D. degree in 1981. He joined the Department of Mechanical Engineering of the Catholic
University of America, in Washington DC, USA, as an Assistant Professor in 1981, and worked there until 1986,
when he returned to the University of Roma 1, first as a Lecturer, then as an Associate Professor, and finally as a
Full Professor. He holds the Chair of Turbomachinery and also lectures on Energy Systems, at both the
undergraduate and graduate levels. In 1999 Dr. Sciubba was elected a Fellow of the American Society of
Mechanical Engineers. In
2000, he received an Honorary Doctoral title from the University Dunarea de Jos of Galati (Romania). His
research is related to CFD of Turbomachinery, to Exergy Analysis, and to Artificial Intelligence applications in
the design of Energy Systems. His publications include more than 40 archival papers, over 150 articles in
international conferences, one book on Turbomachinery (in Italian) and one on Artificial Intelligence (in
English).

To cite this chapter


Enrico Sciubba, (2005), ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS IN ENERGY
SYSTEMS ANALYSIS, in Exergy, Energy System Analysis, and Optimization, [Ed. Christos A. Frangopoulos], in
Encyclopedia of Life Support Systems (EOLSS), Developed under the Auspices of the UNESCO, Eolss
Publishers, Oxford, UK, [http://www.eolss.net] [Retrieved July 20, 2007]
