
Distinguishing Ontological Frameworks

By Philip Boxer

Introduction
In his paper on the lightweight enterprise, Richard Veryard uses stack architectures to give an account
of the relationships between the various service platforms offered by businesses, and the ultimate
customers of those services [1]. He has the following to say about the height within a stack at which a
business positions itself:
“One reason for the difficulty comes from the complexity of the business environment, which
generates complexity in the business stack. The height and configuration of each platform is a
difficult strategic question: too low and you leave a value deficit, too high and you lose the
economies of scale or scope, too inflexible and you can’t respond to change.”
The stack architecture is a way of aligning supply-side services to users’ demand-side needs, and just as SOAs help align services to users’ needs, so too does mashup technology as a way of generating lightweight solutions to users’ problems. In general, however, this approach raises a number of questions about the nature of the stack itself:
“How many layers? How can this geometry be both adapted and adaptable? What is the
appropriate granularity in each layer? What is the rate of change within each layer? How much
coupling is there between layers? What are the trust and security requirements in each layer?
What is the appropriate technology for each layer?”
Richard offers a number of viewpoints from which these questions might be answered, including those
of integrating processes or content, of structures of service delegation or of desirable effects for the
ultimate customer. Implicit in all of these is the ontology underlying the way the stack is built.
In his later paper on Web 2.0 [2], he goes slightly further by clarifying the differing demands on supply-
side and demand-side ontologies:
“The supply side needs to be based on a fairly stable canonical data model, which provides integrity
across the architecture. The demand-side may be based on a highly dynamic emergent ontology,
produced using such technologies as tagging and the semantic web. This is sometimes known as
‘Folksonomy’.1”
There has to be a relationship between the canonical and emergent ontologies associated with a stack.
So how are we to think about this relationship? This paper argues that there are different kinds of
strategy for relating the supply-side and demand-side, which make different kinds of assumption about
the ontological frameworks needing to be supported by the stack architecture.

1 See “Explaining and Showing Broad and Narrow Folksonomies”, Thomas Vander Wal, http://www.personalinfocloud.com/2005/02/, Feb 2005. For a fuller commentary see “Collaborative thesaurus tagging the Wikipedia way”, Jakob Voss, Wikimetrics research papers, Vol 1 issue 1, April 2006. Essentially, a demand-side ontology emerges from a collaborative tagging process.

Types of ontological framework
In addressing the question of ontologies and the nature of the stacks within which they emerge, a good
starting-point is the following on semantics [my italics]:
“On the way to information systems integration in an ever growing distributed world,
developers come face to face with one of the earliest and most venerable disciplines:
semantics. SOAs, armed with suitable descriptive languages and powerful reasoning
algorithms, provide a solid and standardized basis that facilitates the design and
implementation of semantics-enabled IT infrastructures. Nevertheless, at the current state of
the art, although service-oriented infrastructures are rapidly becoming a reality, the adoption
of specific frameworks to address the semantic layer is still at an early stage. Many kinds of
difficulties hinder the acceptance of semantic-oriented artifacts, technologies, methodologies,
practices, and standards. Above all, we believe that the lack of awareness of what semantics is,
why it is important, and how it could be modeled constitute the most significant obstacles in its
application to the semantic layer of service-oriented infrastructures.”[3]
This paper defines ontology as “a representation of a set of concepts within a domain and the
relationships between those concepts, used to define the domain and to reason about its properties.”2
Viewed from the perspective of formal semantics, this approach to defining ontology assumes that
meaning can be defined strictly in terms of some combination of behaviors, referents and axioms3 in a
way that can be made speaker-independent if not domain-independent. In their paper on why it is that
standards are not enough in managing interoperability, Lewis et al [4] construct the following stack to
point out the limits of standards:

layer 4: Organizational Interoperability (shared understanding of organizational processes)
layer 3: Semantic Interoperability (shared understanding of meaning)
layer 2: Syntactic Interoperability (language syntax)
layer 1: Machine Level Interoperability (lexis)

Figure 1: Why standards are not enough
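The definition of ontology quoted above – a set of concepts, the relationships between them, and reasoning over their properties – can be given a minimal concrete form. The sketch below is illustrative rather than drawn from the paper; the banking concept names echo the balance-enquiry example used later, and the `is_a` relation stands in for whatever relationships a real domain would define.

```python
# A minimal sketch of an ontology as concepts plus relationships,
# with a simple reasoner over them. Concept names are illustrative.

class Ontology:
    def __init__(self):
        self.is_a = {}  # concept -> set of direct parent concepts

    def add(self, concept, parent):
        self.is_a.setdefault(concept, set()).add(parent)

    def ancestors(self, concept):
        """Reason about the domain: all concepts that subsume `concept`."""
        seen, stack = set(), [concept]
        while stack:
            for parent in self.is_a.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

banking = Ontology()  # the domain of a single (type I) framework
banking.add("balance_enquiry", "account_service")
banking.add("account_service", "banking_service")

print(sorted(banking.ancestors("balance_enquiry")))
# ['account_service', 'banking_service']
```

Even this toy reasoner shows the formal-semantic stance: meaning is exhausted by the stated concepts and relations, with nothing said about the organizational context (layer 4) in which the definitions arose.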

Essentially, ontologies are defined within the first three layers of this stack, with behaviors defined by
the content of layer 1, referents by layer 2 in relation to layer 1, and axioms by layer 3 in relation to layers 2
and 1, but ultimately by the way all three layers reference layer 0 – the world itself. This leaves open
the question of how the definition of these layers is embedded within a particular organizational context
(layer 4). An examination of OWL-S has shown the limitations of this process of definition, and its
ultimate dependency on the social context within which definitions are situated [5]. This dependency is

2 Adapted from http://en.wikipedia.org/wiki/Ontology_(computer_science).
3 These are operational, denotational and axiomatic semantics respectively – see http://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages

not only on the definition of the domain itself, but also on the scope for social agreement on any
hypothesized definitions about the ontology of that domain. Practically speaking, this means that such
an ontological approach is limited, at best, to the first three layers, and even there limited in its scope.
So what does this mean for the stack architecture?
The approach taken here is first to consider the organizational layer 4 to be discursive in nature,
understood in terms of discourse semantics [6, 7]. This discursive layer is then placed within a
pragmatic situation in which some particular effect is being generated in relation to some context of use
associated with the customers of the organization. This adds layers 5 and 6, layer 5 being a constraining
of layer 4 according to the pragmatics of addressing layer 6 [8] (see Appendix I for definitions of the
terms used in Figure 2):
layer 6: Context-of-use (the context in which the effect is experienced)
layer 5: Pragmatic Interoperability (the way the situation is engaged with)
layer 4: Discursive Interoperability (shared understanding of organizational processes)
layer 3: Semantic Interoperability (shared understanding of meaning)
layer 2: Syntactic Interoperability (language syntax)
layer 1: Machine Level Interoperability (lexis)

Layers 1-3 fall within the scope of formal semantics; arrows 1, 2 and 3 in the figure mark the type I, II and III ontological frameworks discussed below.

Figure 2: The layers of a stack architecture

Layers 1-3 are definable in terms of a formal semantics, represented by the ‘pyramid’ in Figure 2. There
can be any number of these pyramids, depending on how they have been defined by their systems, but
each one corresponds to a framework based on layers 1-3. This is a type I ontological framework
(arrow 1 in Figure 2). An example would be a balance enquiry service provided by a bank. The
relationships established between these type I ontological frameworks that are specific to an
organization then define a type II ontological framework (arrow 2 in Figure 2). An example would be a
mashup combining a booking service, a credit card charge, a hotel availability service and a mapping
service.4 Such a mashup would support a particular customer’s need to link together systems in order to
make a booking that satisfies their need for a weekend break. Note that this is a type II ontological
framework which leaves the customer with the task of managing the pragmatics of determining the
usefulness of the outputs of the service.
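The weekend-break mashup described above can be sketched as a composition of independently defined services. All four service functions below are hypothetical stubs standing in for external APIs; the point is only the shape of the type II composition.

```python
# A sketch of the weekend-break mashup: a type II composition wiring
# together independently defined (type I) services. All four service
# functions are hypothetical stand-ins for external APIs.

def hotel_availability(city, date):
    return [{"hotel": "Grand", "price": 120}]          # stub

def map_service(city):
    return {"city": city, "lat": 51.5, "lon": -0.1}    # stub

def booking_service(hotel, date):
    return {"booking_ref": "BK-001", "hotel": hotel, "date": date}  # stub

def charge_card(card, amount):
    return {"charged": amount, "card": card[-4:]}      # stub

def weekend_break_mashup(city, date, card):
    """The mashup fixes how the services combine; the customer is still
    left to judge whether the combined output is useful (the pragmatics)."""
    hotels = hotel_availability(city, date)
    choice = min(hotels, key=lambda h: h["price"])
    booking = booking_service(choice["hotel"], date)
    payment = charge_card(card, choice["price"])
    return {"booking": booking, "payment": payment, "map": map_service(city)}

result = weekend_break_mashup("London", "2008-06-14", "4111111111111111")
```

Note that nothing in the composition itself checks whether the result satisfies the customer's need for a weekend break – that judgment stays outside the type II framework.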
Finally, the relationship between these type II frameworks and their particular pragmatics of use by the
customer defines a type III ontological framework (arrow 3 in Figure 2): the way the service is delivered
to the customer is mediated by data that is specific to his or her needs, as we find in social mashups, but
also in situational applications such as supporting a particular doctor’s clinic. In each case, the
ontological framework is defined not only by the formal semantics of a given service (which may take

4 http://www.programmableweb.com/mashups/directory shows a directory of mashups in the public domain.

the form of an API), but by the way that ontology is then modified by constraints introduced by
discursive and pragmatic considerations.
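The doctor's-clinic example can be sketched to show how a type III framework lets the context-of-use (layer 6) constrain what a generic service returns. The patient records and the clinic's filtering rule below are illustrative assumptions, not taken from any real system.

```python
# A sketch of a type III situational view: the context-of-use constrains
# the ontology of a generic (type I) service. Records and the filtering
# rule are illustrative assumptions.

def patient_record_service():
    """A generic service output, defined without reference to any clinic."""
    return [
        {"patient": "A", "condition": "asthma", "clinic": "respiratory"},
        {"patient": "B", "condition": "eczema", "clinic": "dermatology"},
    ]

def situational_view(records, context):
    """Constrain the service by the pragmatics of one particular clinic:
    only the records meaningful in this context-of-use are surfaced."""
    return [r for r in records if r["clinic"] == context["clinic"]]

view = situational_view(patient_record_service(), {"clinic": "respiratory"})
```

The filtering rule lives with the situation, not with the service: the service's formal semantics are unchanged, but what its output means is settled by the context-of-use.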

Ontological Framework 1.0


In its use of the ATAM [9], the Mitre Corporation found it “a useful method for assessing a software
architecture against known quality criteria established by the contract and the stakeholders” [10]. The
overall approach used by the ATAM is as follows:

Figure 3: The ATAM Method

Essentially the method asks a number of questions (quoting from the Mitre paper):
What are the driving architectural constraints, and where are they documented; what
component types are defined; what component instances are defined by the architecture; how
do components communicate and synchronize; what are the system’s partitions; what are the
styles of architectural approaches; what constitutes the system infrastructure; what are the
system interfaces; what is the process/thread model of the architecture; what is the deployment
model of the architecture; what are the system states and modes; what variability points are
included in the architecture; how far along is the architecture’s development; and what is the
documentation tree?
These are good questions to ask of any system within the context of the requirements that it is designed
to support, and reflect an approach that is ultimately going to be able to establish the formal semantics
of the system. Even without reaching that degree of formalism, however, examining the quality
attributes of the way a system meets its requirements tells a great deal about the way the architecture
is able to support demands to provide new services, for example:
Flexibility – compatibility with other systems; Scalability – number of links supported;
Modularity – framework architecture; Interoperability – with external interfaces; Extensibility –
ability to accommodate growth and changes in future increments; Consistency – easy to use
consistent HMI; Portability – Linux, NT platforms; Reliability – ensure compliance and provide
metrics on progress to meet availability; and Producibility – produce different configurations of
the final system to satisfy operational conditions.
But it is still a single system being examined within a Type I ontological framework. What are the
challenges that have to be faced when dealing with a system of systems environment in which a single
organizational model cannot be assumed for the way services are to be combined?

Ontological Framework 2.0
A mashup is “a Web technology-based lightweight composite application created by sourcing
capabilities from established content and systems functionality” [11]. The architecture supporting
mashups has the following characteristics [12]:

Figure 4: Mashup reference architecture

The normal user interface to a Type I ontological framework is the yellow layer at the bottom. Added to
this are the layers associated with the Community Environment across whom mashups are being
supported. Thus, continuing to quote from Gartner [12], these mashups have the following
characteristics:
Lightweight composite applications using Web Oriented Architecture (WOA); Content or
Functionality is Sourced from Existing Systems; Result is an explicit mixture of sourced content
and functionality; Presentation layer Web Browser-based Integration. Noninvasive and
Emergent; Opportunistic vs Systematic Applications; Dynamic and Tactical.
The core benefits expected from their use include faster application development, heavy reuse of
existing capabilities, achievable composite applications, and the promise of user-driven application
development.

The importance of shared discursive practices – common sense


There are a number of suppliers active in this field [13] and a number of issues that have to be
considered in preparing for their use, most important of which is “the difficulty of determining where
the enterprise stops and the Web begins” [14]. From the point of view of the web user, this is a
question of how the enterprise is defined. It is no accident that the first generation of mashups have
formed ecosystems of applications that cluster around such applications as Amazon, Google and MSN.
The following landscape shows this, based on data from programmableweb [15]:

Figure 5: Interoperability landscape for mashups

These mashups define Type II ontological frameworks because the ontology of each individual (Type I)
service is sufficiently commonsensical to allow their layer 4 organization to be imposed in a way that is
independent of the way the mashup is used in particular situations [16-18]. These are the ‘emergent
ontologies’ associated with folksonomies (see footnote 1 above), since while the different combinations
of service will reflect differing pragmatics, the overlapping patterns of mashup will reflect emergent
forms of social (discursive) organization across the population of mashup users.
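The way overlapping patterns of mashup reveal emergent (demand-side) organization can be made concrete: services that are repeatedly combined come to cluster together, producing associations by use rather than by design. The mashup list below is an illustrative assumption.

```python
# A sketch of an emergent ontology surfacing from overlapping mashup
# compositions: count how often pairs of services are combined.
# The mashup list is an illustrative assumption.
from collections import Counter
from itertools import combinations

mashups = [
    {"maps", "hotels", "booking"},
    {"maps", "photos"},
    {"maps", "hotels", "reviews"},
]

cooccurrence = Counter()
for services in mashups:
    for pair in combinations(sorted(services), 2):
        cooccurrence[pair] += 1

# ('hotels', 'maps') occurs twice: an association between the two services
# produced by the pattern of use across the mashup population.
print(cooccurrence.most_common(1))
# [(('hotels', 'maps'), 2)]
```

This is the same mechanism as a folksonomy (see footnote 1): the structure is not designed into any one mashup, but emerges across the population of users.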

Planning for situational applications


IBM refers to mashups as a new breed of situational application [19] that solves business
challenges that were low priority or unaffordable from a pan-enterprise perspective. This is the
challenge of the long tail [20], where applications become increasingly one-off as they adapt to the
specific (pragmatic) needs of the user. We are therefore dealing with the economies of alignment
brought about by web-enablement (see Appendix II for definitions of these economics, contrasted with
those of scale and scope). This is the world of end-user computing, whose distinguishing characteristics
are user communities of interest that also characterize multi-sided markets [21]. This creates what IBM
refers to as a Situational Applications Environment (SAE) that combines internal and external (to the
enterprise) services in situational applications that have important implications for (community-based)
governance, tools and infrastructure.
Thus for an enterprise to design for leveraging web-based services and content as resources for
mashups, it needs to be able to consider a number of questions:
Requirements - what mashups will be valuable to the enterprise; Design - how are systems to be
configured, and what enabling technology and standards are to provide the best platform;
Governance - how is lifecycle management to be applied to the community of mashups
themselves; Security - there has to be a systematic security policy and technology layer that will
protect value; Deployment - how will technology link with the governance and security needs;
and Testing - how are composite applications to be tested within the above context?

All of these questions imply taking a demand-side view of the services and their potential variety of
uses. And as long as the ontologies of the individual services overlap sufficiently across the relevant
community of users, then this will create value beyond developing applications one-by-one. IBM argues
that under these conditions, these situational applications need three distinct types of role [22]:
• Mashup enabler: The mashup enabler writes the widgets and adds them to the catalog. They
dialogue with the assemblers to anticipate their needs and add the appropriate services both
retroactively and proactively. This is usually an individual from the IT department or someone
with sufficient technical skills to write software.
• Mashup assembler: This is typically a nonprogrammer who is a line-of-business user or subject
matter expert. The mashup assembler builds mashups by wiring together the mashup
consumables that have been created by the mashup enabler.
• Knowledge workers: This is the community that uses the application for its intended purpose
and employs community mechanisms on the application, such as ratings and comments, to
provide feedback so the application can be improved on the next iteration.
The primary focus of concern shared by these roles is data fusion: the bringing together of processes
and data in a way that is relevant to the particular situation being addressed – creating understanding
and insight into the nature of the situation itself. This is the underlying issue presented by situational
applications, relating to how the ontological framework is defined. Thus IBM points out [my italics]:
“Qualitative surveys suggest that the number one enterprise IT concern today is data
integration within the enterprise virtual organization. (In this context, I use the term virtual
organization to mean a composition of federated business units, each contained within its own
administrative domain.) Like many enterprise IT managers who find themselves up to the task
of integrating legacy data sources (for example, to create corporate dashboards that reflect
current business conditions), mashup developers are faced with the analogous challenges of
deriving shared semantic meaning between heterogeneous data sets. Therefore, to get an idea
for what mashup developers have in store, you need look no further than the storied
integration challenges faced by enterprise IT.” [23]
This is the challenge of ‘data fusion’. Once the user goes beyond the position of shared common sense
that characterizes the world of collaborating enterprises, whether that common sense is particular to an
enterprise, or more widely shared, the user must consider the challenges of composing data and
processes from disparate sources.
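The composition challenge the quotation describes can be sketched directly: two sources describe the same observations under different field names and units, so fusing them requires an explicit, manually supplied mapping rather than anything automatic. The field names and the unit conversion below are illustrative assumptions.

```python
# A sketch of the data fusion problem: heterogeneous sources need an
# explicit, manually supplied mapping of names and units before they
# can be composed. Field names and units are illustrative assumptions.

source_a = [{"site": "N1", "temp_f": 68.0}]      # temperatures in Fahrenheit
source_b = [{"location": "N1", "temp_c": 21.0}]  # temperatures in Celsius

def fuse(a_rows, b_rows):
    """Manual mapping: 'site' <-> 'location', with temp_f converted
    to Celsius so the two readings become comparable."""
    b_by_key = {r["location"]: r for r in b_rows}
    fused = []
    for r in a_rows:
        other = b_by_key.get(r["site"])
        if other:
            fused.append({
                "site": r["site"],
                "temp_c_a": round((r["temp_f"] - 32) * 5 / 9, 1),
                "temp_c_b": other["temp_c"],
            })
    return fused

print(fuse(source_a, source_b))
# [{'site': 'N1', 'temp_c_a': 20.0, 'temp_c_b': 21.0}]
```

The mapping here is ad hoc and subjective in exactly the sense the conclusion later describes: nothing in either source's formal semantics establishes that `site` and `location` denote the same thing.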

Ontological Framework 3.0 – managing data fusion
The GAO report on Geospatial Information [24] had the following to say on the multiple challenges of
bringing together geospatial data in a way that was relevant to Wildland Fuels and Fire Management:
1. Geospatial data is not consistently available and not compatible across different agencies, states
and local entities.
2. Agencies are developing multiple duplicative systems, many of which are not interoperable.
3. There are inadequate infrastructures for accessing and manipulating data.
4. There exist major differences in the quality of GIS know-how available locally.
5. There is low awareness within the Wildland Fuels and Fire community of the new products and
services available.
The recommendation the GAO report made was to develop an enterprise architecture. Such a Type I
framework would have had a significant impact on 1, 2 & 3 above, and insofar as it was deployed in
a way that supported web-enabled use (such as through an SOA approach), it would also have begun
to meet the demands of the community in 4 & 5 above, at least for reporting and visualization purposes.
The GAO recommendations were slow in being implemented by most of the federal agencies who
managed land, but standardization and interoperability were facilitated by the fact that most people
used ESRI software5. But an enterprise architecture alone would not have been able to support the
situational needs of this community, given the multiple enterprises involved (see Appendix III for a
summary of the limitations of an enterprise architecture exemplified by Zachman).

The distinguishing characteristics of data fusion


The distinguishing characteristic of the forms of data fusion being undertaken by the Wildland Fuels and
Fire community was the presence of allometric scaling across the inputs and outputs of its models (see
Appendix IV). This was indicative of the complexity in the systems being observed, and was what
prevented the use of a general ontology (qua common sense) in how data could be fused across models.
It was this complexity that challenged the universal approach to ontology that characterizes the Type I
and Type II ontological frameworks.
Related to the need for allometric scaling was the nature of the scales themselves. To assert the
commensurability of two systems, there must be a scaling relation between measurements of the two
systems. For the purposes of data fusion, a criterion of commensurability needs to be established
between the input and output variables mediated by a particular system or model – that each system or
model must establish commensurability across its inputs and outputs. When applied to projective
analysis, this gives a criterion for establishing whether or not a particular data fusion is possible: it has
to establish scale commensurability and scope consistency between the input and outputs of each
model, tool and dataset being used in support of a situational application, placing primary emphasis on
the nature of the observations being fused.
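The criterion above can be sketched as a check applied before any fusion is attempted. The rendering of scope as the ratio of extent to resolution follows Appendix IV; the variable descriptors, and the particular rule that the output's scope must cover the next input's, are illustrative assumptions about how the criterion might be operationalized.

```python
# A sketch of the commensurability criterion: before fusing, check that a
# model's output and the next model's input agree on unit (scale
# commensurability) and that the output's scope (extent / resolution,
# per Appendix IV) covers what the next model expects. The variable
# descriptors are illustrative assumptions.

def scope(var):
    return var["extent"] / var["resolution"]

def commensurable(output, next_input):
    same_scale = output["unit"] == next_input["unit"]
    scope_ok = scope(output) >= scope(next_input)
    return same_scale and scope_ok

fuel_model_out = {"unit": "kg/m^2", "extent": 10_000.0, "resolution": 30.0}
fire_model_in  = {"unit": "kg/m^2", "extent": 10_000.0, "resolution": 100.0}

print(commensurable(fuel_model_out, fire_model_in))  # True
```

A failed check signals that a fusion step is not possible as specified – the emphasis falls on the observations being fused, not on a universal ontology over them.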

5 See http://en.wikipedia.org/wiki/ESRI

Conclusion
The data fusion associated with Type III frameworks contrasts with Type I and II approaches based on
defining the ontology of the data that is being fused in terms of its formal semantics. Thus inputs and
outputs are semantically matched [25], something that is referred to elsewhere as syntactic
interoperability [26] in order to contrast it with semantic interoperability that assumes a unified
classification of spatial features [27]. This produces definitional structures of extraordinary complexity
that suffer from the same limitations as OWL-S in being able to yield automated translation [5, 28]. Thus
most available approaches to semantic integration in fact require ad-hoc non-systematic subjective
manual mappings that can take the layer 4 and 5 modifications of ontology into account, but at the
expense of their universality [29].
Such approaches to ontology mapping start from a universal position from which to make such
mappings. The alternative is to model the way the data is to be used in order to define the semantics of
the data fusion process itself. This places the emphasis instead on establishing the ontology of the
observations themselves within their context-of-use, and ensuring the scale commensurability and
scope consistency between inputs and outputs at each stage of the modeling process. This subordinates
the formal semantics to the pragmatics of the modeling situation. This is an approach to ontology
argued for in Searle’s status functions [30] and Peirce’s triadic relations [31].

Appendix I – Stratification
A number of terms are being used relating to stratification. What follows is a glossary of these
terms. Two definitions are provided in each case: one as it relates to linguistics, and the other in terms
of the stratification:
Lexis
Linguistics: “The storage of language in our mental lexicon as prefabricated patterns that can be recalled and sorted into meaningful speech and writing.”6
Stratification: The repertoire of usable behaviors.

Syntax
Linguistics: “The rules of a language that show how words of that language are to be arranged to make a sentence of that language.”7
Stratification: All possible combinations of usable behaviors.

Behavioral Semantics
Linguistics: An approach to the meaning of sentences based on the analysis of some combination of the axioms they obey, their referents and the behaviors they entail.8
Stratification: What each possible behavioral combination denotes in a socio-technical system.

Discourse Semantics
Linguistics: Meaning ‘beyond the sentence boundary’ associated with ‘naturally occurring’ language use.9
Stratification: How each possible behavioral combination may influence the interactions among the actors of a given socio-technical system.10

Pragmatics
Linguistics: “the ways we reach our goal in communication”11
Stratification: How actors choose among behavioral combinations in order to generate anticipated effects in the environment of a given socio-technical system.

6 See http://en.wikipedia.org/wiki/Lexis_%28linguistics%29
7 See http://en.wikipedia.org/wiki/Syntax
8 See http://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages
9 See http://en.wikipedia.org/wiki/Semantics for a general treatment of semantics, and http://en.wikipedia.org/wiki/Discourse_analysis for its particular relation to discourse.
10 See http://en.wikipedia.org/wiki/Discourse for different ways of applying this approach to social systems.
11 See http://en.wikipedia.org/wiki/Pragmatics

Appendix II – Glossary of Economic terms
The following terms are used to refer to the different kinds of economics that have to be managed by an
enterprise:
Economies of Scale: “The cost advantages that a firm obtains due to expansion of its scale of operation”.12

Economies of Scope: “Whereas economies of scale primarily refer to efficiencies associated with supply-side changes, such as increasing or decreasing the scale of production, of a single product type, economies of scope refer to efficiencies primarily associated with demand-side changes, such as increasing or decreasing the scope of marketing and distribution, of different types of products.”13

Transaction Costs: The costs to the supplier of engaging in a transaction with a customer associated with providing a particular service or product. The nature of these costs depends on the nature of the economies of scale and scope available to the supplier.14

Economies of Alignment: A supplier has to be able to assemble the capabilities it needs to be able to engage in a transaction. The costs of doing this are the costs of alignment. Under changing conditions of demand, the supplier incurs these costs each time it seeks to re-align its capabilities to the opportunities it is pursuing. Economies of alignment refer to the cost advantages that the supplier obtains in the way it incurs these costs. [32-34]

12 See http://en.wikipedia.org/wiki/Economies_of_scale
13 See http://en.wikipedia.org/wiki/Economies_of_scope
14 See http://en.wikipedia.org/wiki/Transaction_cost

Appendix III – The Zachman Framework Architecture 1.0
The Zachman Framework can be approached in terms of the following dimensions, corresponding to
Governance (N), Capabilities (S), Demands (E) and Know-how (W) [35]:

Figure 6: defining enterprises within ecosystems (the figure arranges contexts-of-governance, modality of reality, contexts-of-use and modelling context around the enterprise)

When this set of dimensions is applied to the Zachman Framework, it identifies gaps. The Zachman framework is defined by
reference to a single enterprise, whether virtual or not. So an extra row is needed to account for the
collaborative processes between enterprises (the pragmatics row in Figure 2). An extra column is also
needed to deal with there being multiple contexts-of-use, each one of which may require its own
supporting enterprise logic. And an extra column is needed to address the way data is related to events
through the particular modality of reality being used by the enterprise(s). This leads to the following:
Columns (grouped as the WHAT, the HOW, the WHO/M and the WHY):
EVENT (WHAT) – e.g. things done; DATA (WHAT) – e.g. data; FUNCTION (HOW) – e.g. function; NETWORK (WHERE) – e.g. network; PEOPLE (WHO) – e.g. organisation; TIME (WHEN) – e.g. schedule; USE CONTEXT (WHO for WHOM) – e.g. particular client; MOTIVATION (WHY) – e.g. strategy.

Rows:
SCOPE (Competitive context) – Planning; COLLABORATIVE MODEL (Pragmatic) – Governing; BUSINESS MODEL (Conceptual) – Owning; SYSTEM MODEL (Logical) – Designing; TECHNOLOGY MODEL (Physical) – Building; DETAILED REPRESENTATIONS (out-of-modelling-context) – Subcontracting.

Figure 7: the modified Zachman framework

In order to give an account of this more complex understanding of the enterprise, able to address the
needs of distributed collaboration [36], a correspondingly more complex modeling method is needed.
This is provided by a method of projective analysis which can use five different layers of representation
including synchronization and demand, instead of the more usual three corresponding to structure-
function, trace and hierarchy:

Figure 8: the five layers of representation supported by projective analysis

The full five layers are:
• Structure/Function: The physical structure and functioning of resources and capabilities.
• Trace: The digital processes and software that interact with the physical processes.
• Hierarchy: The formal hierarchies under which the uses made of both the physical and the digital are held accountable.
• Synchronization: The lateral relations of synchronization and coordination within and between Agencies and the services they provide ‘on the ground’.
• Demand: The nature of the environment giving rise to demands on the way the operations are organized to deliver effective and timely services.

Appendix IV – the concept of scale and scope in ecology
The content of this appendix is derived from the work of Schneider [37, 38]. There are a number of
definitions of scale, which were originally treated as equivalent to defining hierarchy (e.g. cell, tissue,
organ, organism). Examples are as follows:
• Measurement scale [39] distinguishes variables quantified on a nominal scale (presence/absence), an ordinal scale (ranks), an interval scale (equal steps, such as degrees centigrade), and a ratio scale (equal steps and known zero, such as degrees kelvin).
• Cartographic scale is the ratio of the distance on a map to the distance on the ground. A meter-wide map of the world has a scale of about 1:39,000,000.
• Scale refers to the extent relative to the grain of a variable indexed by time or space [40, 41]. Variables so indexed have a minimum resolvable area or time period (grain or inner scale) within some range of measurement (extent or outer scale). For example, a tree-coring device resolves annual changes over periods of thousands of years.
• In multiscale analysis, the variance in a measured quantity, or the relation of two measured quantities, is computed with a series of different scales. This is accomplished by systematically changing either the separation (lag) between measurements or the averaging interval (window size) for contiguous measurements [42].
• Ecological scaling [43, 44] refers to the use of power laws that scale a variable (e.g., respiration) to body size, usually according to a non-integral exponent. Respiration typically scales as mass^0.75; hence, a doubling in body size increases oxygen consumption by 2^0.75 ≈ 1.7, rather than by a factor of 2.
• Powell [45] defined scale as the distance before some quantity of interest changes.
When applied to observation instruments, we get scope, or the ratio of extent to resolution. This leads to
scaling between measurements of differing scope, where a power law applies. Thus isometric scaling
uses an exponent of 1, whereas allometric scaling uses exponents other than 1. For Euclidean scaling, the
exponent is an integer or a rational number, whereas organisms exhibit fractal scaling, which uses exponents that are
not rational numbers.
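The power-law relations above can be sketched numerically. The 0.75 exponent for respiration is the one quoted in the text; the base respiration rate is an arbitrary illustrative value.

```python
# A sketch of isometric vs allometric scaling under a power law.
# The 0.75 exponent is the respiration example from the text; the
# base rate of 10.0 is an arbitrary illustrative value.

def scale_by(value, factor, exponent):
    """Scale `value` when size changes by `factor`, under the given exponent."""
    return value * factor ** exponent

resp = 10.0  # respiration rate at base body mass (arbitrary units)

iso = scale_by(resp, 2, 1.0)      # isometric: doubling mass doubles the rate
allo = scale_by(resp, 2, 0.75)    # allometric: 2**0.75 ≈ 1.68, not 2

print(round(iso, 1), round(allo, 1))  # 20.0 16.8
```

It is precisely this non-integral exponent that blocks naive fusion: two variables related allometrically cannot be combined as if a common linear scale underlay them both.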

Incommensurability
Incommensurability was put forward by Kuhn originally to indicate the necessity of a paradigm shift in
the way two systems are understood. He later modified this to localize it to particular concepts
within a framework [46]. Fleck approached the issue of incommensurability differently, placing the
emphasis instead on error: viewed historically, errors reveal anomalies that subsequent theories
have to emerge to account for before being accorded the status of being ‘true’ [47, 48].
Later arguments by Lakatos and MacIntyre place the focus more on the system of concepts and changes
in their internal consistency in order to give an account of incommensurability [49].

References
1. Veryard, R., Towards the Lightweight Enterprise: Business Modeling and Design for SOA and
Enterprise Mashups. CBDi Journal, 2006.
2. Veryard, R., Web 2.0 and Enterprise Architecture: Increasing synergy between Web 2.0 and SOA.
CBDi Journal, 2007.
3. Vetere, G., Models for semantic interoperability in service-oriented architectures. IBM Systems Journal, 2005.
4. Lewis, G.A., et al. Why Standards Are Not Enough to Guarantee End-to-End Interoperability. in
Seventh International Conference on Composition-Based Software Systems. 2008. Madrid.
5. Metcalf, C. and G.A. Lewis, Model Problems in Technologies for Interoperability: OWL Web
Ontology Language for Services (OWL-S), 2006, Software Engineering Institute.
6. Martin, J.R. and D. Rose, Working with Discourse: Meaning beyond the clause. Open Linguistics Series, ed. R. Fawcett. London: Continuum.
7. Martin, J.R., English Text: System and Structure. 1992, Philadelphia: John Benjamins.
8. Boxer, P., et al. Systems-of-Systems Engineering and the Pragmatics of Demand. in Second
International Systems Conference. 2008. Montreal, Que.: IEEE.
9. Clements, P., R. Kazman, and M. Klein, Evaluating Software Architectures: Methods and Case Studies. 2001: Addison-Wesley.
10. Byrnes, C. and I. Kyratzoglou, Applying Architecture Tradeoff Assessment Method (ATAM) As
Part Of Formal Software Architecture Review, 2007, The Mitre Corporation.
11. Bradley, A., 'Mashups' and their relevance to the Enterprise, 2007, Gartner Research.
12. Bradley, A., Reference Architecture for Enterprise 'Mashups', 2007, Gartner Research.
13. Bradley, A. and D. Gootzit, Who's Who in Enterprise 'Mashup' Technologies, 2007, Gartner
Research.
14. Linthicum, D.S., SOA Watch: How Mashups fit with SOA, 2008, SOAInstitute.org.
15. Boxer, P.J., Interoperability Landscapes, in Asymmetric Design. 2006.
16. Veryard, R., Enterprise Mashups and Situated Software, 2006.
17. Veryard, R., Collaborative Composition, 2005.
18. Veryard, R., Situated Software, 2004.
19. Cherbakov, L., A.J.F. Bravery, and A. Pandya, SOA meets situational applications, Part 1:
Changing computing in the enterprise, 2007, IBM.
20. Anderson, C., The Long Tail: Why the Future of Business is Selling Less of More. 2006, New York: Hyperion.
21. Evans, D.S., A. Hagiu, and R. Schmalensee, Invisible Engines: How Software Platforms Drive Innovation and Transform Industries. 2006, Cambridge: MIT.
22. Watt, S., Mashups - The evolution of the SOA, Part 2: Situational applications and the mashup
ecosystem, 2007, IBM.
23. Merrill, D., Mashups: The new breed of Web app, 2009, IBM.
24. GAO, Geospatial Information: Technologies hold promise for Wildland Fire Management, but
Challenges Remain, 2003, US Government.
25. Woolf, A., et al. Semantic Integration of File-based Data for Grid Services. in Fifth IEEE International Symposium on Cluster Computing and the Grid, Vol 1. 2005.

26. Sen, S., Semantic interoperability of geographic information, in GIS Development. 2005.
27. Li, Y. and G. Benwell, On the Classification of Categories of Spatial Features, in 14th Annual Colloquium of the Spatial Information Research Centre, University of Otago. 2002: Dunedin, New Zealand.
28. Kalfoglou, Y. and M. Schorlemmer, Ontology mapping: the state of the art. Knowledge
Engineering Review, 2003. 18(1): p. 1-31.
29. Kavouras, M., A unified ontological framework for semantic integration, in International Workshop on Next Generation Geospatial Information. 2003.
30. Searle, J.R., Making the Social World: The Structure of Human Civilization. 2010: Oxford University Press.
31. Murphey, M.G., The Development of Peirce's Philosophy. 1993: Hackett Publishing Company.
32. Langlois, R.N., Transaction-cost Economics in Real Time. Industrial and Corporate Change, 1992.
1(1).
33. Antonelli, C., The Economics of Governance: Transactions, Resources and Knowledge, in DRUID Summer Conference. 2003.
34. Williamson, O.E., The Economics of Governance. American Economic Review, 2005. 95(2): p. 1-
18.
35. Boxer, P.J., East-West Dominance, in Asymmetric Design. 2006.
36. Boxer, P., et al., The Double Challenge in Engineering Complex Systems of Systems. News at SEI,
2007: p. http://www.sei.cmu.edu/library/abstracts/news-at-sei/eyeonintegration200705.cfm.
37. Schneider, D., The rise of the concept of scale in ecology. BioScience, 2001. 51(7).
38. Schneider, D., Applied Scaling Theory, in Ecological Scale: Theory and Applications. 1998, Columbia University Press: New York.
39. Stevens, S.S., On the theory of scales of measurement. Science, 1946. 103: p. 677-680.
40. Wiens, J.A., Spatial scaling in ecology. Functional Ecology, 1989. 3: p. 385-397.
41. Schneider, D., Quantitative Ecology: Spatial and Temporal Scaling. 1994, San Diego: Academic Press.
42. Milne, B., Applications of fractal geometry in wildlife biology, in Wildlife and Landscape Ecology: Effects of Pattern and Scale, J.A. Bissonette, Editor. 1997, Springer-Verlag: New York.
43. Calder, W.A., Ecological scaling: Mammals and birds. Annual Review of Ecology and Systematics,
1983. 14: p. 213-230.
44. Peters, R.H., The Ecological Implications of Body Size1983: Cambridge University Press.
45. Powell, T.M., Physical and biological scales of variability in lakes, estuaries, and the coastal
ocean, in Perspectives in Ecological Theory, J. Roughgarden, R.M. May, and S.A. Levin, Editors.
1989, Princeton University Press. p. 157-176.
46. Chen, X., Thomas Kuhn's Latest Notion of Incommensurability. Journal for General Philosophy of Science, 1997. 28: p. 257-273.
47. Fleck, L., Genesis and Development of a Scientific Fact, ed. T.J. Trenn and R.K. Merton. 1979, London: University of Chicago Press.
48. Babich, B.E., From Fleck's Denkstil to Kuhn's paradigm: conceptual schemes and
incommensurability. International Studies in the Philosophy of Science, 2003. 17(1).
49. Miner, R., Lakatos and MacIntyre on Incommensurability and the Rationality of Theory-change, in Twentieth World Congress of Philosophy. 1998: Boston, Massachusetts.
