A NOTE ON ADJUSTMENT OF FREE NETWORKS
Abstract
The present paper deals with the least-squares adjustment where the design
matrix (A) is rank-deficient. The adjusted parameters (x) as well as their variance-
covariance matrix (Σ_x) can be obtained as in the "standard" adjustment where A has
the full column rank, supplemented with constraints, Cx = w , where C is the constraint
matrix and w is sometimes called the "constant vector". In this analysis only the inner
adjustment constraints are considered, where C has the full row rank equal to the rank
deficiency of A , and A C^T = 0 . Perhaps the most important outcome points to the
three kinds of results :
1) A general least-squares solution, where both x and Σ_x are indeterminate,
corresponds to w = arbitrary random vector.
2) The minimum trace (least-squares) solution, where Σ_x is determined but x is
indeterminate, corresponds to w = arbitrary constant vector.
3) The minimum norm (least-squares) solution, where both x and Σ_x are determined,
corresponds to w = 0 .
1. Introduction
A network is said to be free if its geometrical shape has been determined, as
during triangulation, but it is essentially unattached (to some well-defined coordinate
axes) in a space of appropriate dimensions. The scale of the network could also be free, or
could be a part of an observational process ; for example, one or more baselines in a
triangulation network can be measured, or all sides of a network can be measured as
during trilateration, etc. These concepts of classical geodesy can be, of course, generalized
and extended to other fields. In the present context of geometric networks one realizes
that unless external information is supplied, a unique least-squares solution in terms of
coordinates is impossible without some further stipulations. The theoretical aspects of
such "further stipulations" form the backbone of this paper.
The subject of adjusting free networks is not without useful applications in
practice. Although external information may not be readily available, one could still be
compelled to form the observation equations and carry out the least-squares solution, in
a preliminary coordinate system, for the sake of an analysis of the residuals and for other
Bull. Géod. 56 (1982) pp. 281-299.
G. BLAHA
reasons. There are an infinite number of ways in which this preliminary coordinate system
can be realized mathematically. However, some definitions could lead to the variance-
covariance characteristics favoring certain coordinates while impairing others, to the point
that numerical difficulties could imperil the solution itself. Furthermore, should the
preliminary coordinates serve in their own right for any length of time, their error
characteristics should be balanced as much as possible. Therefore, a useful definition of
the least-squares adjustment of free networks could entail a minimum trace of the
variance-covariance matrix of the adjusted parameters (here coordinates). This can serve
as an example of the stipulations mentioned above.
A preliminary adjustment as just discussed could be of interest, especially for an
analysis of the residuals, even if external information were available from the outset. In
particular, if a network were "attached" to a coordinate system via more information
than is strictly necessary, stresses in its structure would be produced which could
significantly affect the residuals (in general increasing their magnitude) and mask their
true consistency. This problem would be aggravated by a lack of consistency within the
external information as well as between such information and the observations. One could
eliminate any difficulty of this kind by adjusting the network separately as free, i.e., by
temporarily disregarding all of the external information.
Before theoretical aspects of adjusting free networks can be addressed in this
paper, its scope and limitations should be firmly established. For example, the least-squares
method used will be that known as the observation equation method (also called
the method of variation of parameters), as contrasted to the condition method or to more
general methods. The constraints, when used, will be absolute rather than weighted.
Further limitations are :
a) The general class of minimal constraints resulting in a stress-free adjustment of
free networks will not be used in its entirety. For the sake of simplicity, its sub-class
called the inner adjustment constraints will be utilized instead.
b) When applying the inner adjustment constraints to a free network, no weighting
of parameters will take place. This weighting would essentially amount to introducing
additional external information and, as such, could produce undue stress affecting the
residuals.
c) In the derivations leading to the minimum trace and other properties, the
parameter set will be considered in its entirety. One could, of course, minimize only that
part of the trace associated with some preferred parameters, but such an approach would
require a separate treatment. (It was considered in [Blaha, 1971], where the preferred
parameters were the ground station coordinates and the other parameters were the
coordinates of the satellite targets; consequently, the entries corresponding to these
targets were replaced by zeros in the pertinent constraint matrix which thus lost its
original quality of an inner adjustment constraint matrix with regard to the complete
observation equation matrix, making it necessary to partition most of the vectors and
matrices).
d) The weight matrix of observations will be assumed to be a unit matrix throughout.
Clearly, the "original" observation equations could always be normalized upon the pre-
multiplication by an upper-triangular matrix T computed by the well-known Choleski
algorithm, such that T^T T is the "original" positive-definite weight matrix. (In practice
the latter would often be diagonal and thus T would also be diagonal, composed of the
square roots of the "original" weights; this would correspond to dividing each
observation equation by the appropriate standard deviation).
e) No consideration will be given to the numerical analysis aspects of the solution.
This problem area, subject of extensive research in its own right, addresses a number of
situations such as the sparseness of the observation and normal equation matrices, the
pivotal search, the computation of the variances and covariances for selected parameters
(all the possible variances and covariances are hardly ever needed in practice), an
automation of these and other processes, etc. The subject of well- and ill-conditioned
regular matrices also belongs in this category.
This paper is intended to be almost entirely self-contained. Only one reference
on the subject is listed and even that is not indispensable. A major source of outside
information has proved to be the private communication the author had with the late
Professor Peter Meissl whose contribution is briefly outlined in the Acknowledgement.
2. Mathematical Background
Consider the observation equations
A x = y + v , (2.1)
where A is the design matrix of dimensions n × m , x is the vector of parameters, y is
the vector of observations and v is the vector of residuals. The least-squares results will
later be attributed the symbol " ^ " (x and v will then become x̂ and v̂). Possible
constraints associated with (2.1) are symbolized by
Cx = w , (2.2)
where C is the constraint matrix and w is sometimes called the "constant vector."
In free networks the rank of A is considered to be m − s , where s is the rank
deficiency. An important role in carrying out the least-squares solution of free networks
will be played by special kinds of constraints called "inner adjustment constraints." The
latter are identified through their matrix of dimensions s × m , called the inner adjustment
constraint matrix, having two basic properties :
rank C = s , (2.3a)
A C^T = 0 . (2.3b)
rank [ A ] = m , (2.4)
     [ C ]
stating that the augmented design matrix of dimensions (n + s) × m has the full column
rank m (due to 2.3b the resulting rank is the sum of ranks of A and C , and due to 2.3a
this rank is m − s + s = m). We note that the more general "minimal constraints" would
be defined through (2.3a) and (2.4) only ; the condition (2.3b) is sufficient for (2.4) to
hold true, but it is not necessary. However, due to (2.3b) the approach using the inner
adjustment constraints is much simpler and, as such, is adopted in this paper.
A specific inner adjustment constraint matrix, denoted for the moment as C̄ ,
can be used to generate a whole family of inner adjustment constraint matrices through
arbitrary nonsingular matrices D of dimensions s × s . Recalling that C̄ satisfies (2.3a, b)
and forming
C = D C̄ , (2.5)
one can readily assert that any C in (2.5) also satisfies (2.3a, b) and is therefore an inner
adjustment constraint matrix. The product of any such C with its pseudo-inverse C+ is
invariant ; in particular, we have
C̄ C̄+ = C C+ = I , (2.6a)
C̄+ C̄ = C+ C = C^T (C C^T)^-1 C , (2.6b)
where the pseudo-inverse of the full-row-rank matrix C is
C+ = C^T (C C^T)^-1 . (2.7)
Consider next a partitioning of the design matrix,
A = [ A_a , A_b ] , (2.8)
where A_b consists of m − s linearly independent columns, so that the s columns of
A_a are their linear combinations,
A_a = A_b R . (2.9a)
Upon pre-multiplying (2.9a) by A_b^T and solving for R , the identity
A_a = A_b (A_b^T A_b)^-1 A_b^T A_a (2.9b)
is obtained. This identity will be used below to verify the inner adjustment constraint
character of the matrix C defined as
C = [ I , − A_a^T A_b (A_b^T A_b)^-1 ] . (2.10)
The condition (2.3a) is satisfied by the presence of the unit matrix I of dimensions
s × s . And the condition (2.3b) follows from (2.9b) considered in conjunction with the
product of (2.8) and (2.10) transposed.
    [  1    0    0     1    0    0   ...
       0    1    0     0    1    0   ...
       0    0    1     0    0    1   ...
C =    0    z1  −y1    0    z2  −y2  ...   , (2.11)
      −z1   0    x1   −z2   0    x2  ...
       y1  −x1   0     y2  −x2   0   ... ]
where x1 , y1 , z1 are the initial coordinates of the first point in the network, etc. For
convenience, the coordinates may be scaled by a suitable constant since this would
correspond to applying a diagonal D matrix (see 2.5) with the first three elements equal
to unity and the remaining elements equal to that constant. We may add that if a network
should be "free" also with respect to its scale, in addition to its position and
orientation, the matrix C in (2.11) would be augmented by the row
[x1 y1 z1 , x2 y2 z2 , ...] .
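As an illustration, the constraint matrix (2.11) and its defining properties (2.3a, b) can be checked numerically. The sketch below (NumPy; the random four-point network and the all-pairs distance observations are illustrative assumptions, not taken from the text) builds C for a three-dimensional trilateration network:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(4, 3))      # four points (x_i, y_i, z_i) of a 3-D network
m = pts.size                       # 12 coordinate parameters

# Inner adjustment constraint matrix (2.11): three translation rows
# followed by three rotation rows built from the initial coordinates.
rows = [np.tile(e, len(pts)) for e in np.eye(3)]
for k in range(3):
    blocks = []
    for (x, y, z) in pts:
        S = np.array([[0.0, z, -y], [-z, 0.0, x], [y, -x, 0.0]])
        blocks.append(S[k])
    rows.append(np.concatenate(blocks))
C = np.vstack(rows)

# Design matrix A of all pairwise distance observations (trilateration):
# the row for distance i-j holds the unit direction vector u and -u.
arows = []
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        u = (pts[i] - pts[j]) / np.linalg.norm(pts[i] - pts[j])
        row = np.zeros(m)
        row[3 * i:3 * i + 3] = u
        row[3 * j:3 * j + 3] = -u
        arows.append(row)
A = np.vstack(arows)

print(np.linalg.matrix_rank(C))          # 6 = s, condition (2.3a)
print(np.abs(A @ C.T).max() < 1e-12)     # condition (2.3b): AC^T = 0
print(np.linalg.matrix_rank(A))          # m - s = 6
```

Since distances fix the scale, only the six translation and rotation rows appear; for a scale-free network the additional row [x1 y1 z1, x2 y2 z2, ...] would be appended.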
In the sequel, generalized inverses of A are denoted by subscripts indicating which of
the four defining conditions they satisfy :
s ... (A A_s)^T = A A_s ,
m ... (A_m A)^T = A_m A ,
g ... A A_g A = A ,
r ... A_r A A_r = A_r ;
thus
A+ = A_mgsr
is the (unique) pseudo-inverse, satisfying all four conditions. Known identities include
A^T = A^T A A+ = A+ A A^T , (2.12)
A+ = A+ A+T A^T = A^T A+T A+ , (2.13)
(A^T A)+ = A+ A+T . (2.14)
These identities could be expanded upon transposition (such formulas need not be written
down) and/or upon their various combinations, for example
A+ = (A^T A)+ A^T . (2.15)
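The identities (2.12), (2.13) and (2.14) are easily probed numerically; a minimal NumPy sketch with a synthetic rank-deficient A (the random matrix is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 3))  # rank 2, m = 3 columns
Ap = np.linalg.pinv(A)                                 # pseudo-inverse A+

print(np.allclose(A.T, A.T @ A @ Ap))                  # (2.12)
print(np.allclose(Ap, Ap @ Ap.T @ A.T))                # (2.13)
print(np.allclose(np.linalg.pinv(A.T @ A), Ap @ Ap.T)) # (2.14)
```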
With the aid of the above formulas, we could derive a number of other identities
some of which could be rendered computationally appealing through the use of the inner
adjustment constraint matrix. As an example which may be of interest in practice, we
first write the identity
[A^T A + K (I − A+ A)] A+ A+T = A+ A ,
where K is a completely arbitrary matrix of dimensions m × m . This is expressed as
[A^T A + K (I − A+ A)] (A^T A)+ = A+ A ,
yielding
(A^T A)+ = [A^T A + K (I − A+ A)]^-1 A+ A , (2.16)
A+ = [A^T A + K (I − A+ A)]^-1 A^T . (2.17)
However, K in (2.16) and (2.17) is no longer completely arbitrary but subject to the
restriction that the matrix within the brackets should be nonsingular. But without further
modifications such as designed below these two equations would be of little use.
Equation (2.3b) entails the relationships :
A C^T = 0 , C A^T = 0 , (2.18a)
A C+ = 0 , C A+ = 0 , (2.18b)
(A^T A)+ C^T = 0 , C (A^T A)+ = 0 . (2.18c)
We next form
Y = I − A+ A − C+ C
and note, upon using (2.18b), that
[ A ] Y = 0 ,
[ C ]
where the matrix within the brackets has the full (column) rank as in (2.4). Accordingly,
Y = 0 and
A+ A = I − C+ C . (2.19a)
This result would hold in general for any matrix C whose rank equals the rank deficiency
of A and which satisfies A C^T = 0 . In the case of inner adjustment constraints
considered presently (the number of rows in C equals the rank of C and not more),
C+ is given by (2.7) and (2.19a) becomes
A+ A = I − C^T (C C^T)^-1 C . (2.19b)
With the aid of (2.19b), the identities (2.16) and (2.17) can be rewritten as
(A^T A)+ = [A^T A + K C^T (C C^T)^-1 C]^-1 [I − C^T (C C^T)^-1 C] , (2.20)
A+ = [A^T A + K C^T (C C^T)^-1 C]^-1 A^T . (2.21)
To render these formulas even more advantageous one can choose K simply as
K = k I , k > 0 . (2.22)
The matrix in the first brackets of (2.20), and the same matrix in (2.21), then
becomes positive-definite since it can be expressed as [A^T , √k C^T M^T] of full (row)
rank m post-multiplied by its transpose, where M^T M = (C C^T)^-1 is positive-definite.
That the matrix just mentioned has the full (row) rank m , or its transpose has the full
(column) rank m , follows immediately from (2.4). One of the advantages of choosing K
in practice as in (2.22), the simplest case being K = I , is that it allows the use of efficient
computer algorithms designed for the inversion of positive-definite matrices.
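A numerical sketch of (2.21) with the choice (2.22) and k = 1: the pseudo-inverse of a rank-deficient A is obtained through an ordinary positive-definite inversion. For convenience the inner constraint matrix is taken here from the right singular vectors spanning the null space of A; this construction is an assumption of the example, and any matrix satisfying (2.3a, b) would do.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 5))  # rank 3, m = 5, s = 2

# Rows of C span the null space of A, so AC^T = 0 and rank C = s = 2.
C = np.linalg.svd(A)[2][3:]

k = 1.0                                                # choice (2.22)
N_reg = A.T @ A + k * C.T @ np.linalg.inv(C @ C.T) @ C # positive-definite matrix
A_plus = np.linalg.inv(N_reg) @ A.T                    # equation (2.21)

print(np.allclose(A_plus, np.linalg.pinv(A)))          # True
```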
The following identities will help to prove some of the relations in this paper.
With G1 through G4 defined by their generalized-inverse properties, namely
G1 = A_gm , G2 = A_gs , G3 = A_gsr , G4 = A_gmr ,
we have
G1 A G2 = A+ , (2.23a)
G1 A = A+ A , (2.23b)
A G2 = A A+ , (2.23c)
G3 A A+ = G3 , (2.23d)
A+ A G4 = G4 . (2.23e)
The symbols A_gm , etc., make allowance for the complete sets (but once G3 is chosen it
is the same matrix on both sides of 2.23d ; a similar statement applies also for G4 ). The
identity (2.23a) can be proved by showing that all the four conditions (g, s, m, and r)
are satisfied ; (2.23b, c) then follow from post-multiplying and pre-multiplying (2.23a)
by A . The product G3 A A+ in (2.23d) can be written as G3 A G3 (upon using
A A+ = A G3 following from 2.23c) which equals G3 due to the r-condition. The
proof for (2.23e) proceeds along similar lines.
Next, consider a set of matrices U such that
U = (I − A+ A) Z , (2.24a)
where the matrix Z is completely arbitrary. Let Ũ denote a complete set of matrices
such that, symbolically, A Ũ = 0 . Since it holds true for any Z in (2.24a) that A U = 0 ,
U is included in Ũ and we write U ⊂ Ũ . On the other hand, a subset of U in (2.24a)
can be chosen such that Z is restricted to run through Ũ . Due to A Ũ = 0 , U in (2.24a)
with all such matrices Z covers the whole set Ũ and, accordingly, Ũ ⊂ U . These two
inclusions indicate that the above U represents the complete set Ũ . The symbol " ~ "
will be omitted in the sequel and U will be understood as any possible matrix satisfying
A U = 0 . (2.24b)
In full analogy, an arbitrary vector u in the null space of A can be represented as
u = (I − A+ A) z , (2.25a)
entailing
A u = 0 . (2.25b)
Similarly, for matrices V we have
V = Z′ (I − A A+) , (2.26a)
V A = 0 . (2.26b)
Wherever U and V appear in the derivations below they can be replaced by the
expressions of the kind (I − A+ A) Z1 and Z2 (I − A A+) , respectively, as follows
from (2.24a) and (2.26a). For the g-inverse we have
A_g = A+ + U + V . (2.27)
The first inclusion (in the sense of the discussion that followed 2.24a) is readily
established using the properties of U and V matrices. The second inclusion is arrived at
by choosing U = (I − A+ A) A_g A A+ and V = A_g (I − A A+) , A_g in these two
expressions being the same matrix (it can take on any values from the complete set).
Since the first inclusion in all the cases considered results from straightforward testing,
only the second inclusion will be worth elaborating upon.
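The complete set (2.27) can be sampled numerically: with U and V generated from arbitrary Z1 and Z2, every member A_g = A+ + U + V satisfies the g-condition, and A+ + U alone satisfies g and s as stated in (2.28). A small NumPy sketch (the random matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 3))  # rank-deficient A
Ap = np.linalg.pinv(A)

U = (np.eye(3) - Ap @ A) @ rng.normal(size=(3, 5))     # (2.24a): AU = 0
V = rng.normal(size=(3, 5)) @ (np.eye(5) - A @ Ap)     # (2.26a): VA = 0
A_g = Ap + U + V                                       # a member of (2.27)

print(np.allclose(A @ A_g @ A, A))                     # g-condition holds
print(np.allclose(A @ (Ap + U) @ A, A))                # (2.28): g-condition
print(np.allclose(A @ (Ap + U), A @ Ap))               # (2.28): s-condition
```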
It is readily confirmed that
A_gs = A+ + U , (2.28)
A_gsr = A+ + U A+ , (2.29a)
A_gsr = A+ + U A^T . (2.29b)
The second inclusion in (2.29a) follows from the choice U = (I − A+ A) A_gsr A and
from (2.23d), and the second inclusion in (2.29b) follows from the choice
U = (I − A+ A) A_gsr A+T , from A+T A^T = A A+ and from (2.23d).
Although A_gm , A_gmr (two equivalent formulas) and A_gms will not serve in the
course of this study, they are listed for the sake of interest as
A_gm = A+ + V ;
A_gmr = A+ + A+ V = A+ + A^T V ;
A_gms = A+ + W , where A W = 0 , W A = 0 .
N(A) ... null space of A , the space of all vectors orthogonal to the rows of A ;
R(A) ... range of A , the space spanned by the columns of A .
It is seen that the matrix A maps N(A)⊥ into R(A) , and N(A) into the zero vector :
x ∈ N(A) ... A x = 0 . (2.30b)
On the other hand, A^T maps R(A) into N(A)⊥ , and R(A)⊥ into the zero vector :
y ∈ R(A)⊥ ... A^T y = 0 . (2.31b)
It can be shown that the same description applies also for the (unique) mapping by A+ :
y ∈ R(A)⊥ ... A+ y = 0 . (2.32b)
One can also demonstrate the properties of the following projection operators : A+ A
is the (orthogonal) projection operator on N(A)⊥ , A A+ is the projection operator on
R(A) , while I − A+ A and I − A A+ project on N(A) and R(A)⊥ , respectively.
For stating certain results in terms of mapping, brackets will be utilized to single
out this purpose in the text. [We had, for example, A x ∈ R(A) .] If this interpretation is
not of interest, one may imagine such expressions deleted.
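The projection properties and the mappings (2.30b) through (2.32b) can be verified numerically; a brief NumPy sketch (the synthetic rank-deficient A is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))  # rank 2, m = 4
Ap = np.linalg.pinv(A)

P_row = Ap @ A                     # projection operator on N(A)-perp
P_col = A @ Ap                     # projection operator on R(A)

print(np.allclose(P_row @ P_row, P_row), np.allclose(P_row, P_row.T))
x = (np.eye(4) - P_row) @ rng.normal(size=4)   # a vector in N(A)
print(np.allclose(A @ x, 0))                   # (2.30b)
y = (np.eye(6) - P_col) @ rng.normal(size=6)   # a vector in R(A)-perp
print(np.allclose(A.T @ y, 0), np.allclose(Ap @ y, 0))  # (2.31b), (2.32b)
```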
3. Adjustment of Free Networks

I. In the first approach, consider the observation equations (2.1) with a consistent
right-hand side, i.e., with y + v restricted to R(A) :
A x = y + v . (3.1)
This consistency is expressed by
A A_g (y + v) = A A+ (y + v) = y + v . (3.2)
The general solution of (3.1) is then
x = A_g (y + v) + u ;
this solution satisfies (3.1) as is confirmed through (3.2) and (2.25a, b). Upon using (2.27)
we obtain
x = A+ (y + v) + U (y + v) + u . (3.3a)
The least-squares criterion v^T v = minimum leads to the normal equations and to the
orthogonality of the residuals :
A^T A x̂ = A^T y , A^T v̂ = 0 . (3.4)
The latter further implies
A+ v̂ = 0 (3.5)
and
y + v̂ = A A+ y . (3.6)
Upon utilizing (3.5) and (3.6) in (3.3b) with the new notations x̂ , v̂ , we have
x̂ = A+ y + (I − A+ A)(Z A+T A^T y + z) .
In using complete sets it can be shown that Z A+T A^T y can be replaced, without any
loss of generality, by W A^T y where W is an arbitrary matrix of dimensions m × m . The
above least-squares solution is thus rewritten as
x̂ = A+ y + (I − A+ A)(W A^T y + z) . (3.7)
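Any member of the family (3.7), with W and z arbitrary, is a least-squares solution; this can be confirmed by checking the normal equations numerically (the random data are an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))  # rank-deficient design matrix
Ap = np.linalg.pinv(A)
y = rng.normal(size=6)

W = rng.normal(size=(4, 4))        # arbitrary m x m matrix
z = rng.normal(size=4)             # arbitrary m-vector
x_hat = Ap @ y + (np.eye(4) - Ap @ A) @ (W @ A.T @ y + z)   # a member of (3.7)

print(np.allclose(A.T @ A @ x_hat, A.T @ y))   # normal equations hold
```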
II. The starting point of the second, more familiar approach is represented by the
consistent system of normal equations already seen in (3.4) :
A^T A x̂ = A^T y . (3.8)
Its general solution is
x̂ = (A^T A)_g A^T y + u , (3.9)
where U and V associated with (A^T A)_g satisfy A U = 0 and V A^T = 0 ,
the latter criteria following from (A^T A) U = 0 , V (A^T A) = 0 . Accordingly, the general
solution becomes
x̂ = A+ y + U A^T y + u . (3.10)
Upon using the explicit expressions for U and u , the above solution is written as
x̂ = A+ y + (I − A+ A)(Z A^T y + z) ,
which is the same result as (3.7) except that the symbol Z has replaced W .
[In considering (3.10), x̂ is seen to consist of two parts. The first part, A+ y ,
is contained in N(A)⊥ and is unique (if y ∈ R(A)⊥ , it is zero) ; the second, remaining
part is contained in N(A) and is arbitrary. The solution (3.10) is expressed below using
three different formulations. In the first formulation, it is rewritten with a new notation
(the second part is grouped into the vector u l ) . The second formulation, given in terms
of A_gsr , is simply (3.10) with the information embodied in (2.29b) taken into account.
And the third formulation brings A_gs into the picture. Any result obtained through
any of the three formulations can be reproduced through the other two, upon properly
choosing the U' s and u' s (arbitrary except for the requirements 2.24b, 2.25b). With
regard, in particular, to the u's , one can symbolically write u_i ∈ N(A) , arbitrary, where
i = 1, 2, 3 . The three formulations are represented by
x̂ = A+ y + u1 ; x̂ = A_gsr y + u2 ; x̂ = A_gs y + u3 .
To summarize, all three expressions represent the complete set of least-squares solutions
and they are equivalent ; only the form of the arbitrary vector in N ( A ) differs from
one expression to the next. A similar relation could also be written in terms of A_gm . ]
The arbitrary part of the least-squares solution is now detailed by writing
x̂ = A+ y + (I − A+ A)(Z A^T y + K1 y + K′ y′ + K″ c) , (3.11)
where K1 , K′ , K″ are matrices of arbitrary coefficients, y is the vector of observations
Σ_y,y′ = [ I 0 ]
         [ 0 I ] , (3.13)
so that the corresponding random vectors z1 and z2 are also stochastically independent.
In grouping the terms containing y , (3.11) and (3.12a, b) lead us to denote
Z A^T y + K1 y = K y , (3.14)
where the new coefficient matrix K is completely arbitrary ; (3.11) then becomes
x̂ = A+ y + (I − A+ A)(K y + K′ y′ + K″ c) . (3.15a)
[The first vector forming x̂ above is in N(A)⊥ and is unique, random ; the second
vector is in N(A) and is arbitrary, random or constant. ]
We state without proof that due to the arbitrary character of y′ , both K′ y′
and Σ_K′y′ can be made equal to any desired vector and to any desired positive (semi-)
definite matrix, respectively. This serves as an indication that the expression in the
second parentheses in (3.15a) might be in fact more general than needed for the purpose
of this study. However, if this expression were replaced by some K y the solution would
be restricted ; for example, x̂ would be restricted to zero whenever the (observed)
vector y were zero. In the subsequent derivations of the minimum trace and the
minimum norm solutions the general expression (3.15a) will be used as it stands.
The law of variance-covariance propagation applied to (3.15a) yields
Σ_x̂ = D D^T , (3.15b)
where
D = [ A+ + (I − A+ A) K , (I − A+ A) K′ ]
and where (3.13) together with the fact that c is a constant vector have been taken into
account. Upon denoting
W = (I − A+ A) K , W′ = (I − A+ A) K′ ,
and noticing that the cross products A+ W^T and W A+T have zero traces due to
(I − A+ A) A+ = 0 , one obtains
Tr(Σ_x̂) = Tr(A+ A+T) + Tr(W W^T) + Tr(W′ W′^T) .
Since Tr(W W^T) is the sum of the squares of all the elements in W and a similar
statement holds true for Tr(W′ W′^T) , the minimum is produced when
W ≡ 0 , W′ ≡ 0 .
Accordingly,
x̂ = A+ y + (I − A+ A) K″ c , (3.16a)
Σ_x̂ = A+ A+T = (A^T A)+ (3.16b)
represents the minimum trace solution. [The first vector forming x̂ in (3.16a) is in
N(A)⊥ and is unique, random ; the second vector is in N(A) and is arbitrary, constant. ]
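The minimum trace property can be probed numerically by comparing Σ_x̂ = (A^T A)+ with the covariance produced by some other set of minimal constraints C1. The random C1 and the closed form x̂ = (A^T A + C1^T C1)^-1 A^T y for the C1-constrained least-squares solution are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(7, 2)) @ rng.normal(size=(2, 4))  # rank 2, s = 2
N = A.T @ A
Sigma_min = np.linalg.pinv(N)          # (3.16b): minimum trace covariance

C1 = rng.normal(size=(2, 4))           # some other (generic) minimal constraints
B = np.linalg.inv(N + C1.T @ C1)
Sigma_C1 = B @ N @ B                   # covariance of the C1-constrained solution

print(np.trace(Sigma_min) <= np.trace(Sigma_C1) + 1e-12)   # True
```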
The minimum norm solution follows upon suppressing also the second vector in (3.15a),
(I − A+ A)(K y + K′ y′ + K″ c) = 0 ,
which yields
x̂ = A+ y , (3.17a)
Σ_x̂ = A+ A+T = (A^T A)+ . (3.17b)
When comparing (3.16a, b) with (3.17a, b) we realize that the only difference
between the two results is the indeterminacy in x̂ in (3.16a), imputable to the presence
of the arbitrary constant vector K″ c . Clearly, the pre-multiplication of the right-hand
sides of (3.15a) and (3.16a) by A+ A [the projection operator on N(A)⊥ ]
transforms them into A + y , the result in (3.17a). Accordingly, if the constraint
x̂ = A+ A x̂ (3.18)
were included as a part of an algorithm derived to yield (3.15a, b) or (3.16a, b), the
indeterminacy in the solution would be suppressed and the result would be (3.17a, b).
[The constraint (3.18) implies that x̂ is in N(A)⊥ , since A+ A is an identity operator on
N(A)⊥ only , and thus it can be only A+ y . ] The constraint (3.18) would thus
guarantee that the minimum norm solution, a special case of the minimum trace solution
and, of course, of the general least-squares solution, has been arrived at. In accordance
with (2.19b) this constraint could also be written as
C x̂ = 0 .
Next, we denote
K y + K′ y′ + K″ c = t (3.19)
and represent this m-vector as
t = A^T p + C^T q , (3.20)
where the n-vector p and the s-vector q may again be arbitrary (random or constant,
etc.). In fact, t could have been written in a more restrictive, but still completely
general, form, namely
t = [ A^T , C^T ] [ p ] .
                  [ q ]
Pre-multiplication of (3.20) by (I − A+ A) annihilates the A^T p part, see (2.12), and
leaves the C^T q part intact, see (2.19b), so that
(I − A+ A) t = C^T q . (3.21)
A new s-vector w is now introduced through
q = (C C^T)^-1 w . (3.22)
For every q and Σ_q one can uniquely determine w and Σ_w , and vice-versa. We thus
have
(I − A+ A) t = C+ w (3.23)
and, according to (3.15a),
x̂ = A+ y + C+ w . (3.24)
Pre-multiplying (3.24) by A and using A C+ = 0 , see (2.18b), we obtain
A x̂ = A A+ y . (3.25)
This relation confirms the normal equations (3.8) which eventually lead to the
formulation (3.24) itself. (The equivalence between 3.25 and 3.8 can be seen on one
hand upon pre-multiplying the former by A^T and arriving at the latter, and on the
other hand upon pre-multiplying the latter by A+T and arriving at the former.) Next,
we pre-multiply (3.24) by C which yields
C x̂ = w . (3.26)
The above two steps point to a regular least-squares adjustment with constraints. By
choosing the characteristics of the vector w in (3.26), one effectively chooses the type
of solution according to the classifications in the preceding paragraph. Clearly, the
minimum norm solution corresponds to
C x̂ = 0 . (3.26′)
The equations of the type (3.26) or (3.26′) have been called the inner adjustment
constraints. According to the above suggestion a least-squares adjustment of free
networks can be formulated with the aid of these constraints as follows :
A x̂ = y + v̂ ,
C x̂ = w ,
where the vector w characterizes the flexibility of the solution. The (augmented) normal
equations for this set-up are
[ A^T A   C^T ] [  x̂   ]   [ A^T y ]
[ C       0   ] [ −k_c ] = [ w     ] , (3.27)
the inverse of the augmented matrix being
[ A^T A   C^T ]-1   [ (A^T A)+   C+ ]
[ C       0   ]   = [ C+T        0  ] . (3.28)
The above assertion is readily verified upon post-multiplying the matrix in (3.27) by its
inverse in (3.28) and obtaining the unit matrix ; in this process the earlier-derived
properties, such as A+ A + C+ C = I and C C+ = I , have been utilized.
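The augmented set-up can be exercised numerically; the sketch below (with inner constraints obtained from the SVD null space, an assumption of the example) solves the augmented normal equations and confirms the solution and the vanishing of the constraint multiplier:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))  # rank 2, m = 4, s = 2
C = np.linalg.svd(A)[2][2:]        # inner adjustment constraints: AC^T = 0
y = rng.normal(size=6)
w = rng.normal(size=2)             # arbitrary "constant vector"

s, m = C.shape
M = np.block([[A.T @ A, C.T], [C, np.zeros((s, s))]])  # augmented normal matrix
sol = np.linalg.solve(M, np.concatenate([A.T @ y, w]))
x_hat, k_c = sol[:m], sol[m:]

Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
print(np.allclose(x_hat, Ap @ y + Cp @ w))   # (3.29a)
print(np.allclose(k_c, 0))                   # (3.29b): the multiplier vanishes
```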
x̂ = A+ y + C+ w , (3.29a)
k_c = 0 . (3.29b)
The vector on the right-hand side of (3.27) consists of A^T y and, following from
(3.23) upon pre-multiplication by C , of w = C (K y + K′ y′ + K″ c) ; its variance-
covariance matrix is
Σ_y,w = [ A^T A      A^T K^T C^T                        ]
        [ C K A      C K K^T C^T + C K′ K′^T C^T ] .
Upon using this matrix in the variance-covariance propagation applied to (3.28), we get
Σ_x̂,−k_c = [ Q   0 ]
            [ 0   0 ] ,
where Q = Σ_x̂ is the variance-covariance matrix of the general least-squares solution
as given by (3.15b) ; in particular, the multiplier vector −k_c is zero with zero variances
and covariances.
4. Conclusions
In this paper a basic adjustment of free networks has been discussed. The
discussion has been based on the usual least-squares criterion v^T v = minimum where,
without loss of generality, the weight matrix has been assumed to be the unit matrix, and
on the notion of inner adjustment constraints. These constraints, considered in the usual
sense of absolute constraints with constant terms (as opposed to some random terms),
have led to an important class of the general least-squares solution for x̂ having the
minimum trace property, Tr(Σ_x̂) = minimum.
The rank deficiency of free networks resides in the coefficient matrix of observation
equations and thus also in the matrix of normal equations.
equations. A'lthough ~ x in the minimum trace solution is unique, the solution itself, ~,,
is not. This indeterminacy stems from the property that the constant vector w ,
containing the constant terms of the constraint equations, can be an arbitrary s-vector.
(Although there are infinitely many inner adjustment constraint matrices C, the results
do not depend on a particular choice of one such matrix.)
The constrained least-squares adjustment having the minimum trace property is
formulated as
A x̂ = y + v̂ ,
C x̂ = w .
The corresponding solution, see (3.27) and (3.28), is
[  x̂   ]   [ (A^T A)+   C+ ] [ A^T y ]
[ −k_c ] = [ C+T        0  ] [ w     ] , (4.1)
whence
x̂ = A+ y + C+ w , (4.2a)
Σ_x̂ = (A^T A)+ . (4.2b)
The expected value of the observations satisfies
A x = E(y) , (4.3)
where x represents the "true" parameters. Thus, regardless of w , (4.3) implies that
E(x̂) ≠ x . (4.4a)
However, from (4.3) and from the "g" property of the pseudo-inverse it also follows :
E(A x̂) = A x , (4.4b)
which again holds true regardless of the constant vector w (w is eliminated due to
A C+ = 0).
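Both statements can be seen in a deterministic sketch: replacing y by its expectation A x, the estimate A+ E(y) differs from x (the bias of 4.4a), while the adjusted observables A x̂ are reproduced exactly (4.4b). The synthetic "true" parameters are an assumption of the example:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))  # rank-deficient design matrix
Ap = np.linalg.pinv(A)
x_true = rng.normal(size=4)        # "true" parameters
Ey = A @ x_true                    # E(y) = A x, equation (4.3)

Ex_hat = Ap @ Ey                   # E(x_hat) for the minimum norm case, w = 0
print(np.allclose(Ex_hat, x_true))           # False: the estimates are biased (4.4a)
print(np.allclose(A @ Ex_hat, A @ x_true))   # True: A x_hat is unbiased (4.4b)
```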
The special case of the minimum trace solution of the greatest practical
significance is the minimum norm solution. It is the simplest case where the arbitrary
constant terms in the constraint equations are set to zero, i.e.,
w = 0 . (4.5)
All of the above equations could now be imagined written with this particular constant
vector w . Although the indeterminacy in the solution has thus been eliminated, it is not
so with the bias. As has been already indicated in (4.4a), the parameter estimates in free
networks are biased regardless of the numerical value of the constant vector w . This is
imputable to the rank deficiency which is, of course, what makes such networks free. On
the other hand, equation (4.4b) implies that the functions A x̂ or R A x̂ , thus the
adjusted observables or their linear combinations , are indeed unbiased.
Acknowledgement
This paper is dedicated to the memory of the late Professor Peter Meissl who, as
one of the original reviewers, inspired the author to look into the problems of free
networks from an unorthodox perspective. Although he refused any credit or even
acknowledgement for his selfless help, the memory of Prof. Meissl deserves that at least
two areas where he offered new ideas be mentioned :
REFERENCE
G. BLAHA : Inner Adjustment Constraints with Emphasis on Range Observations. Department of
Geodetic Science, Report No. 148, The Ohio State University, Columbus, 1971.
Received : 10.05.1979
Accepted : 05.08.1982