GHOST User Documentation

This is the original documentation with some hyperlinks to some supplemental information.

Created August 1987 by Don Beattie
Revised March 1992 by Sheryn McGregor
Revised 2 December 1998 by Mike Craymer
Revised 10 December 1998 by Mike Craymer

Contents

    Introduction
  1. Using the Programs
  2. GHOST Data File Descriptions
  3. GHOST Program Descriptions


INTRODUCTION

GHOST (Geodetic adjustment using Helmert blocking Of Space and Terrestrial data) is a series of programs written to perform the least squares adjustment of data measured for the establishment of control surveys. The adjustment model is described in Mathematical Models For Use In The Readjustment Of The North American Geodetic Networks (by R.R. Steeves, GSD Technical Report Number 1, 1983) for a height controlled system. Traditionally, measurements have been taken to determine spatial positions in terms of latitude and longitude, with the determination of the third component, height, being weak or inferior. The new approach allows the adjustment of these terrestrial measurements along with the space measurements without the necessity of including observations of ellipsoidal height differences.

The parametric adjustment requires an initial definition of the three dimensional coordinates in the geodetic and astronomic systems. Thus the program expects, for each station, a coordinate value in terms of latitude, longitude, and height; either an astronomic measurement of latitude and longitude or derived deflection components; and the geoid-ellipsoid separation. Conventional measurements are in the form of directions, azimuths and distances measured in the local astronomic system at the local orthometric elevation. Space system measurements using Doppler, VLBI, or GPS are referred to a three-dimensional Cartesian system.

System GHOST uses a number of different files to store the adjustment results. Programs in the GHOST set are used to create new data files, modify old files or list the contents of a file. Each file is identified in the system by a unique logical file name, as shown below. The processing of a set of data files consists of a number of sequential steps. The major steps can be identified by their associated programs. Within each major step are a number of steps required to complete the process. A user can control the major steps by executing the associated programs in the proper sequence. Some steps within the major processes can be controlled using the control record and the adjustment summary file (ADJSUMY). A user can also interact with a number of programs written to list or modify the associated files. The major processes and their associated programs are listed below, as well as the utility programs used to interact with the binary files.

The files are described in two ways. The coded files that are used as input are described in terms of record definitions and also in the data dictionary. The record definition describes each record in detail giving the column numbers and a short description of each item in the record as well as an example. The data dictionary describes each record as a series of items. There is an alphabetically sequenced list of each item with a short description of the term. The Help file on the VAX has a short description of each program.


1. USING THE PROGRAMS

The adjustment of control survey data is normally a batch-oriented problem. The data is assembled by project into files. The files are adjusted separately until the data is clean (blunder free), then possibly combined with other files and adjusted together. A number of adjustments are usually done before the final result is obtained. It may be necessary to examine the results of an adjustment with minimum constraint, followed by an adjustment where some stations are constrained to previously adjusted values. It may be necessary to constrain the adjustment to different types of position observations. The adjuster will have had some training in the various combinations required by a particular organization. This part of the manual is not intended to instruct a user in the procedures required in processing the data; it is intended to acquaint users with the programs so that they can adapt the programs to their particular procedures.

The documentation to follow is written under the headings:

  1. Assembling data
  2. Reading the data
  3. Minimizing the profile of the normal equation matrix
  4. Adjusting the data
  5. Listing the residuals
  6. Analyzing the data


1.1 ASSEMBLING THE DATA

The data is collected in the fixed formats as described in the chapter on data types. There are a few rules to follow when forming files for an adjustment.


1.1.1 PARAMETER DEFINITIONS

COORDINATE DEFINITION

  1. Collect all the coordinate information together. The restriction on the order of the coordinates in the file is that all fixed stations (stations that are included in the adjustment but will not have unknowns in the normal equations) must precede a code 10 switch card. A code 10 record is not required if there are no fixed stations. The order of the remaining coordinate definition records is not important. Users may desire to order the records in some fashion for convenience's sake. The input order is also the output order.

  2. Astronomic observations of latitude and longitude along with geoid deflection values are included in the coordinate definition file. Although these observations are included they are not assigned unknowns in the normal equations but are used to define the astronomic coordinate system for the terrestrial observations of direction, distance and azimuth.

  3. Each coordinate must be defined for the adjustment to proceed. The one exception is when a coordinate is defined by a coordinate observation; in that case the user may want to assign the observed value as the initial value.

  4. Coordinate values should be correct relative to one another. The adjustment process will iterate to a solution provided that the values are correct relative to one another and the normal equations have a solution. The wisest choice is to use values which are as accurate as possible especially where the network strength is questionable.

  5. The adjustment edit chooses the first definition encountered. If more than one record is encountered the initial one is used, except for the junction code definition, where the maximum value of the junction code is kept. If a station is defined by a code 4 and then a code 6 record, the initial coordinates will be saved from the code 4 record; however, the station will be saved as a code 6 junction station.

AUXILIARY PARAMETER DEFINITION

The adjustment process allows for a number of auxiliary parameters to be defined and solved in the adjustment. The auxiliary parameters are identified along with the coordinates in the coordinate definition file. The exception is direction orientation and ISS orientation and scale, which are automatically assigned definitions by the adjustment process. The remaining auxiliary parameters are defined using a code 94 record in the coordinate definition file.

The initial value of the auxiliary parameter can be defined using this record. Normally the value is determined in the course of the adjustment; however, it may be necessary to constrain an auxiliary parameter value within certain limits. This is done by defining the initial value on the auxiliary parameter record and by putting a constraint equation in the interconnecting data file.

The format of this constraint will depend upon the type of auxiliary parameter to be constrained. The user can only constrain the correction to the parameter, and what that correction means varies with the type of auxiliary parameter. For a distance scale parameter the correction to an initial value is solved, but for a scale parameter for position observations the correction is the value of the scale parameter itself, as the solution is linear.

The user may also wish to constrain an auxiliary parameter to be equivalent to another scale parameter. In other words constrain the difference between the two scale parameters to be zero. This would be required where a scale parameter has very little information with which to determine a value but should be close to another value.

SAMPLE COORDINATE DEFINITION FILE (BLKCORD)

 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
   4   629009   LASH                     64  3 52.70400 106 31 11.96400  540.000
  10
   4   629008   JADE                     63 59 18.91099 106 19   .70117  550.000
   4   629007   INDIA                    63 59  1.58620 106 33 21.51552  650.000
   4   629005   GATE                     63 53 49.31804 106 23 56.37197  530.000
   4   629006   HOPE                     63 55 13.06801 106 45  1.25770  430.000
   4   629003   EMBER                    63 47 13.35853 106 42 40.09976  540.000
   4   629004   SIFTON                   63 45 25.02235 106 22 56.58856  440.000
   4   629022   HANBURY                  63 35  4.82998 106 22 12.96829  550.000
   4   629023   FUNNEL                   63 36 40.00494 106 34 52.34573  460.000
   4   629024   CRITCHELL                63 29 35.29322 107 06 18.52198  550.000
   4   629021   MARY                     63 18 43.44249 106 27 38.33798  550.000
   7AST629003   EMBER                    63 47 26.72982 106 42 43.30000
   9   629009   LASH                            0.00001        11.97167     6.0
   9   629008   JADE                            0.00010         0.70323     6.0
   9   629007   INDIA                           1.59369        21.51695     6.0
   9   629005   GATE                           49.32758        56.36833     6.0
   9   629006   HOPE                           13.07454         1.25373     6.0
   9   629003   EMBER                          13.36491        40.09077     6.0
   9   629004   SIFTON                         25.03197        56.57602     6.0
   9   629022   HANBURY                         4.84011        12.94787     6.0
   9   629023   FUNNEL                         40.01303        52.32661     6.0
   9   629024   CRITCHELL                      35.29401        18.50068     6.0
   9   629021   MARY                           43.45239        38.29745     6.0
  94DISGEODSCAL Geodimeter      scpr    0.5    <-- optional initial value

1.1.2 OBSERVATION DEFINITION

The order of the observations is assumed to be random, except that the sigma record used to define the observation standard deviation must precede the observation. Directions, position difference equations, partially reduced normal equations and position equation records are input in sets. A user may wish to create a file sorted in some fashion for convenience's sake.


DIRECTION OBSERVATIONS

Directions are entered into a file in sets. A set is defined as the directions measured at a station during an observing period. The direction records are normally input with the observed directions ordered in increasing clockwise order starting from an initial direction of zero. An a priori standard deviation is defined for each direction. Standard deviations for a group of direction sets can be defined using a sigma-direction record. The type on the direction record is used as a key, along with the type on the sigma-direction record, to associate the constants on the sigma-direction record with the direction. These constants are then used to compute a standard deviation for each direction record. The definition of the sigma-direction record holds for every direction record with the same type until another sigma-direction record with the same type is input. If there is no matching type, the program will use the standard deviation on the record. If that field is blank, an error message is printed and the program will not complete the edit stage.

Directions that are measured as a set must be entered in sequence in the data file. The program determines the different sets either by a change of the observing station number or by a large difference in a set of directions computed from input coordinates. The large difference is ten times the maximum expected difference as entered on the control record. There is no required order for the directions in a set; however, if the coordinate definition and direction observation are not matched the program may divide the sets unexpectedly. An increasing clockwise order is recommended.

There is no check to ensure all directions have been included in an adjustment. The user is responsible for the completeness of the data. There are a number of checks done by the program that can be used to check the data. These are computed using the initial values of the coordinates and may indicate larger values than normal if the initial coordinates are inconsistent with the data. On the other hand, if the initial values are relatively correct the checks will indicate where the data and coordinates disagree. If the network is well designed, these initial differences will be eliminated in the course of the adjustment as expected. If the adjustment runs into difficulty, however, these values can be a source of information when debugging the data.
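
As a sketch of the set-detection logic described above (pure Python; the record layout and names are illustrative, and only the ten-times rule comes from the text):

    # Group direction records into sets. A new set starts when the observing
    # station changes or when the observed-minus-computed value jumps by more
    # than ten times the maximum expected difference from the control record.
    def split_direction_sets(records, max_expected_diff):
        limit = 10.0 * max_expected_diff
        sets, current = [], []
        for at_station, obs_minus_comp in records:
            if current:
                prev_station, prev_omc = current[-1]
                if (at_station != prev_station
                        or abs(obs_minus_comp - prev_omc) > limit):
                    sets.append(current)
                    current = []
            current.append((at_station, obs_minus_comp))
        if current:
            sets.append(current)
        return sets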

SIGMA DIRECTION RECORDS

The values of constants on the sigma-direction record should normally be determined from experience. Tables of recommended values are available and should be used by an organization to maintain a consistent set of values. Factors to be considered are the method of observation, number of repetitions, and centering accuracy as well as a nebulous reliability factor associated with the particular observing conditions and the experience of the personnel.

The program requires that if the sigma-direction record is to apply to a group of directions, the type on the sigma-direction record must match the type on the direction record.

The user is responsible for defining the standard deviation for each direction. The sigma-direction record requires a definition of the standard deviation of the pointing as well as the centering errors for each end of the line. Although a user can derive a standard deviation using statistical methods for a series of observations, the standard deviation for the direction also reflects certain unknown systematic errors such as refraction, and must be proportionally correct with other observations such as distances and azimuths. These values are usually determined by experience with adjustments of similar data over a period of time.

SAMPLE OBSERVATION DEFINITION FILE -- DIRECTIONS (INTCOBS)

 
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  51F16   0.5 0.0 0.0 0.0 0.0
   1F16 629009  LASH            629008  JADE             0  0  0.00000   .600   
   1F16 629009  LASH            629007  INDIA           60 39 37.56      .6   
   1F16 629008  JADE            629009  LASH             0  0  0.00000   .600   
   1F16 629008  JADE            629005  GATE           250 59 17.60000   .600  
   1F16 629008  JADE            629007  INDIA          316 54  5.98000   .600   
   1F16 629007  INDIA           629009  LASH             0  0  0.00000   .600   
   1F16 629007  INDIA           629008  JADE            76 14 29.38000   .600    
   1F16 629007  INDIA           629005  GATE           130 22 44.41000   .600   
   1F16 629007  INDIA           629006  HOPE           222 27 29.29000   .600   
   1F16 629005  GATE            629004  SIFTON           0  0  0.00000   .600   
   1F16 629005  GATE            629003  EMBER           54 32 48.82000   .600    
   1F16 629005  GATE            629006  HOPE           101 42 46.54      .6    
   1F16 629005  GATE            629007  INDIA          144 33  5.80000   .600   
   1F16 629005  GATE            629008  JADE           204 30  2.44000   .600   
   1F16 629006  HOPE            629007  INDIA            0  0  0.00000   .600   
   1F16 629006  HOPE            629005  GATE            45  4 55.11000   .600
   1F16 629006  HOPE            629003  EMBER          119 16 25.68000   .600  
   1F16 629003  EMBER           629004  SIFTON           0  0  0.00000   .600   
   1F16 629003  EMBER           629022  HANBURY         41 31 22.35      .600   
   1F16 629003  EMBER           629023  FUNNEL          60 15 35.46000   .600   
   1F16 629003  EMBER           629006  HOPE           251  4 49.07000   .600   
   1F16 629003  EMBER           629005  GATE           309 43 22.21000   .600   
   1F16 629004  SIFTON          629003  EMBER            0  0  0.00000   .600   
   1F16 629004  SIFTON          629005  GATE            75 10 32.94000   .600
   1F16 629004  SIFTON          629022  HANBURY        256 22 16.61000   .600   
   1F16 629022  HANBURY         629024  CRITCHELL        0  0  0.00000   .600   
   1F16 629022  HANBURY         629023  FUNNEL          31  5  3.39000   .600   
   1F16 629022  HANBURY         629003  EMBER           68 38 13.04000   .600   
   1F16 629022  HANBURY         629004  SIFTON         103 29  7.38000   .600   
   1F16 629022  HANBURY         629021  MARY           293 45  2.18000   .600   
   1F16 629023  FUNNEL          629022  HANBURY          0  0  0.00000   .600   
   1F16 629023  FUNNEL          629024  CRITCHELL      137 49 12.06000   .600   
   1F16 629023  FUNNEL          629003  EMBER          236 17 21.30000   .600   
   1F16 629024  CRITCHELL       629021  MARY             0  0  0.00000   .600   
   1F16 629024  CRITCHELL       629023  FUNNEL         301 11 31.92000   .600   
   1F16 629024  CRITCHELL       629022  HANBURY        312 17 17.98000   .600   
   1F16 629021  MARY            629024  CRITCHELL        0  0  0.00000   .600   
   1F16 629021  MARY            629022  HANBURY         66  2 22.40000   .600


DISTANCE OBSERVATIONS

Distances are entered into the observations file as single observations. The format of the distance records is described in the data types section. Distances are normally input as marker to marker distances with an indication of the reduction method in column 55. If column 55 is blank the program assumes it is a sea level distance and will 'reduce' the distance to marker to marker using the input orthometric heights. The distance units are metres.

A sigma-distance record may be associated with each distance record by including the sigma-distance record before the distance record in the file. The sigma-distance record allows for a constant and a proportional standard deviation estimate of the distance measurement as well as the estimate of the centering error. The program will associate each distance record with the type on the sigma-distance record. If there is no matching type the program will use the standard deviation on the record. If the standard deviation on the distance record is blank the program will list an error and not form the edited observations file.

The program allows for the solution of two types of systematic errors for distances. A constant systematic error can be envisioned as a difference between the electronic center of the instrument and the physical center or as a measuring tape that has been calibrated as 100 metres rather than 99.995 metres. A proportional error might result from an improper calibration of the instrument measuring frequency or a measuring tape improperly tensioned.

The method of associating an auxiliary parameter with a particular distance is to use the sigma-distance record to identify the auxiliary parameter. The auxiliary parameter identification record includes a field which defines the class of systematic error. The same systematic error can be associated with as many groups of distances as required. As many systematic errors as are required may be solved. A restriction in solving for auxiliary parameters is that there must be a source for the solution of the auxiliary parameter in the adjustment. Some sources for distance scale might be a set of distances without an auxiliary parameter for scale, position observations, position difference observations, fixed stations, etc.

SIGMA DISTANCE RECORDS

The values of constants on the sigma-distance record should normally be determined from experience. Tables of values are available and should be used consistently by an organization. Factors to be considered are the method of observation, number of repetitions, and centering accuracy as well as a nebulous reliability factor associated with particular observing conditions.

The user is required to define a standard deviation for each distance. The standard deviation computed from data on the sigma distance record includes a constant standard deviation as well as a proportional standard deviation. These two are combined along with the centering errors to compute a standard deviation for the distance. The values for standard deviation include certain undefined systematic errors and should be proportionally correct with other observations. As a result the values chosen should reflect a measure of past experience with similar types of observations.

The program requires that if the sigma-distance record is to apply to a group of distances, the type on the sigma-distance record must match the type on the distance record and must precede the record in the data.

The auxiliary parameter number on the sigma-distance record is used to associate this group with an auxiliary parameter as defined on an auxiliary parameter record in the coordinate definition file.
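
If the first two constants on the sigma-distance record are read as a constant part in centimetres and a proportional part in parts per million, combining them in quadrature reproduces the standard deviations in the sample file below. A sketch of that inferred combination (an inference from the sample values, not a statement of the exact GHOST formula):

    import math

    def distance_sigma_cm(const_cm, ppm, dist_m):
        # Constant and proportional parts combined in quadrature;
        # result in centimetres (inferred units, see above).
        return math.sqrt(const_cm ** 2 + (ppm * 1.0e-6 * dist_m * 100.0) ** 2)

    # With the sample sigma-distance record constants (3.0, 3.0):
    # distance_sigma_cm(3.0, 3.0, 13058.570) -> 4.934, the value on the
    # LASH-JADE distance record in the sample file below.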

SAMPLE OBSERVATION DEFINITION FILE -- DISTANCES (INTCOBS)

 
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  52T33   3.0 3.0 0.0 0.0  0.0                                         GEODSCAL
   2T33 629009 LASH            629008 JADE            1   13058.57000   4.934
   2T33 629009 LASH            629007 INDIA           1    9186.62300   4.073
   2T33 629008 JADE            629005 GATE            1   10973.04300   4.454
   2T33 629008 JADE            629007 INDIA           1   11720.41700   4.622 
   2T33 629007 INDIA           629005 GATE            1   12361.63800   4.770
   2T33 629007 INDIA           629006 HOPE            1   11871.18400   4.656
   2T33 629005 GATE            629004 SIFTON          1   15638.83800   5.568
   2T33 629005 GATE            629003 EMBER           1   19655.89500   6.616
   2T33 629005 GATE            629006 HOPE            1   17444.81100   6.032
   2T33 629006 HOPE            629003 EMBER           1   14980.99000   5.403
   2T33 629003 EMBER           629004 SIFTON          1   16562.93800   5.804
   2T33 629003 EMBER           629023 FUNNEL          1   20639.87000   6.880
   2T33 629003 EMBER           629022 HANBURY         1   28169.91400   8.967
   2T33 629004 SIFTON          629022 HANBURY         1   19215.67300   6.498
   2T33 629022 HANBURY         629024 CRITCHELL       1   37952.16200  11.773
   2T33 629022 HANBURY         629023 FUNNEL          1   10878.18500   4.433
   2T33 629022 HANBURY         629021 MARY            1   30723.53700   9.692
   2T33 629023 FUNNEL          629024 CRITCHELL       1   29181.69400   9.254
   2T33 629024 CRITCHELL       629021 MARY            1   38013.85100  11.791


AZIMUTH RECORDS

Azimuth observations are entered into the adjustment on records as described in the data types chapter. The program computes the observation equations for the adjustment in the astronomic coordinate system. The astronomic coordinates are defined by the observed astronomic coordinates, or the computed astronomic coordinates. The astronomic coordinates are computed when there is no astronomic coordinate observation. If the geoid values do not exist they are assumed to be zero in the computation. The input orthometric height is used in the computation of the observation equations.

The azimuth is assumed to be measured as clockwise from north. Column 55 may be used to indicate clockwise from the south. The value of the azimuth will be between zero and three hundred and sixty degrees.

The estimate of standard deviation for the record may be input or computed from a sigma-azimuth record. The type on the sigma-azimuth record is associated with the type on the azimuth observation record. The standard deviation estimate for the observation will be computed from the constants on the sigma-azimuth record if it is included in the file before the observation; otherwise the standard deviation on the record is used. If the standard deviation on the record is also blank, an error will be printed and the program will not create the edited observations file.

An auxiliary parameter for orientation may be associated with an azimuth observation by identifying it on an associated sigma-azimuth record. The auxiliary parameter must have been defined on an auxiliary-parameter-definition record in the coordinate definition stage.

SIGMA AZIMUTH RECORDS

The values of constants on the sigma-azimuth record should normally be determined from experience. Tables of values are available and should be used consistently by an organization. Factors to be considered are the method of observation, number of repetitions, levelling error, longitude determination, and centering accuracy as well as a nebulous reliability factor associated with particular observing conditions.

The program requires that if the sigma-azimuth record is to apply to a group of azimuths, the type on the sigma-azimuth record must match the type on the azimuth record.

The user is required to define a standard deviation for the azimuth. The sigma azimuth record has three fields for use in determining this standard deviation. The first is a standard deviation of the observation similar to the direction standard deviation. The second is the standard deviation of the level of the instrument and the third is the standard deviation of the determination of the longitude. The standard deviation for the azimuth is computed as a combination of these three as well as the centering errors. These values will reflect certain unknown systematic errors and usually are chosen with some regard to experience.

The auxiliary parameter name on the sigma azimuth record is used to associate this group with an auxiliary parameter definition in the coordinate definition file. The auxiliary parameter must have been defined on an auxiliary parameter definition record in the coordinate definition stage.

SAMPLE OBSERVATION DEFINITION FILE -- AZIMUTHS (INTCOBS)

 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  53t56   0.5 0.0 0.0 0.0 0.0
   3AZM 629003 EMBER           629006  HOPE           352 37  7.96000  1.269


ORTHOMETRIC HEIGHT DIFFERENCE RECORDS

Orthometric height difference observations are entered into the adjustment on records as described in the data types chapter. The program computes the observation equations for the adjustment in the astronomic coordinate system. The astronomic coordinates are defined by the observed astronomic coordinates, or the computed astronomic coordinates. If the geoid values do not exist they are assumed to be zero in the computation. The input orthometric height is used in the computation of the observation equations.

The orthometric height difference is assumed to be measured as positive from the first station on the record to the second.

The estimate of standard deviation for the record may be input or computed from a sigma orthometric height difference record. The type on the sigma orthometric height difference record is associated with the type on the orthometric height difference observation record. The standard deviation estimate for the observation will be computed from the constants on the sigma orthometric height difference record if it is included in the file before the observation; otherwise the standard deviation on the record is used. If the standard deviation on the record is also blank, an error will be printed and the program will not create the edited observations file.

SIGMA ORTHOMETRIC HEIGHT DIFFERENCE RECORDS

The values of constants on the sigma orthometric height difference record should normally be determined from experience. Tables of values are available and should be used consistently by an organization. Factors to be considered are the method of observation, number of repetitions, levelling methods, and centering accuracy as well as a nebulous reliability factor associated with particular observing conditions.

The program requires that if the sigma orthometric height difference record is to apply to a group of orthometric height differences, the type on the sigma orthometric height difference record must match the type on the orthometric height difference record.

The user is required to define a standard deviation for the orthometric height difference. The sigma orthometric height difference record has three fields for use in determining this standard deviation, the first being a standard deviation of the observation similar to the distance standard deviation. The standard deviation for the orthometric height difference is computed as a combination of the standard deviation of the measurement and the centering errors. These values will reflect certain unknown systematic errors and are usually chosen with some regard to experience.

SAMPLE OBSERVATION DEFINITION FILE -- ORTHOMETRIC HEIGHT DIFFERENCES (INTCOBS)

    
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  14LEV 629003 EMBER           629006  HOPE                      5.423      0.012


POSITION OBSERVATIONS

Position observations are input into the program in sets varying from one position to a few hundred. Each set of observations has associated with it a matrix which describes an estimate of the accuracy of the position observations. The program expects position observations in an earth-centered Cartesian system (X, Y, Z). Positions may be input as latitude, longitude and ellipsoid height, and the program will convert them to X, Y, Z using the ellipsoid defined by the control record. There is no indication of the original ellipsoid used in computing the coordinates. WARNING: the user must be aware that mixing coordinates derived from different ellipsoids will contaminate any results obtained, and the resulting error may not be detectable in the adjustment. On the other hand, one may wish to solve for these differences as systematic errors.

The set can have associated with it up to 7 auxiliary parameters, which can be used to describe systematic errors such as system rotations, system translations and system scale. The initial value and variance of auxiliary parameter observations included in the set can be constrained using a special use of partially reduced normal equations.

The units of the associated matrix are in metres in the X Y Z cartesian system.

The input matrix may be a variance covariance matrix or a weight matrix. The program will invert a variance covariance matrix into a weight matrix for use in forming the normal equations. A singular variance covariance matrix will cause an error and program execution will stop. A singular weight matrix will not cause problems if there are sufficient observations to make the normal equations non-singular.
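
In sketch form (numpy; illustrative only):

    import numpy as np

    def to_weight_matrix(cov):
        # Invert a variance-covariance matrix into a weight matrix.
        # numpy raises LinAlgError for a singular matrix, which
        # corresponds to the fatal error described above.
        return np.linalg.inv(cov)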

The program groups the position observation equations together by using the position equation header record (95). The program detects the type of matrix by reading the type field in the position equation trailer record (97). A position equation matrix has an 'o' in column 5 of the trailer record. A variance matrix has a 'v' in column 6. The default is 'o' in column 5 for position observation and 'v' in column 6 for variance.

The program detects UPPER in columns 7-11 of the trailer record; otherwise it reads FULL. For the position observation equation there are three rows and columns for each station. The program will accept the matrix 4 terms per record and will interpret these records in two ways, UPPER or FULL, as enumerated in the position difference section below.

POSITION OBSERVATIONS from GEODOP 'TAPE9' SOLUTION FILES

It is possible to read position equations directly from a GEODOP tape9 format file by using the three parameters on the position equation header record. The header record must contain a logical file name for the file (in the default directory with extension of .dat), a code indicating it as tape9 format, and the number of data sets to be read.

The group of data sets may be included as position equations by using a code 95 header record. The auxiliary parameters are defined by including code 94 records between the 95 and 97 code records and will apply to each of the data sets read. The position observations covariance matrix will be converted to a weight matrix by placing a 'v' in column 6 of the 97 trailer record.

EXAMPLE OBSERVATION DEFINITION FILE -- POSITION EQUATIONS (INTCOBS)

   
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  95dop a doppler survey
  96    629005  GATE                    63 53 49.31804 106 23 56.37197  530.0000
  96    629006  HOPE                    63 55 13.06801 106 45  1.25770  430.0000 
  97pov upper 
     1.0                 0.0                 0.0                 0.0  <-- 6 terms
     0.0                 0.0
     1.0                 0.0                 0.0                 0.0  <-- 5 terms
     0.0
     1.0                 0.0                 0.0                 0.0  <-- 4 terms
     1.0                 0.0                 0.0                      <-- 3 terms
     1.0                 0.0                                          <-- 2 terms
     1.0                                                              <-- 1 term

EXAMPLE READING FROM A GEODOP OUTPUT -- TAPE9 FILE

   
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  95DOP A DOPPLER SURVEY       DOP75    9    2 
  94DOP A SCARER                SCPR 
  97POV UPPER


POSITION DIFFERENCE OBSERVATIONS

Position difference equations are input into the program in sets varying from one to a few hundred. A set of position difference equations is defined as one where the positional influence of the first station in the set is removed from the set by subtracting it from each of the remaining coordinates. The variance covariance matrix is computed by removing variance equivalent to the variance of the first station from each other station's variance using a Jacobian transformation.

The program groups the position difference equations together by using the position difference equation header record (91). The program detects the type of matrix by reading the type field in the position equation trailer record (97). A position difference matrix has a 'd' in column 5 of the trailer record. A variance covariance matrix has a 'v' in column 6. The default for column 5 is 'd' for position difference and 'v' in column 6 for variance.

The program detects UPPER in columns 7-11 of the trailer record; otherwise it reads FULL. For the position difference equation the first three rows and columns are missing, so that only n-3 terms are read for each row (n = the number of position observation records times 3). The program will accept the matrix 4 terms per record and will interpret these records in two ways, as sketched after the list below.

  1. UPPER triangular matrix where the diagonal term starts on the first field of a record and enough records are read to input all the terms in a row. Thus a matrix with a size of 5 would have 2 records for the first row, 1 for the second and subsequent rows (6 records).
  2. FULL matrix would read enough records to fill the full n by n matrix starting at the first row and column. Thus a matrix of size 5 would read 25 terms or 7 records.
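
A sketch of the two interpretations (pure Python; names illustrative):

    def read_terms(records):
        # Flatten records of up to 4 numeric fields into one list of terms.
        return [float(t) for rec in records for t in rec.split()]

    def upper_to_full(terms, n):
        # Unpack an UPPER triangular listing (each row starting at the
        # diagonal) into a full symmetric n-by-n matrix.
        m = [[0.0] * n for _ in range(n)]
        k = 0
        for i in range(n):
            for j in range(i, n):
                m[i][j] = m[j][i] = terms[k]
                k += 1
        return m

    def full_matrix(terms, n):
        # Unpack a FULL listing of n*n terms read row by row.
        return [[terms[i * n + j] for j in range(n)] for i in range(n)]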

POSITION DIFFERENCE OBSERVATIONS from GEODOP 'TAPE9' SOLUTION FILES

It is possible to read position equations directly from a GEODOP TAPE9 format file by using the three parameters on the position difference equation header record. The header record must contain a logical file name for the file (the name of the attached file), a code indicating it as TAPE9 format, and the number of data sets to be read. The group of data sets may be included as position difference equations by using a code 91 header record and a 'd' in column 5 of the trailer record.

The auxiliary parameters are defined by including code 94 records between the 91 and 97 code records and will apply to each of the data sets read.

The position observations covariance matrix will be converted to a position difference observations covariance matrix by using a Jacobian matrix. The resultant matrix is then converted to a weight matrix for processing in the adjustment.
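
In matrix terms the transformation is C_d = J C J', where each row block of J subtracts the first station. A numpy sketch of that standard propagation (illustrative, not the GHOST code):

    import numpy as np

    def position_difference_covariance(cov, n_sta):
        # Propagate a 3n-by-3n position covariance into the covariance
        # of the differences of stations 2..n relative to station 1.
        J = np.zeros((3 * (n_sta - 1), 3 * n_sta))
        for i in range(n_sta - 1):
            J[3*i:3*i+3, 0:3] = -np.eye(3)                # minus station 1
            J[3*i:3*i+3, 3*(i+1):3*(i+1)+3] = np.eye(3)   # plus the other station
        return J @ cov @ J.T

The weight matrix used in the adjustment is then the inverse of this reduced covariance.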

EXAMPLE -- POSITION DIFFERENCE EQUATIONS (INTCOBS)

 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  91DOP A DOPPLER SURVEY
  96    629005  GATE                    63 53 49.31804 106 23 56.37197  530.0000
  96    629006  HOPE                    63 55 13.06801 106 45  1.25770  430.0000
  97PDV UPPER 
    1.0                 0.0                 0.0   <-- 3 terms, first sta removed
    1.0                 0.0                       <-- 2 terms
    1.0                                           <-- 1 term

EXAMPLE READING FROM A GEODOP OUTPUT -- TAPE9 FILE

  
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  91DOP A DOPPLER SURVEY       DOP75    9    2 
  94DOP A SCARER                SCPR  scale aux.parameter
  97PDV UPPER           <--- 'd' in column 5 indicates position difference 


PARTIALLY REDUCED NORMAL EQUATIONS

There is a requirement to divide a large block of normal equations into smaller blocks. There are a number of reasons why the blocks must be treated independently. For example, in a national framework adjustment it may be necessary to do a number of adjustments to determine different combinations of scale, rotations, etc., by solving the same set of observations but constraining different combinations of auxiliary parameters. Using the partially reduced normal equation approach, the adjustments can be reduced to the solution of only the auxiliary parameters rather than the full set of normal equations.

The program provides a method of subdividing the normal equations into two groups of unknowns: one where the contributions are fully determined in the block, and another group of unknowns that have contributions from another block. This subdivision is done externally to the program through the use of record codes in the coordinate definition stage. The program, using this definition, proceeds to form the normal equations and reduce the normal equations for the first group of unknowns. At this point the Cholesky reduction is complete for the first group, and any reduction involving the correlation of the first group with the second group has been done. The first group may be considered to have been 'eliminated' from the adjustment. By 'elimination' is meant that the first group's normal equation coefficients and right hand side are not required until the second group has been solved in another stage of the adjustment. The program will save the normal equation coefficients and right hand side of the partially reduced second group on file 'REDNOR'.

These partially reduced normal equations can now be used as pseudo observations in subsequent adjustments. Thus the solution of a large set of normal equations can be found as a series of smaller steps. In the case of a data error it will only be necessary to re-reduce the affected blocks.
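
In matrix terms the reduction is the Schur complement of the first group: partitioning the normals as N11, N12, N22 with right hand sides u1, u2 (first group first), the reduced junction system is N22 - N12'N11^-1 N12 with right hand side u2 - N12'N11^-1 u1. A numpy sketch of this standard formulation (GHOST's record-based processing is not shown):

    import numpy as np

    def partially_reduce(N, u, n1):
        # Eliminate the first n1 unknowns from N x = u and return the
        # reduced normals and right hand side for the remaining group;
        # equivalent to stopping the Cholesky reduction at row n1.
        N11, N12, N22 = N[:n1, :n1], N[:n1, n1:], N[n1:, n1:]
        u1, u2 = u[:n1], u[n1:]
        N_red = N22 - N12.T @ np.linalg.solve(N11, N12)
        u_red = u2 - N12.T @ np.linalg.solve(N11, u1)
        return N_red, u_red   # what GHOST writes to REDNOR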

WARNING


PARTIAL NORMAL EQUATIONS AS CONSTRAINT

Partially reduced normal equations may also be used to enter constraints into the normal equations directly. For example, a position can be constrained to its input value by using a partially reduced normal equation and an appropriate weight matrix. In this case the correction to the coordinate is constrained to some value, possibly zero. The size of the associated matrix determines the effect of this constraint: considering that the input is a weight, the larger the number the greater the effective constraint.

It is also possible to enter a constraint equation where the difference between two unknowns is constrained to some value, possibly zero. This could be effective where a short line and inaccurate coordinates result in a divergent solution.

The same technique can be used to constrain the correction to an auxiliary parameter to a particular value. One can also constrain the difference between two different auxiliary parameters of the same type to some value, possibly zero, as sketched below.
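
For example, constraining the difference of two corrections to a value with weight w contributes w times the outer product of a row with +1 and -1 in the two positions. A small numpy sketch (illustrative):

    import numpy as np

    def difference_constraint(n_unk, i, j, weight, value=0.0):
        # Pseudo normal equations constraining (x_i - x_j) to 'value'.
        # A larger weight gives a stronger effective constraint.
        a = np.zeros(n_unk)
        a[i], a[j] = 1.0, -1.0
        return weight * np.outer(a, a), weight * value * a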


WEIGHTED STATION ADJUSTMENT EQUATIONS

There may be a requirement to use a set of station coordinates, along with the associated covariance matrix as determined by another adjustment, as constraint for an adjustment. This will be useful when maintaining a network that will be added to an existing network, or when adding a lower order network to an existing network.

The normal equation contribution is computed as the associated weight matrix, with the right hand side terms computed as the difference in the coordinates times the weight matrix. In other respects they are similar to partially reduced normal equations. The most important difference is that they can be iterated. The effect of that iteration upon coordinates not included in the weighted station adjustment set is another question that will not be answered here.
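
A sketch of that contribution (assuming x_prior holds the previously adjusted coordinates, P their weight matrix, and x_current the current values; names illustrative):

    import numpy as np

    def weighted_station_contribution(P, x_prior, x_current):
        # Normal equation contribution of a weighted station set: the
        # weight matrix itself, plus P times the coordinate differences
        # on the right hand side. Because the right hand side depends
        # on the current coordinates, the contribution can be re-formed
        # on every iteration, which is why these equations can iterate.
        return P, P @ (x_prior - x_current)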

EXAMPLE OBSERVATION DEFINITION FILE -- PARTIALLY REDUCED NORMALS

 
 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  93PRN PARTIAL NORMAL EQ
  92RHS 629005 GATE                0.12345282790    0.3569546     0.000256780
  92RHS 629006 HOPE                0.00012546789   10.8923456     1.000245367
  97RNE UPPER
 0.4273890883482E+04 0.2437562955840E+04-0.1998435290244E+04-0.6225107737697E+03
 0.1426218402441E+04-0.2187795836071E+04 
 0.6821359378078E+04-0.1531313380680E+04-0.1479628114046E+04-0.2907968407862E+05
 0.4460776248055E+04 
 0.1369205364267E+04 0.1146383934886E+04 0.3350223017495E+04 0.5139187627254E+03
 0.1798052821409E+04-0.2781685858370E+03-0.4267060870751E+03 
 0.1857232773571E+04 0.2848964872197E+04 
 0.9436558860551E+03

EXAMPLE OBSERVATION DEFINITION FILE -- EXCHANGE FORMAT

   
  93PRN EXAMPLE EXCFMT          BLOCK1    1
  97RNE

EXAMPLE OBSERVATION DEFINITION FILE -- WEIGHTED STATION ADJUSTMENT

 ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
  93WSA example station adj
  92WSA 1001 STATION 1001    N29 5959.851957W 89 5959.7343011966.25436
  92WSA 1004 STATION 1004    N30  146.853908W 90 14 8.733804 
  92WSA 1005 STATION 1005    N30 1147.852079W 90 1458.7323851924.27627
  97PRN UPPER
 0.3231464658313D+03 0.1069390522836D+03 0.9402132082595D+00 0.3233651567116D+03
 0.1062788424866D+03 0.3233707125265D+03 0.1062402801474D+03 0.1146086152309d+01
 0.9217684445842D+02 0.6012589059038D+00 0.1071281573108D+03 0.9195778501081D+02
 0.1071368061730D+03 0.9194486496761D+02 0.1427203619137D+00
 0.2816643143318D+02 0.9354729976487D+00 0.7743717061069D+00 0.8475322539185D+00
 0.7856409979040D+00 0.2816167257509D+02
 0.3235867621137D+03 0.1064668578710D+03 0.3235903361706D+03 0.1064286867044D+03
 0.1140403066601D+01
 0.9174708368797D+02 0.1064752721990D+03 0.9173239878167D+02 0.3154009464255d+00
 0.3235980721251d+03 0.1064364140798d+03 0.1052411358544d+01
 0.9171974252663d+02 0.3266458187867d+00
 0.2817443304929d+02


1.2 READING THE DATA

There are three options for reading the coded file.

Option 2

Sample procedure sequence

    cd /data  <-- enter the sub-directory with data file 
    VALDEK    <-- this step validates the data file 
    EDTOBS
    PERMIN
    ADJCLA

Program VALDEK queries the user for a file containing a GHOST data set. The data set is composed of three parts: the adjustment definition (title and control record), the coordinate definition records, and the observation records.

The program reads the file, doing edit checks for blunders on each record and placing default values where appropriate. It also determines a sequence number for each coordinate, auxiliary parameter and sigma record. If an error is detected, an image of the record is listed followed by a record showing the error. The data is divided into two coded files, one containing the coordinate definition records called BLKCORD (BLocK COoRDinate data) and another containing the observations called INTCOBS (INTerconnecting OBServations). With the exception of the adjustment definition, each record is written to one of the files. The adjustment definition records are used to form an adjustment summary file (ADJSUMY) which contains a record of the adjustment including sums of observations, constants, dimensions, options, title, etc. This is a binary direct access file which can be listed using program DMPADS or modified using program UPDADS.

While editing the data the program also creates two other binary files containing the coordinate information (STADATA) and the auxiliary parameter information (NUISPAR). These three files will be used in the next job step which will further edit the data.

Sample Output Listing

    
    SUMMARY OF BLKCORD INPUT
    ========================
    NUM OF STATIONS FIXED           =   1
    NUM OF STATION DATA  RECORDS    =  11
    NUM OF STATION JUNCTIONS        =   0
    NUM OF GEODETIC STATIONS(4)     =  11  -1688923.134
    NUM OF GEODETIC STATIONS(5)     =   0   0.
    NUM OF GEODETIC STATIONS(6)     =   0   0.
    NUM OF ASTRO STATIONS   (7)     =   1   -154516.4970
    NUM OF GEOID STATIONS   (9)     =   0   0.
    TOTAL POSITION DATA             =       -1843439.631
    NUM OF NUISANCE PARAMETERS(94)  =   0   0.
    NUM OF NUISANCE PARAMETERS(8)   =   0   0.
    NUM OF NUISANCE PARAMETER JUN   =   0
    TOTAL NUISANCE PARAMETERS       =       0.

    NUMBER OF ERRORS DETECTED       =   0
    NUMBER OF WARNINGS              =   0

Option 3

Sample procedure sequence

    cd /data  <-- enter the sub-directory with data file 
    READCC    <-- this step generates the ADJSUMY file 
    EDTPOS    <-- this step edits the coordinate data 
    EDTOBS
    PERMIN
    ADJCLA 

Programs READCC and EDTPOS can be used to perform the first two functions of adjustment definition and coordinate definition. In this case program READCC reads the adjustment definition (title plus control record) from the terminal. The coordinate definition is read from a file called BLKCORD. This file contains the initial definition of the coordinates as well as the initial definition of the auxiliary parameters. The code 10 record is required if there are fixed stations. The user can edit the initial coordinate definitions without editing the data, or alternatively does not have to re-edit the coordinate definition when cleaning the data. The observations are edited by the EDTOBS program from a file INTCOBS.

ENTERING THE DATA INTO THE EDITED OBSERVATIONS FILE

Sample procedure sequence

     cd  /data  <-- enter the sub-directory with data file
     READCC
     EDTPOS
     EDTOBS     <-- this step edits the observations and writes file
     PERMIN
     ADJCLA

Program EDTOBS is used to edit the observations and, if error free, create binary files that are used by the adjustment and analysis programs. At the same time some further editing is done to try to detect larger than normal differences between the input coordinate definition and the observations. The program will compute an observation using the initial coordinates and compare this computed observation against the input one. If the difference is greater than a minimum acceptable difference, a message is printed showing the details. The user has control over the size of the warnings shown by placing the minimum acceptable difference on the control record. Ten times this acceptable difference is used to detect certain errors in the detection of direction sets.
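
In outline, the comparison behaves like this (an illustrative sketch; the computed value comes from the initial coordinates):

    def check_against_coordinates(observed, computed, min_acceptable_diff):
        # Compare an input observation with the value computed from the
        # initial coordinates; report a warning when the difference
        # exceeds the minimum acceptable difference on the control record.
        diff = abs(observed - computed)
        if diff > min_acceptable_diff:
            print("WARNING: difference %.4f exceeds limit %.4f"
                  % (diff, min_acceptable_diff))
        return diff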

The standard deviation for each record is computed using the sigma records in the file by matching the record codes and the data type (columns 4-6). It is an error if there is no match and the record does not contain a standard deviation.

The input data records may be entered in a random order, with the exception that directions occur in sets where the records are contiguous, and position type observation records are order dependent. Similar records are grouped together in separate files during the editing. In all, seven sub-files are formed: directions, distances, azimuths, position observations, position difference observations, partially reduced normal equations and elevation difference observations. Upon completion of the editing without error, these files are concatenated into one file called the edited observations file (EDITOBS), each sub-file separated by a system end of file. For example, a set of direction records (containing all the observations from one occupation of the station) is written as a single binary record to a file containing only direction observations. A set of position observations is written as a single binary record to a file containing only sets of position equations.

The output from the program is saved on a file edtobs.lis.

Warning messages are printed when the comparison of the observations with the coordinate definition exceeds certain limits. The warnings are used to indicate data problems; however, they do not stop subsequent program execution.

Error messages are printed when incompatible data is detected. In this case the program flags the adjustment summary file, and other programs requiring the EDITOBS file will detect the flag and exit.

There are a number of fatal errors which will inhibit the concatenation of the observation files.

  1. Station number with undefined coordinates
  2. Undefined standard deviation
  3. Data misreads or program malfunctions

An image of the data can be listed by using the option on the control record.

A data summary is listed at the end of each job step and can be used to quickly check a complete data set on subsequent runs.

The successful creation of the four files (ADJSUMY, NUISPAR, STADATA, and EDITOBS) is required to continue with the adjustment process. One can continue the process later or proceed with the next job step.

The data in the edited observations file is not changed throughout the remaining processes. The original data in the station data file and auxiliary parameter file also remains unchanged; however, data is added to these files as the adjustment process proceeds. The contents of these three files can be listed at any time using program LSTEOB, provided ADJSUMY is also attached to the job step.

Sample Output Listing

     NUMBER OF ERRORS DETECTED  =   0 
     NUMBER OF WARNINGS =   0  
     
     SUMMARY OF FILE EDITOBS  
     ============================= 
     
     NUMBER OF DIRECTION OBSERVATIONS   =   11  38  15379258.0  9.500000000 
     NUMBER OF DISTANCE  OBSERVATIONS   =   19  19  368229.83   .9475643-01 
     NUMBER OF AZIMUTH   OBSERVATIONS   =   1   1   1269427.90  1.610361000
     NUMBER OF POSITION EQUATIONS       =   0   0   0.  0.0
     NUMBER OF POSITION DIFFERENCE EQ   =   0   0   0.  0.0 
     NUMBER OF PARTIAL REDUCED N.E.     =   0   0   0.  0.0 
     MAXIMUM SET OF OBSERVATIONS        =   8
     MAXIMUM SET OF POSITION EQUATIONS  =   2


1.3 MINIMIZING THE ADJUSTMENT NORMAL EQUATION MATRIX PROFILE

Sample procedure sequence

    cd /data  <-- enter the sub-directory with data file 
    READCC
    EDTPOS
    EDTOBS
    PERMIN    <-- this step minimizes the normal equation
    ADJCLA

The minimization process is required for two reasons: to reduce the problem of storage and computation of the normal equation matrix, and to group the unknowns as required for the HELMERT BLOCKING process. The minimization of the normal equation matrix is divided into four processes.

  1. Find the normal equation inter-connections.
  2. Find groups of stations that are interconnected. In the case of HELMERT BLOCKING the junction stations are considered as two groups of interconnected stations, the inter-block junctions (code 5) and the global junctions (code 6).
  3. Find a minimum profile within the interconnected group.
  4. Compute the normal equation unknowns and optionally list the normal equation inter-connections.

There are three minimization options: one produces a minimum bandwidth, one a minimum profile, and the third uses the input order. A minimum bandwidth is one where the maximum distance along a row between the diagonal term and the last non-zero term is a minimum for the matrix. A minimum profile is one where the sum of the differences between the diagonal term and the last non-zero term in a column is a minimum.

    ---             -         -
     ---             -       -
      ---             -      -
       ---             -     -
        ---             -    --
         ---             -   --
           ---            -  -
           ---             - -- 
            ---         ---
             --              --
              -               - 
    Bandwidth        Profile

The bandwidth minimum is an attempt to minimize the transport of normal equation records into core memory by sacrificing some central processor time on zero manipulations. The minimum profile should produce a minimum number of records and a minimum number of zero manipulations; however, because of spikes in the profile there may be more data transport required.
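
One standard formulation of the two quantities for a symmetric structure, computed directly from the non-zero pattern (a pure-Python sketch; names illustrative):

    def bandwidth_and_profile(rows):
        # rows[i]: set of column indices with non-zero coefficients in
        # row i. Each row's envelope height is the distance from the
        # diagonal to its leftmost non-zero; bandwidth is the largest
        # height, profile the sum of the heights.
        dist = [i - min([j for j in cols if j <= i] + [i])
                for i, cols in enumerate(rows)]
        return max(dist), sum(dist)

    # Reordering the unknowns changes rows[] and hence both numbers;
    # PERMIN searches for an ordering that makes one of them small.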

The two options in the program are based on algorithms called Cuthill-McKee and BANKERS. The Cuthill-McKee algorithm produces a minimum bandwidth using the following process (a sketch in code follows the steps below).

  1. Start a list of stations by choosing any station.
  2. Read the inter-connections for that station, sorted by the maximum number of connections to each station.
  3. Add to the list of stations any station in the station's inter-connections that is not already in the list.
  4. Choose the next station in the list and repeat steps 2-4 until the list is complete.
  5. For one iteration, replace the first station in the list with the last station in the list and repeat steps 1 to 4.
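
A compact sketch of these steps on an adjacency-list graph (one reading of the description; the classical variant visits neighbours fewest-connections-first):

    from collections import deque

    def cuthill_mckee(adj, start):
        # adj: dict mapping a station to the set of stations connected
        # to it. Builds the ordering breadth-first, taking each
        # station's unvisited connections sorted by their number of
        # connections.
        order, seen, queue = [start], {start}, deque([start])
        while queue:
            sta = queue.popleft()
            for nxt in sorted(adj[sta] - seen, key=lambda s: len(adj[s])):
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
        return order

    # Step 5: repeat once more, starting from the last station found:
    # order = cuthill_mckee(adj, order[-1])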

The BANKERS algorithm is similar with the exception that inter-connections are placed in a waiting list which is weighted by the number of connections to the station. A station is chosen as the one with the minimum weight. The weight of each other station in the waiting list is decremented by one each time a station in the waiting list is chosen.

The program PERMIN (perform minimization) is used to do the processing. Besides the two options to minimize the profile, the program can optionally list the normal equation order with the inter-connections in each sub-file of EDITOBS and their connections within the normal equations.

The program produces a file called PRONEQS (normal equation profile), a binary direct access file which maps the normal equation file to the station data file or auxiliary parameter file, defining each unknown in terms of those files. It is used in subsequent job steps to create normal equation records (HERESI) and to update the station data file and auxiliary parameter file with adjusted values.

The program also adds the normal equation pointers into the station data file and auxiliary parameter file so that given the index one can find the position of the unknowns in the normal equations matrix.

The files STADATA, NUISPAR, PRONEQS and ADJSUMY have now been updated, and if no errors are reported the user can proceed with the next job step, ADJCLA (classical adjustment).

Sample Output Listing

    PERFORM MINIMIZATION

    NUMBER OF STATIONS =   11
    NUMBER OF NUISPAR  =   11
    NUMBER OF FIXED    =   1
    NUMBER OF JUN.STA. =   0
    NUMBER OF JUN.NPR. =   0


1.4 ADJUSTING THE DATA

Sample procedure sequence

    cd /data  <-- enter the sub-directory with data file
    READCC
    EDTPOS
    EDTOBS
    PERMIN
    ADJCLA    <-- this procedure

The adjustment procedure ADJCLA (classical adjustment) has three options. The program reads from the existing coordinate definition and observation files and creates a normal equation file (NORMEQNS) and a normal equation inverse file (NORMINV).

  1. Standard adjustment with iteration to a preset convergence. (control record column 8 [1] or blank)
  2. Design adjustment where the normal equations are reduced but no solution is computed.(control record column 8 [2])
  3. Partially reduced normal equations where the normal equations are reduced to the beginning of the junction unknowns. (the junction unknowns are placed at the end of the normal equations by the minimization process)(control record column 8 [3])

The standard adjustment will iterate to a preset convergence (control record) reading from the existing files and updating the station data file and auxiliary parameter file with the adjusted results. The variance factor is computed after each iteration and compared with the last variance factor to detect divergence.

The program will optionally list the observation equations, normal equations, and corrections for each iteration. Some of the statistics are saved in the adjustment summary file.

A user can now save the updated files or continue to list the residuals. A set of code 04 coordinate records can be obtained on a file ADJCORD by executing program STDTO4 (station data to 04). If program ADJCLA is executed again at this stage, the preset number of iterations and convergence criterion still apply, although the files will contain updated coordinates.

ADJUSTMENT PROGRAM OPERATION

The program reads the file produced by the minimization process and divides the normal equations into convenient records. The records are called HERESI records and contain a 6 term header describing the record, a list of pointers for each of the columns included in the record and a vector containing the coefficients for each column. The zero terms above the last non-zero term are not stored. The pointers in the second part of the record are used to determine where each column begins in the vector of coefficients.
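As a conceptual picture only, a HERESI record might be modelled as below; the field names and the Python representation are illustrative assumptions, not GHOST's internal binary layout.

    class HeresiRecord:
        """Conceptual model of one HERESI normal equation record."""

        def __init__(self, header, column_pointers, coefficients):
            self.header = header                    # 6 terms describing the record
            self.column_pointers = column_pointers  # where each column starts
            self.coefficients = coefficients        # stacked column profiles

        def column(self, j):
            """Return the stored coefficients of the record's j-th column."""
            start = self.column_pointers[j]
            end = (self.column_pointers[j + 1]
                   if j + 1 < len(self.column_pointers)
                   else len(self.coefficients))
            return self.coefficients[start:end]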

The edited observations file is read for each type of observation and the partial normal equations are added to the normal equations for each record. The indices in the edited observations file are independent of the normal equation indices; each normal equation index must be found in either the station data file or the auxiliary parameter file. The terms are either added directly into a HERESI record or placed in a buffer matrix to wait until the HERESI record is read into core. When the buffer is full, the HERESI record in core is exchanged for the one with the largest number of terms in the buffer.

Once all the observations have been added to the normal equation file, the file is checked for zero or negative terms. At this stage all three adjustment options are the same. From the reduction onward all three are different.

The forward portion of the CHOLESKY reduction is done to a preset row and column. For standard and design adjustments a full reduction takes place, however for a partially reduced adjustment the process is stopped at the preset row as determined in the minimization process.

  1. DESIGN adjustment - the program computes the normal equation inverse for the next job step of the analysis.
  2. PARTIALLY REDUCED adjustment - the program will exit after writing a copy of the junction coordinates to a file COORDS and the partially reduced normal equations to a file REDNOR (or PRNFMT). These two files are all that is necessary to go forward in a block adjustment; however, for the back solution the normal equation file, edited observations file, station data file, auxiliary parameter file, profile file and adjustment summary file are all required. A user has the option of regenerating the files at a later date.
  3. STANDARD adjustment - The program will now compute the solution for the normal equations and add the correction to each coordinate or auxiliary parameter. The program proceeds to compute the variance factor for this iteration.

    Four tests are done to determine the stage of adjustment.

    1. CONVERGENCE - The maximum correction to any coordinate is compared with a preset maximum. If the maximum exceeds the preset value the test passes.
    2. DIVERGENCE - The variance factor is compared with an existing variance factor and if there has been an increase of less than 10% the test passes.
    3. ITERATION - The number of times the adjustment has iterated is compared with a preset maximum. If the number is less than the preset number the test passes.
    4. TIME LIMIT - The amount of time the present iteration has taken is compared with the amount of time remaining. If enough time remains the test passes.

If all four tests pass, the program will iterate the adjustment again; otherwise the process stops. If the solution has converged, or the maximum number of iterations has been completed, the program will compute the profile inverse.
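The four tests amount to a simple guard on the iteration loop. The sketch below restates them in Python; the names and argument list are illustrative, with only the 10% divergence threshold taken from the text above.

    def should_iterate(max_correction, preset_correction,
                       variance_factor, last_variance_factor,
                       iteration, max_iterations,
                       iteration_time, time_remaining):
        """Return True when another adjustment iteration should be run."""
        convergence = max_correction > preset_correction             # not yet converged
        divergence = variance_factor < 1.10 * last_variance_factor  # under 10% increase
        iterations = iteration < max_iterations                     # iterations left
        time_limit = time_remaining > iteration_time                 # time for one more pass
        return convergence and divergence and iterations and time_limit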

BACK SOLUTION OF PARTIALLY REDUCED NORMAL EQUATIONS

Sample procedure sequence

     cd ../block1  <-- sets the default sub-directory
     BAKSOL        <-- this step
     COMRES

One can define a tree structure where a PARENT block is defined as one having any number of sibling offspring, but a SIBLING can have only one parent. The PARENT block contains the solution of two or more sets of adjusted partially reduced normal equations. The SIBLING block contains the 'eliminated' portion of the original block as well as the partially reduced portion of the adjustment. The solution of the sibling block is found after the solution has been transferred from the parent block.

Program BAKSOL reads the parent block solution and transfers the portions of the solution vector required to complete the back solution to the sibling block. Once the adjusted values of the junction unknowns are inserted into the partially reduced right hand side record, the back solution can be completed for the sibling eliminated coordinates and auxiliary parameters. The updated solution can now be added to the station data file (STADATA) and to the auxiliary parameter file (NUISPAR).
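In matrix terms, if the sibling's forward reduction left an upper-triangular factor over its eliminated unknowns and a coupling block to the junction unknowns, the back solution is a single back-substitution once the junction values arrive from the parent. A minimal dense sketch, assuming NumPy arrays (GHOST itself works on profile records, not dense matrices):

    import numpy as np

    def sibling_back_solution(u_ee, u_ej, rhs_e, x_junction):
        """Complete a sibling's back solution from the parent's junction values.

        u_ee : upper-triangular factor over the eliminated unknowns.
        u_ej : coupling of the eliminated unknowns to the junction unknowns.
        rhs_e: forward-reduced right hand side for the eliminated unknowns.
        """
        # Remove the junction contribution, then back-substitute.
        return np.linalg.solve(u_ee, rhs_e - u_ej @ x_junction)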


1.5 LISTING THE RESIDUALS

Sample procedure sequence

     cd ../data  <-- sets the default sub-directory
     LISRES      <-- this step

The residual listing program (LISRES) can be used at any time to list the residuals. If there has been a successful edit of the observations, the program will list the residuals computed from the input coordinates. If there has been a successful iteration of the adjustment, the residuals will be computed using the most recent values of the coordinates. It is up to the user to interpret the results.

COORDINATE LISTING

The coordinates and adjusted values are listed first by LISRES. If there is information included in the adjustment to describe the geoid at any point, these values are also listed. In addition, if the adjustment determines any updates to the geoid, these are also listed. For example, if Doppler observations are included along with a geoid height and orthometric elevation, a new geoid height will be listed.

AUXILIARY PARAMETERS

The initial and adjusted values of the auxiliary parameters are listed along with their standard deviation, if computed by the adjustment.

DIRECTION RESIDUALS

Each set of directions is listed including the observation, the input standard deviation, the residual, the normalized residual and the set orientation. Each residual has the effect of the direction orientation removed. The residual is normalized by multiplying the residual by the square root of the weight. The normalized residual is added to the histogram accumulation and is tested against the TAU statistic. Any normalized residual exceeding this value is flagged as suspect.

A histogram of the residuals as well as the accumulation of the contributions to the sum VtPV is listed at the end of the direction listing.
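The normalization described above reduces to one line of arithmetic. A minimal sketch, assuming the weight is the inverse variance (1/sigma**2) and that a critical TAU value is supplied by the caller:

    import math

    def flag_residual(residual, sigma, tau_critical):
        """Normalize a residual and test it against the TAU statistic."""
        weight = 1.0 / sigma ** 2
        normalized = residual * math.sqrt(weight)        # equivalently residual / sigma
        return normalized, abs(normalized) > tau_critical  # (value, suspect flag)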

DISTANCE RESIDUALS

Each distance is listed including the observation, the input standard deviation, the normalized residual and the auxiliary parameter, if defined. Each residual has the effect of any auxiliary parameter removed. The residual is normalized by multiplying the residual by the square root of the weight. The normalized residual is added to the histogram accumulation and is tested against the TAU statistic. Any normalized residual exceeding this value is flagged as suspect.

A histogram of the residuals as well as the accumulation of the contributions to the sum VtPV is listed at the end of the distance listing.

AZIMUTH RESIDUALS

Each azimuth is listed including the observation, the input standard deviation, the residual and the normalized residual. Each residual has the effect of any auxiliary parameter removed. The residual is normalized by multiplying the residual by the square root of the weight and added to the histogram accumulation. The normalized residual is tested against the TAU statistic and any exceeding this value is flagged as suspect.

A histogram of the residuals as well as the accumulation of the contributions to the sum VtPV is listed at the end of the azimuth listing.

POSITION EQUATION RESIDUALS

The residual output for position equations is divided into three parts. The first part lists the auxiliary parameters as well as their standard deviations. The second part lists the coordinates along with their corrections and standard deviations. The third part lists the observation equations along with the corrections, contributions from the auxiliary parameters, and the normalized residuals. Each normalized residual is tested against the outlier test and flagged if suspect.

The normalized residuals are contributed to a histogram and listed along with the chi-square statistic at the end of the position observation listing.

POSITION DIFFERENCE EQUATION RESIDUALS

Each set of position difference equations is listed including the observation, the adjusted coordinate, the residual and the difference residual. Each residual has the effect of any auxiliary parameter removed. Each auxiliary parameter that is included in the computation of the residual is listed.

The normalized residuals are contributed to a histogram and listed along with the chi-square statistic at the end of the position difference observation listing.


1.6 ANALYSIS OF RESULTS

In addition to program COMRES, there are two programs that can be used to analyze the results: program CONFEL will produce a confidence region analysis and program COMPOS will compare two sets of coordinates.

PROGRAM CONFEL

Sample procedure sequence

       cd ../data  <-- sets the default sub-directory
       CONFEL      <-- this procedure

Program confel is best used in an interactive mode where the user is queried for various options and may select to repeat certain processes after having looked at the results. The program has three basic options.

  1. List the station confidence regions relative to the origin.
  2. List the three dimensional confidence regions.
  3. List the two dimensional confidence regions.

The first produces a list of station coordinates, their standard deviations and confidence region axes. This can be examined to see the size of the confidence regions in relation to the origin of the adjustment.

For the second and third options the user may choose to compute all the relative confidence regions or to limit the selection of pairs. The limit is by a selection of pairs within a radius and/or a selection of station pairs. The program will report the number of pairs selected and will query the user to continue or repeat the selection process.

The second option produces relative confidence regions in terms of azimuth, distance and height, as well as up to three axes of the confidence ellipsoid. The size as well as the orientation of the three axes is given.

The third option produces a list of confidence regions in the plane of the local geodetic system in terms of azimuth and distance as well as a comparison to the classifications of 20, 50, 100, 300 and greater than 300 ppm. The program can be asked to list only the confidence regions that fail the classification of the station as input from the coordinate record. The user can interactively change the input classification and also save the classification in the station data file. The program lists the station name along with the maximum line classification for each station. The user is given the option to reassign this classification to all or selected stations.

The output of the program will be in the default directory as CONFEL.LIS. The user may wish to save the station coordinates along with the reassigned classification using program STDTO4.

PROGRAM COMPOS

Sample procedure sequence

       cd ../BLOCK1      <-- sets the default sub-directory
       COMPOS   <-- this procedure 

Procedure COMPOS can be used to compare two sets of coordinates for relative distortion. If the difference between the two sets of coordinates at each station is represented as a vector in space, then the relative distortion between a pair of stations can be described as the difference between their two vectors divided by the distance between the stations. If two adjustments provide a consistent set of coordinates, although different, then the relative distortion will be zero. This knowledge is useful when deciding how to transform one set of coordinates to fit the other.

The procedure queries the user for two sets of coordinates to compare. It saves any coordinates from the second set where the station number matches a station number in the first set. It then computes the difference between each pair and lists it to the terminal and to a file. The program then queries the user to list all the relative distortions or to select station pairs within an input radius. If the user selects to list all the pairs, the program examines the relative distortion between all pairs of stations. At the end of the comparison the program lists a summary of the distortions that are less than 20, 50, 100 and 300 ppm and greater than 300 parts per million. If the user selects to limit the selection process, the program queries for a radius and then for a minimum distortion. For example, a user may wish to look for relative distortions between neighboring stations that are greater than 50 ppm. Considering the maximum line length of 30 kilometers, the user would choose a radius of 40 kilometers and a minimum relative distortion of 50 ppm.
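Read this way, the relative distortion between a pair of stations reduces to a parts-per-million figure. A minimal sketch, assuming cartesian coordinates and NumPy (the function and argument names are illustrative):

    import numpy as np

    def relative_distortion_ppm(a_set1, a_set2, b_set1, b_set2):
        """Relative distortion between stations a and b across two coordinate sets."""
        shift_a = np.asarray(a_set2) - np.asarray(a_set1)   # difference vector at a
        shift_b = np.asarray(b_set2) - np.asarray(b_set1)   # difference vector at b
        distance = np.linalg.norm(np.asarray(b_set1) - np.asarray(a_set1))
        return np.linalg.norm(shift_a - shift_b) / distance * 1.0e6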

The user is given the option of repeating the selection process with changed selection criteria or of changing the input classifications of the stations before repeating the process.

The output is on the default sub-directory in a file COMPOS.LIS.


2. GHOST DATA FILE DESCRIPTIONS


2.1 GHOST INPUT DECK (COMPILE)

The input to GHOST routine VALDEK consists of three parts.

  1. The title and control records.
  2. The initial coordinate and auxiliary parameter definition.
  3. The observations or records pointing to files of observations.

The title is contained on an 80 character record and is used to identify the adjustment on the top of most pages of the output listings. The control record is used to control options in the adjustment and to initialize some default values. The chosen options are listed as the first page of the valdek listing. The results of the control record and title are stored in the adjustment summary file along with other values.

The coordinate records (codes 4, 5, 6, 7, 8, 9 and 94) define the initial values to be given to the coordinates and auxiliary parameters. The first value encountered is kept as the initial value and any subsequent values are compared against the existing values. Any undefined values will be replaced by values on subsequent records.

The coordinate definition consists of the following records

Comment records, which have a non-blank character in column 1.

The observations consist of the following records.

Program VALDEK reads and edits the compile file or the user entered file and generates 5 files - ADJSUMY, STADATA, NUISPAR, BLKCORD, and INTCOBS.


2.2 BLOCK COORDINATE FILE(BLKCORD)

This file is similar to the first part of the GHOST deck and contains the initial coordinate records as well as the geoid records, the astro records and the auxiliary parameter identification record. The first value encountered is kept as the initial value and any subsequent values are compared against the existing values. Any undefined values will be replaced by values on subsequent records.

The file consists of the following records.

The records may have appended to them the indices as determined from the initial data verification program VALDEK. Program EDTPOS can be used to read and edit the BLKCORD file and generate two files, STADATA and NUISPAR, described below. The file ADJSUMY will be updated to contain information about this file.


2.3 INTERCONNECTING OBSERVATIONS FILE (INTCOBS)

This file contains the observations in random format with the exception that directions and position observations are in sets.

The observations consist of the following records.

The station indices may be appended to the record from the initial data verification program VALDEK. Program EDTOBS reads and edits the INTCOBS file and generates the file EDITOBS described below. The file ADJSUMY will be updated to reflect the input observations, and the files STADATA and NUISPAR may be updated if information such as position observations is input.


2.4 STATION DATA FILE (STADATA)

This direct access binary file contains information about the station such as the number and name, input coordinate value, astro coordinate values, deflection values, adjusted coordinate values, normal equation pointers and covariance data. The file can be listed using program LSTEOB or can be modified or viewed using program UPDSTA. Modifications to the file are made under program control as the adjustment progresses. For example a record is kept of the most current adjusted coordinates as well as the initial values. Certain fields from the input coordinate record are maintained for identification on output.


2.5 AUXILIARY PARAMETER FILE (NUISPAR)

This direct access binary file contains information about the auxiliary parameters such as the identification, initial value, adjusted value, normal equation pointers and covariance. The file can be listed using program LSTEOB or viewed on the screen using UPDSTA. Modifications to the file are not permitted except under program control.


2.6 ADJUSTMENT SUMMARY FILE (ADJSUMY)

This direct access binary file contains information about the adjustment including indicators to tell what processes have been completed, the processes the program will complete and the output expected. The file also contains counters and constants either used in the adjustment or used to identify the data set. The file can be listed using program DMPADS and certain changes can be made using program UPDADS.


2.7 EDITED OBSERVATIONS FILE (EDITOBS)

This sequential binary file contains the observations written in binary form, including the observation indices which point to either the STADATA or NUISPAR file. The file itself is divided into 7 partitions or sub-files each containing a particular type of observation. Each partition contains a series of binary records, each record being a logical set of observations. A logical set of observations could be a set of direction observations from a station, a distance observation, or a set of position observations. The record organization is the same for the first three and last partitions and the same for the middle three partitions. The difference in the organization of the middle three partitions is that position observations are considered in threes with an unobserved quantity being flagged as undefined.

The partitions are as follows

  1. Direction observations.
  2. Distance observations.
  3. Azimuth observations.
  4. Position observations.
  5. Position difference observations.
  6. Partially reduced normal equation files and weighted station files.
  7. Elevation difference observations.

The file can be listed using program LSTEOB if the associated files STADATA, ADJSUMY, and NUISPAR are made available to the program. File EDITOBS is generated by program EDTOBS and is not changed by other programs; however, modifications can be made to the file using program UPDEOB.


2.8 NORMAL EQUATION PROFILE FILE (PRONEQS)

This direct access file contains a record for each unknown in the normal equation file. Each binary record contains a pointer to a record in either the STADATA file or the NUISPAR file, as well as the normal equation type, the maximum number of rows and columns associated with this unknown, and the number of the HERESI record containing the normal equation coefficients. The first four terms of the record are generated by program PERMIN and the final term is added by program ADJCLA.
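Conceptually, each PRONEQS record can be pictured as below; the field names are illustrative, not GHOST's own.

    class ProneqsRecord:
        """Conceptual model of one normal equation profile record."""

        def __init__(self, file_pointer, equation_type, max_rows, max_columns):
            # The first four terms are written by PERMIN.
            self.file_pointer = file_pointer    # record in STADATA or NUISPAR
            self.equation_type = equation_type  # normal equation type
            self.max_rows = max_rows            # rows associated with the unknown
            self.max_columns = max_columns      # columns associated with the unknown
            # The final term is added later by ADJCLA.
            self.heresi_record = None           # HERESI record holding the coefficients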


2.9 NORMAL EQUATION FILE (NORMEQN)

This direct access file contains a series of normal equation records in binary form. The record consists of three parts, namely the record constants, the diagonal term pointers, and the normal equation coefficients (or right hand side, original diagonal terms, or corrections). The file NORMEQN is generated by program ADJCLA and is also used in the analysis programs and for the back solution of the block adjustment (BAKSOL and COMINV).


2.10 NORMAL EQUATION INVERSE FILE (NORMINV)

This direct access file contains a series of normal equation records in binary form. The record consists of three parts, namely the record constants, the diagonal term pointers, and the inverse normal equation coefficients. The file NORMINV is generated by program ADJCLA and is also used in the analysis programs and for the inverse solution of the block adjustment (BAKSOL and COMINV).


2.11 PARTIALLY REDUCED NORMAL EQUATION FILE (REDNOR)

This coded file contains the junction portion of the partially reduced normal equation file. During the Helmert block solution of a set of normal equations, the contribution to the overall normal equation file for the terms that are contained in more than one block is transferred between blocks using this file.

The format of the file is similar to the position equation file of the interconnecting observations file, but instead of the coordinates the partially reduced right hand side is transferred on the code 92 record, and the coefficient matrix is the partially reduced junction area of the block. During the forward solution, the parent adjustment uses the siblings' partially reduced normal equations as its observations.

The file is produced by the program ADJCLA and is controlled by columns 8 and 9 of the control record. An alternate method of producing the REDNOR file, if the adjustment process has reduced the normal equations to the proper level, is program PRNNEQ.


2.12 EXCHANGE FORMAT FILE (PRNFMT)

The exchange format file consists of three sub-files

  1. The equation definition sub-file.
  2. The coefficient sub-file.
  3. The original diagonal term sub-file

The equation definition sub-file contains a record for each unknown in the partially reduced normals. The record contains the unknown definition, the unknown identification, the unknown initial value and, for coordinate unknowns, the observed or derived astro values and the geoid-ellipsoid separation. The coordinate values are in decimal degrees.

The coefficient sub-file contains one record for each coefficient in the partially reduced normal equations that is different from zero. There is also a record for each of the partially reduced right hand side terms, as well as one for the partial sum VtPV. Each record consists of two pointers and the coefficient, corresponding to an (i, j) index in a matrix. The last record in the sub-file, which has indices one greater than the number of unknowns defined in the first sub-file, is the sum VtPV term for the set of partially reduced equations.
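The coefficient sub-file is, in effect, a sparse (i, j, value) listing. The sketch below rebuilds a dense picture from such records; the whitespace-separated text layout and the convention that right hand side terms carry a column index one greater than the number of unknowns are assumptions.

    import numpy as np

    def read_coefficients(lines, n_unknowns):
        """Rebuild partially reduced normals from (i, j, value) records."""
        normals = np.zeros((n_unknowns, n_unknowns))
        rhs = np.zeros(n_unknowns)
        vtpv = 0.0
        for line in lines:
            i_txt, j_txt, value_txt = line.split()
            i, j, value = int(i_txt), int(j_txt), float(value_txt)
            if i > n_unknowns and j > n_unknowns:
                vtpv = value                    # trailing VtPV record
            elif j > n_unknowns:
                rhs[i - 1] = value              # right hand side term
            else:
                normals[i - 1, j - 1] = value   # non-zero coefficient
        return normals, rhs, vtpv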

The original diagonal term sub-file contains a record for each diagonal term in the partially reduced normal equations. Each term is a copy of the original sibling file normal equation diagonal term before any reduction process has begun.

The user reads the file using a code 91 record which names the file and has a one in column 41. The record is included in the INTCOBS file in the default sub-directory.

The file is produced by program ADJCLA when a 2 is placed in column 9 of the control record having chosen a partial reduction in column 8.

This file is an alternative to file REDNOR and is chosen for exchanging data between different organizations or when the partially reduced normals are sparse.


3. GHOST PROGRAM DESCRIPTIONS

  • VALDEK Validate the compile data file
  • READCC Read the adjustment options from the terminal
  • EDTPOS Edit the BLKCORD coordinate definition file
  • EDTOBS Edit the INTCOBS observation file
  • PERMIN Minimize the normal equation profile
  • ADJCLA Reduce and solve the normal equations
  • LISRES List the results
  • STDTO4 Save the adjusted coordinates
  • CONFEL Confidence region analysis
  • SELSTA Weighted station selection
  • Other programs


    3.1 GHOST DATA SET VALIDATION (VALDEK)

    This is the initial procedure that will read from a data file as indicated by the user. The data file is formatted as a GHOST input data file consisting of three sections. The first section has the title and control record, the second section the coordinate and auxiliary parameter initial definition, and the third section either the complete set of observations or a record or records pointing to an alternate data file.

    During the validation stage the records in the data file are checked against pre-defined ranges. Each record is validated separately. Any items falling outside the pre-defined ranges are flagged as warnings. Any item that is required and missing is flagged as an error and a flag set in the adjustment summary file so that other programs attempting to use the files can be stopped.

    Both a BLKCORD and an INTCOBS file are generated and saved in the current working directory. It may not be necessary to validate these files for subsequent processing, since the files are sequential 90 character coded files and modifications can be made using standard text editing techniques. For maximum benefit the original order of the coordinate records should not be changed, so that the observation editing can use the station pointers on the observation records between columns 80 and 90.

    An ADJSUMY, a STADATA and a NUISPAR file are produced for subsequent processing. The ADJSUMY file contains a summary of the processing done to date as well as options for the next steps. The STADATA file contains the initial value of the station coordinates as well as other details from the coordinate definition records. The NUISPAR file contains initial value information about the auxiliary parameters.

    The output from program VALDEK is in the current working directory as VALDEK.LIS. Warnings and errors are listed as they occur. A detailed list of observations can be listed by putting a 1 in column 10 of the control card.

    This program is usually followed by the EDTOBS program which will edit the records and check the observations by comparing them against observations derived from the initial coordinates. The values in the ADJSUMY file can be seen using DMPADS or changed using UPDADS. The values in the station data file and the auxiliary parameter file can be listed using LSTEOB or interactively viewed or changed using UPDSTA.


    3.2 EDIT CONTROL RECORD (READCC)

    This program can be used to read the control records. The program initializes the ADJSUMY file with certain default values as well as definitions from the control records. The first control record contains the adjustment title. The second allows the user to easily set some options as described on the output file or in the documentation. The options allow the user to modify some of the steps within the various processes as well as to set default values in the adjustment summary file ADJSUMY.

    The options from the control record are printed on the file READCC.LIS.

    The program is followed by the EDTPOS program. The user can further view the options chosen by using the DMPADS program or change the options using program UPDADS.


    3.3 EDIT BLOCK COORDINATE DATA (EDTPOS)

    This program reads the block coordinate data set (BLKCORD) and creates the station data file (STADATA) and auxiliary parameter file (NUISPAR).

    Each adjustment has a requirement for an initial definition of the coordinates as well as auxiliary parameter values. The values are input using the coordinate definition records in a file called BLKCORD. The one exception is that if a coordinate is undefined in the BLKCORD file but defined in the observation file (INTCOBS), the first observed value that is encountered will be chosen as the initial value. The program chooses the first value as the initial value but will check any subsequent records for the same coordinate and replace values where the existing values are undefined. For example a code 5 will replace a code 4 in the station data record for that station. If the elevation is undefined on the first record the program will use the elevation from another record.

    The records are keyed by the code and the station number. The code is used to identify the type of data and establish junction points for a Helmert block adjustment. The station number is used to uniquely identify the data and as an indirect index to the station data file. Each data field is checked for gross errors and against any existing values.

    The adjustment summary file (ADJSUMY) is updated with record sums.

    The output from the program is on file EDTPOS.LIS. The user can optionally list an input image by placing a 1 in column 10 of the control record or using program UPDADS to change the listing option. The totals in the block coordinate summary can be used to quickly verify that an identical data set has been used for different runs of the same data.

    The program is preceded by the READCC program and followed by the EDTOBS program.


    3.4 EDIT INTERCONNECTING OBSERVATIONS (EDTOBS)

    Once the adjustment files ADJSUMY, STADATA and NUISPAR have been generated by the coordinate edit program (EDTPOS) or the validation program (VALDEK), one can proceed with the observation edit program. This step adds data into the above three files and generates a binary version of the observations, sorted according to type into seven sub-files or partitions. This file is named EDITOBS and is only created when no errors have been detected in the observations.

    The data keys on the record are the code, observation type, station number "from" and station number "to". The record code is used to identify the type of data so that various data checks can be performed with the data, as well as to identify the sub-file for the data in the EDITOBS file. The station number "from" and station number "to" are used to locate the station data records with associated coordinate information. The observation type is used to find records containing the standard deviation estimates for the group of records (sigma records), as well as the scale auxiliary parameters for distance observations and the orientation auxiliary parameter for azimuth observations. Each observation is checked against an observation derived from the initial coordinate values. Any errors or observations outside a pre-selected range are listed along with their derived calculations. A user can use this data to detect differences between the initial coordinate definition and the observation. Listing of warnings and errors can be controlled by defining a minimum threshold value on the control record for each type of observation.

    Position type observation records are input as groups of records consisting of a header record, some observation records and/or auxiliary parameter definitions, a trailer record and weight or covariance matrix records. Various types of data have been defined. The groups can be either a position equation in a cartesian coordinate system, a position difference equation in a cartesian coordinate system, a station adjustment group in the local coordinate system or a set of partially reduced normal equations. Alternate input files have been defined for Doppler and ISS types of data. Partially reduced normal equations are input as a group of records (REDNOR) or from an alternate file PRNFMT.

    The user can optionally list an input image. The interconnecting observation summary is listed and may be used to quickly identify a data set. The output is stored on a file EDTOBS.LIS in the current working directory.

    The data summaries can also be seen using programs DMPADS and UPDADS.

    The initial coordinate values can be seen using UPDSTA and modifications made to the STADATA and NUISPAR files if required.

    A quick summary of the data can be printed using LSTEOB.

    A residual listing using initial coordinate values can be made using COMRES. Care should be exercised interpreting these residuals if the coordinates are not consistent.

    The first step in the adjustment procedure is executed next using program PERMIN. This procedure reorders the normal equation matrix to compress non-zero values towards the diagonal, thus providing a method of eliminating unnecessary zero computations. This seeming overhead can reduce the time to compute the normal equation matrix from a cubic power of the number of unknowns to a nearly linear one. There is no need to repeat this process during an adjustment iteration. The second step in the adjustment procedure is executed by program ADJCLA. The adjustment can be iterated by using a preset number of iterations on the control record or by repeating the ADJCLA program.


    3.5 FIND A MINIMUM NORMAL EQUATION PROFILE (PERMIN)

    This program is used to determine the normal equation ordering for the adjustment unknowns which gives a minimum profile. The purpose is to try to eliminate as many zero terms from the normal equation reduction computations as possible. A sparse matrix is defined as one having a large percentage of zero terms. The efficient processing of a sparse matrix requires that the zero areas are identified such that they can be bypassed in the computations. It is usually inefficient to identify individual terms, but one can take advantage of a reordered matrix where the non-zero terms are grouped together.

    One such method is to reorder the rows and columns of the matrix so that the sum of the differences between the diagonal term and the top non zero term in each column is a minimum. This produces a profile where some columns are much higher than others and is called a spiked profile. Another method is to try to minimize the maximum difference between the diagonal term and the last non zero term. This produces a profile where the column height is more consistent than the spiked profile. This will be referred to as a banded profile.
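    The two objectives can be stated directly: with h(j) the row of the top non-zero term in column j, the profile ordering minimizes the sum of the column heights j - h(j), while the banded ordering minimizes their maximum. A small sketch computing both metrics for a symmetric matrix, assuming NumPy:

        import numpy as np

        def profile_metrics(matrix):
            """Return (profile sum, bandwidth) of a symmetric square matrix."""
            matrix = np.asarray(matrix)
            heights = []
            for j in range(len(matrix)):
                nonzero = np.nonzero(matrix[: j + 1, j])[0]
                top = nonzero[0] if len(nonzero) else j  # top non-zero row in column j
                heights.append(j - top)                  # column height above diagonal
            return sum(heights), max(heights)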

    The Cholesky algorithm is suited to the reduction of these reordered matrices, as the zero areas can be identified and bypassed, thus saving computations. The reduced normal equations will have the same profile; however, zero terms below the top non-zero term may become non-zero and are called "fill-in" terms. The spiked profile will produce the fewest fill-in terms but may require wider access to terms in the matrix. If the matrix is stored in core this is not a problem; however, many adjustments will not fit in the space available and must be reread from auxiliary storage.

    The program stores the normal equation profile in a file (PRONEQS) which can be used to identify the station or auxiliary parameter represented by the unknown. At the same time each station is linked to the normal equation file by a set of pointers in the station data file (STADATA) and the auxiliary parameters in the auxiliary parameter file(NUISPAR).

    The results of the program are listed in the file PERMIN.LIS. The inter-connection summary can be listed showing the number of inter-connections from each station or auxiliary parameter, as well as the stations or auxiliary parameters to which it is connected. The program will always list stations where the number of connections is below 2. The ADJSUMY file is modified to include the minimization process.

    The minimization process can be either a minimum profile (Bankers algorithm), minimum bandwidth (Cuthill-McKee algorithm) or full matrix (no minimization) as chosen on the control record. The minimum profile generally suits the Heresi algorithm whereas the minimum bandwidth might suit a blocked type of solution. The full matrix is only used for special requirements as it is usually more expensive to compute.


    3.6 PERFORM LEAST SQUARES ADJUSTMENT (ADJCLA)

    This program is the heart of the adjustment process and will normally consume most of the computer time. If the minimization is done properly, the processing time will be proportional to somewhere between a linear and a square power of the number of unknowns, the proportion depending upon the sparseness of the matrix and the effectiveness of the minimization algorithm.

    The program reads the adjustment summary file to ensure the initial processes have been completed and to extract the numbers required to define the variable array and record sizes. The edited observations file (EDITOBS) is read and the normal equations are created by adding each observation equation in turn. The normal equations are stored in a file called NORMEQN as a series of "HERESI" (Hanson's Esoteric REduction, Solution and Inversion subroutine) records. Each HERESI record contains the normal equation coefficients above the diagonal to the last non-zero term as well as a set of diagonal term pointers and some constant terms describing the record.

    Each observation equation is squared to form a partial normal equation which is then added to the normal equations. If the HERESI record size exceeds the space available, the program uses disc storage to save the different HERESI records. If the term in the partial normal equation belongs to the HERESI record in core, the term is added to it. If the term belongs to another record, the term is saved in a queue. When the queue is full the program will read the record with the most terms in the queue. The completed normal equation file is checked for zero or negative diagonal terms, which usually indicate missing data. The program will list each record along with its size and the number of columns in each record, as well as a number of other statistics that a user can use to determine the sparseness of the matrix and the efficiency of the minimization algorithm.
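    The queueing scheme described above can be pictured with a dictionary keyed by HERESI record number; everything in the sketch below (the record representation, the capacity, the read/write callables) is an assumption made for illustration:

        QUEUE_CAPACITY = 1024  # assumed size of the waiting buffer

        def add_term(record_id, term, core, queue, read_record, write_record):
            """Route one partial normal equation term to its HERESI record."""
            if record_id == core["id"]:
                core["terms"].append(term)       # target record is already in core
                return
            queue.setdefault(record_id, []).append(term)
            if sum(map(len, queue.values())) >= QUEUE_CAPACITY:
                busiest = max(queue, key=lambda r: len(queue[r]))
                write_record(core["id"], core["terms"])   # swap the core record out
                core["id"] = busiest                      # and the busiest one in
                core["terms"] = read_record(busiest) + queue.pop(busiest)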

    The program now either partially or fully reduces the normal equations. For the standard adjustment the fully reduced normal equations are used to compute a solution and the process is iterated until convergence, divergence or the projected time required exceeds the time allowed. The files STADATA and NUISPAR will contain the adjusted results, file NORMEQN the Cholesky root and file NORMINV the inverse terms under the profile.

    For the block adjustment the unreduced portion of the partially reduced normal equations can be saved either as a REDNOR (93-97) junction block or in the Exchange Format (PRNFMT) for further combination and processing. The option is chosen by using columns 8 and 9 of the control record or by using program UPDADS before running PERMIN and ADJCLA.

    The fully reduced set of normal equations is produced for a design analysis. The solution is not computed, but the inverse terms under the profile are computed for subsequent use in the error analysis procedure.

    The user can optionally list the observation equations, normal equations, and the corrections for each pass by putting a 1 in columns 16, 17 and 18 respectively of the control card for VALDEK. The program lists the variance factor estimate for each iteration as well as the 10 minimum Googe numbers. When problems occur the Googe numbers can be used as an indication of network weakness, and thus identify suspect observations.

    The listing will be written to a file ADJCLA.LIS in the current working directory. The user can examine current results by using programs LSTEOB, UPDSTA or COMRES. If the file norminv has been produced, the user can list the confidence regions for each station and for each station to station pair by using program CONFEL.


    3.7 FIND THE HELMERT BLOCK BACK SOLUTION (BAKSOL)

    The normal equations produced by control survey networks are normally sparse matrices. One can compute the solution directly by a reduction process such as Cholesky or Gauss. The problem with this is that a sparse matrix may become a full matrix unless something is done. There are two ways to reduce the number of computations required: by banding the matrix so that the terms are rearranged towards the diagonal, and by breaking the network into blocks such that each block can be dealt with more or less independently. The second method requires a further subdivision into two, where the inter-connections to other blocks are restricted to one sub-division and all remaining inter-connections are in the other sub-division. The first subdivision will be referred to as a junction block and will contain all stations or auxiliary parameters that have a contribution to the overall normal equations coming from another block. The remaining stations or auxiliary parameters form the main part of the block and have inter-connections only to stations within the one block.

    The stations or auxiliary parameters in the junction block are indicated by placing a code 5 or code 6 on the coordinate record and a code 8 on the auxiliary parameter record. The code 5 indicates a junction with another block formed by dividing a network into blocks. The code 6 indicates a station that must remain a junction as long as there is data defining it within the block.

    The block process then proceeds by reducing each normal equation block to the junction block using the Cholesky algorithm. These junction blocks are saved and can be treated as partial normal equations in a parent block. This process can be considered as similar to the formation of normal equations by adding each observation independently. The combination of these junction blocks with the original blocks can be visualized by using a diagram where the bottom row contains the original blocks, the next row the combination of junction blocks, and the row above that combinations of subsequent junction blocks. Careful selection of junctions will reduce the size and complexity of the block diagram and the block adjustment. The combination of junctions is referred to as a parent block, and the blocks contributing to the parent as the sibling blocks.

    One can consider the Helmert blocking strategy as forming a tree where the top leaf is the final parent block and the lowest leaves are the initial level blocks. Each parent can have one or many siblings. A sibling becomes a parent to lower level blocks.

    The procedures are set up to take advantage of the sub-directories in storing the blocks. The final parent is contained in the root directory with each sibling in a sub-directory to the parent. A parent can have any number of siblings however a sibling can only have one parent. Blocks that have no siblings are the original blocks where normally the observations are stored.

    Using this tree-like structure the adjustment proceeds as follows. Each parent block is defined and programs READCC and EDTPOS are executed for each block. This will produce files ADJSUMY, STADATA and NUISPAR. The topmost parent will be defined as a standard adjustment on the control card, but the lower blocks will be defined as a block adjustment by using columns 8 and 9 on the control record. At the bottom level the programs EDTOBS, PERMIN and ADJCLA are executed, which produce files EDITOBS, PRONEQS, NORMEQN and either REDNOR or PRNFMT. Once all the sibling blocks are reduced for a parent, the user proceeds to the parent sub-directory and executes assemble. Assemble will request the names of the contributing sub-directories and perform the next stage of the block adjustment by combining the lower level junction blocks and partially reducing the resulting normal equations. The batch procedure for assemble requires a file (sibling) in the parent sub-directory containing a record with each sibling sub-directory name.
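    For illustration only, with hypothetical directory names, a small two-level blocking might be laid out as follows; READCC and EDTPOS run in every directory, while EDTOBS, PERMIN and ADJCLA run only in the bottom-level blocks:

        ./            <-- final parent block (standard adjustment)
        ./block1      <-- parent of blocks a and b
        ./block1/a    <-- bottom-level block holding observations
        ./block1/b    <-- bottom-level block holding observations
        ./block2      <-- bottom-level block holding observations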

    The first part of the solution is found when the final parent adjustment has been done. This solution is only for the final parent block, however, and program BAKSOL is used to compute the solution for all the sibling blocks. Starting at the top parent, program BAKSOL is executed for each parent in turn, working down and across the tree. The program transfers the solution vector to the sibling and completes the solution of the sibling block. This process is repeated until all sibling blocks are solved.

    The program will update the STADATA and NUISPAR files with the adjusted coordinates and auxiliary parameters. The user can optionally list the corrections for each pass. The results will be on file BAKSOL.LIS.

    Subsequent passes can be performed by repeating ADJCLA for the bottom level blocks and then assemble procedures followed by baksol procedures for each parent block in turn as above.

    Once the adjustment has been iterated to a satisfactory convergence level, program COMINV may be used to compute the block profile inverse. Once the inverse has been completed the user can produce any confidence regions within the block using program CONFEL.


    3.8 LIST ADJUSTMENT RESIDUALS (LISRES)

    The residuals can be listed using program LISRES. This program reads the EDITOBS file along with the STADATA , NUISPAR and ADJSUMY files and lists the adjustment results. The first part lists the input along with the adjusted coordinates as well as the input and derived geoid information along with any auxiliary parameters that have been requested. The next part lists each type of observations in turn along with the residuals and any auxiliary parameters that have been solved for that observation. The standardized residual is compared against a computed "Tau" test and standardized residuals exceeding that test are flagged as outliers. Each observation contributes a standardized residual to a histogram of the type of observations as well as the total set. Following each type of observation a histogram for that type of observation is drawn. At the same time the Chi-Square statistical test is computed for each type of observation. The sum of the contribution to the variance factor is listed for each type of observation.

    The program will list the direction observations followed by the distance and azimuth observations. These are followed by the position observations as well as the position difference observations and elevation difference observations.

    For each set of position equations or position difference equations the program lists any auxiliary parameters as well as the adjusted station coordinates along with their corrections and standard deviations. The observations are listed along with the residuals, the standardized residuals, the contribution of each auxiliary parameter to the residual, the outlier test and outlier flag if necessary.

    The output listing is stored in a file called LISRES.LIS in the current working directory. A user will normally peruse the listing looking for anomalous residuals and outlier flags.

    If LISRES is executed before the adjustment is complete, the results will be unpredictable; however, if a consistent set of coordinates has been input, the residuals will show where the coordinates and observations disagree. Note: any disagreement above a preset limit is also listed on input in the EDTOBS.LIS file.


    3.9 LIST EDITED OBSERVATION FILES (LSTEOB)

    The STADATA, NUISPAR and EDITOBS files can be listed by using program LSTEOB. Various required parameters are read from the ADJSUMY file. The files are listed in a more or less unformatted style by translating the binary files into a readable coded format. This program would normally be used in debugging the data, as the other programs produce a more readable format.

    The station coordinate information is listed as the input coordinates along with the adjusted coordinates and geoid information. The azimuth and distance between each input and adjusted coordinate is listed. The coordinates are listed as geographic as well as cartesian values along with the astro values and any input geoid separations.

    The observations are listed as observed values along with the input standard deviation and variance. The pointers to the normal equations are also listed for each observation.

    The output is in the current working directory in the LSTEOB.LIS file.


    3.10 LIST ADJUSTMENT SUMMARY FILE (DMPADS)

    The contents of the adjustment summary file (ADJSUMY) can be listed using program DMPADS. Each item from the adjustment summary file that is different from the default value is listed along with a short description of the value. The list is useful to examine the values contained in the file for debugging purposes. Some values can be altered by using program UPDADS. For example the user can change the minimization from a Bankers algorithm to no minimization (full matrix) by placing an "N" in character position 6 of the adjustment options variable and rerunning the PERMIN and ADJCLA programs.

    The output is listed to the screen. Further identification of variables can be found in the data dictionary for the adjustment summary file.


    3.11 UPDATE THE ADJUSTMENT SUMMARY FILE (UPDADS)

    Some of the contents of the adjustment summary file (ADJSUMY) can be modified using program UPDADS. The user is asked for an item number and the value to which it will be changed. The program lists the original value followed by the new value. The user is queried as to whether the change is correct. The user can accept the change or not. The program is exited with a -1 item number. The user can view any particular value by entering the item number and not accepting the change.

    Certain values cannot be changed in this way as the subsequent procedures depend upon the value being correct.


    3.12 UPDATE THE STATION DATA FILE (UPDSTA)

    The contents of the station data file (STADATA) can be viewed and modified using program UPDSTA. The user can list the station data file and the auxiliary parameter file to the screen and also to a file. Modifications can be made to elements of the files by identifying the specific record by a station number or an auxiliary parameter number. The user can read new coordinates or changes to existing ones from an alternate file. The station data file contains current adjusted values along with the original input ones. The user can change the input values to the current adjusted values and also the current values back to the input values.

    Care should be exercised in using this program, as the adjustment will be modified without the safeguard of edit checks in the editing process; on the other hand, changes can easily be made, thus eliminating the need to rerun the observation edit procedures.


    3.13 PRODUCE CODE 4 RECORDS FROM THE STATION DATA FILE (STDTO4)

    Standard GHOST coordinate (code 4) records can be produced from the adjusted coordinate values in the station data file (STADATA).

    The user is given the option of using the original elevation or using the computed one. The user is also given the option of producing the code 9 and code 7 records.


    3.14 COMPUTE THE CONFIDENCE ELLIPSOIDS (CONFEL)

    This program reads information in the current working directory and produces confidence ellipsoids as requested. The program reads from the normal equation file (NORMEQN), the normal equation inverse file (NORMINV), the adjustment summary file (ADJSUMY), the station data file (STADATA) and the normal equation profile file (PRONEQS). The program will compute any inverse values outside the profile, up to the full inverse, using the profile inverse and Cholesky factor for any pair of stations.

    The user can list results using the line classifications of 20, 50, 100, and 300 as listed in the standards for classifications or can list the three dimensional ellipsoids.

    The user can control the number of station pairs by choosing an appropriate radius and/or choosing station to station pairs as required. The option of the full set of station to station ellipsoids is also available but may require some time to complete. On the other hand the user can choose to list only one ellipsoid or to list only the ones that fail to meet the station classification as listed on the coordinate record. The user is queried interactively for the various options.


    3.15 COMPUTE THE HELMERT BLOCK PARTIAL INVERSE (COMINV)

    The back solution of the Helmert blocks using program BAKSOL produces only the solution of the normal equations as well as updating the station coordinates and auxiliary parameters. The user can compute the block inverse by using COMINV in a similar fashion by starting with the parent level and proceeding through the sibling sub-directories as required.

    The user is queried for a sibling sub-directory to compute. All the siblings at one level can be computed during the same session. The program uses the files from the parent and sibling directories and produces an inverse file for the sibling. The normal equation file (Cholesky factor) is reordered in the junction area to match that of the parent block. The columns of the reduced part that belong to the junction are reordered as well. This inverse can then be used by the CONFEL and COMRES programs to list residuals and confidence regions as required.

    Confidence regions between blocks are not available at this time.

    For batch processing the program reads the sibling sub-directories from a file (sibling).


    3.16 COMPUTE A WEIGHTED STATION ADJUSTMENT FILE (SELSTA)

    The user may wish to use a weighted station approach to constrain a portion of the network. This procedure will create a weighted station file by retrieving the adjusted coordinates from the station data file (STADATA) and their corresponding weight matrix from the inverse of the normal equations file (NORMINV). The user is asked to choose a group of stations either by a window, a radius or a list of station numbers. The program will retrieve the covariance matrix for this set of stations, invert it to a weight matrix and output the results to a file. This file can then be read as a special type of partially reduced normal equation set (code 93) where the coordinates are used to compute the rhs terms. The rhs terms are computed as the weight matrix times the difference between the weighted station coordinates and the initial coordinates. These are then treated as partially reduced normal equations.
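    In matrix terms the file carries the weight matrix W (the inverted covariance) and the right hand side W(xw - x0). A minimal sketch with NumPy, where the array shapes and names are assumptions:

        import numpy as np

        def weighted_station_terms(covariance, weighted_coords, initial_coords):
            """Build the weight matrix and rhs terms for a weighted station set."""
            weight = np.linalg.inv(covariance)   # invert the covariance to weights
            shift = np.asarray(weighted_coords) - np.asarray(initial_coords)
            return weight, weight @ shift        # (W, W (xw - x0))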

    The user is given the option of naming the weighted station file.

    The output is in the default sub-directory as SELSTA.LIS.