A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
From: Antoine Couëtte
Date: 2010 Nov 25, 23:16 -0800
You are fully right: averaging will only decrease random error and is NOT sufficient per se to discard an "outlier" or "flier", as Mike Burkes subsequently mentioned.
To make a long story short, after personally using the painstaking "manual slope plotting" method up until the early 80's, with the arrival of "smart" calculators - I chose the HP41 family then - I have (successfully, I think) attempted to avoid manual plotting while still remaining (sufficiently, I hope) cautious about outliers.
An automatic linear correlation coefficient plus an expected/actual slope comparison can give you reliable indications about the presence or absence of an outlier in each individual observation data set. Taking the average of your (retained) observations lets you avoid manual curve plotting since - as I recall - the point of averages of two sets of numbers lies on their linear least-squares fit (Mathematicians on duty here, please confirm).
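To illustrate the idea above, here is a minimal sketch in Python (not the author's HP41 program; all numbers are made up for demonstration): it fits a least-squares line to a run of timed sights, screens the run using the correlation coefficient and a comparison of the fitted slope against a computed expected slope, and checks that the fitted line does pass through the point of averages - which is why averaging the retained sights amounts to picking one point on the least-squares line.

```python
def least_squares(xs, ys):
    """Return (slope, intercept, r) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical run of five sights of a rising body: times (seconds past a
# round minute) and altitudes (arc-minutes above a round degree).
t = [0.0, 30.0, 60.0, 90.0, 120.0]
h = [10.0, 12.6, 15.1, 17.4, 20.1]

a, b, r = least_squares(t, h)

# The least-squares line always passes through the point of averages
# (t_bar, h_bar) - this is the fact mentioned in the text.
t_bar = sum(t) / len(t)
h_bar = sum(h) / len(h)
assert abs((a * t_bar + b) - h_bar) < 1e-9

# Screening: require |r| close to 1 and the fitted slope close to the
# body's computed rate of altitude change (the value below is invented).
expected_slope = 0.084  # arc-min per second, hypothetical computed rate
if abs(r) > 0.999 and abs(a - expected_slope) / expected_slope < 0.05:
    print(f"run accepted; averaged sight: t={t_bar:.1f}s, h={h_bar:.2f}'")
else:
    print("possible outlier in the run; inspect individual sights")
```

With these sample numbers the run is accepted; inject a sight several arc-minutes off and either r or the slope test will flag it. The 0.999 and 5% thresholds are placeholders, not recommended values.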
Regarding Manual Plotting, and with the exception of any obvious "flashing outlier" which you are expected to detect and remove anyway, I have often noticed - thanks to GPS - that what might look like an "outlier" may not always actually be one, and that discarding it at this early stage might not always be the best course of action.
Accordingly, from personal practice and not just theory, Manual Plotting is not always "the" magic toolkit.
When Manual Plotting is avoided, one should also remember that if there remains one definite "hidden" outlier which has escaped vigilance - i.e. no warning came from an "immediate computation toolkit" (correlation coefficient, slope comparison or other) - then its negative effect/spurious input will be (greatly) reduced in the overall position result, especially if it is feasible to shoot between 15 and 20 observations on 4 or 5 different bodies for a morning or evening fix.
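A back-of-the-envelope sketch of that dilution effect (invented numbers; the division by the number of bodies is only a crude order-of-magnitude bound, since the real effect on the fix depends on the LOP geometry):

```python
# One hidden outlier among several averaged sights of one body, that body
# being one of several contributing to the fix. Hypothetical figures only.
outlier_error = 5.0     # arc-minutes, the undetected bad sight's error
sights_per_body = 4     # sights averaged into that body's LOP
bodies_in_fix = 4       # bodies contributing (roughly equally) to the fix

# Averaging within the run dilutes the error on that one LOP:
lop_shift = outlier_error / sights_per_body        # 1.25 arc-minutes

# Combining several LOPs dilutes it further; this simple division is a
# rough geometry-ignoring estimate, not a rigorous result.
rough_fix_shift = lop_shift / bodies_in_fix        # ~0.31 arc-minutes

print(f"averaged LOP shifts by about {lop_shift:.2f}'")
print(f"fix shift is on the order of {rough_fix_shift:.2f}'")
```

So a 5' blunder that survives screening ends up perturbing the fix by only a fraction of a mile when 15-20 sights over 4-5 bodies are available, which is the point made above.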
As a consequence, after many years of setting up and using an "immediate computation toolkit" option which enabled me to discard data at an early stage (and which IS quite time consuming on an HP41), I now simply use plain averaging, while still remaining cautious about "flashing outliers". STILL, I have always kept a data-discarding option for the later stage of the computations.
For the final position computation derived from both the DR Position and the LOP's, I first manually plot every (averaged) LOP, and this is where I have retained the option of "data discarding": discarding any (averaged) LOP, rather than discarding, earlier on, any individual observation that is part of an averaged LOP. However, for the same reasons as above - and thanks to Mr. GPS again - I generally keep avoiding discarding LOP's.
Since I always record my data on paper, it would be easy to rework them with the time-consuming "immediate computation toolkit" and thus remove a particular observation early in the computation stage, but I no longer see any practical advantage in it.
I will simply conclude with one of your remarks: "(either it's wrong or all the others are)" ... Yes, the observations could all be wrong, and statistics will not derive any good position from wrong data. In other words, statistics can only make things (a little/much/very much) "better" if, and only if, your observations are already "good".
As we keep practicing it, CelNav is both a Science - definitely so! - and it also remains some kind of an Art, which makes it absolutely Wonderful.
Thank you again for your very true remark about clever data discarding, which certainly has to be taken into account sooner or later in the computation process, and
Antoine M. "Kermit" Couëtte
NavList message boards and member settings: www.fer3.com/NavList