    Re: Rejecting outliers
    From: Peter Hakel
    Date: 2010 Dec 31, 17:19 -0800
    George wrote:

    And I used the word "magic" to describe that procedure, because nowhere,
    that I can recall, has Peter Fogg explained, in numerical terms that we
    might agree on (or otherwise) what his criteria are for accepting some
    observations and rejecting others.

    ----------------

    Reply to George from PH:

    Recently I mentioned the weighted least squares method (complete with an example), in part because it provides precisely what you ask for: a numerical characterization of how important individual data points are in the determination of the fit.  I think that this procedure allows the computer to mimic what Peter Fogg does with his eyes when he plots the slope.  It is not a perfect analogy, because he gets his slope from additional info (DR) while the least squares does the best it can from the data alone.  Nevertheless, the weights are formally given as the reciprocal of the data variance at each UT.  Since the variance is really unknown in this case, it is estimated from the distance between the data point and the best available fit.  The procedure thus becomes iterative.  This is an answer to your question, albeit only by analogy from the other Peter. :-)

    Eq(1):      weight = 1 / variance = 1 / (standard deviation squared)

    where,

    Eq(2):      standard deviation ~ | H_actual_data - H_fit_prediction |

    The validity of Eq(2) can be debated, but it is pretty much the best we can do in the absence of additional information.
    Eq(1), however, is standard weighted least squares, in which the fit minimizes:

    Eq(3):      chi_squared = Sum[ weights * ( H_actual_data - H_fit_prediction )^2 ]

    The fit is calculated so that chi_squared is minimized.
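
    For anyone who would rather read the procedure as code than as spreadsheet formulas, here is a rough Python sketch of the iteration described above.  It is my own translation, not the attached spreadsheet: the straight-line model, the residual floor "eps", and the fixed iteration count are all assumptions on my part.

    # A sketch of Eqs. (1)-(3) with iterative reweighting; NOT the attached
    # spreadsheet.  Straight-line model, residual floor, and iteration count
    # are assumptions.
    import numpy as np

    def iterative_weighted_fit(t, h, iterations=5, eps=1e-6):
        t = np.asarray(t, dtype=float)
        h = np.asarray(h, dtype=float)
        w = np.ones_like(h)                          # start with all points equal
        for _ in range(iterations):
            A = np.vstack([np.ones_like(t), t]).T    # design matrix for H = a + b*t
            W = np.diag(w)
            a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ h)   # minimizes Eq(3)
            resid = np.abs(h - (a + b * t))          # Eq(2): residual as std-dev estimate
            w = 1.0 / np.maximum(resid, eps) ** 2    # Eq(1): weight = 1 / variance
        return a, b, w

    The floor "eps" exists only to keep the weights finite when a point lands exactly on the fitted line.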

    In the example that I created (uniformly spaced UTs) I got:

    Time id    Ho    weight (spreadsheet column BL)
    #1         10    360000.0
    #2         20    360000.0
    #3         30    360000.0
    #4         66         0.0004
    #5         50    360000.0

    Data point #4 was completely rejected by the procedure while the remaining data points contribute to the result equally.
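
    Running the sketch above on this example (taking the UTs as 1 through 5, since only "uniformly spaced" was stated) shows the same qualitative behavior.  The absolute weights differ from the spreadsheet's 360000.0 and 0.0004, because those depend on its internal units and on its floor for the estimated standard deviation, but point #4 is driven to a negligible weight while the other four end up equal:

    # Uses iterative_weighted_fit and numpy from the sketch above; the UT
    # values 1..5 are assumed.
    t  = [1, 2, 3, 4, 5]
    Ho = [10, 20, 30, 66, 50]
    a, b, w = iterative_weighted_fit(t, Ho)
    print(np.round(w / w.max(), 6))    # e.g. [1. 1. 1. 0. 1.] after normalization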

    One of the reasons why I attached a spreadsheet is to allow people to enter their own data and experiment with the algorithm.  If something doesn't work right, I want to know about it. :-))

    ====================

    Reply to Geoffrey from PH:

    For any set of measurements to yield a sufficiently accurate result we rely on the assumption that "outliers" occur infrequently enough for them to be spotted among the "good" data.  This allows for their elimination by the trend-recognizing/enforcing procedure of your choice, be it the slope technique, weighted least squares, or anything else.  The infrequency of such outliers relies on the associated probability distributions having lean tails and being centered around the "correct" values.  Such probability distributions can be associated with competent navigators armed with good utensils. :-)

    Imagine for a moment a set of highly precise measurements (negligible random error) which have all been shifted by the same amount due to a missing index or semidiameter correction.  In this case, ALL data points are outliers; yet they neatly follow a trend so there is no statistical basis for questioning their accuracy.  Recognizing such a systematic error requires information external to this data set; maybe from DR whose latitude is off by what looks like the Sun's semidiameter, or from Peter Fogg's precomputed slope ("assuming the DR is reasonable," he quite rightly points out).
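
    A quick numerical illustration of this point (my own example, not taken from anyone's observations; the 16-arcminute offset is just an assumed stand-in for a missed semidiameter correction):

    # Shifting every altitude by the same constant changes the fitted
    # intercept but leaves every residual at zero, so residual-based
    # weighting has nothing to reject.  Purely illustrative numbers.
    import numpy as np

    t  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    Ho = 10.0 * t                      # "true" altitudes on a clean linear trend
    Ho_shifted = Ho + 16.0 / 60.0      # same systematic offset applied to every sight

    for h in (Ho, Ho_shifted):
        b, a = np.polyfit(t, h, 1)     # slope, intercept of the least-squares line
        resid = h - (a + b * t)
        print(round(b, 6), round(a, 6), np.round(np.abs(resid), 6))
    # Both fits have slope 10.0 and (numerically) zero residuals; only the
    # intercept moves, so nothing in the data itself flags the systematic error.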

    As I am writing this, Peter Fogg's replies appeared, so he beat me to the above argument and also to the one following. :-)

    I see CelNav as a sufficiently mature field in which new fundamental rules are extremely unlikely to be discovered.  This is different from particle physics, where an outlier might represent good new, hitherto unknown, physics rather than being in error.  That is why I think we can assume that the vast majority of data collected by a competent navigator will be good, and thus the removal of the rare outliers is justifiable.


    Peter Hakel


    ----------------


    Geoffrey wrote:

    I have to say that I share George's disquiet about the notion of rejecting outliers simply because they do not seem to fit with the other data.

    Perhaps it is that, like George, I have a background as an experimental physicist, and that the notion of rejecting some data simply because it does not sit neatly with the rest of the data is anathema. Experimental data is usually messy, and experience shows that a lot can be learned from consideration of the possible causes of outliers. Simply ignoring outliers as "bad data", without which the data set would look a lot prettier and be a lot more impressive in the publication, can come back to haunt one in the end when someone (usually oneself) repeats the experiment....

    Frank said that the rejection of outliers was quite acceptable and directed me to look (for example) at Chauvenet. I promised I would and I did. (Volume two, page 558, "Criterion for the rejection of doubtful observations") It seems to be a Chi Squared test based on two or more purely random, Gaussian, distributions. Chi Squared tests are useful if the data cannot be repeated - such as for observations of a rare astronomical phenomenon or a space-borne experiment - and you are trying to wring the last bit of precision from the data. But applying such a statistical sledgehammer to a set of five or six sextant altitude sightings is - I respectfully submit - hardly worthwhile. The navigator's time would be better spent taking another round of sights to force better precision on the mean than applying a statistical eraser to doubtful data.
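
    (For reference, a minimal Python sketch of the commonly quoted simplified form of Chauvenet's criterion - flag an observation if the expected number of equally deviant points in a Gaussian sample of that size falls below one half.  This is the textbook simplification, not necessarily the exact development on page 558 cited above, and the sample numbers are purely illustrative.)

    import math

    def chauvenet_flags(values):
        """Return True for observations the simplified Chauvenet criterion
        would mark as doubtful, assuming Gaussian scatter about the mean."""
        n = len(values)
        mean = sum(values) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample std dev
        flags = []
        for v in values:
            z = abs(v - mean) / sd
            p_two_sided = math.erfc(z / math.sqrt(2))   # P(|Z| >= z) for a Gaussian
            flags.append(n * p_two_sided < 0.5)         # expected count of such points < 1/2
        return flags

    # Purely illustrative: five repeated readings of the same quantity,
    # one of them noticeably deviant.
    print(chauvenet_flags([30.1, 30.2, 29.9, 30.0, 31.5]))
    # -> [False, False, False, False, True]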

    Even if taking more sights is not practical, outliers should not be discarded unless a good reason presents itself as to why they should be discarded.  The consequence may be a rather more open cocked hat or a fix of somewhat looser precision than one would like. But better that than discarding "bad data" and risking a false sense of security from the resulting tight fix.


    Geoffrey Kolbe


    File: 115086.average.xls