Welcome to the NavList Message Boards.

NavList:

A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding

    Re: That darned old cocked hat
    From: George Brandenburg
    Date: 2010 Dec 12, 13:03 -0800

    Hi Frank,

    I basically agree with your analysis, except that I think it depends on how you compute the error on the MPP when using assumed uncertainties from the current observations.

    To illustrate the issue let's consider 3 LOPs forming a triangle with certain angles. As we have learned, the MPP is the Symmedian point of the triangle (thank you NavList!). The error ellipse associated with the MPP depends on the uncertainty in measurement (or width) of the LOPs and on the shape of the triangle. Now if we scale the triangle up in size, keeping its shape the same, we will get the same MPP AND the same error ellipse. In other words the error ellipse and the associated probability contours do not depend on the size of this triangle.
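    The Symmedian (Lemoine) point is easy to compute directly: its barycentric coordinates are a^2 : b^2 : c^2, where a, b, c are the side lengths. Here is a minimal sketch (the function name is my own) which also shows that scaling the triangle simply scales the point along with it:

```python
def symmedian_point(A, B, C):
    """Symmedian (Lemoine) point of triangle ABC.

    Its barycentric coordinates are a^2 : b^2 : c^2, where a, b, c
    are the side lengths opposite vertices A, B, C.
    """
    a2 = (B[0] - C[0])**2 + (B[1] - C[1])**2   # a^2, side opposite A
    b2 = (A[0] - C[0])**2 + (A[1] - C[1])**2   # b^2, side opposite B
    c2 = (A[0] - B[0])**2 + (A[1] - B[1])**2   # c^2, side opposite C
    w = a2 + b2 + c2
    return ((a2 * A[0] + b2 * B[0] + c2 * C[0]) / w,
            (a2 * A[1] + b2 * B[1] + c2 * C[1]) / w)

P1 = symmedian_point((0, 0), (4, 0), (1, 3))
P2 = symmedian_point((0, 0), (8, 0), (2, 6))   # same shape, twice the size
# P2 is exactly 2 * P1: the construction scales with the triangle.
print(P1, P2)
```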

    But how can this be? Let's assume we have found the MPP by some form of Least Squares Fit. Then we perform an error propagation from the input quantities (the LOPs) to the fitted parameters (the coordinates of the MPP). But the position of the MPP only depends on the shape of the triangle and the uncertainty in LOP measurement - as already noted it does not depend on the scaled size of the triangle. By the same token the error ellipse size does not depend on the triangle size.
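    For readers who want to see the error propagation explicitly, here is a sketch in my own notation: LOP i is written as n_i . x = d_i with unit normal n_i = (cos Az_i, sin Az_i), and sigma is the assumed LOP width, treated as an INPUT. The fitted covariance is sigma^2 (A^T A)^-1, which depends only on the azimuths and sigma; doubling all the d_i (doubling the triangle) leaves the error ellipse unchanged and quadruples chisq.

```python
import math

def fit_mpp(azimuths_deg, d, sigma):
    """Least-squares MPP from LOPs n_i . x = d_i (my own sketch)."""
    n = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
         for a in azimuths_deg]
    # Normal equations (A^T A) x = A^T d.
    sxx = sum(nx * nx for nx, ny in n)
    sxy = sum(nx * ny for nx, ny in n)
    syy = sum(ny * ny for nx, ny in n)
    bx = sum(nx * di for (nx, ny), di in zip(n, d))
    by = sum(ny * di for (nx, ny), di in zip(n, d))
    det = sxx * syy - sxy * sxy
    x = (syy * bx - sxy * by) / det
    y = (sxx * by - sxy * bx) / det
    # Error propagation: cov = sigma^2 (A^T A)^-1; note it involves
    # only the azimuths (triangle shape) and sigma, not the d_i.
    cov = [[sigma**2 * syy / det, -sigma**2 * sxy / det],
           [-sigma**2 * sxy / det, sigma**2 * sxx / det]]
    chisq = sum((nx * x + ny * y - di)**2
                for (nx, ny), di in zip(n, d)) / sigma**2
    return (x, y), cov, chisq

fix1 = fit_mpp([0, 120, 240], [1.0, 1.2, 0.9], 0.5)
fix2 = fit_mpp([0, 120, 240], [2.0, 2.4, 1.8], 0.5)   # triangle doubled
# fix1 and fix2 have identical covariance matrices; only chisq grows (x4).
print(fix1[1], fix1[2], fix2[2])
```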

    But surely if the LOPs intersect near a point we have made "better" measurements than if the LOPs miss intersecting by a wide margin? This is true, but it is quantified by getting a lower chi squared (chisq) from the fit when the LOPs closely intersect. If we had only two LOPs intersecting we would have no idea how well we did. But by measuring an additional LOP we have over-constrained the problem and we can now calculate not only the MPP coordinates, but also an associated chisq characterizing the quality of the measurement. In this case we have one "degree of freedom" in the fit, and if we repeat this measurement many times the distribution of chisq for all these measurements should average to one, namely the number of degrees of freedom (NDF). (If we were to combine N LOPs to find the MPP, the NDF would be equal to N-2, and the chisq distribution would be centered on this value, but that goes beyond what we need here.)
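    This can be checked with a quick Monte Carlo (my own sketch, not from the post): place the true position at the origin, lay three LOPs through it with Gaussian perpendicular errors of width sigma, fit the MPP by least squares, and average chisq over many trials. The mean comes out near NDF = 1.

```python
import math
import random

random.seed(1)
sigma = 1.0
azimuths = [math.radians(a) for a in (0.0, 70.0, 150.0)]
n = [(math.cos(a), math.sin(a)) for a in azimuths]

def one_trial():
    # True point is (0, 0); each measured d_i is pure Gaussian error.
    d = [random.gauss(0.0, sigma) for _ in n]
    # Solve the 2x2 normal equations (A^T A) x = A^T d.
    sxx = sum(nx * nx for nx, ny in n)
    sxy = sum(nx * ny for nx, ny in n)
    syy = sum(ny * ny for nx, ny in n)
    bx = sum(nx * di for (nx, ny), di in zip(n, d))
    by = sum(ny * di for (nx, ny), di in zip(n, d))
    det = sxx * syy - sxy * sxy
    x = (syy * bx - sxy * by) / det
    y = (sxx * by - sxy * bx) / det
    return sum((nx * x + ny * y - di)**2
               for (nx, ny), di in zip(n, d)) / sigma**2

trials = 20000
mean_chisq = sum(one_trial() for _ in range(trials)) / trials
print(round(mean_chisq, 2))   # close to 1.0, i.e. the NDF
```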

    So the output of our three LOP measurements consists of the coordinates of the MPP (and its associated error ellipse) plus the chisq for the fit. If chisq is near NDF (one in this case) then we know we did an average quality measurement. If chisq is very small then we either really lucked out or we badly overestimated the uncertainty in our LOP measurements. Finally, if chisq is much greater than NDF then we either did a very sloppy measurement or we grossly underestimated our LOP measurement uncertainty. In the latter case it might be a good idea to add additional LOPs to the fit or redo the measurement completely.

    Now let's return to the error ellipse and associated probability contours, which I've said don't depend on the scaled size of the triangle, or more specifically don't depend on the value of chisq. I believe this is the best way to represent the outcome. In particular the calculated error ellipse depends only on what you were capable of achieving given your instrument and your choice of sights. And the value of chisq nicely complements this by giving you a measure of how well you carried out the measurement.

    Although I don't think it's a good idea, some chisq fit algorithms combine the fit parameter errors and the chisq so that the errors reflect not only the inherent uncertainty in the method but also the quality of the fit. This is done by scaling the size of the error ellipse by sqrt(chisq/NDF). In our case this means that the probability contours will scale with the size of the LOP triangle, as Frank has suggested below. The possible advantage of this method is that it "salvages" a bad measurement by assigning larger errors to it. But at the same time it will "reward" a fortuitously small chisq measurement with unrealistically small output errors. (This is sometimes remedied by the kludge of only scaling the output errors when chisq > NDF.)
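    As a toy illustration (the numbers are hypothetical, chosen only to show the rule), the scaling and the "only inflate" kludge look like this:

```python
import math

chisq, ndf = 3.6, 1                   # a poor fit: chisq >> NDF
sigma_x, sigma_y = 0.8, 1.3           # semi-axes of the raw error ellipse
# Scaling rule used by some fit packages: multiply errors by sqrt(chisq/NDF).
scale = math.sqrt(chisq / ndf)
scaled = (sigma_x * scale, sigma_y * scale)   # inflated ellipse for a bad fit
# The kludge variant: only ever inflate the errors, never shrink them.
kludge_scale = math.sqrt(max(chisq, ndf) / ndf)
print(scaled, kludge_scale)
```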

    In the end I think the best procedure is to keep the inherent uncertainty in your measurement separate from the quality of the measurement. Formally this would mean calculating an error ellipse from the input measurement errors to see how well you could do, and taking note of chisq, which tells you how well you actually did. In practice it would mean knowing the approximate error "width" of an LOP that you measure, and when you measure three LOPs noting whether the size of their intersection triangle is consistent with this width. At that point, as has been noted by several on NavList, you are probably safe in taking any point near the center of the triangle as your fix!

    I'm sorry to have gone so long on this, but I thought it was time to inject "quality of fit" into the discussion.

    Cheers,
    George B


    Frank, you wrote:

    If the current set of observations consists of a dozen or more sights, then it's probably better to get the s.d. from the sights since they will reflect current conditions. But if you have four or five observations, it's probably better to set the s.d. to some reasonable value based on previous sights or maybe some sort of weighted average based on previous experience and the current set of sights.

    Now, what happens if I choose to set the standard deviation from the current observations when there are only THREE sights like this triangle case? Then, clearly, a small triangle implies a low standard deviation while a large triangle implies a correspondingly larger standard deviation. Since the error ellipses are scaled by the standard deviation, this means that the error ellipses scale with the size of the triangle. I don't think that it's appropriate to get the standard deviation from the current set of observations when they are so few in number, but if we do, it creates a sort of scale invariance: a small triangle implies a small error ellipse. And NOTE: many navigational software packages DO get the s.d. from the sights even when there are only three LOPs (they're following the rules in an old book from HMNAO a bit too slavishly). I haven't worked this out, but it seems to me that the integrated probability within any triangle will remain the same when this choice is made.

    If instead we treat the standard deviation of the observations as an INPUT to the problem, which I think is by far the better choice with a small number of LOPs, then the probability ellipses will have a FIXED size (different shapes and orientations, of course) with various different sizes of triangles. And in that case, the integrated probability over the triangle (the probability of being inside and not outside) will vary from one triangle to another. But in the long run, the probability of being inside ANY triangle formed from three LOPs will average out to 25%.
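    The 25% figure Frank mentions is easy to check by simulation. A sketch under my own assumptions (true position at the origin, independent Gaussian perpendicular errors on each LOP, arbitrary fixed azimuths): count how often the true position falls inside the cocked hat.

```python
import math
import random

random.seed(2)
azimuths = [math.radians(a) for a in (10.0, 100.0, 200.0)]
n = [(math.cos(a), math.sin(a)) for a in azimuths]

def intersect(i, j, d):
    # Intersection of lines n_i . x = d_i and n_j . x = d_j.
    (ax, ay), (bx, by) = n[i], n[j]
    det = ax * by - ay * bx
    return ((d[i] * by - d[j] * ay) / det, (ax * d[j] - bx * d[i]) / det)

def origin_inside(d):
    # Triangle vertices are the pairwise LOP intersections.
    P = [intersect(1, 2, d), intersect(0, 2, d), intersect(0, 1, d)]
    # The origin is inside iff it lies on the same side of all three edges.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    s = [cross(P[i], P[(i + 1) % 3], (0.0, 0.0)) for i in range(3)]
    return all(v > 0 for v in s) or all(v < 0 for v in s)

trials = 20000
hits = sum(origin_inside([random.gauss(0.0, 1.0) for _ in range(3)])
           for _ in range(trials))
frac = hits / trials
print(round(frac, 3))   # near 0.25
```

    The result does not depend on the azimuths chosen: each LOP falls on either side of the true position with probability 1/2, and 2 of the 8 equally likely sign combinations put the point inside the triangle.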

    ----------------------------------------------------------------
    NavList message boards and member settings: www.fer3.com/NavList
    Members may optionally receive posts by email.
    To cancel email delivery, send a message to NoMail[at]fer3.com
    ----------------------------------------------------------------
