Welcome to the NavList Message Boards.


A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding

    Re: Lat/Lon by "Noon Sun" & The Noon Fix PROVE IT
    From: George Huxtable
    Date: 2009 Apr 15, 23:58 +0100

    Jim Wilson wrote-

    > A difference between my method and Frank's is that I establish two slopes
    > of the altitude time line, rather than look at the entire curve.
    > Accordingly, I need a few more measurements early and late, since I can't
    > establish accurate slopes from the data you presented. For this latitude,
    > I would start 30 minutes before estimated LAN, and establish the second
    > slope from measurements in the same altitude range as the first set. Can
    > you give me this data? It's work for me to read Excel, so the data in
    > tabular format would be fine.
    > Additionally, I read the declination around noon UT on December 21, 2008
    > as S23°26.4'. I translate that to -23.434 decimal. Did I miss something?
    > Perhaps I should just use what you gave. It's not a problem.
    Let's deal with the last bit first. I'm not sure that I understand the 
    problem here. The decimal equivalent of 23° 26.4' is exactly 23.440°. I 
    quoted, and used, 23.442°. I don't see how Jim makes it to be 23.434°.
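    The arithmetic behind that decimal figure is simply minutes divided by 60; a minimal sketch (the helper name is my own, not anything from the thread):

```python
def dm_to_decimal(degrees, minutes, south=False):
    """Convert degrees and arc-minutes to signed decimal degrees."""
    dec = degrees + minutes / 60.0
    return -dec if south else dec

# S 23° 26.4' -> 23 + 26.4/60 = 23.440, negative for southerly declination
print(round(dm_to_decimal(23, 26.4, south=True), 3))
```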
    Now for the more difficult question. Each scan is different, and peaks in a 
    different place, and the scatter of 1' rms pushes the points up and down 
    quite unpredictably.  For that reason, I can't see an easy way to define, 
    from the simulated data itself as it comes in, just where those wings of the 
    distribution are going to fall, in order to give them the close attention 
    that Jim is requesting. It's a difficulty that would also bedevil data 
    collection in real life, for similar reasons.
    What I could easily do is to "collect" simulated data at uniform closer 
    intervals, throughout the one-hour observation period, paying the price of 
    having to put out a big wodge of data for Jim to handle. He could then, 
    using hindsight, discard the readings that he considered irrelevant, to 
    limit his workload, but in doing that he needs to be careful that he isn't 
    giving himself an unfair advantage. And note that by taking lots of 
    close-spaced observations, you can always reduce statistical scatter, by the 
    effects of averaging, at the expense of making the data-collecting more 
    laborious.
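    That averaging effect is easy to demonstrate: the rms scatter of a mean of N independent readings falls by a factor of the square root of N. A sketch, assuming Gaussian scatter of 1' rms as in the simulation described above (the sample sizes here are my own choices, for illustration only):

```python
import random
import statistics

random.seed(1)
SIGMA = 1.0  # simulated scatter of a single reading, 1' rms

def rms(xs):
    """Root-mean-square of a list of values."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

# Scatter of single readings, versus means of groups of 4 readings:
singles = [random.gauss(0.0, SIGMA) for _ in range(10000)]
means_of_4 = [statistics.mean(random.gauss(0.0, SIGMA) for _ in range(4))
              for _ in range(10000)]

print(rms(singles))    # close to 1.0'
print(rms(means_of_4)) # close to 0.5' -- averaging 4 readings halves it
```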
    Would Jim be able to handle a set of 61 observations at 1-minute intervals? 
    It's no problem for me to generate them. Or would they be too 
    indigestible? Or would 31, spaced by 2 minutes, be more realistic? Or even 
    21, with 3-minute gaps? Tell me what you need, Jim, and I will try to meet 
    it.
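    For a rough idea of how such a data set could be produced, here is a sketch, assuming a parabolic approximation to the altitude curve near LAN and Gaussian noise of 1' rms; the peak altitude and curvature values are hypothetical placeholders, not the figures used in the actual simulation:

```python
import random

random.seed(42)

# Hypothetical parameters, for illustration only.
PEAK_ALT = 15.0       # altitude at LAN, degrees
CURVATURE = 0.0003    # parabolic fall-off, degrees per minute squared
NOISE_RMS = 1.0 / 60  # 1' rms scatter, expressed in degrees

def simulate(interval_min, span_min=60):
    """Simulated noon-sun altitudes at uniform intervals around LAN."""
    obs = []
    t = -span_min / 2
    while t <= span_min / 2:
        alt = PEAK_ALT - CURVATURE * t * t + random.gauss(0, NOISE_RMS)
        obs.append((t, round(alt, 3)))
        t += interval_min
    return obs

for step in (1, 2, 3):  # yields 61, 31, or 21 observations respectively
    print(len(simulate(step)), "observations at", step, "minute intervals")
```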
    contact George Huxtable, at  george@hux.me.uk
    or at +44 1865 820222 (from UK, 01865 820222)
    or at 1 Sandy Lane, Southmoor, Abingdon, Oxon OX13 5HX, UK.
    Navigation List archive: www.fer3.com/arc
    To post, email NavList@fer3.com
    To unsubscribe, email NavList-unsubscribe@fer3.com
