
[Datastream #IZJ-689237]: Additional Datafeeds



Hi Jeff,

re:
> Well, it APPEARS that the data "holes" that we were experiencing are now
> gone.

Super!

> I did burn Whistler down to bare metal and start over last Thursday. I
> used CentOS 5.2, LDM 6.7.0 and Gempak 5.11.4.  I finished that on Friday and
> let it run through the weekend.  When I go in and look at everything, it
> appears that the parts that were previously missing are now there, plus we
> appear to have a lot more products in general coming in.  I'm still waiting
> for confirmation from Dr. Zehnder before I get too enthusiastic though.

Sounds good.

> One
> thing that I still need to fix yet is logging. LDM is writing to
> /var/log/messages and I just want it to write to ~logs/ldmd.log. I also still need to
> trim what we're requesting, as well as what's being processed and saved.

The LDM log setup should be quick to fix.  It is important to get logging
correct, of course, so that problems can be effectively investigated.
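
For what it's worth, LDM 6.x logs through syslog, so the fix is usually a
one-line change to the syslog configuration.  A sketch for CentOS 5's stock
syslogd, assuming the LDM was built with the default LOCAL0 facility and that
the LDM home directory is /home/ldm (adjust both if your setup differs):

# /etc/syslog.conf: route LDM (local0) messages to the LDM log file
local0.debug                                            /home/ldm/logs/ldmd.log
# ...and keep them out of /var/log/messages by adding local0.none:
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages

After editing, restart syslogd (e.g., 'service syslog restart'); new LDM log
entries should then land in ldmd.log instead of /var/log/messages.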

> One weird thing though, and maybe it's due to differences between versions,
> but I still get holes - different holes, but holes - when I use the earlier
> versions of gempak to view the data.  If I use 5.10.4 or 5.11.1, it has
> missing grids.  With 5.11.4, it's much more complete.

This is likely due to more definitions being included in newer GEMPAK tables.

> I really don't think
> that was my problem, previous to the rebuild, because I was decoding and
> viewing with the same version of gempak on both machines and still having the
> problem.

I agree.

> Anyway, thanks for all of the help.

No worries.

> I'm not going to say that I won't bother
> you about this anymore until I get a definitive answer back from the Chair.

:-)

> Plus, I still need to trim my feed requests, pare down my pqact files, etc.

Hmm... The test we set up here was to request the bare minimum set of data
needed to test Garp.  If you ran a different test, then your results would
differ from ours.  Given that there was some question about how large your
network pipe is/was, it makes sense to start with a minimal set of data being
ingested and then proceed from there.
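
The same minimal-first approach works for pqact: comment everything out, then
re-enable entries one at a time as you confirm you need them.  As an
illustrative sketch only (the pattern and output path below are hypothetical,
not taken from your files), a pared-down surface-obs entry looks like:

# File SA/SP surface reports by day and hour.  Fields are feedtype,
# extended regular expression, and action; the action line MUST begin
# with a tab character.
IDS|DDPLUS	^S[AP].... .... (..)(..)
	FILE	-close	data/surface/(\1:yyyy)(\1:mm)\1\2_sao.wmo

Every entry you leave disabled is one less thing the decoders have to chew on.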

Again, here is the feed request we used for our testing:

# The following request line was used for LDM testing
REQUEST FSL2|UNIDATA ".*" idd.unidata.ucar.edu

FSL2|UNIDATA translates into the union of FSL2, HRS, IDS|DDPLUS, and UNIWISC.
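
When you start trimming, it can help to split that compound request into one
REQUEST per feedtype, so each pattern can be tightened independently.  The
".*" patterns below are just wide-open placeholders, not a recommendation:

REQUEST HRS         ".*"    idd.unidata.ucar.edu
REQUEST IDS|DDPLUS  ".*"    idd.unidata.ucar.edu
REQUEST FSL2        ".*"    idd.unidata.ucar.edu
REQUEST UNIWISC     ".*"    idd.unidata.ucar.edu

You can also preview what a tighter pattern would match, without ingesting
anything, by running notifyme against the upstream, e.g.:

notifyme -vl- -f HRS -o 3600 -h idd.unidata.ucar.edu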

> I'm sure I'll probably still run into something else. :-)

I hope not!

> Thanks again.

No worries.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: IZJ-689237
Department: Support IDD
Priority: Normal
Status: Closed