
[CONDUIT #CYE-443810]: Re: GFS and GRIB1 in CONDUIT?



> Steve,
> 
> I have one more question. I want also to request NAM data. But it
> seems that if I add NAM requests for some fields only, like I did for
> GFS, the regex is too long to be accepted by ldm. Is it ok then to
> have a second request to the same host with a totally different regex
> ?
> 
> Thanks,
> 
> Christian

Christian,

Yes, you may have multiple request lines for disjoint requests of the same feed.
The confusion with the LDM occurs when data sets overlap, since one of the
requests will be rejecting some data as duplicates, and that throws off the
autoswitching algorithm when you also have redundant upstream hosts.
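
For illustration only, a sketch of what such disjoint request lines might look
like in ldmd.conf (the host and field patterns are placeholders taken from your
earlier regex; adjust them to your own needs):

REQUEST CONDUIT "prod/gfs.*(PRMSL|HGT/500|HGT/1000|APCP)"   idd.cise-nsf.gov
REQUEST CONDUIT "prod/nam.*(PRMSL|HGT/500|HGT/1000|APCP)"   idd.cise-nsf.gov

Because one pattern matches only gfs products and the other only nam products,
neither connection will see the other's products as duplicates.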

Steve Chiswell
Unidata User Support


> 
> 2007/4/5, Unidata CONDUIT Support <address@hidden>:
> > Christian,
> >
> > I went byte by byte in your ldmd.conf request entry and didn't find any
> > lurking control or hidden characters.
> >
> > You could certainly have a problem with multiple request lines if one had a
> > subset of another, but there was only one uncommented request line for
> > conduit in what you sent me previously.
> >
> > If you now have multiple request lines, then just avoid one being a subset
> > of the other. The LDM 6.6 release will keep track individually of the last
> > product arriving from a request line to help avoid confusion on reconnects,
> > but overlapping requests will also create problems with the autoshifting
> > between primary and alternate mode, since the statistics calculated
> > individually by each rpc.ldmd on successfully inserting a received product
> > into the queue would be skewed.
> >
> > In your case, the status products on the one request line should be disjoint
> > from the individual prod/gfs.*param entries you have.
> >
> > Since the ldmd.conf you previously sent is no longer what you are using,
> > I'll throw that one away. Let me know if you have any future trouble though.
> >
> > Steve Chiswell
> > Unidata User Support
> >
> >
> >
> > > Steve,
> > >
> > > I don't know exactly why it works now (that is why you noticed with your
> > > notifyme commands that I was receiving GFS in GRIB1 in my pqact queue),
> > > but here are the changes I have made:
> > >
> > > - Removed status from the request regex in ldmd.conf
> > > - Put 2 requests with the same regex for CONDUIT, one to nsf and one to uiuc.
> > >
> > > After these modifications, I am again receiving the GFS GRIB1 data.... I
> > > don't know why.
> > >
> > > It should be noted that the only data I was able to receive was data that
> > > was available in status (GRIB2 only): only ruc, gefs and gfs/0.5, no nam
> > > or gfs 1.0...
> > >
> > > So, problem solved, but for an unknown reason! I see three possibilities
> > > (or an interaction among them):
> > > 1- Problems when status is in the regex
> > > 2- Problems with only one request line, or with two conduit requests that
> > > do not have the same regex
> > > 3- Weird invisible characters in the request line
> > >
> > > Christian
> > >
> > > 2007/4/3, Unidata CONDUIT Support <address@hidden>:
> > > >
> > > > Christian,
> > > >
> > > > I checked out your pqact.conf file and all tabs etc. look ok. I did
> > > > watch the GFS data arrive on your computer through the notifyme command,
> > > > so the data is certainly in the queue for pqact to act on.
> > > >
> > > > The exec line for pqact in ldmd.conf is there, so I can't see
> > > > why you aren't FILE'ing any of the status messages or GRIB1 products.
> > > >
> > > > If you would provide a login to ldm on kepler, I'll see if I can spot
> > > > anything there.
> > > >
> > > >
> > > > Steve Chiswell
> > > > Unidata User Support
> > > >
> > > >
> > > >
> > > > > Steve,
> > > > >
> > > > > I did everything you suggested, without any difference from the
> > > > > previous behavior.
> > > > > I am running LDM 6.6.2. I am sending you my pqact.conf and ldmd.conf
> > > > > in attachment. Maybe you will see something I don't. If you want
> > > > > access to my ldm user, just let me know!
> > > > >
> > > > > Thanks again,
> > > > >
> > > > > Christian
> > > > >
> > > > >
> > > > address@hidden>:
> > > > > > Christian,
> > > > > >
> > > > > > I ran a notifyme command to kepler.sca.uqam.ca and you certainly
> > > > > > have the .status files in your queue on that machine. I also
> > > > > > witnessed the gefs data arriving on your machine. There are then two
> > > > > > possibilities why the FILE action you have below is not producing
> > > > > > the file output to disk, since the regex does match the products
> > > > > > currently in your queue:
> > > > > >
> > > > > > 1) You have a typo in your pqact.conf file that is preventing pqact
> > > > > > from reading past a certain point in your file. That would be
> > > > > > evidenced if you had other lines following this FILE action that
> > > > > > were also failing.
> > > > > >
> > > > > > Check your pqact.conf using "ldmadmin pqactcheck -p pqact_conf"
> > > > > > as well, since it might show any errors, though you should also look
> > > > > > for any blank lines that are not commented out in your conf file.
> > > > > >
> > > > > > Also, double check the FILE line to ensure it begins with a <TAB>
> > > > > > character, and has <TAB>s before -close and the output file name, as
> > > > > > sketched below.
> > > > > >
> > > > > >
> > > > > > 2) Ensure that you have run "ldmadmin pqactHUP" or restarted your
> > > > > > LDM since the last time you modified your conf file.
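> > > > > >
> > > > > > For illustration only, a sketch of the tab layout described above
> > > > > > applied to your CONDUIT status entry, with <TAB> standing for a
> > > > > > literal tab character:
> > > > > >
> > > > > > CONDUIT<TAB>^.status\.(.*) [0-9][0-9][0-9][0-9][0-9][0-9]
> > > > > > <TAB>FILE<TAB>-close<TAB>data/models/conduit/status/\1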
> > > > > >
> > > > > >
> > > > > > Steve Chiswell
> > > > > > Unidata User Support
> > > > > >
> > > > > >
> > > > > > > Steve,
> > > > > > >
> > > > > > > I have run more tests, and I am even more puzzled.
> > > > > > > Using this command, which uses the same regex as my ldmd.conf line
> > > > > > > (I copied it from my ldmd.conf):
> > > > > > >
> > > > > > > notifyme -vxl - -f CONDUIT -o 10000 -h idd.cise-nsf.gov -p
> > > > > > > "(status|prod/gfs.*PRMSL|prod/gfs.*HGT/500|prod/gfs.*HGT/1000|prod/gfs.*APCP|prod/gfs.*RH/2.m|prod/gfs.*TMP/2.m|prod/gfs.*TCDC/atmos_col|prod/gfs.*HGT/0C|prod/gfs.*UGRD/10_m_above_gnd|prod/gfs.*VGRD/10_m_above_gnd|prod/gefs.*TMPK/850)"
> > > > > > >
> > > > > > > I do get all the products in the INFO output. But I don't get any
> > > > > > > GRIB1 products or status messages, even with this pqact.conf action:
> > > > > > > #
> > > > > > > # CONDUIT STATUS
> > > > > > > #
> > > > > > > CONDUIT         ^.status\.(.*) [0-9][0-9][0-9][0-9][0-9][0-9]
> > > > > > > FILE    -close  data/models/conduit/status/\1
> > > > > > >
> > > > > > > Quite strange... any idea what's wrong??
> > > > > > >
> > > > > > > Christian
> > > > > > >
> > > > > > > 2007/3/28, Steve Chiswell <address@hidden>:
> > > > > > > > Christian,
> > > > > > > >
> > > > > > > > GFS is available in GRIB1 format at present for the 1 degree
> > > > > > > > (F000-F180) and 2.5 degree (F193-F384) grids, and in GRIB2 format
> > > > > > > > for the 0.5 degree grid (F000-F180).
> > > > > > > >
> > > > > > > > The file names follow the naming conventions on:
> > > > > > > > http://www.unidata.ucar.edu/data/conduit/ldm_idd/gfs_files.html
> > > > > > > >
> > > > > > > > The GEMPAK pqact actions I use and provide for the GRIB1 data are:
> > > > > > > >
> > > > > > > > # 1.0 degree GFS data
> > > > > > > > # 2.5 degree GFS data
> > > > > > > > CONDUIT prod/gfs.*pgrb[^2]
> > > > > > > >         PIPE    decoders/dcgrib2 -d data/gempak/logs/dcgrib2_CONDUITgfs.log
> > > > > > > >         -e GEMTBL=/home/gempak/NAWIPS/gempak/tables
> > > > > > > >
> > > > > > > > The [^2] pattern restricts the action above to just the GRIB1
> > > > > > > > products, and not the GRIB2 0.5 degree products.
> > > > > > > >
> > > > > > > > If you aren't seeing these products, be sure to check your
> > > > > > > > ldmd.conf request line to ensure you have an appropriate pattern
> > > > > > > > for the data you are expecting, as well as any pattern used in
> > > > > > > > your exec line for pqact.
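> > > > > > > >
> > > > > > > > For illustration only (substitute your own upstream host and
> > > > > > > > paths), a request line covering the GRIB1 GFS products above,
> > > > > > > > together with the pqact exec line, might look like:
> > > > > > > >
> > > > > > > > REQUEST CONDUIT "prod/gfs.*pgrb[^2]"    idd.cise-nsf.gov
> > > > > > > > EXEC    "pqact -f CONDUIT etc/pqact.conf"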
> > > > > > > >
> > > > > > > > Steve Chiswell
> > > > > > > > Unidata User Support
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, 2007-03-28 at 20:56 +0200, Christian Pagé wrote:
> > > > > > > > > Hello all,
> > > > > > > > >
> > > > > > > > > I have had network problems in the past few months with our
> > > > > > > > > ldm server, which turned out to be related to the firewall and
> > > > > > > > > the --state NEW arguments in the iptables configuration. Now
> > > > > > > > > that it is fixed, I am reactivating the CONDUIT data flow.
> > > > > > > > > However, I no longer receive GFS GRIB1 CONDUIT data: I am
> > > > > > > > > asking for status data, and no GRIB1 data is showing up for
> > > > > > > > > GFS. Has something changed in the meantime?
> > > > > > > > >
> > > > > > > > > Many thanks
> > > > > > > > >
> > > > > > > > --
> > > > > > > > Steve Chiswell <address@hidden>
> > > > > > > > Unidata
> > > > > > > >
> > >
> > >
> >
> >
> 
> 


Ticket Details
===================
Ticket ID: CYE-443810
Department: Support CONDUIT
Priority: Critical
Status: Closed