
[IDD #EXZ-787119]: ad



Hi Patrick,

re: your ldmd.conf file
> http://wn.hamweather.net/wn/files/patrick/tom/ldmd.conf

OK, right off the top of my head I would say that the GFS request
for CONDUIT data needs to be split.
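
For example, the same last-digit trick I suggest for RUC2 below should
work for GFS.  This is only a sketch, assuming the GFS product IDs
contain the string "gfs" and end in a sequence number (verify with
notifyme first), and it would sit alongside whatever REQUEST covers
the rest of your CONDUIT data:

REQUEST CONDUIT "(gfs.*[09]$)" bigbird.tamu.edu
REQUEST CONDUIT "(gfs.*[18]$)" bigbird.tamu.edu
REQUEST CONDUIT "(gfs.*[27]$)" bigbird.tamu.edu
REQUEST CONDUIT "(gfs.*[36]$)" bigbird.tamu.edu
REQUEST CONDUIT "(gfs.*[45]$)" bigbird.tamu.edu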

re:
> our network manager feels that we are not even close to capacity,
> but i will as always trust your judgment and make any adjustments
> you feel are appropriate :)

He may be correct.  Consider the CONDUIT latency plot for wn1.hamweather.net:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+wn1.hamweather.net

You can see that the largest latencies occur every 6 hours when GFS data
dominates the CONDUIT content.

re:
> i've never had any problems with gfs, only ruc2 displays the issues,
> this also happened when i worked at various universities over the years.

I know of no reason that one type of data in CONDUIT would be delivered
more slowly than another ** except ** when the volume of that type is
large and the REQUEST(s) for it are "singular" (meaning not split into
mutually-exclusive subsets).  The reason is that the products matched
by a REQUEST are sent serially: if there are a lot of them on a single
feed, the ones sent "later" have to wait until the ones sent "earlier"
have been delivered.  If you split the requests into mutually-exclusive
subsets, the sending of one subset's elements will not be slowed by the
elements of another.
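
As a quick concrete check of what "mutually exclusive" means here, take
the first product ID from the notifyme output below and run it against
a two-way split on the trailing digit (illustration only):

ID='data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.pgrbf15.grib2 !grib2/ncep/RUC2/#000/201104281300F015/OMEG/60-30 hPa PDLY! 000227'
echo "$ID" | egrep 'ruc2.*[0-4]$'    # no match
echo "$ID" | egrep 'ruc2.*[5-9]$'    # matches: sequence 000227 ends in 7

Every ID ends in a digit, so it matches exactly one pattern in any set
of character classes that partitions 0-9.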

re:
> this is the sample i showed before:
> http://wn.hamweather.net/wn/files/patrick/ruc2.png

OK.  We can see the latency effect from the rtstats displays.

My recommendation is to start taking a hard look at the RUC2 products.
In particular, notice the sequence number shown at the end of the
product IDs.  For instance:

notifyme -vl- -f CONDUIT -h bigbird.tamu.edu -o 3600 -p ruc2
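
(Here, -v turns on verbose output, -l- sends the log output to the
terminal, -f CONDUIT selects the feedtype, -h names the upstream host,
-o 3600 looks back one hour, and -p ruc2 restricts the listing to
product IDs containing "ruc2".)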

 ...
Apr 28 15:05:12 notifyme[847] INFO:    14659 20110428140651.181 CONDUIT 227  
data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.pgrbf15.grib2 
!grib2/ncep/RUC2/#000/201104281300F015/OMEG/60-30 hPa PDLY! 000227
Apr 28 15:05:12 notifyme[847] INFO:     9394 20110428140651.182 CONDUIT 232  
data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.pgrbf15.grib2 
!grib2/ncep/RUC2/#000/201104281300F015/TMPK/120-90 hPa PDLY! 000232
Apr 28 15:05:12 notifyme[847] INFO:     9294 20110428140651.186 CONDUIT 246  
data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.pgrbf15.grib2 
!grib2/ncep/RUC2/#000/201104281300F015/PRES/0 - FRZH! 000246
Apr 28 15:05:12 notifyme[847] INFO:    27569 20110428140653.322 CONDUIT 264  
data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.bgrb20f15.grib2 
!grib2/ncep/RUC2/#000/201104281300F015/OMEG/15 - HYBL! 000264
Apr 28 15:05:12 notifyme[847] INFO:    22826 20110428140651.183 CONDUIT 237  
data/nccf/com/ruc/prod/ruc2a.20110428/ruc2.t13z.pgrbf15.grib2 
!grib2/ncep/RUC2/#000/201104281300F015/RELH/150-120 hPa PDLY! 000237
 ...

You should be able to use the sequence numbers to create a set of
patterns that split the content into mutually-exclusive pieces.
Here is one idea:

REQUEST CONDUIT "(ruc2.*[09]$)" bigbird.tamu.edu
REQUEST CONDUIT "(ruc2.*[18]$)" bigbird.tamu.edu
REQUEST CONDUIT "(ruc2.*[27]$)" bigbird.tamu.edu
REQUEST CONDUIT "(ruc2.*[36]$)" bigbird.tamu.edu
REQUEST CONDUIT "(ruc2.*[45]$)" bigbird.tamu.edu

This set of REQUEST lines splits the CONDUIT RUC2 content into subsets
by the last digit of the sequence number; over the long haul, the
subsets should contain roughly equal numbers of products.
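
You can dry-run any of these patterns with notifyme before putting it
in ldmd.conf; for instance, for the first subset:

notifyme -vl- -f CONDUIT -h bigbird.tamu.edu -o 3600 -p 'ruc2.*[09]$'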

By the way, I would cut your redundant requests down to a smaller
number.  Yes, the extra connections should not add much overhead on
your end, but they do add overhead.  More to the point, redundantly
requesting from both bigbird and sasquatch at TAMU does you little
good: Gerry et al. at TAMU are good about keeping all of their
machines up at the same time, and the packets from both machines to
yours follow the same path, so there is no great benefit from
requesting from two of them.
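
If you do want one failover, a sketch (idd.othersite.edu is a
hypothetical upstream at a topologically separate site, not a real
host) would be to duplicate each split REQUEST to the second site:

REQUEST CONDUIT "(ruc2.*[09]$)" bigbird.tamu.edu
REQUEST CONDUIT "(ruc2.*[09]$)" idd.othersite.edu

The LDM will not insert a product whose signature is already in its
queue, so the duplicate costs little beyond the extra connection.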

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: EXZ-787119
Department: Support IDD
Priority: Normal
Status: Closed