
[IDD #QCL-983945]: size of products in product queue



Hi Jerry,

re:
> I am toying with the idea of using the ldm to distribute modis level-1
> and level-2 hdf files "mostly" within our own network, but some files
> may go to external users.  Some of these files are on the order of 800
> MB in size.  Most are smaller, but many still in the 100 MB size
> range.  By gzipping a file, I can get an 800 MB file down to under 200
> MB.

In the data distribution tests we ran for TIGGE (THORPEX Grand Global Ensemble),
we moved files of 10, 20, 30, 40, and 60 MB with no problems.  The keys
to being able to do this are:

- sufficiently large LDM product queues (the TIGGE tests were run between
  64-bit machines each of which had at least a 12 GB queue)

- good, high speed network connectivity
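For reference, the queue size lives in the LDM registry, and the queue
must be recreated after changing it.  A sketch of enlarging it to 12 GB
(registry paths and ldmadmin behavior may vary by LDM version, so check
the regutil and ldmadmin documentation first):

```
ldmadmin stop                  # stop the LDM before touching the queue
regutil -s 12G /queue/size     # set the queue size in the LDM registry
ldmadmin delqueue              # delete the old (smaller) queue
ldmadmin mkqueue               # recreate it at the new size
ldmadmin start                 # restart the LDM
```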

> We would like to create these files in one location, and allow other
> machines that need the files receive them using ldm.  The receiving
> machines may be 2-5 different machines locally, and another few
> machines external to our network.   They all need many of the same
> "full" hdf files.

Sounds doable, especially within SSEC.

> We also plan on using ADDE, ftp, scp etc., but are looking for an easy
> to maintain method to share these files efficiently in addition to the
> others mentioned.
> 
> 1. Do you think the ldm is well suited to this type of application?

Yes, if the above conditions are met.

> 2. Other than having a very large product queue size, are there any
> things that should be configured to do this?

It depends.  How many files are we talking about?  If you have a lot
of files to move, the downstream sites should be configured to request
them using multiple ldmd.conf 'request' lines.  Split the request
into a small number of pieces (between 5 and 10), striving to move
roughly equal amounts of data over each connection.

> 3. Is the EXP feed the appropriate feed to use?  Because of their
> size, we have no intention (right now) of putting these out on
> the IDD.

Yes, use the EXP feed type.  Also, structure the product IDs to
identify not only what the data are, but also which project they
are from.  Matthew Lazzara and the Antarctic IDD crew have nicely
structured their product IDs in the manner that we recommend
(although they tie the elements together with periods ('.') instead
of spaces; either works fine, but I would have used spaces).
The overriding principles in creating the product headers are
to:

- be able to distinguish between different 'sets' of data in the
  EXP feed type

- allow end users to intelligently choose the subset of data that
  they want
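As a sketch, a structured product ID along those lines might look like
the following when inserting a gzipped granule with pqinsert (the field
layout and filenames here are hypothetical, not a Unidata standard;
pqinsert's -f sets the feed type and -p sets the product ID):

```
# Insert a gzipped MODIS granule into the local queue under the EXP
# feed with a structured, space-separated product ID (fields are
# illustrative): <project> <instrument> <level> <time> <filename>
pqinsert -f EXP \
    -p "SSEC MODIS level2 20070501T1745 t1.07121.1745.mod04.hdf.gz" \
    t1.07121.1745.mod04.hdf.gz
```

With IDs structured this way, a downstream that only wants level-2 data
can subset with a pattern such as "^SSEC MODIS level2 " in its ldmd.conf
request line.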

> 4. Any other tips, recommendations, cautions etc? Or is this just a
> bad idea? :)

I don't think this is a bad idea at all.  In fact, I will gladly kibitz
on the setup to help get you going.

> Thanks, much.

No worries.

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: QCL-983945
Department: Support IDD
Priority: Normal
Status: Closed


NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.