
Re: choking a TDS request

John Caron wrote:
Steve Hankin wrote:
Hi John,

For the upcoming GHRSST meeting (which I am not attending myself) I've volunteered to provide a slide about the importance of aggregation. Apologies if this is a question I should know the answer to ... but is it possible to configure TDS to "choke" or restrict the size of a request? This becomes an issue (as I'm sure you know) if a user tries to extract a long time series at a point from a sequence of aggregated (huge) satellite grids.

    thanks in advance - Steve


Hi Steve:

TDS 3.17 has no way to restrict the size of a request.

In 4.0 we limit the size of OPeNDAP requests to 500 MB. We need to make this user-settable. It appears you would like to be able to limit an aggregation request by the number of files it touches?
Thanks.  This is the information that I need for the slides.

To your question: yes -- limiting the number of files consulted in an aggregation is probably the most common need. Intuitively that looks like the key problem when the files are individually compressed, as they currently are for GHRSST. An approach that seems appealing to me would be to offer a Java method that the TDS implementor could override, so that custom data-choking algorithms could be employed. For example, I could well imagine some kind of weighting between subset size and the maximum number of aggregated files -- e.g., a request for a point time series could touch up to 30 files, but an XYT subset that is 200x200 points in XY (and requires decompressing multiple netCDF4 "chunks") might be limited to touching a smaller number of time steps.
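To make the idea concrete, here is a minimal sketch of what such an overridable policy might look like. This is purely hypothetical -- `AggregationChoker`, `maxFilesFor`, and the numeric budgets are my own illustration, not an actual TDS class or API. The weighting trades off XY subset size against the number of aggregation files a single request may touch:

```java
/**
 * Hypothetical sketch only -- not an actual TDS class. Illustrates the
 * kind of choking policy an implementor might override: the number of
 * aggregation files a request may touch shrinks as its XY subset grows.
 */
public class AggregationChoker {

    /** Hard cap on files touched by any one request (assumed value). */
    protected int maxFilesPerRequest = 30;

    /** Total-cell budget per request: files * nx * ny (assumed value). */
    protected long cellBudget = 120_000L;

    /** Override this method to install a custom choking algorithm. */
    protected int maxFilesFor(int nx, int ny) {
        long cellsPerStep = (long) nx * ny;      // cells decompressed per file
        long byBudget = cellBudget / Math.max(1L, cellsPerStep);
        return (int) Math.max(1L, Math.min(maxFilesPerRequest, byBudget));
    }

    /** Reject a request up front if it would touch too many files. */
    public void check(int nx, int ny, int nTimeSteps) {
        int limit = maxFilesFor(nx, ny);
        if (nTimeSteps > limit) {
            throw new IllegalArgumentException(
                "Request touches " + nTimeSteps + " files; limit is " + limit);
        }
    }

    public static void main(String[] args) {
        AggregationChoker choker = new AggregationChoker();
        // A point time series (1x1 in XY) may touch up to 30 files.
        System.out.println(choker.maxFilesFor(1, 1));     // 30
        // A 200x200 XY subset is limited to far fewer time steps.
        System.out.println(choker.maxFilesFor(200, 200)); // 3
    }
}
```

The point of the single overridable method is that a site could swap in any weighting it likes (per-file decompression cost, chunk layout, etc.) without touching the rest of the server.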

   - Steve