Muqun (Kent) Yang wrote:
At 02:41 PM 12/16/2003 -0700, John Caron wrote:
Robert E. McGrath wrote:
On 2003.12.16 15:05 Ed Hartnett wrote:
Another question relating to chunking - if we don't need it (i.e. for
a dataset with no unlimited dimensions), do we still chunk it?
Or is it better to leave it contiguous?
(With the mental reservation that only chunked datasets will be able
to take advantage of compression, when we get to that feature.)
Chunking can greatly improve performance on any partial I/O: only the
chunks that cover the request need to be read. For large datasets, you
don't want to read the whole thing into memory to pick out a subset.
Again, chunking controls the units that will be read/written to the
disk: if the dataset is much larger than reasonable read/writes, then
chunking can control this.
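The "only the chunks that cover the request need to be read" point can be sketched in plain Python. This is a hypothetical illustration, not HDF5 code: the dataset and chunk shapes are made up, and `chunks_for_request` is an invented helper, but the index arithmetic is the same computation a chunked layout performs.

```python
# Hypothetical illustration: which chunks must be read to satisfy a
# partial read on a chunked 2-D dataset (shapes are made up).

def chunks_for_request(chunk_shape, start, count):
    """Return (row, col) indices of every chunk overlapping the region
    [start, start + count) along each dimension."""
    needed = []
    for r in range(start[0] // chunk_shape[0],
                   (start[0] + count[0] - 1) // chunk_shape[0] + 1):
        for c in range(start[1] // chunk_shape[1],
                       (start[1] + count[1] - 1) // chunk_shape[1] + 1):
            needed.append((r, c))
    return needed

# A 1000x1000 dataset stored as 100x100 chunks (100 chunks total):
# a 50x50 read straddling a chunk boundary touches only 4 chunks.
touched = chunks_for_request((100, 100), start=(75, 75), count=(50, 50))
print(len(touched))  # -> 4 (of 100 chunks)
```

So a small partial read pulls in a handful of chunks rather than the whole dataset, which is the benefit being described.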
On the other hand, there is overhead for chunking, so you may not want
to use it everywhere. E.g., for a small dataset that would fit in a single
Are you saying that if a dataset isn't chunked, you have to read the
entire thing into memory before you subset?
Please pardon me if I haven't followed the whole thread of your
discussion. Here is my opinion.
Of course, you don't have to read the entire thing into memory before
you subset. However, for "evil" (unfavorable) subsetting patterns, chunking
gives much better performance than contiguous storage. You may check
http://hdf.ncsa.uiuc.edu/UG41r3_html/Perform.fm2.html#149138 for more
details.
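One classic "evil" pattern is reading a full column from a dataset stored contiguously in row-major order: every element lives in a different row, so the read touches many scattered file offsets, while a chunked layout covers the same column with a few contiguous chunk reads. The numbers below are illustrative back-of-the-envelope counts, not measurements:

```python
# Hypothetical cost comparison for reading one full column of a
# row-major 1000x1000 dataset (illustrative counts, not measured).

ROWS, COLS = 1000, 1000
CHUNK = 100  # square 100x100 chunks

# Contiguous row-major layout: each of the 1000 column elements sits in
# a different row, i.e. at a different non-adjacent file offset.
contiguous_accesses = ROWS

# Chunked layout: the column passes through ROWS // CHUNK chunks, and
# each chunk is one contiguous block on disk.
chunked_accesses = ROWS // CHUNK

print(contiguous_accesses, chunked_accesses)  # -> 1000 10
```

Roughly two orders of magnitude fewer separate disk accesses for this access pattern, at the cost of reading some data the request doesn't need (each touched chunk is read whole).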
OK, I understand what Robert was saying now. I think the 4 modes should
cover all possibilities. I suspect that the most common case is