On Tue, Apr 24, 2007 at 11:31:20AM -0600, John Caron wrote:
> Could you give us use case(s) where the limits are being hit?
> I'd be interested in actual dimension sizes, number of variables, whether
> you are using a record dimension, etc.
I'm just the intermediary, but I've copied two users who ran into this limit.
Here's what Greg had to say:
As a quick answer to the question, we use netcdf underneath our
exodusII file format for storing finite element results data. If
the mesh contains #nodes nodes and #elements elements, then there
will be a dataset of the size #elements*8*4 (assuming a hex element
with 8 nodes, 4 bytes/int) to store the nodal connectivity of each
hex element in a group of elements (element block). Assuming a 4 GiB
limit, this caps us at ~134 million elements per element block, which is
large, but not enough to give us more than a few months of breathing room.
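The arithmetic Greg describes can be checked directly; a minimal sketch (assuming 8 nodes per hex element, 4-byte integers, and a 4 GiB per-variable cap):

```python
# Sizing of the nodal-connectivity dataset for one element block:
# 8 node IDs per hex element, 4 bytes per integer ID.
BYTES_PER_ELEMENT = 8 * 4          # 32 bytes of connectivity per hex

CDF2_VARIABLE_LIMIT = 4 * 2**30    # 4 GiB cap on a single variable

max_elements = CDF2_VARIABLE_LIMIT // BYTES_PER_ELEMENT
print(max_elements)  # 134217728, i.e. ~134 million elements per block
```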
In both cases, they want to stick with a single netcdf file as they
scale to very large machines and problems. They also like the
stability of the CDF-2 format: there's no metadata corruption if a
process dies in the middle of a write.
If we (parallel-netcdf) make changes to either the API or file format
to overcome CDF-2 limitations, would the serial-netcdf folks be
interested in that work?
Mathematics and Computer Science Division
Argonne National Lab, IL USA
GPG fingerprint: A215 0178 EA2D B059 8CDF  B29D F333 664A 4280 315B