Re: 4 GB variable size limit

Hi John,

We'd like to push for both limits to be lifted. We definitely need variable sizes greater than 4 GB, and we believe applications capable of using 2^31 for a dimension are not far off. If I'm doing my calculations correctly, a 2^31 dimension size would limit a double-precision one-dimensional variable to roughly 17 GB. (Please note I'm representing the "Towards Petascale Computing" group.)

Without a parallel IO library that can handle these huge datasets, the HPC community will either return to plain old Fortran IO, which is fast but not portable, or fall back to writing out thousands of broken-up netCDF files or variables. Neither is a good solution. To us, lifting the 4 GB variable size limit while keeping the 2^31 dimension size limit is just a temporary stopgap. We think it is in the best interest of the netCDF and pnetCDF communities to prepare for the coming increase in application data and IO needs.
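For reference, a minimal sketch of that arithmetic (plain C, assuming 8-byte doubles and decimal GB; nothing here is specific to the netCDF API):

    #include <stdio.h>

    int main(void)
    {
        /* A one-dimensional double-precision variable with a 2^31 dimension:
         * 2^31 elements * 8 bytes/element = 2^34 bytes, i.e. roughly 17 GB. */
        long long nelems = 1LL << 31;      /* 2,147,483,648 elements */
        long long nbytes = nelems * 8LL;   /* 8 bytes per double     */
        printf("%lld bytes = %.1f GB\n", nbytes, nbytes / 1.0e9);
        return 0;
    }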

Katie



John Caron wrote:
Hi Katie:

It sounds to me like you're talking about the 4G total size limit on a variable. Allowing that limit to be 2^64 seems reasonable. Allowing individual dimension lengths to be greater than 2^31 is a bigger deal, since array indexes are limited to 32-bit signed ints (at least in Java). I'm not sure if you are requesting that. It sounds like unstructured meshes might push that limit someday, but do you have another use case for that?
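To put a number on the indexing concern, a small sketch (assuming a 32-bit signed int, which is what Java array indexes use and what int is on typical C platforms):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* A signed 32-bit index tops out at 2^31 - 1 = 2,147,483,647, so a
         * single dimension longer than that cannot be addressed with a plain
         * int subscript; Java array indexes have the same ceiling. */
        long long requested = 1LL << 31;   /* a 2^31-element dimension */

        printf("largest 32-bit signed index: %d\n", INT_MAX);
        if (requested > INT_MAX)
            printf("%lld elements exceed that index range\n", requested);
        return 0;
    }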



Katie Antypas wrote:
Hi Everyone,

I'm jumping into the discussion late here, but coming from the perspective of trying to find and develop an IO strategy that will work at the petascale level, the 4 GB variable size limitation is a major barrier. Already a 1000^3 grid variable cannot fit into a single netCDF variable. Users at NERSC and other supercomputing centers regularly run problems of this size or greater, and IO demands are only going to get bigger.

We don't believe chopping up data structures into pieces is a good long-term solution or strategy. There isn't a natural way to break up the data, and chunking it eliminates the elegance, ease, and purpose of a parallel IO library. Besides the direct code changes, analytics and visualization tools become more complicated, since data files from the same simulation but of different sizes would not have the same number of variables. Restarting a simulation from a checkpoint file on a different number of processors would also become more convoluted.
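As a concrete sketch of the sizes involved (assuming 8-byte doubles and a ~4 GB per-variable ceiling, in decimal GB for simplicity):

    #include <stdio.h>

    int main(void)
    {
        /* A 1000 x 1000 x 1000 grid of doubles is 10^9 * 8 bytes = 8 GB,
         * so it already cannot fit under a ~4 GB per-variable ceiling and
         * would have to be split into at least two pieces. */
        long long nbytes  = 1000LL * 1000LL * 1000LL * 8LL;
        long long ceiling = 4LL * 1000 * 1000 * 1000;   /* ~4 GB */
        printf("variable: %lld bytes, pieces needed: %lld\n",
               nbytes, (nbytes + ceiling - 1) / ceiling);
        return 0;
    }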

The view from NERSC is that if Parallel-NetCDF is to be a viable option for users running large parallel simulations, this is a limitation that must be lifted...

Katie Antypas
NERSC User Services Group
Lawrence Berkeley National Lab
