Hi Greg,

> As you probably know, the exodusII finite element database has been
> using netcdf as the underlying storage model for several years. We
> have always had to use a modified netcdf since we use lots of
> dimensions and variables. Currently, the minimum needed
> modifications to run exodusII on top of a netcdf is that we increase
> NC_MAX_DIMS to 65536 and NC_MAX_VARS to 524288.

Do you need that many shared dimensions, or are the great majority of the dimensions simply used once, for one associated variable? If the latter is the case, it sounds like you really need anonymous dimensions. If each of the many dimensions is shared by more than one variable, however, then anonymous dimensions wouldn't be of much use.

Can we assume you want to stay with the netCDF-3 classic data model? Is most of the software for exodusII written in C/C++, or is much of it in Fortran?

> In the past, this local modification was not much of a big deal
> since exodusII was almost exclusively used internally to Sandia and
> we could easily provide a modified netcdf along with the exodusII
> libraries for all users. This has been changing, with more and more
> external users using exodusII as output/input from/to their finite
> element codes and exodusII being in products such as paraview,
> ensight, patran... This makes it more difficult to make sure that
> exodusII gets linked to a netcdf with the increased NC_MAX_*
> defines.
>
> Would it be possible to make the above NC_MAX_DIMS and NC_MAX_VARS
> changes to the default netcdf? I'm not sure what the downsides of
> this are. I've looked at the source code and there are some data
> structures that are allocated with these values, so it would cause
> memory use to increase, but I'm not sure if the increase would be
> noticeable or in-the-noise...

Besides memory, there is one other subtle issue, but I don't know how important it is.
We know there are users that write a netCDF file while reading it in one or more other threads or processes. If the writer adds a dimension or variable after a reader inquires how many dimensions or variables there are, the reader can allocate insufficient space for a subsequent call to get the dimension IDs or variable IDs. This doesn't happen if the reader just declares a statically allocated array of NC_MAX_DIMS or NC_MAX_VARS elements, since the reader knows the writer can't exceed that limit. No race condition occurs, and no more sophisticated locking or synchronization is necessary.

It seems to me that if we are going to increase NC_MAX_DIMS and NC_MAX_VARS by a factor of 64, as you suggest, it might be better to eliminate their use completely in the C/Fortran library (they are already absent from the Java library), but leave them defined for backward compatibility. They were originally specified to make Fortran programs easier to write, before allocatable arrays were possible in Fortran.

In the C library, this would involve removing the check for too many dims or vars when a new one is added. We would also just malloc the space needed in ncdump, ncgen, and nccopy (if we aren't already doing that). That would allow any netCDF client program to write files with any number of dims and vars, with no changes to the constants, but readers that use constant-sized arrays would still be a problem.

Would that suit your purpose as well as increasing the values of these constants? Or are there lots of programs that use NC_MAX_DIMS and NC_MAX_VARS all over the code that would still break?

--Russ

Russ Rew                                UCAR Unidata Program
address@hidden                          http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: IOD-794288
Department: Support netCDF
Priority: Normal
Status: Closed
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.