Liz,

> Unfortunately the links don't work. But, I'm very interested in
> following up on this. What should I use as a search phrase to find the
> blogs?

Oops, sorry about that. I neglected to copy the files to the right
directory, but the links to the .ppt and .pdf versions of the presentation
should work now.

The blogs aren't available yet; I've just committed to writing them over
the next couple of weeks. The first blog entry will appear here:

  http://www.unidata.ucar.edu/blogs/developer/

and it will be tagged with "chunking", "netcdf", and "performance".

--Russ

> I just modified my FORTRAN to read the variable all longs/all lats
> but for 1 time step, then fill my holding array with just the points I want,
> and then process from there as before. I loop on time, and depending on the
> number of lats/longs I get timings of ~13 sec for 361/720 files and ~30 sec
> for 576/1152 files, or around 8 minutes per month to read all 17 files. So
> that is an incredible boost from ~30 sec per point per month once you're
> processing over 10 points. The limiting reagent will now be how big I can
> make the holding array (i.e. how many points).
>
> Thank you for following up with me, I look forward to reading your
> AMS presentations.
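For reference, the sizes involved in the time-step-at-a-time approach described above can be sketched in a few lines of Python. The grid dimensions (1152 x 576), 744 hourly steps per month, and 17 variables are taken from the thread; the 4-byte single-precision assumption and the function names are illustrative, not from the original code.

```python
# Back-of-envelope sizing for the "holding array" approach in the thread.
# Assumes 4-byte reals (an assumption; the files could use other types).

BYTES_PER_VALUE = 4          # single-precision real (assumed)
STEPS_PER_MONTH = 744        # hourly values, 31-day month (from the thread)
NUM_VARIABLES = 17           # one variable per file (from the thread)

def slab_bytes(nlon, nlat):
    """Memory for one time-step slab: all lons, all lats, one variable."""
    return nlon * nlat * BYTES_PER_VALUE

def holding_array_bytes(npoints):
    """Memory for one month of all 17 variables at npoints grid points."""
    return npoints * STEPS_PER_MONTH * NUM_VARIABLES * BYTES_PER_VALUE

# One 1152 x 576 slab is only ~2.5 MiB, so reading a time step at a time
# is cheap on memory:
print(slab_bytes(1152, 576) / 2**20)        # ~2.53 MiB

# Even 10,000 selected points for a whole month fit comfortably in memory:
print(holding_array_bytes(10_000) / 2**20)  # ~482 MiB
```

Under these assumptions, the holding array rather than the per-step slab dominates memory, and tens of thousands of points are feasible on an ordinary workstation.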
>
> Regards,
> Liz
>
> -----Original Message-----
> From: Unidata netCDF Support [mailto:address@hidden]
> Sent: Monday, January 14, 2013 12:34 PM
> To: address@hidden
> Cc: address@hidden
> Subject: [netCDF #UXF-258603]: NetCDF 4.2 for Windows
>
> Hi Liz,
>
> I've just come back from the AMS meeting in Austin, where I made a
> presentation that directly addresses the problem you're describing:
>
>   http://www.unidata.ucar.edu/presentations/Rew/ams-2013-rew-fixed.ppt
>   http://www.unidata.ucar.edu/presentations/Rew/ams-2013-rew-fixed.pdf
>
> You might want to look at it, and then at the blog entries I'll be writing
> over the next week or two that have more detail about the use of nccopy
> for rechunking data to deal with these kinds of big data issues.
>
> I'd like to find out if these approaches work as well for your example as
> they did for the 38 GByte example reanalysis dataset I played with in
> preparing this talk ...
>
> --Russ
>
> > As a side question, do you think version 4.2 fixes what appears to
> > be some sort of memory leak when doing lots of reads where I'm not
> > grabbing consecutive chunks? I'm reading in 17 classic netCDF files that
> > each contain 1 meteorological variable (they are the CFSR variables like
> > temperature, dewpoint, etc. from UCAR's page). The 1.83 GB file contains
> > 3 dims, longitude, latitude, and time, where time is the unlimited
> > dimension. I read in all dates for 1 point, i.e. 1 longitude, 1 latitude,
> > all time steps, using the following line:
> >
> >       status = nf90_get_var(ncFile, ncVarID, sData,
> >      &                      start=(/LDim1L,LDim2L,LDim3L/),
> >      &                      count=(/iCount1,iCount2,iCount3/))
> >
> > sData is a 1-D array allocated to 745. LDim1L is a number for the
> > longitude index, e.g. 577. LDim2L is a number for the latitude index,
> > e.g. 160. LDim3L is 1. iCount1 and iCount2 are each 1, and iCount3 is
> > 744 (1 time step per hour from 0101 to 0100 of the next month).
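A short aside on why this access pattern is so expensive. In a classic netCDF file, the unlimited (time) dimension is the record dimension, so each time step of the variable is stored as one contiguous record on disk; reading a single (lon, lat) point for all times therefore has to visit every record. The arithmetic below uses the grid size and time-step count from the message; the 4-byte value size is an assumption.

```python
# Why the one-point-for-all-times read is slow against a record-oriented
# classic netCDF file: the 744 wanted values are scattered one per record.

BYTES_PER_VALUE = 4                  # assumed single-precision values
NLON, NLAT, NTIME = 1152, 576, 744   # grid and time steps from the thread

record_bytes = NLON * NLAT * BYTES_PER_VALUE   # one time step on disk
wanted_bytes = NTIME * BYTES_PER_VALUE         # the 744 values actually kept

# The read visits all 744 records spread through the file:
touched_bytes = NTIME * record_bytes

print(record_bytes)                    # 2654208 (~2.5 MiB per record)
print(touched_bytes / 2**30)           # ~1.84 GiB of file visited ...
print(touched_bytes // wanted_bytes)   # ... a 663552:1 overhead ratio
```

The visited total (~1.84 GiB) matches the 1.83 GB file size quoted in the thread: every point-timeseries read effectively sweeps the whole file, which is exactly the pattern that rechunking (e.g. with nccopy) is meant to address.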
> > I'm holding all 17 files open
> > and then grabbing 1 variable from each file for 1 point, then cycling to
> > the next point and grabbing all the variables for that point, etc. The
> > first dozen or so reads of this take 0.02 sec as part of a subroutine
> > that also gets ncVarID. After that, this read/subroutine call takes 4-7
> > seconds using the netCDF 3.5.1 library. Using the precompiled 4.1.1
> > version I have, it takes roughly the same amount of time, but I get no
> > reads of 0.02 sec (they start with the 4-7 sec versions). I also tried
> > using nccopy (v 4.2 from the download below) to dump new versions of the
> > netCDF files with NO unlimited dimension, since I saw something online
> > about unlimited dimensions that could have an effect. Surprisingly, that
> > made things much worse, on the order of 7-9 sec per read.
> >
> > This might not seem so bad, but I'm trying to read 10 years (i.e.
> > 120 months) at 76,000 grid points. At the optimal 4 sec, that's 4 sec *
> > 17 variables * 120 months > 2 hours per point. Even just 10,000 points
> > would take 2.5 years of elapsed time, which is unacceptable.
> >
> > My options don't look so good. I suspect reading 1 time step for all
> > points would likely improve things, since that is how the file is
> > actually archived, but I wouldn't be able to hold an entire file in
> > memory since these arrays are 1152 x 576 x 744 (~500,000,000 values).
> > Even just 1 time step would be rather large, and then I'd need to hold
> > all of my selected points in memory as well instead of processing just
> > 1 point at a time, because I couldn't have more than 450 files open at
> > once in total (a FORTRAN limit). I suspect shrinking the netCDF files
> > into smaller groups of grid points (say all longitudes but only groups
> > of 100 latitudes, or a 6-way split per file) might help.
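The scaling estimate above checks out. A few lines of Python, using only the numbers quoted in the message (4 s per read, 17 variables, 120 months, 10,000 points), reproduce both the "> 2 hours per point" and the "2.5 years" figures:

```python
# Verifying the elapsed-time arithmetic from the message: at the
# best-case 4 s per nf90_get_var call, one point costs
# 4 s x 17 variables x 120 months.

SECONDS_PER_READ = 4
VARIABLES = 17
MONTHS = 120

seconds_per_point = SECONDS_PER_READ * VARIABLES * MONTHS
print(seconds_per_point / 3600)        # ~2.27 hours per point

# For 10,000 points the total elapsed time really is measured in years:
total_seconds = 10_000 * seconds_per_point
print(total_seconds / (365 * 86400))   # ~2.6 years
```

Since the cost is linear in points, variables, and months, only attacking the 4-seconds-per-read term (i.e. the access pattern or the chunking, not the loop structure) can bring the total into a practical range.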
> > Another option would be to leave the netCDF format and go to
> > regular direct access, or even a straight PC binary format, with or
> > without a subset of points.
> >
> > Do you have any users that have run into a similar problem? Is
> > there a sweet spot of number of open netCDF files (I tested with 6 and
> > got the above results), number of points per file, and/or number of
> > calls to nf90_get_var? I'm open to any suggestions. Please let me know
> > if you need any additional information.
> >
> > Regards,
> > Liz
> >
> > -----Original Message-----
> > From: Liz Orelup [mailto:address@hidden]
> > Sent: Monday, January 07, 2013 11:31 AM
> > To: 'address@hidden'
> > Subject: RE: [netCDF #UXF-258603]: NetCDF 4.2 for Windows
> >
> > Hello Ward,
> >
> > Thank you for the clarification. The web page didn't indicate that these
> > pre-built libraries were C only. I look forward to the FORTRAN release.
> > I hope it looks the same with the exe install - that was really nice!
> >
> > Regards,
> > Liz
> >
> > -----Original Message-----
> > From: Unidata netCDF Support [mailto:address@hidden]
> > Sent: Wednesday, January 02, 2013 5:24 PM
> > To: address@hidden
> > Cc: address@hidden
> > Subject: [netCDF #UXF-258603]: NetCDF 4.2 for Windows
> >
> > Good afternoon,
> >
> > > Hello,
> > >
> > > I downloaded the 32-bit NetCDF 4.2 exe from this page:
> > >
> > >   http://www.unidata.ucar.edu/software/netcdf/win_netcdf/
> > >
> > > and tried linking it as I have successfully done with an old v. 4.1.1
> > > in my FORTRAN code, but I get 158 LNK2019 errors looking for NF_XXXX
> > > unresolved externals. Sneaking a text peek at the netcdf.lib file
> > > (which is about half the size of the 4.1.1 one), I noticed there are
> > > only nc_XXXX stubs and no NF_XXXX stubs. I think this means that this
> > > library is not FORTRAN supported and is only for C users. Do you have
> > > a version for Windows FORTRAN users?
> >
> > You are correct that the version you downloaded is only the C libraries;
> > starting with netCDF 4.2, netCDF Fortran (and C++) support is maintained
> > in separate libraries. The source code for the Fortran library is
> > available at:
> >
> >   http://www.unidata.ucar.edu/downloads/netcdf/netcdf-fortran/index.jsp
> >
> > Currently, we don't have a version of the Fortran libraries with
> > Windows support, but our goal is to introduce this support as quickly as
> > possible. Fortunately, the work which went into bringing the netCDF 4.2
> > C libraries to Windows will go a long way towards enabling Windows
> > support for the Fortran libraries.
> >
> > I hope this information helps; I'm sorry there isn't a Fortran library
> > yet for download, but there will be an announcement made as soon as we
> > have a beta version available.
> >
> > > Thanks and Happy New Year!
> >
> > Thank you, you too.
> >
> > -Ward
> >
> > > Regards,
> > >
> > > Liz
> > > Liz Orelup
> > > Meteorologist/Programmer
> > > http://www.oceanweather.com
> > > Ph: 203-661-3091
> > > Email: address@hidden
> >
> > Ticket Details
> > ===================
> > Ticket ID: UXF-258603
> > Department: Support netCDF
> > Priority: Normal
> > Status: Open
> >
> > Russ Rew                         UCAR Unidata Program
> > address@hidden                   http://www.unidata.ucar.edu
>
> Ticket Details
> ===================
> Ticket ID: UXF-258603
> Department: Support netCDF
> Priority: High
> Status: Closed
>
> Russ Rew                         UCAR Unidata Program
> address@hidden                   http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: UXF-258603
Department: Support netCDF
Priority: High
Status: Closed
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.