The netCDF Operators NCO version 4.3.7 are ready.

http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")

The current release is a milestone in NCO interoperability.
NCO now supports HDF4 files natively. This was last true in 2001!
NSF-funded harmonization of netCDF with HDF made this possible.
With few exceptions, all NCO features work natively on HDF4 files.
One can now often avoid the tedious step of creating intermediate
netCDF files on which to perform, e.g., running averages.
ncks produces more faithful HDF4->netCDF conversions than
ncl_convert2nc.

Interoperability feature #2 is that ncks will produce complete XML
(NcML) translations of netCDF (3 and 4) and HDF (4 and 5) files.

Finally, this release brings fuller netCDF4 support to ncrename.
NCO must be built with netCDF 4.3.1 or later to take advantage of
ncrename's group renaming feature.

Work on NCO 4.3.8 is underway and includes improved netCDF4 support
for more NCO operators (ncatted) and improved support for HDF4 files.

Enjoy,
Charlie

"New stuff" in 4.3.7 summary (full details always in ChangeLog):

NEW FEATURES:

A. HDF4: First build/install netCDF with the --enable-hdf4 flag:
   http://www.unidata.ucar.edu/software/netcdf/docs/build_hdf4.html
   Then build NCO. Try NCO on HDF files. It should work!

   ncks --hdf4 modis.hdf modis.nc

   Until certain issues in the netCDF library are fixed (possibly as
   soon as netCDF 4.3.2), one must use the new --hdf4 switch to tell
   NCO that a file is in fact an HDF4 file. The need for this switch
   will go away eventually. Hopefully soon.

   NCO can now convert HDF4 to netCDF4 files:

   ncks --hdf4 fl.hdf fl.nc

   This produces a more faithful conversion than ncl_convert2nc since
   it preserves all native HDF data types (e.g., unsigned bytes):
   http://nco.sf.net/nco.html#hdf4
   http://nco.sf.net/nco.html#ncl_convert2nc

B. ncrename accepts full group paths to variables and groups.
   Previously ncrename worked only on the root group of netCDF4
   files. Now you can go to town and rename everything!

   ncrename -a /g1/lon@units,lon@new -g /g1/g1g2,new in.nc

   http://nco.sf.net/nco.html#ncrename

C. ncdismember accepts the CF Convention version number as an
   optional fourth argument, e.g.,

   ncdismember ~/nco/data/mdl.nc /data/zender/nco/tmp cf 1.3

   http://nco.sf.net/nco.html#ncdismember

D. XML: ncks --xml now prints variable metadata _and_ data in NcML.
   Please give us feedback on this capability. One related new
   feature is the --xml_no_location switch, which turns off the
   location tag:

   ncks --xml --xml_no_location -v time ~/nco/data/in.nc

   http://nco.sf.net/nco.html#xml

E. Add --mrd = --multiple_record_dimension switch to ncecat and
   ncpdq. Invoking this switch allows these operators to increase
   the number of record dimensions in a variable as a natural or
   incidental consequence of processing netCDF4 files (a sketch
   follows this list).
   http://nco.sf.net/nco.html#mrd

F. Operators perform arithmetic on the atomic types NC_BYTE and
   NC_UBYTE. NCO never performed arithmetic on 8-bit integers
   (bytes) before because we never had datasets that used this type.
   We encounter more and more byte data. NASA, for example, tends to
   store data with two significant digits as bytes rather than
   shorts. This makes perfect sense (a sketch follows this list).

G. ncks allows finer-grained control of copying data/metadata.
   When copying/subsetting/appending files (as opposed to printing
   them), the copying of data, variable metadata, and global/group
   metadata is now turned OFF by -H, -m, and -M, respectively.
   This is the opposite sense in which these switches work, and have
   always worked, when printing a file.
   Thus: -H turns off copying and turns on printing of data,
   -m turns off copying and turns on printing of variable metadata,
   -M turns off copying and turns on printing of group metadata.
   This allows people to easily replace data or metadata in one file
   with data or metadata from another (a sketch follows below).
   http://nco.sf.net/nco.html#xmp_att_var_cpy
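To illustrate feature E: a minimal sketch, assuming netCDF4 inputs
in1.nc and in2.nc whose variables already have a record dimension
(the file names are assumptions, not from this release). ncecat adds
a new ensemble record dimension, so each output variable would
acquire a second record dimension; --mrd permits retaining both,
rather than limiting the output to a single record dimension:

   ncecat --mrd in1.nc in2.nc out.nc # Keep multiple record dimensions
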
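To illustrate feature F: a minimal sketch, assuming two files whose
variables are stored as NC_BYTE (the file names and the choice of
subtraction are ours, for illustration only):

   ncbo --op_typ=sbt byte_a.nc byte_b.nc byte_diff.nc # Subtract byte-typed variables

Earlier versions left 8-bit types out of such arithmetic; 4.3.7
processes them like any other numeric type.
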
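To illustrate feature G: a hypothetical sketch of swapping data and
metadata between files (the variable name T and the file names are
assumptions, not from this release):

   ncks -A -C -v T -H in.nc out.nc # Append T's metadata only: -H turns off data copying
   ncks -A -C -v T -m -M in.nc out.nc # Append T's data only: -m/-M turn off variable and global/group metadata copying
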
BUG FIXES:

A. CDL-mode now protects special characters with backslashes
   http://nco.sf.net/nco.html#cdl

B. Fix ncecat bug reading variables whose IDs change across files

C. Fix ncap2.exe Windows-native bug that prevented file move

D. Fix ncwa -b behavior. Versions 4.3.3-4.3.6 of ncwa preserve the
   averaged dimensions as degenerate dimensions (i.e., of size 1)
   when the -b switch is given. However, record dimensions were
   inadvertently converted to fixed dimensions in the output, making
   the output files unsuitable as input files to ncrcat etc.
   The workaround is to use ncks --mk_rec_dmn time on the output
   files. The solution is to upgrade ncwa to version 4.3.7, where
   the bug has been fixed.

KNOWN ISSUES NOT YET FIXED:

This section of ANNOUNCE reports and reminds users of the existence
and severity of known, not yet fixed, problems. These problems occur
with NCO 4.3.7 built/tested with netCDF 4.3.1-rc4 snapshot 20131007
on top of HDF5 hdf5-1.8.9 with these methods:

cd ~/nco;./configure --enable-netcdf4 # Configure mechanism
-or-
cd ~/nco/bld;make dir;make all;make ncap2 # Old Makefile mechanism

A. NOT YET FIXED
   The netCDF4 library fails when renaming a dimension and a
   variable using that dimension, in either order. Works fine with
   netCDF3. Hence coordinate renaming does not work with netCDF4
   files. The problem is with the netCDF4 library implementation.
   Demonstration:
   ncks -O -4 -v lat_T42 ~/nco/data/in.nc ~/foo.nc
   ncrename -O -D 2 -d lat_T42,lat -v lat_T42,lat ~/foo.nc ~/foo2.nc # Breaks with "NetCDF: HDF error"
   ncks -m ~/foo.nc
   20130724: Verified problem still exists
   Bug report filed: netCDF #YQN-334036: problem renaming dimension
   and coordinate in netCDF4 file
   Workaround: Use ncrename twice: first rename the variable, then
   rename the dimension (see the sketch below).
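   A minimal sketch of this workaround, reusing the demonstration
   file above (ncrename edits ~/foo.nc in place when no output file
   is given):

   ncrename -v lat_T42,lat ~/foo.nc # Step 1: rename the variable only
   ncrename -d lat_T42,lat ~/foo.nc # Step 2: rename the dimension only
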
B. NOT YET FIXED (would require DAP protocol change?)
   Unable to retrieve contents of variables including period '.' in
   name. Periods are legal characters in netCDF variable names.
   Metadata are returned successfully, data are not.
   DAP non-transparency: works locally, fails through DAP server.
   Demonstration:
   ncks -O -C -D 3 -v var_nm.dot -p http://thredds-test.ucar.edu/thredds/dodsC/testdods in.nc # Fails to find variable
   20130724: Verified problem still exists.
   Stopped testing because inclusion of var_nm.dot broke all test
   scripts.
   NB: Hard to fix since DAP interprets '.' as the structure
   delimiter in the HTTP query string.
   Bug report filed: https://www.unidata.ucar.edu/jira/browse/NCF-47

C. NOT YET FIXED (would require DAP protocol change)
   Correctly read scalar characters over DAP.
   DAP non-transparency: works locally, fails through DAP server.
   The problem, IMHO, is with the DAP definition/protocol.
   Demonstration:
   ncks -O -D 1 -H -C -m --md5_dgs -v md5_a -p http://thredds-test.ucar.edu/thredds/dodsC/testdods in.nc
   20120801: Verified problem still exists
   Bug report not filed
   Cause: DAP translates scalar characters into 64-element (this
   dimension is user-configurable, but still...), NUL-terminated
   strings, so MD5 agreement fails

D. NOT YET FIXED (NCO problem)
   Correctly read arrays of NC_STRING with embedded delimiters in
   ncatted arguments.
   Demonstration:
   ncatted -D 5 -O -a new_string_att,att_var,c,sng,"list","of","str,ings" ~/nco/data/in_4.nc ~/foo.nc
   20130724: Verified problem still exists
   TODO nco1102
   Cause: NCO parsing of ncatted arguments is not yet sophisticated
   enough to handle arrays of NC_STRINGs with embedded delimiters.

E. NOT YET FIXED
   Report correct chunking and compression information for HDF4
   files.
   Demonstration:
   ncdump -h -s MOP01-20121231-L1V3.34.10.hdf
   ncks -m MOP01-20121231-L1V3.34.10.hdf
   20131007: Verified problem still exists
   Cause: some libnetCDF library functions fail on HDF4 file
   inquiries.
   Bug report filed: netCDF #HZY-708311 ncdump/netCDF4 segfaults
   probing HDF4 file
   Tracking tickets NCF-272, NCF-273

"Sticky" reminders:

A. Pre-built, up-to-date Debian Sid & Ubuntu packages:
   http://nco.sf.net#debian

B. Pre-built Fedora and CentOS RPMs:
   http://nco.sf.net#rpm

C. Pre-built Windows (native) and Cygwin binaries:
   http://nco.sf.net#windows

D. Pre-built AIX binaries:
   http://nco.sf.net#aix

E. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
   SWAMP efficiently schedules/executes NCO scripts on remote
   servers:
   http://swamp.googlecode.com
   SWAMP works on command-line analysis scripts besides NCO.
   If you must transfer lots of data from a server to your client
   before you analyze it, then SWAMP will likely speed things up.

F. NCO support for netCDF4 features is tracked at
   http://nco.sf.net/nco.html#nco4
   NCO supports netCDF4 atomic data types, compression, chunking,
   and groups.

G. Reminder that ncks, ncecat, ncbo, ncflint, and ncpdq work on
   many common HDF5 datasets, e.g.,
   NASA AURA HIRDLS HDF-EOS5
   NASA ICESat GLAS HDF5
   NASA SBUV HDF5...

--
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(