I would assume that the cost of writing large float arrays on the Cray
comes from:

  1) converting the floats to IEEE format
  2) the XDR overhead, especially array loops

On point one, there might be a system library to do this fast.  I also
heard a rumor that Cray is going to change to IEEE format (???)

On point two, it would be a worthwhile optimization to replace the
xdr_array() routine with something that is optimized for floats.  I
believe that the standard routine makes (at least) one function call for
every array element.  Combining these two ideas would mean adding some
specialized code to xdr_array(), e.g. in pseudo-code:

    if (type == float) {
        system_convert_to_ieee(my_array, out_array, n_elems);
        place_in_xdr_buffer(out_array, length);
    }

> (My conversion program takes about twice the time on the C90 compared
> to a HP workstation.)

I assume this is the IEEE conversion, unless HP optimized xdr_array().
What is the time difference between a netCDF write and a plain binary
write for typical large scientific datasets?  Is it dominated by
xdr_array()?

It might be generally useful to avoid the function-call-per-element
overhead on all machines.  Does Unidata want to investigate / support
such a mod?