[netcdfgroup] reducing read time for large data

All

The question came up: if you have two files containing the same amount of data (say both dimensioned 3600 x 1800 x 24 levels x 12 months x 40 years), but one file is smaller on disk, would the smaller one be quicker to read? I am not talking about netCDF4 compression at all, but rather (I assume) the variable type. So, if I wanted to reduce read time, should I store the data as byte or integer where that is possible, while keeping the precision of the data? Or does the variable type used to store the data not matter in netCDF as far as read speed goes? I am using Linux, if that matters.
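To illustrate what I mean by storing the data as byte or integer while keeping precision, here is a minimal sketch of the standard scale_factor/add_offset packing using the netCDF4-python module. The file name, variable name, and the deliberately small grid are just placeholders, not our real data:

    import numpy as np
    from netCDF4 import Dataset

    # Placeholder field; the real grids are much larger (3600 x 1800 x ...).
    data = np.random.uniform(250.0, 320.0, size=(2, 180, 360)).astype(np.float32)

    with Dataset("packed_example.nc", "w") as nc:
        nc.createDimension("level", 2)
        nc.createDimension("lat", 180)
        nc.createDimension("lon", 360)

        # Store as 2-byte short instead of 4-byte float, halving the
        # bytes that have to come off disk for the same values.
        var = nc.createVariable("temperature", "i2", ("level", "lat", "lon"))

        # Choose scale/offset so the data range maps onto the short range,
        # which bounds the precision lost by packing.
        dmin, dmax = float(data.min()), float(data.max())
        var.scale_factor = (dmax - dmin) / (2**16 - 2)
        var.add_offset = dmin + (dmax - dmin) / 2
        var.units = "K"

        # With the default auto mask-and-scale behavior, netCDF4-python packs
        # the float data into shorts automatically on write.
        var[:] = data

    # Readers that honor the convention (including this module) unpack
    # back to floating point transparently.
    with Dataset("packed_example.nc") as nc:
        unpacked = nc.variables["temperature"][:]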

Thanks for any advice.

Cathy Smith

--
----------------------------------------------
NOAA/ESRL PSD and CU CIRES
303-497-6263
https://www.esrl.noaa.gov/psd/people/cathy.smith/

Emails about data/webpages may get quicker responses from emailing
esrl.psd.data@xxxxxxxx

