> I read a single (lat, lon) point from a file at a time for the whole available time period

Yes, that pattern is why your reads are so slow. Look at how the data is laid out:

    float u10m(time=5088, latitude=103, longitude=122);

So, if you're reading all values for a single (lat, lon) point, you are accessing the array like so:

    u10m[0][lat][lon]
    u10m[1][lat][lon]
    u10m[2][lat][lon]
    ...
    u10m[time-1][lat][lon]

Those values are physically located far apart in the file, and the non-sequential reads prevent effective buffering. Chunking won't help you here; you'll need to select a data layout that suits your read pattern. This might work better:

    float u10m(latitude=103, longitude=122, time=5088);

Cheers,
Christian

On Wed, Feb 18, 2015 at 11:51 PM, Antonio Rodriges <antonio.rrz@xxxxxxxxx> wrote:
> P.S.
>
> And that speed (~700 ms) is the same for both the chunked and unchunked data, so I
> decided that it was "slow" for the chunked file, since it is considered
> more optimized for this read pattern.
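For illustration, here is a minimal netcdf-java sketch of the access pattern being discussed; the file name, variable indices, and class name are placeholders, and depending on the library version the file-open call may differ (newer releases use NetcdfFiles.open). A single read() with an origin and shape spanning the whole time dimension still touches the same scattered bytes on disk, but it avoids issuing one tiny read per time step:

    import java.io.IOException;
    import ucar.ma2.Array;
    import ucar.ma2.InvalidRangeException;
    import ucar.nc2.NetcdfFile;
    import ucar.nc2.Variable;

    public class PointTimeSeries {
        public static void main(String[] args) throws IOException, InvalidRangeException {
            // File name, variable name, and point indices are placeholders.
            try (NetcdfFile nc = NetcdfFile.open("data.nc")) {
                Variable u10m = nc.findVariable("u10m");
                int nTime = u10m.getDimension(0).getLength();  // 5088 in the layout above
                int lat = 50, lon = 60;

                // Slow pattern: one tiny read per time step (5088 separate reads).
                // for (int t = 0; t < nTime; t++) {
                //     Array one = u10m.read(new int[]{t, lat, lon}, new int[]{1, 1, 1});
                // }

                // One read() for the whole time series at a single (lat, lon) point.
                Array series = u10m.read(new int[]{0, lat, lon}, new int[]{nTime, 1, 1});
                System.out.println("Read " + series.getSize() + " values");
            }
        }
    }

If the read pattern is fixed, the bigger win is the layout change Christian suggests: rewriting the file so that time varies fastest (for example with the NCO tool ncpdq, which permutes dimension order) makes each point's time series contiguous on disk.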