What Cache Size Should be Used to Read AR-4/AR-5 3D Data?

A question that has puzzled the greatest minds of history...

The not-yet-checked-in script nc_test4/run_bm_cache.sh tests reading a sample 3D data file with chunk caches of different sizes.
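For the curious, here is a minimal sketch of the kind of measurement the script makes (not the actual benchmark code): set the HDF5 chunk cache before opening the file, then time a horizontal-slab read. The variable name "pr", the slab extents, and the cache parameters below are assumptions for illustration only.

    /* Minimal sketch of a cache-size read benchmark (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <netcdf.h>

    #define CHECK(e) do { int r_ = (e); if (r_) { \
        fprintf(stderr, "%s\n", nc_strerror(r_)); exit(1); } } while (0)

    int main(void)
    {
        int ncid, varid;
        size_t start[3] = {0, 0, 0};
        size_t count[3] = {1, 128, 256};   /* one horizontal level: 1 x lat x lon (assumed) */
        float *buf = malloc(128 * 256 * sizeof(float));
        struct timeval t0, t1;

        /* Cache settings must be in place before the file is opened:
         * here 16 MB, 1009 slots, preemption 0.75 (values for illustration). */
        CHECK(nc_set_chunk_cache(16 * 1024 * 1024, 1009, 0.75));

        CHECK(nc_open("pr_A1_256_128_128.nc", NC_NOWRITE, &ncid));
        CHECK(nc_inq_varid(ncid, "pr", &varid));   /* "pr" assumed to be the 3D variable */

        gettimeofday(&t0, NULL);
        CHECK(nc_get_vara_float(ncid, varid, start, count, buf));
        gettimeofday(&t1, NULL);

        printf("read_hor(us): %ld\n",
               (long)((t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec)));

        CHECK(nc_close(ncid));
        free(buf);
        return 0;
    }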

Because of an odd jump in horizontal read time at the 16 MB cache size, I re-ran the test twice more to make sure I got the same results, and I did. I have no explanation for why 16 MB works so poorly.

The current netCDF-4 default cache size is 4 MB (which does fine), but I note that the original HDF5 default of 1 MB does even better. Perhaps I should just leave this cache alone as the default and give users access to the HDF5 settings...
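For reference, the netCDF-4 C API already exposes the three HDF5 chunk-cache parameters (total size in bytes, number of hash-table slots, and preemption policy), both as a process-wide default and per variable. A minimal sketch, assuming HDF5-style values of 1 MB and 521 slots, and an ncid/varid from an already-open file:

    #include <netcdf.h>

    /* Apply HDF5-style chunk-cache settings: 1 MB, 521 slots, preemption 0.75
     * (values assumed here for illustration). */
    int use_small_chunk_cache(int ncid, int varid)
    {
        int ret;

        /* Default for variables in files opened after this call. */
        if ((ret = nc_set_chunk_cache(1024 * 1024, 521, 0.75)))
            return ret;

        /* Override for one variable in an already-open file. */
        return nc_set_var_chunk_cache(ncid, varid, 1024 * 1024, 521, 0.75);
    }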

bash-3.2$ ./run_bm_cache.sh
*** Benchmarking pr_A1 file pr_A1_256_128_128.nc with various HDF5 chunk caches...
cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_time_ser(us)
256   128   128   0.5       0       0       1291104
256   128   128   1.0       0       0       1298621
256   128   128   4.0       0       0       1306983
256   128   128   16.0      0       0       1472738
256   128   128   32.0      0       0       1497533
cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_hor(us)
256   128   128   0.5       0       0       2308
256   128   128   1.0       0       0       2291
256   128   128   4.0       0       0       2453
256   128   128   16.0      0       0       11609
256   128   128   32.0      0       0       2603

SUCCESS!!!

bash-3.2$ ./run_bm_cache.sh 
*** Benchmarking pr_A1 file pr_A1_256_128_128.nc with various HDF5 chunk caches...
cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_time_ser(us)
256   128   128   0.5       0       0       1290340
256   128   128   1.0       0       0       1281898
256   128   128   4.0       0       0       1306885
256   128   128   16.0      0       0       1470175
256   128   128   32.0      0       0       1497529
cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_hor(us)
256   128   128   0.5       0       0       2298
256   128   128   1.0       0       0       2292
256   128   128   4.0       0       0       2335
256   128   128   16.0      0       0       11572
256   128   128   32.0      0       0       1841

SUCCESS!!!

bash-3.2$ ./run_bm_cache.sh 
*** Benchmarking pr_A1 file pr_A1_256_128_128.nc with various HDF5 chunk caches...
cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_time_ser(us)
256   128   128   0.5       0       0       1298650
256   128   128   1.0       0       0       1298636
256   128   128   4.0       0       0       1565326
256   128   128   16.0      0       0       1497482
256   128   128   32.0      0       0       1497529

cs[0] cs[1] cs[2] cache(MB) deflate shuffle read_hor(us)
256   128   128   0.5       0       0       2303
256   128   128   1.0       0       0       2287
256   128   128   4.0       0       0       2280
256   128   128   16.0      0       0       11584
256   128   128   32.0      0       0       1830

SUCCESS!!!
