Re: [netcdfgroup] slow reads in 4.4.1.1 vs 4.1.3 for some files

  • To: Charlie Zender <zender@xxxxxxx>
  • Subject: Re: [netcdfgroup] slow reads in 4.4.1.1 vs 4.1.3 for some files
  • From: Chris Barker <chris.barker@xxxxxxxx>
  • Date: Tue, 13 Dec 2016 09:54:04 -0800
If I understand the question correctly, this is about read times -- is this in fact with exactly the same files? In that case the chunking is already set.

Less performant decompression?

Sounds like a binary search is needed.

Is EVERYTHING else the same? Disk system, OS, etc.?

-CHB


On Tue, Dec 13, 2016 at 9:21 AM, Charlie Zender <zender@xxxxxxx> wrote:

> Hello Simon,
>
> Since both files are netCDF4 compressed that
> means they use chunking. My wild guess is that
> different chunking defaults cause the observed
> change in dumping time. You can see the
> chunk sizes employed with ncdump -s or ncks --hdn,
> and you can play with the chunk sizes/policy
> with either.
>
> Charlie
> --
> Charlie Zender, Earth System Sci. & Computer Sci.
> University of California, Irvine 949-891-2429 )'(
>
>
> _______________________________________________
> NOTE: All exchanges posted to Unidata maintained email lists are
> recorded in the Unidata inquiry tracking system and made publicly
> available through the web.  Users who post to any of the lists we
> maintain are reminded to remove any personal information that they
> do not want to be made public.
>
>
> netcdfgroup mailing list
> netcdfgroup@xxxxxxxxxxxxxxxx
> For list information or to unsubscribe,  visit:
> http://www.unidata.ucar.edu/mailing_lists/
>
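
Charlie's suggestion above can be tried directly with the stock tools he names. A minimal sketch, assuming a hypothetical netCDF-4 compressed file called `data.nc` (the chunk sizes shown to `--cnk_dmn` are illustrative, not recommendations):

```shell
# Show the hidden per-variable _ChunkSizes attribute with ncdump
# (-s prints special/hidden attributes, -h prints header only):
ncdump -sh data.nc | grep _ChunkSizes

# Equivalent view with NCO's ncks (--hdn shows hidden attributes,
# -m restricts output to metadata):
ncks --hdn -m data.nc

# Rewrite the file with explicit chunk sizes to test whether chunking
# explains the slowdown; dimension names and sizes here are made up:
ncks -4 --cnk_dmn time,1 --cnk_dmn lat,180 --cnk_dmn lon,360 data.nc rechunked.nc
```

If the rechunked copy reads quickly under 4.4.1.1, the default chunking policy is the likely culprit; if not, the decompression or I/O path differences Chris mentions are worth checking next.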



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@xxxxxxxx