
[netCDF #YLT-173676]: Opening of large files



Hello Andrea,

If you can provide the code you're working with, I may be able to offer more 
concrete help; there are many ways to process the file, with varying levels of 
efficiency. In general, efficient access depends on the size and shape of the 
underlying chunks (if the data is chunked).  
https://www.unidata.ucar.edu/blogs/developer/entry/chunking_data_why_it_matters 
has a good overview of why this matters.  
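
As a starting point, here is a minimal sketch using the netCDF-4 C++ API 
(netcdf-cxx4) that reports a variable's chunk layout and then reads a whole 
hyperslab in a single call instead of looping point by point. The file name 
"wrfout_d01.nc", the variable name "U", and the (Time, bottom_top, 
south_north, west_east_stag) dimension order are assumptions you would adjust 
to match your files:

// Sketch only: assumes a 4-D WRF variable "U" in "wrfout_d01.nc".
#include <netcdf>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    try {
        // Open read-only; no variable data is loaded into memory yet.
        netCDF::NcFile nc("wrfout_d01.nc", netCDF::NcFile::read);

        netCDF::NcVar u = nc.getVar("U");
        if (u.isNull()) { std::cerr << "Variable U not found\n"; return 1; }

        std::vector<netCDF::NcDim> dims = u.getDims();
        if (dims.size() != 4) { std::cerr << "Expected a 4-D variable\n"; return 1; }

        // Report the chunk layout so reads can be aligned with it.
        netCDF::NcVar::ChunkMode mode;
        std::vector<std::size_t> chunks;
        u.getChunkingParameters(mode, chunks);
        for (std::size_t i = 0; i < dims.size(); ++i) {
            std::cout << dims[i].getName() << " = " << dims[i].getSize();
            if (mode == netCDF::NcVar::nc_CHUNKED)
                std::cout << " (chunk " << chunks[i] << ")";
            std::cout << "\n";
        }

        // Read one whole horizontal level of one timestep in a single call,
        // instead of requesting individual points (start/count follow the
        // variable's dimension order).
        std::size_t nt = 0, k = 0;              // timestep and level to read
        std::size_t ny = dims[2].getSize();
        std::size_t nx = dims[3].getSize();
        std::vector<std::size_t> start = {nt, k, 0, 0};
        std::vector<std::size_t> count = {1, 1, ny, nx};
        std::vector<float> level(ny * nx);
        u.getVar(start, count, level.data());

        std::cout << "Read " << level.size() << " values\n";
    } catch (netCDF::exceptions::NcException& e) {
        std::cerr << e.what() << "\n";
        return 1;
    }
    return 0;
}

The main idea is to size each getVar() request so it covers whole chunks (or 
whole levels/timesteps); that way each compressed chunk is read and 
decompressed only once. Link with -lnetcdf_c++4 -lnetcdf.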

Have a great day,

-Ward 

> Dear NetCDF support team,
> 
> I have a question about opening “large” WRF output files with the netCDF 
> libraries (around 5.8 GB per timestep).
> Until now I have opened them in MATLAB, which is very efficient, and I was 
> wondering whether there are best practices for opening these datasets with 
> the C++ API. I am able to map the matrices to arrays in memory, but the way 
> I am doing it is noticeably less efficient than the MATLAB version.
> 
> I will need to read the velocities and compute gradients and other 
> post-processing quantities at every point of a 669x669x127 grid, so the 
> performance of the code itself is a concern for me.
> 
> Thank you and have a great day,
> Andrea Paris
> 
> 
> 


Ticket Details
===================
Ticket ID: YLT-173676
Department: Support netCDF
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.