Hi Daniel,

Simply put, you're accessing the disk too much. Touching the disk is death for performance, and you're making many small reads and writes. You need to structure your program so that you minimize your I/O operations, which will probably mean reading/writing more data at a time. We go to great pains to do this ourselves: see FileWriter2 [1], where we copy data from one NetcdfFile to another in as large chunks as possible (50 MB by default) using the FileWriter2.ChunkingIndex class.

A potential quick-and-dirty fix is to open your files with NetcdfFile.openInMemory(String) [2]. As the name implies, the entire file is sucked into memory, allowing you to read it in whatever manner you wish without accessing the disk further. Of course, if the NetCDF files you're working with are larger than your free RAM, that method won't work.
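To give you a feel for the first approach, here is a rough, untested sketch that reads record-sized slabs with Variable.read(origin, shape) instead of making many small reads. The file path and the "temperature" variable name are placeholders for your own data:

    import java.io.IOException;
    import ucar.ma2.Array;
    import ucar.ma2.InvalidRangeException;
    import ucar.nc2.NetcdfFile;
    import ucar.nc2.Variable;

    public class ChunkedRead {
        public static void main(String[] args) throws IOException, InvalidRangeException {
            NetcdfFile ncfile = NetcdfFile.open("/path/to/input.nc");   // placeholder path
            try {
                Variable v = ncfile.findVariable("temperature");        // placeholder name
                int[] shape = v.getShape();                              // e.g. [time, lat, lon]

                // Read one whole record (outer-dimension slice) per call, rather than
                // many tiny element-wise reads -- each read() is a separate trip to disk.
                int[] origin = new int[shape.length];
                int[] sliceShape = shape.clone();
                sliceShape[0] = 1;

                for (int t = 0; t < shape[0]; t++) {
                    origin[0] = t;
                    Array record = v.read(origin, sliceShape);
                    // ... process the whole record in memory ...
                }
            } finally {
                ncfile.close();
            }
        }
    }

FileWriter2.ChunkingIndex does essentially the same thing, but sizes the slab to roughly 50 MB rather than a single record, so it makes even fewer trips to the disk.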
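And a minimal sketch of the openInMemory() route, again with placeholder file and variable names:

    import java.io.IOException;
    import ucar.ma2.Array;
    import ucar.nc2.NetcdfFile;
    import ucar.nc2.Variable;

    public class InMemoryRead {
        public static void main(String[] args) throws IOException {
            // The whole file is pulled into memory up front, so later reads never
            // touch the disk. Only do this when the file fits comfortably in your heap.
            NetcdfFile ncfile = NetcdfFile.openInMemory("/path/to/input.nc");  // placeholder path
            try {
                Variable v = ncfile.findVariable("temperature");               // placeholder name
                Array data = v.read();        // served from memory; no further disk I/O
                // ... access the data in whatever order you like ...
            } finally {
                ncfile.close();
            }
        }
    }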
Cheers,
Christian

[1] https://github.com/Unidata/thredds/blob/b731bcb45b6e10b7e6102e97a9ef35e9fef43c93/cdm/src/main/java/ucar/nc2/FileWriter2.java#L394
[2] https://github.com/Unidata/thredds/blob/b731bcb45b6e10b7e6102e97a9ef35e9fef43c93/cdm/src/main/java/ucar/nc2/NetcdfFile.java#L788

Ticket Details
===================
Ticket ID: GCR-931481
Department: Support netCDF Java
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.