Hi Sebastian,

> I recently stumbled across a performance problem when using netCDF +
> HDF5 + (collective) MPI I/O:
>
> If the application has a large I/O load imbalance (some processes
> reading only a few values while others read several thousand values),
> I get a dramatic performance decrease on our GPFS file system.
>
> Reading several thousand values with all processes completes within
> seconds. However, as soon as I introduce the load imbalance, it takes
> several minutes.
>
> Has anybody experienced the same problem? Any advice?

We haven't seen this problem reported before, but I've forwarded your
question to another developer who has more parallel I/O expertise than
we do at Unidata. Either he or I will let you know if we come up with
anything that might help ...

--Russ

Russ Rew                                       UCAR Unidata Program
address@hidden                                 http://www.unidata.ucar.edu

Ticket Details
===================
Ticket ID: ABG-853837
Department: Support netCDF
Priority: Normal
Status: Closed
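For reference, a minimal sketch of the access pattern described above: a
netCDF-4/HDF5 file opened for parallel read, with collective access set on
the variable, where rank 0 requests only a few values and every other rank
requests several thousand. The file name "data.nc", variable name "values",
and the per-rank counts are hypothetical; the calls themselves
(nc_open_par, nc_var_par_access, nc_get_vara_float) are the standard
netCDF-C parallel API.

    /* Sketch of the reported imbalanced collective read.
     * Assumes a parallel-enabled netCDF-C build; names and sizes
     * are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>

    #define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
        fprintf(stderr, "netCDF error: %s\n", nc_strerror(_s)); \
        MPI_Abort(MPI_COMM_WORLD, 1); } } while (0)

    int main(int argc, char **argv)
    {
        int rank, nprocs, ncid, varid;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Open an existing netCDF-4/HDF5 file for parallel access. */
        CHECK(nc_open_par("data.nc", NC_NOWRITE, MPI_COMM_WORLD,
                          MPI_INFO_NULL, &ncid));
        CHECK(nc_inq_varid(ncid, "values", &varid));

        /* Collective access: every rank must take part in each call. */
        CHECK(nc_var_par_access(ncid, varid, NC_COLLECTIVE));

        /* Imbalanced decomposition: rank 0 reads a handful of values,
         * every other rank reads several thousand. */
        size_t count = (rank == 0) ? 4 : 4096;
        size_t start = (rank == 0) ? 0 : 4 + (size_t)(rank - 1) * 4096;
        float *buf = malloc(count * sizeof(float));

        /* All ranks call this collectively, with very different counts. */
        CHECK(nc_get_vara_float(ncid, varid, &start, &count, buf));

        /* One experiment worth trying: request NC_INDEPENDENT above
         * instead of NC_COLLECTIVE and compare timings, since
         * collective I/O couples all ranks' requests together. */

        free(buf);
        CHECK(nc_close(ncid));
        MPI_Finalize();
        return 0;
    }

Building with something like "mpicc imbalance.c -o imbalance -lnetcdf" and
timing the read with balanced vs. imbalanced counts, and with
NC_INDEPENDENT vs. NC_COLLECTIVE, should indicate whether the collective
aggregation step is where the extra minutes go.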
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.