On Thu, Jul 22, 2010 at 9:02 AM, Jeff Lake - Admin <admin@xxxxxxxxxxxxxxxxxxxx> wrote:
> also..
> is it just me or when scour is running do others see your loads and memory
> usage go through the roof??
> the loads every once in a while would hit 1.5 with maybe 7-7.5 Gb ram used
> during normal usage
> when scour ran loads would bounce from 4 to 6 and max out all 12Gb of ram
>
> so I wrote a simple bash script to do essentially the same thing. The loads
> and memory usage barely move with it running
> after switching to my NEXRAD2 directory
> these are run
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> find K*/K* -type f -mmin +120 -exec rm {} \;
>
> find P*/P* -type f -mmin +120 -exec rm {} \;
>
> find N*/N* -type f -mmin +120 -exec rm {} \;
>
> find T*/T* -type f -mmin +120 -exec rm {} \;
>
> find R*/R* -type f -mmin +120 -exec rm {} \;
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>
> I do it this way as each radar directory has a php file,
> and yes my scour is altered to delete by minutes old..
> scour handles NEXRAD3 with no problem, but
> kills me when doing NEXRAD2..
> ideas??

Do a "top" during a scour run and you'll probably see that it's an IO issue
more than a CPU issue. When I've had slower disks and a journaling file system
as the main data store, scour would essentially bring down the system each
night. I went so far as to mess with the find/rm's like you, but went back to
the original scour after upgrading disks and converting to a non-journaling
file system.

Scour uses a variation of the find command with the "xargs" form of 'rm'.
According to the xargs/find/rm documentation, it should be less CPU-intensive
than your format above, which essentially forks an "rm" for every file.
However, the IO should be the same.

-Tyler
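For anyone comparing the two approaches, here is a minimal sketch of the difference Tyler describes, reusing Jeff's K*/K* pattern and 120-minute window as an example (the exact command scour itself runs may differ):

    # Forks one rm process per matching file (the -exec \; form above):
    find K*/K* -type f -mmin +120 -exec rm {} \;

    # Batches many files per rm invocation (the xargs form of rm):
    find K*/K* -type f -mmin +120 -print0 | xargs -0 rm -f

    # Modern find can batch on its own, without xargs:
    find K*/K* -type f -mmin +120 -exec rm -f {} +

Either batched form forks far fewer rm processes, but, as Tyler notes, the disk IO is the same either way, since every expired file still has to be unlinked.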