
20040609: feeds off of (new)thelma



>From:  Mike Schmidt <address@hidden>
>Organization:  UCAR/Unidata
>Keywords:  200406090722.i597Mr0A001713

Mike,

>I see that motherlode is feeding 8 feeds from thelma;
>
>request NIMAGE  ".*" thelma.ucar.edu PRIMARY
>request NNEXRAD ".*"    thelma.ucar.edu PRIMARY
>request CRAFT   ".*"    thelma.ucar.edu PRIMARY
>request WMO     ".*"    thelma.ucar.edu PRIMARY
>request UNIWISC ".*"    thelma.ucar.edu PRIMARY
>request DIFAX   ".*"    thelma.ucar.edu
>request FSL2    "^FSL\.NetCDF\.NOAAnet\.windprofiler\..*" thelma.ucar.edu
>request CONDUIT ".*"    thelma.ucar.edu PRIMARY
>
>I'll change them to emo.
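For reference, the repointed motherlode requests would presumably be the
same block with emo substituted for thelma -- a sketch only, assuming
emo.ucar.edu is the new upstream and the feed set is unchanged:

```
request NIMAGE  ".*"    emo.ucar.edu PRIMARY
request NNEXRAD ".*"    emo.ucar.edu PRIMARY
request CRAFT   ".*"    emo.ucar.edu PRIMARY
request WMO     ".*"    emo.ucar.edu PRIMARY
request UNIWISC ".*"    emo.ucar.edu PRIMARY
request DIFAX   ".*"    emo.ucar.edu
request FSL2    "^FSL\.NetCDF\.NOAAnet\.windprofiler\..*" emo.ucar.edu
request CONDUIT ".*"    emo.ucar.edu PRIMARY
```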

I don't know whether the load on (new)thelma really started dropping
after you switched motherlode over to emo, or whether it was already
heading down because I had moved SGSU, UNL, and CU/CIRES (NOAA) off,
but the load average now looks manageable:

For the record, here are the logged load levels starting at the time I
began moving feeds off:

20040609.0707 161.62 163.64 156.86  108  17 125   4299 5407M 377M   17   0
20040609.0714 145.55 156.91 156.35  102  17 119   4317 5404M 376M   22   0
20040609.0713 147.29 157.59 156.57  101  17 118   4317 5405M 376M   23   0
20040609.0709 161.64 163.34 157.06  100  17 117   4283 5410M 375M   21   0
20040609.0705 163.10 163.49 155.25  114  17 131   4273 5412M 375M   15   0
20040609.0708 161.14 163.34 156.96  100  17 117   4283 5409M 376M   18   0
20040609.0710 160.76 162.50 157.82  100  17 117   4294 5412M 375M   20   0
20040609.0715 136.23 152.84 154.95  100  17 117   4329 5401M 372M   21   0
20040609.0708 161.03 162.82 157.37  110  17 127   4305 5397M 371M   15   0
20040609.0716 124.91 146.78 152.66  101  17 118   4385 5379M 369M   21   0
20040609.0717 115.36 139.84 149.77  100  17 117   4437 5358M 368M   26   0
20040609.0718 109.86 134.32 147.24  100  17 117   4482 5340M 368M   24   0
20040609.0719 105.95 128.51 144.27  100  17 117   4551 5308M 368M   26   0
20040609.0720 104.33 124.28 141.83  101  17 118   4623 5276M 371M   28   0
20040609.0721 107.03 121.42 139.71  101  17 118   4664 5265M 369M   23   0
20040609.0722 108.50 119.22 137.70  100  17 117   4749 5242M 368M   24   0
20040609.0723 106.96 116.91 135.70  100  17 117   4798 5227M 368M   26   0
20040609.0724 105.13 114.69 133.72  100  17 117   4859 5207M 368M   29   0
20040609.0725 102.12 112.06 131.49  100  17 117   4911 5189M 369M   30   0
20040609.0726 102.01 110.18 129.56   93  16 109   4967 5189M 354M   24   0
20040609.0727  94.59 106.80 127.16   92  16 108   5025 5187M 352M   23   0
20040609.0728  80.72 100.77 123.73   92  16 108   5079 5161M 352M   27   0
20040609.0729  72.66  94.86 120.12   92  16 108   5134 5150M 347M   22   0
20040609.0730  49.77  84.95 115.01   92  16 108   5188 5135M 341M   10   0
20040609.0731  26.19  71.98 108.48   92  25 117   5246 4963M 349M    5   1
20040609.0732  19.55  61.71 102.45   92  24 116   5307 4813M 350M    6   1
20040609.0733  18.29  53.78  96.98   92  25 117   5367 4658M 351M    5   1
20040609.0734  19.29  47.68  92.01   92  25 117   5427 4507M 351M    5   1
20040609.0735  18.35  42.27  87.21   92  25 117   5487 4369M 351M    4   1
20040609.0736  18.80  38.06  82.76   92  26 118   5547 4254M 352M    5   1
20040609.0737  18.45  34.57  78.68   92  26 118   5607 4097M 353M    5   1
20040609.0738  17.80  31.41  74.64   92  26 118   5667 3969M 351M    4   1
20040609.0739  18.26  29.16  71.09   92  26 118   5727 3839M 351M    4   1
20040609.0740  18.11  27.34  67.74   92  26 118   5787 3714M 352M    4   1
20040609.0741  21.64  26.69  64.86   92  26 118   5847 3611M 352M    4   1
20040609.0742  19.04  25.16  61.89   92  26 118   5907 3492M 351M    4   1
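The trend is easy to pull out of the log programmatically. A minimal
Python sketch, assuming whitespace-separated fields with the timestamp
first and the 1-, 5-, and 15-minute load averages in the next three
columns (the remaining counters are ignored here):

```python
# Extract the load-average trend from an LDM uptime log.
# Assumed layout: timestamp, then 1-, 5-, and 15-minute load
# averages, then further counters we do not need.

def parse_load(lines):
    """Return a list of (timestamp, 1-min, 5-min, 15-min) tuples."""
    records = []
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blank or malformed lines
        one, five, fifteen = (float(f) for f in fields[1:4])
        records.append((fields[0], one, five, fifteen))
    return records

# Two sample lines from the log above: start and end of the window.
log = [
    "20040609.0705 163.10 163.49 155.25  114  17 131   4273 5412M 375M   15   0",
    "20040609.0742  19.04  25.16  61.89   92  26 118   5907 3492M 351M    4   1",
]
records = parse_load(log)
print("1-min load: %.2f -> %.2f" % (records[0][1], records[-1][1]))
```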

The key things I see from the uptime log are:

- number of downstream feeds has dropped
- number of upstream feed requests has increased: 17 -> 26 ?
- the number of processes in a WAIT state dropped: 28 -> 4

(new)thelma appears to be a lot more responsive now, which is a relief.

I am tempted to stop and restart (new)thelma to do the following:

- remove the two CRAFT requests that are not viable anymore
  (remove requests to 129.15.194.23[12])

- perhaps switch the CRAFT request to a single request from emo,
  which is feeding from Purdue in test mode

- remove the redundant feed of CONDUIT data from tgsv32

If I made the latter two of these changes, the number of data requests
would drop by 13.  This could significantly improve the idle time
available to (new)thelma, which is currently hovering at about 1%.
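One quick way to verify the request count after such a change is to
count the active (non-commented) request lines in the LDM
configuration.  A sketch using a hypothetical sample file; the real
path would be the host's own ldmd.conf:

```shell
# /tmp/ldmd.conf.example stands in for the real ldmd.conf.
cat > /tmp/ldmd.conf.example <<'EOF'
request WMO     ".*"    thelma.ucar.edu PRIMARY
request CONDUIT ".*"    thelma.ucar.edu PRIMARY
# request DIFAX ".*"    thelma.ucar.edu
EOF

# Commented-out requests do not match the anchored pattern.
grep -c '^request' /tmp/ldmd.conf.example
# -> 2
```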

Thoughts?

Tom
--
+-----------------------------------------------------------------------------+
* Tom Yoksas                                             UCAR Unidata Program *
* (303) 497-8642 (last resort)                                  P.O. Box 3000 *
* address@hidden                                   Boulder, CO 80307 *
* Unidata WWW Service                             http://www.unidata.ucar.edu/*
+-----------------------------------------------------------------------------+