
[McIDAS #XRR-599672]: FW: Please remove port 23 access from shu.cup.edu, and add ports 22 & 112.



> O.K. I think we have feeds trimmed down to a reasonable size (correct me
> if I'm wrong),

The realtime statistics being reported by SHU show a reasonable average
amount of data being received.

http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?shu.cup.edu

Data Volume Summary for shu.cup.edu

Maximum hourly volume   2348.128 M bytes/hour
Average hourly volume    822.102 M bytes/hour

Average products per hour      66208 prods/hour

Feed                           Average             Maximum     Products
                     (M byte/hour)            (M byte/hour)   number/hour
NEXRAD2                 268.352    [ 32.642%]      358.047    14163.261
HDS                     176.182    [ 21.431%]      429.760    15054.500
CONDUIT                 156.176    [ 18.997%]     1532.144     1274.261
NNEXRAD                  84.134    [ 10.234%]      121.316    10328.783
NIMAGE                   83.692    [ 10.180%]      210.545        7.239
IDS|DDPLUS               22.742    [  2.766%]       28.486    25216.609
UNIWISC                  21.858    [  2.659%]       28.264       25.957
DIFAX                     5.611    [  0.683%]       22.902        6.870
FSL2                      1.893    [  0.230%]        2.086       21.174
GEM                       1.451    [  0.176%]       22.248      100.174
NLDN                      0.011    [  0.001%]        0.058        9.413

The time series graph of volumes shows that there is a peak every 6 hours
which is completely dominated by the CONDUIT data that you are ingesting:

http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?shu.cup.edu+GRAPH

Latency plots for the various feeds show that you are by-and-large receiving
data in a timely manner.

> and I'm trying to cleanup our system.

If the latencies you are seeing are acceptable to you (again, they seem to be
OK, but not fantastically low), and if the data you are receiving represents
the set of information that folks feel is needed, then I think you are where
you should be.

> It looks like we want CONDUIT for sure, and I've trimmed it down some, but 
> one thing I
> realize is that we don't seem to be processing CONDUIT into a file.  Is that 
> correct?

I don't know what pqact.conf actions you have implemented.  At the time that
I last looked at your machine, you had no processing set up for CONDUIT data.

> Do we need to file CONDUIT for McIDAS to use?

No.  When you get around to installing GEMPAK, you will want to start
processing the CONDUIT data.  Since I saw an email from Chad that indicated
that you should hold off on the GEMPAK installation for a while, you may want
to stop your CONDUIT ingestion.  Then again, you may want to keep ingesting
it so that your IT folks will get used to the volume of data flowing into
your department :-)
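
For reference, filing data from the LDM is just a matter of pqact.conf
entries.  A purely hypothetical sketch of what filing some CONDUIT products
might look like (the pattern and output path below are made up for
illustration, and the fields must be separated by tabs):

  # hypothetical example: file GFS GRIB2 products from CONDUIT into
  # day-stamped files (pattern and path are illustrative only)
  CONDUIT	^data/nccf/com/gfs/prod/gfs\.(........)
  	FILE	data/conduit/gfs_\1.grib2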

> Could you be a little clearer about the "WMO" feed.  What is WMO, data?

Here goes:

- First, all of the feed names are mnemonics.

- Second, some feed names actually represent the union of other feeds.  We
  refer to these as compound feeds.  The mnemonic 'WMO' is one of the
  compound feeds.  It is the union of all products contained in the
  'IDS|DDPLUS' (global observational data) and 'HDS' (NCEP model output,
  almost all of which is in GRIB1) feeds.
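
In ldmd.conf terms, requesting the compound feed covers the same set of
products as requesting its components separately; for example (the upstream
host name below is just a placeholder):

  # single request for the compound feed
  REQUEST WMO ".*" your.upstream.host.edu

  # ...covers the same products as these two requests
  REQUEST IDS|DDPLUS ".*" your.upstream.host.edu
  REQUEST HDS ".*" your.upstream.host.edu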

> and do I need HDS

I believe so, yes.

> I'm not sure why ldmping doesn't work,

Hmm... Yes, I see that an 'ldmping' from the LDM on my workstation,
yakov.unidata.ucar.edu, to your machine, shu.cup.edu, does not work:

[ldm@yakov ~]$ ldmping shu.cup.edu
Oct 24 20:56:32 INFO:      State    Elapsed Port   Remote_Host           rpc_stat
Oct 24 20:56:32 INFO: Resolving shu.cup.edu to 158.83.74.22 took 0.000543 seconds
Oct 24 20:56:42 ERROR: SVC_UNAVAIL  10.001530    0   shu.cup.edu  h_clnt_create(shu.cup.edu): Timed out while creating connection

The cause of this kind of failure is typically related to a firewall setting
on the receiving LDM's side (i.e., at CUP).
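
For what it's worth, the LDM listens on TCP port 388, so any firewall in
front of shu needs to allow inbound connections to that port from hosts that
should be able to 'ldmping'/'notifyme' it.  A rough sketch, assuming an
iptables-based firewall (your IT group's tooling and policy will likely
differ):

  # allow inbound LDM connections (TCP port 388) -- illustrative only
  iptables -I INPUT -p tcp --dport 388 -j ACCEPT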

> but is it necessary?

No, or, at least, not if everything is running OK for you.  If, however, you
have problems, our ability to help you troubleshoot them is severely
diminished by our not being able to do an 'ldmping' and/or 'notifyme' to your
machine.

> I can't seem to get notifyme working either, but is it necessary?

I verified that I cannot do a 'notifyme' to shu either.  My comment is the
same as for 'ldmping'.
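
For reference, the check we run from here looks like the following (run as
'ldm' on yakov):

  notifyme -vl- -h shu.cup.edu -o 3600

Also note that, separate from any firewall issue, shu's LDM will only honor a
remote 'notifyme' if the requesting host is ALLOWed in its ldmd.conf;
something along the lines of:

  ALLOW ANY ^yakov\.unidata\.ucar\.edu$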

> How can I install McIDAS on other workstations on campus, and make the data 
> available from
> Shu?

You can install McIDAS on whatever machines you want (Unix, Linux, or
Mac OS X).  The procedure will be mostly the same as it was on shu.cup.edu:

- create a 'mcidas' account
- download McIDAS to the ~mcidas directory
- set values in the shell-specific configuration files (e.g., .cshrc for C
  and T shells; .bash_profile for Bash, etc.); a rough sketch of these
  settings appears after the build steps below
- 'source' the shell-specific configuration file to set the needed
  environment variables
- unpack the McIDAS distribution using 'mcunpack'
- CD to the ~mcidas/mcidas2007/src directory
- build the McIDAS-X part of the distribution:

  cd ~mcidas/mcidas2007/src
  make
    -- OR --
  make mcx

  Both of these build the McIDAS-X portion of the distribution.  There is no
  need to build the McIDAS-XCD portion of the distribution since the decoding
  is being done on shu.cup.edu.

- install the newly built code:

  make install
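
As an aside, the shell-specific settings referred to above amount to a
handful of environment variables.  A rough Bash sketch (the values here are
illustrative; the Unidata McIDAS-X Users Guide has the definitive list and
values for your setup):

  # ~mcidas/.bash_profile -- illustrative values only
  export McINST_ROOT=$HOME                      # installation root for 'make install'
  export MCDATA=$HOME/workdata                  # McIDAS working directory
  export MCPATH=$MCDATA:$HOME/data:$HOME/help   # directories McIDAS searches
  export MCTABLE_READ="$MCDATA/MCTABLE.TXT;$HOME/data/ADDESITE.TXT"
  export MCTABLE_WRITE="$MCDATA/MCTABLE.TXT"
  export PATH=$HOME/bin:$PATH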

Some differences between what is needed in a client build and in a server build:

- in the client build, you do not need to define ADDE datasets as these will be
  read from shu.cup.edu

- you do not need to run the McIDAS configuration script 'mcxconfig'.  Instead,
  you can create the Client Routing Table entries needed to locate datasets
  on shu.cup.edu:

  <as 'mcidas'>
  cd ~mcidas/data
  cp DATALOC.BAT LOCDATA.BAT

  -- edit LOCDATA.BAT and set the names of the machines you want your users
     to go to for the various datasets (a short example of these entries
     appears at the end of this message)

- make the client routing table entries active:

  cd ~mcidas/workdata
  batch.k LOCDATA.BAT

- do the needed configuration for each user account that you want to run McIDAS:

  - create the user account
  - create the directories ~/mcidas and ~/mcidas/data in the account as the
    owner of the account
  - modify the user's shell-definition file to include needed McIDAS definitions
    (see the Unidata McIDAS-X Users Guide for the 1-2-3 on how to do this)
  - log off and log back on and verify that the user can start a McIDAS session
    that uses the code built as 'mcidas' and the client routing table
    entries defined by 'mcidas' (see above)
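
In shell terms, that per-user setup is roughly the following (a sketch,
assuming Bash; the environment settings are the same kind shown for the
'mcidas' account earlier):

  # as the new user
  mkdir -p ~/mcidas/data
  # add the McIDAS environment settings (MCDATA, MCPATH, PATH, etc.) to
  # ~/.bash_profile, log out and back in, then start a session with:
  mcidas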

Setting up a new McIDAS user should take on the order of 5 minutes.
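
One last note on the client routing table: the entries you put in
LOCDATA.BAT are just McIDAS DATALOC commands naming the server that handles
each ADDE dataset group, along the lines of (the group names here are the
common Unidata ones; yours may differ):

  DATALOC ADD RTPTSRC  SHU.CUP.EDU
  DATALOC ADD RTIMAGES SHU.CUP.EDU
  DATALOC ADD RTGRIDS  SHU.CUP.EDU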

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: XRR-599672
Department: Support McIDAS
Priority: Normal
Status: Closed