
19990714: help with mcidas-xcd (cont.)



>From: Zuo Dong Zheng <address@hidden>
>Organization: CCNY
>Keywords: 199907011430.IAA14251 McIDAS make IRIX compilers

Zuodong,

re: strongly recommend pressing unused disk into service
>I have repartitioned the unused hard disk c0t2d0 into one slice as c0t2d0s0
>and mounted it on /scratch2.

Fantastic.  Great news!

>But how do I combine it with /scratch?

Here is what I did on halo:

as 'root'

o chmod 775 /scratch2
o mkdir /scratch2/xcd
o chown ldm /scratch2/xcd
o chmod 775 /scratch2/xcd

as 'ldm'

o ldmadmin stop
o cd /scratch/mcidasd
  mv *.IDX *.XCD *.RAP *.RAT HRS.SPL *.IDT *.DAT GRID* MDXX* SCHEMA /scratch2/xcd
  cd /scratch2/xcd
  ln -s /scratch/mcidasd/SYSKEY.TAB SYSKEY.TAB

as 'mcidas'

o cd ~mcidas/data
o <edit LOCAL.NAM and change the REDIRECTion entries for most non-imagery
  products to point to /scratch2/xcd; this tells the XCD decoders to
  write their output in /scratch2/xcd (see the example entries after
  this list of steps)>
o cd workdata
  te.k XCDDATA \"/scratch2/xcd
  redirect.k REST LOCAL.NAM
  decinfo.k SET DMGRID ACTIVE

as 'ldm'

o ldmadmin start
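
For reference, here is a rough sketch of what the edited REDIRECTion
entries in LOCAL.NAM might look like.  The layout here follows the
file-mask-then-directory pattern; the particular masks are examples
only, so check both the syntax and the masks against what is actually
listed in your LOCAL.NAM:

  GRID*     /scratch2/xcd
  MDXX*     /scratch2/xcd
  *.IDX     /scratch2/xcd
  *.XCD     /scratch2/xcd
  HRS.SPL   /scratch2/xcd
  AREA*     /scratch/mcidasd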

The effect of the above was to:

o have all image products in the Unidata-Wisconsin datastream (MCIDAS
  feed in ldmd.conf and pqact.conf) written to the /scratch/mcidasd
  directory
o have all of the XCD generated products written to the /scratch2/xcd
  directory
o change LOCAL.NAM (the set of McIDAS REDIRECTions) in order to:
  o tell the XCD decoders where to write their output data files
  o tell the 'mcidas' session where to find the data files
o tell XCD to start decoding GRIDded data (the decinfo.k SET DMGRID ACTIVE
  directive activates the GRID data monitor, DMGRID, which, in turn,
  runs the GRID decoder)
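
As a quick sanity check after the LDM is restarted, you can watch new
XCD output show up in the new directory and keep an eye on the free
space at the same time, for example:

  ls -lt /scratch2/xcd | head
  df -k /scratch /scratch2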

Each user who wants to run McIDAS-X sessions should:

o have his/her own mcidas/data directory
o run ~mcidas/userdata/admin to copy the three files that they will
  need from the ~mcidas/mcidas7.5/data directory into their own
  mcidas/data directory
o do the following the first time that they start a McIDAS-X session:

  REDIRECT REST LOCAL.NAM

  This restores the new REDIRECTions I defined in ~mcidas/data/LOCAL.NAM
  to their environment.  Again, this only needs to be done once, since
  the results will be written into their file ~user/mcidas/data/LWPATH.NAM
  (replace 'user' with the user's login name).

When I logged on, there was only 64 MB of space left in /scratch.  AND this
was even without GRIDded data being decoded!

I turned on GRID file decoding so that Ward can have access to
the model output data in the IDD.  We need to continue to monitor disk
space to make sure that /scratch2 doesn't fill up.  If it looks like it
might, then we will need to redistribute some of the data files in
/scratch2/xcd to /scratch/mcidasd.  My first choice would be to move
the MDXX files and SCHEMA (they have to be moved together!).  If
this were done, then the REDIRECTions in LOCAL.NAM would, once again,
have to be modified, and the LDM would have to be stopped and restarted.
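
If that redistribution ever becomes necessary, here is a rough sketch of
the steps, assuming the same accounts and directories used above (treat
it as an outline to adapt, not a recipe to run blindly):

as 'ldm'

o ldmadmin stop
o cd /scratch2/xcd
  mv MDXX* SCHEMA /scratch/mcidasd

as 'mcidas'

o cd ~mcidas/data
o <edit LOCAL.NAM so that the MDXX* and SCHEMA REDIRECTion entries
  point to /scratch/mcidasd>
o cd workdata
  redirect.k REST LOCAL.NAM

as 'ldm'

o ldmadmin start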

As I log off, the disk space situation looks decent:

halo{ldm}% df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t3d0s0     143927   14129  115408    11%    /
/dev/dsk/c0t3d0s6     551518  302245  194123    61%    /usr
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c0t3d0s3     143927   46084   83453    36%    /var
/dev/dsk/c0t0d0s0     480815  418337   14398    97%    /export/home
/dev/dsk/c0t0d0s1     481823  103932  329711    24%    /opt
/dev/dsk/c0t1d0s0     963662  386894  480408    45%    /terascan
/dev/dsk/c0t5d0s0     962226  649531  216475    76%    /opt/apps
/dev/dsk/c0t5d0s5     402709      71  362368     1%    /var/mail
/dev/dsk/c0t5d0s1     962226  313112  552894    37%    /mcidas
/dev/dsk/c0t5d0s4     673182   40553  565319     7%    /usr/local/ldm
/dev/dsk/c0t5d0s3     962226  251031  614975    29%    /scratch
swap                   63304     304   63000     1%    /tmp
/export/home/students/wxnut
                      480815  418337   14398    97%    /home/wxnut
/dev/dsk/c0t2d0s0    1618589  563036  893703    39%    /scratch2

Between /scratch and /scratch2 you have about 1.4 GB of disk available
for decoding GRIDs.  This will be enough given that the scouring that
was set up will keep one day's worth of GRID files, and given that the
request for HRS data in ~ldm/etc/ldmd.conf is for a subset of the
gridded data available in the IDD.
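
The scour setup itself is not shown here, but as a rough illustration
only (not necessarily what is actually installed on halo), a crontab
entry for 'ldm' that keeps about one day's worth of XCD GRID files
could look something like:

  # illustration only: hourly, remove GRID files older than one day
  0 * * * * find /scratch2/xcd -name 'GRID*' -mtime +1 -exec rm -f {} \;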

Tom

>From address@hidden  Thu Jul 15 10:55:27 1999

What more can I say?  Super thanks.

zuodong