[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: Question about Format/organization of the data (fwd)



Hi Nancy, 

Keep the archived data in a directory structure just like the real-time
data. Then, when invoking GEMPAK, an argument can be added to indicate
which directory tree to access.

I have attached a sample script, albeit slightly old, to aid you in your
endeavor. I am pretty sure all the variables are correct, but I did need
to make some changes, so be careful; you may just want to use it as a
template.
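As a quick sketch of the idea (the paths and directory names below are just illustrative assumptions, not your actual layout), the archive tree mirrors the real-time tree and the GEMPAK/NAWIPS variables simply point at the archive root:

```shell
# A minimal sketch: mirror the real-time directory layout under an
# archive root, then point the GEMPAK/NAWIPS environment variables at
# that root.  All paths below are illustrative, not a real installation.
ARCHIVE_ROOT=$(mktemp -d)   # stand-in for something like /usr1/metdat/11mar93

# Same subdirectory layout as the real-time tree
mkdir -p "$ARCHIVE_ROOT/gempak/model/avn" \
         "$ARCHIVE_ROOT/gempak/surface" \
         "$ARCHIVE_ROOT/gempak/upperair" \
         "$ARCHIVE_ROOT/images/sat"

# The same variables the attached ntl.case script sets with setenv
export METDAT="$ARCHIVE_ROOT"
export GEMDATA="$METDAT/gempak"
export SAO_GEM_DATA="$METDAT/gempak/surface"

echo "GEMDATA=$GEMDATA"
```

With the tree mirrored this way, the attached script (or a copy of it) only needs the top-level archive path as its one argument.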

Good luck,

-Jeff
____________________________                  _____________________
Jeff Weber                                    address@hidden
Unidata Support                               PH:303-497-8676 
COMET Case Study Library                      FX:303-497-8690
University Corp for Atmospheric Research      3300 Mitchell Ln
http://www.unidata.ucar.edu/staff/jweber      Boulder, CO 80307-3000
________________________________________      ______________________

On Tue, 8 Oct 2002, Ron Murdock wrote:

> Jeff -
> 
> Would you please respond to this query?
> 
> Ron
> 
> ---------- Forwarded message ----------
> Date: Mon, 07 Oct 2002 11:52:42 -0700
> From: Nancy Selover <address@hidden>
> To: "'address@hidden'" <address@hidden>
> Subject: Question about Format/organization of the data
> 
> Hello,
>       We have LDM Unidata and Gempak which we use for real-time data.  The
> data organization, in terms of subdirectories, is very important in order
> for Gempak to find the requested products.  I have a question about how the
> Case Study data are organized in the datasets.  I assume the data are
> viewable in Gempak, and if so, are the files organized in the same sort of
> subdirectories as the real-time data?  Or, are the Case Studies all put
> together in one big subdirectory?
>       The reason I ask is that we are going to be archiving our real-time
> data, and storing the tarred and GZipped files on another machine which has
> a large capacity.  We are going to install Gempak on that other machine for
> users to view the archived data.  Since we will also be ordering some of
> your Case Study datasets, I would like to use the same organizational
> structure for my archive as you have for yours to simplify the procedure for
> the users.
> 
> Thank you for your advice.
> 
> Nancy
> 
> Nancy J. Selover
> Asst. State Climatologist
> Office of Climatology           tel:  480-965-0580
> Arizona State University      fax: 480-965-1473
> Tempe, AZ  85287-1508      e-mail: address@hidden
> 
> 
> 
#!/bin/csh  -f
#----------------------------------------------------------------
#
# ntl.case
#
# This script initializes the NAWIPS environment for viewing
# case study data (rather than real-time data).  
#
# Usage:
#      ntl.case  PATH 
#
# where PATH is the path to the top level of the case study data 
# directory tree (for example: /usr1/metdat/11mar93)
#
# Log:
# COMET         ??      Original version of script called ntl.csh
# P.Bruehl      11/95   New version
#----------------------------------------------------------------

# Check for argument

if ( $#argv != 1 ) then
 echo " "
 echo "You must specify the PATH to the top level of the data directory tree"
 echo "Example: ntl.case /usr1/metdat/11mar93"
 echo " "
 exit 1
endif

# Check if there is another copy of NTL running.

rm -f ntl.file
/bin/ps -ef | grep ntl | grep -v ntl_reader | grep -v grep | grep -v ntl.case > ntl.file

# Initialize ntlproc so the count test below is safe even if the
# redirection above produced nothing.
set ntlproc = ()
if ( -e ntl.file ) set ntlproc = `cat ntl.file`
if ( $#ntlproc != 0 ) then
 echo " "
 echo "WARNING! Another copy of NTL is running on your system.  You may not" 
 echo "run two copies of NTL on the same display device (HP monitor)." 
 echo -n "Do you want to continue? (y or n)"
 set resp = $<
 if ( $resp == "y" || $resp == "Y" ) then
   echo "Warning, NTL may not start up" 
   echo " "
 else   
   echo "OK. Please exit from the first copy of NTL before running ntl.case again"
   echo " "
   exit
 endif
 rm -f ntl.file
endif 

# Start initialization of NTL

echo " "
echo "Initializing NTL for Case Study Data located in $1"

# Set top level data directory
setenv METDAT $1
setenv NADATA $METDAT

shift

# Set underlying data environment variables based on $METDAT

# Model data 
setenv MODEL            $METDAT/gempak/model/avn

# Satellite data
setenv SAT              $METDAT/images/sat
setenv NASAT            $METDAT/images/sat
setenv RAD              $METDAT/images/radar
# Observational data 
setenv NALDM            $METDAT/gempak
setenv NALDMSN          $METDAT/gempak/upperair
setenv SAO_GEM_DATA     $METDAT/gempak/surface
setenv TEXT_DATA        $METDAT/weather

# Insert GEMPAK variables here...
setenv GEMDATA          $METDAT/gempak

# Ntrans meta files
setenv  NTRANS_META     $METDAT/meta

# Afos emulator
setenv AFOS_DATA        $METDAT/raw/afos


# Re-set the NWX_TABLES directory to point at the tables/
# subdirectory in the NWX data directory (if it exists).

if ( -d $TEXT_DATA/tables )  then
        setenv NWX_TABLES $TEXT_DATA/tables
endif

# Start up NTL in the background
ntl -s 20 $* &