
[Datastream #QOK-427508]: NPS: New "manager", Date problem



Hi,

re:
> Thanks for your information this last week as I begin to log into machines
> here at NPS as an LDM manager for the first time.

No worries.

re:
> I'd like to shift gears a bit and help get the ball rolling faster on our
> transition here at NPS from our relatively limited ingest into linux machines
> on campus to a larger ingest on a new Amazon platform.

OK.

re:
> The IT/cybersecurity department here on campus requires, of course,
> specifics on the technical aspects of what this Amazon instance/storage
> would require before we can spin it up.

Of course.

re:
> Do you have, or can you direct us toward a person or website, that has
> specifics that we would need in order to ingest and store a useful
> (maybe a year's worth?) archive?

The volume of data flowing in the IDD is a moving target, so any estimate made
today will be wrong tomorrow.  That being said, you can do a "back of the
envelope" calculation by taking the average hourly volume of the datastreams
you want and multiplying by the number of hours in a year.  For instance,
statistics for a randomly chosen real-server back end of our
idd.unidata.ucar.edu top-level IDD relay cluster can be found at:

http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/siteindex?cluster2.unidata.ucar.edu

The link at the lower left of this page will generate a snapshot list of
the volume of data flowing in each of the datastreams that this node is
handling:

Cumulative volume summary
http://rtstats.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?cluster2.unidata.ucar.edu

This looks like:

Data Volume Summary for cluster2.unidata.ucar.edu

Maximum hourly volume 100803.397 M bytes/hour
Average hourly volume  65493.578 M bytes/hour

Average products per hour     523876 prods/hour

Feed                    Average     [% of total]      Maximum      Products
                  (M byte/hour)                  (M byte/hour)   number/hour
SATELLITE             14892.385    [ 22.739%]    19959.534     6659.762
CONDUIT               11783.612    [ 17.992%]    33345.747   105400.619
NGRID                 10437.259    [ 15.936%]    14778.178    64434.857
NEXRAD2                9293.592    [ 14.190%]    11192.046   102743.286
NOTHER                 6984.060    [ 10.664%]    10042.353    12000.310
NIMAGE                 3973.837    [  6.068%]     9144.665     3507.000
FNMOC                  3181.508    [  4.858%]    11549.530     8347.071
NEXRAD3                2975.614    [  4.543%]     3511.048   127587.762
HDS                    1369.725    [  2.091%]     1754.030    44231.143
GEM                     306.029    [  0.467%]     3307.336     1779.452
FNEXRAD                 119.299    [  0.182%]      134.965      104.548
UNIWISC                  85.293    [  0.130%]      142.226       49.857
IDS|DDPLUS               71.118    [  0.109%]       84.169    46548.738
EXP                      12.109    [  0.018%]       18.775      126.310
LIGHTNING                 8.033    [  0.012%]       15.598      354.429
GPS                       0.105    [  0.000%]        1.102        0.929

As you can see, there are hourly average and hourly peak volumes listed.
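
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch
in Python.  The per-feed averages are copied from the summary above; the feed
selection and the assumption that the averages hold for a full year are just
illustrative, since, as noted below, the real volumes keep growing:

# Back-of-the-envelope yearly archive size, using the average hourly
# volumes (in M bytes/hour) reported by the rtstats summary above.
# Assumption: the averages hold for a full year, which understates
# growth in the IDD.

HOURS_PER_YEAR = 24 * 365

# Average M bytes/hour per feed, copied from the cluster2 summary
# (a subset shown here for brevity).
avg_mbytes_per_hour = {
    "SATELLITE": 14892.385,
    "CONDUIT": 11783.612,
    "NGRID": 10437.259,
    "NEXRAD2": 9293.592,
    "NOTHER": 6984.060,
    "NIMAGE": 3973.837,
    "FNMOC": 3181.508,
    "NEXRAD3": 2975.614,
    "HDS": 1369.725,
    "IDS|DDPLUS": 71.118,
}

# Pick the feeds you actually plan to ingest and archive (example choice).
wanted = ["CONDUIT", "NEXRAD2", "IDS|DDPLUS"]

total_mbytes = sum(avg_mbytes_per_hour[f] for f in wanted) * HOURS_PER_YEAR
print(f"Estimated archive size for one year: {total_mbytes / 1e6:.1f} T bytes")

# Archiving everything this node relays (65493.578 M bytes/hour average)
# works out to roughly 574 T bytes/year.
all_mbytes = 65493.578 * HOURS_PER_YEAR
print(f"All feeds, one year: {all_mbytes / 1e6:.1f} T bytes")

Whatever number comes out of a calculation like this, pad it generously: the
hourly peaks run well above the averages, and the totals only grow over time.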

re:
> We realize that the recent transition to GOES-16/17 is
> changing all of the calculations, so making this transition now seems like 
> good timing.

The volume of data flowing in the IDD changes all of the time as new model
output is added, new satellite imagery is added, higher-resolution radar data
becomes available, etc.  The IDD truly is a fire hose, and its stream is
continually growing.

re:
> By the way, Ryan xxxxx has been here a few years and has already made some
> headway in getting this transition going. You may recall earlier
> correspondence with him, and I've cc'ed him. As of now, the two of us are
> working together on this project to help cover this transition to Amazon
> during any other temporary projects that require weeks worth of field work
> out of the office.

OK, sounds good.

re:
> Thanks for any technical advice you have,

No worries.

Do you/NPS have a road map for where you are heading wrt cloud computing?  If
yes, are you willing to share it?

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: QOK-427508
Department: Support Datastream
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.