
[THREDDS #WWA-524146]: Thredds-aws



I only restarted the machine about 30 minutes to an hour ago, so if anything 
happened before then that got it working, it wasn't me.

Looking right now, everything seems fine. Are you seeing problems currently on 
your end?

For that kind of workflow, I don't see any obvious alternatives. What I am 
seeing, though, is multiple machines from udel.edu accessing the server, and 
at least one of them does not appear to be downloading only reflectivity 
(Ref). I'm just wondering if there's some room to improve efficiency.
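
In case it's useful, here is roughly the kind of thing I had in mind -- an 
untested sketch using Siphon, where the station ID, time, and variable name 
are placeholders you'd swap for your own case. The idea is to ask the radar 
query service for a single station and time, then read just the reflectivity 
variable through the remote-access service instead of downloading whole 
Level 2 volume files:

    from datetime import datetime

    from siphon.radarserver import RadarServer

    # Radar query service in front of the AWS Level 2 archive
    rs = RadarServer('http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/')

    # One station, one time -- both are placeholders, not your actual case
    query = rs.query()
    query.stations('KDOX').time(datetime(2018, 10, 10, 12))

    cat = rs.get_catalog(query)
    ds = list(cat.datasets.values())[0]

    # Open the volume remotely and pull only reflectivity; the exact variable
    # name can vary, so check data.variables if this one isn't present
    data = ds.remote_access()
    refl = data.variables['Reflectivity_HI'][:]

On the parallel side, if all four nodes are issuing catalog queries at the 
same time, spacing those calls out, or building the day's catalog once and 
handing the resulting dataset list to the workers, should also take some 
load off the server.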

What machine are you connecting from?

Ryan

> Hi Ryan,
> 
> Thanks for your help. I got kicked off the server again this evening; I'm
> not entirely sure what's happening.
> 
> I am 'cat'ing a full day's worth of scans at one station, but only
> downloading one scan at a time, looking solely at the reflectivity, and
> processing that. I know the greatest time sink in each iteration is the
> call to the THREDDS server. I am also parallelizing the process so that
> four nodes are calling the server, which could also be an issue.
> 
> Any help/guidance is appreciated.
> 
> Thanks again,
> Dan
> 
> address@hidden> wrote:
> 
> > Daniel,
> >
> > I bounced the machine again, and I'll try to keep an eye on it. Before
> > rebooting, all I saw was that the machine was busy, maybe hung?
> >
> > The server should be fine with what you're doing, so I'm not sure where
> > the problem lies.
> >
> > To explore potential workarounds: Are you downloading subsets of the
> > data? Or are you downloading entire files?
> >
> > Ryan
> >
> > > I am doing significant data processing via this server over the next
> > > couple of days and assume I am putting a lot of load on the server. Is
> > > there something I can do to help prevent this so that my program can
> > > continue running?
> > >
> > > Thanks so much,
> > > Dan
> > >
> > > On Wed, Oct 10, 2018 at 8:57 AM Daniel Moore <address@hidden> wrote:
> > >
> > > > Hello,
> > > >
> > > > I believe your thredds-aws server is currently down at the following
> > > > address:
> > > > http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/
> > > >
> > > > Thank you,
> > > >
> > > > On Fri, Sep 21, 2018 at 12:07 PM Daniel Moore <address@hidden> wrote:
> > > >
> > > >> Hello,
> > > >>
> > > >> I believe your thredds-aws server is currently down at the following
> > > >> address:
> > > >>
> > > >> http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/
> > > >>
> > > >> Thank you,
> > > >> --
> > > >> *Daniel P. Moore*
> > > >> M.S. Geography (Atmospheric Science), *2019*
> > > >> Department of Geography
> > > >> University of Delaware
> > > >>
> > > >
> > > >
> > > > --
> > > > *Daniel P. Moore*
> > > > M.S. Geography (Atmospheric Science), *2019*
> > > > Department of Geography
> > > > University of Delaware
> > > >
> > >
> > >
> > > --
> > > *Daniel P. Moore*
> > > M.S. Geography (Atmospheric Science), *2019*
> > > Department of Geography
> > > University of Delaware
> > >
> > >
> >
> 
> --
> *Daniel P. Moore*
> M.S. Geography (Atmospheric Science), *2019*
> Department of Geography
> University of Delaware
> 
> 

Ticket Details
===================
Ticket ID: WWA-524146
Department: Support THREDDS
Priority: High
Status: Open
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.