
[LDM #MFT-584849]: Errors from noaaportIngester process



Hi Justin,

Sorry I wasn't available to respond to your inquiry yesterday, but the
snow beckoned me (along with a LOT of other Front Rangers!) :-)

re:
> We run four RedHat 6 Linux systems that receive data from SBN dishes via
> a DVB signal; 2 are in College Park, MD and the other 2 are in Boulder, CO.
> 
> We have found that our systems in Boulder are missing over 100,000 products
> a day compared to our College Park system. Looking at our noaaportIngester
> log on one of the Boulder systems for channel 1 we are seeing these types
> of errors:
> 
> Feb 06 17:59:30 noaaportIngester[9189] WARN: Gap in packet sequence: 
> 130361016 to 130361019 [skipped 2]
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: OOPS, start of new product 
> [10174607 ] with unfinished product 10174606
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: Product definition header 
> version 1 pdhlen 16
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: PDH transfer type 71
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: header length 52 [pshlen = 36]
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: blocks per record 1 records per 
> block 1
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: product seqnumber 10174607 
> block number 0 data block size 1761
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: product header flag 1
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: prodspecific data length 0
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: bytes per record 0
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: Fragments = 0 category 1
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: next header offset 0
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: original seq number 0
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: receive time 1517939967
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: transmit time 1517939969
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: run ID 0
> Feb 06 17:59:30 noaaportIngester[9189] ERROR: original run id 0

Errors like these are typically indicative of noise in the ingest.  They
have nothing to do with the proper functioning of 'noaaportIngester'.
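For what it's worth, the "skipped" count in the WARN line above is just the
difference between consecutive SBN packet sequence numbers, minus one.  A
minimal illustration (a sketch, not the actual noaaportIngester source):

```python
def skipped_packets(last_seq: int, next_seq: int) -> int:
    """Packets missed between two consecutively received SBN sequence
    numbers (illustrative only; not taken from the LDM source)."""
    return next_seq - last_seq - 1

# The gap logged above, 130361016 to 130361019:
print(skipped_packets(130361016, 130361019))  # -> 2
```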

Question:

- are you logging the stats provided by the Novra S300N receiver that
  the two Boulder-based systems are connected to?

  We log a variety of parameters once per minute that we find useful
  when we need to diagnose ingest/noise problems:

  Novra S300N receiver Satellite/tuner:

  Frequency
  Symbol Rate
  ModCod
  Gold code
  Input Stream Filter
  Input Stream ID
  Signal Lock
  Data Lock
  Uncorrectable Rate
  Packet Error Rate
  Carrier to Noise (C/N)
  Signal Strength

- did you see a marked improvement/degradation when your NOAAport
  dish was moved from SES-1 to Galaxy 28?

  We saw a significant increase in Carrier to Noise (a good thing):
  values that ranged in the mid-high 15s improved to mid-17s.  This
  was accompanied by a drop in the number of errors we were seeing
  in our ingest.

  FYI: the UCAR NOAAPort dish is in the front of our Foothills Lab
  building 2 here in Boulder.
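If you are not already logging receiver stats, a once-per-minute poll
written out as CSV is all we really mean.  Here is a rough sketch;
query_receiver() is a placeholder you would replace with whatever actually
talks to your S300N (e.g., the vendor's utility or SNMP), and the sample
values it returns are invented for illustration:

```python
import csv
import sys
import time

FIELDS = ["time", "signal_lock", "data_lock", "uncorrectable_rate",
          "packet_error_rate", "carrier_to_noise", "signal_strength"]

def query_receiver():
    """Placeholder for the actual S300N query (vendor utility, SNMP,
    etc.).  The values returned here are made up for illustration."""
    return {"signal_lock": 1, "data_lock": 1, "uncorrectable_rate": 0.0,
            "packet_error_rate": 0.0, "carrier_to_noise": 17.2,
            "signal_strength": -45.0}

def log_once(writer):
    """Append one timestamped row of receiver stats."""
    row = {"time": time.strftime("%Y-%m-%dT%H:%M:%S")}
    row.update(query_receiver())
    writer.writerow(row)

if __name__ == "__main__":
    writer = csv.DictWriter(sys.stdout, fieldnames=FIELDS)
    writer.writeheader()
    log_once(writer)   # run once per minute, e.g., from cron
```

A C/N column that trends down over days is exactly the kind of evidence
that distinguishes a noisy dish from a software problem.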

re:
> We are seeing over 100,000 of these types of error messages in Boulder,
> compared to College Park, which sees a few thousand; both of our systems
> in Boulder are seeing the same number of errors, over 100,000.

Again, these kinds of errors indicate noise in the ingest.
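When comparing sites, it can help to tally the messages by type rather than
eyeball raw logs.  A quick sketch (the regular expression just matches the
log format shown above):

```python
import re
from collections import Counter

LINE_RE = re.compile(r"noaaportIngester\[\d+\]\s+(WARN|ERROR):\s+(.*)")

def tally(lines):
    """Count WARN/ERROR messages by severity and leading phrase so two
    sites' logs can be compared side by side."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            # Key on the first few words so variable sequence numbers
            # collapse into one bucket.
            key = " ".join(m.group(2).split()[:4])
            counts[(m.group(1), key)] += 1
    return counts
```

Running this over a day's log on each machine gives per-message counts that
can be compared directly between Boulder and College Park.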

re:
> There seems to be some problem with the DVB signal being broadcast which
> our virtual systems in Boulder are listening to (we are also running these
> over a virtual system in College Park). I've included one of our network
> administrators (Anuj) who is familiar with the actual physical equipment
> configuration.
> Do you know of, or have you seen, what can cause this type
> of behavior/performance? 

I don't know of a reason why doing the ingest within a virtual machine would
be problematic UNLESS there are performance issues with the VM (e.g., it is
running on an overloaded host), or the LDM queue is on a mounted drive that
is very slow.  We had another site whose attempts to run an LDM in a VM were
fraught with errors; the resolution of their problem was to _not_ put the
LDM queue on a mounted drive.
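A quick way to check for that situation is to look up which filesystem the
queue file lives on.  Here is a sketch that parses /proc/mounts-style text
(on a real system you would pass in the contents of /proc/mounts; the list
of network filesystem types is just a starting point):

```python
NETWORK_FS = {"nfs", "nfs4", "cifs", "smbfs", "fuse.sshfs"}

def fs_type(mounts_text: str, path: str) -> str:
    """Return the filesystem type of the longest mount-point prefix of
    `path`, given the text of /proc/mounts."""
    best_mnt, best_type = "", ""
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        mnt, fstype = parts[1], parts[2]
        if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                and len(mnt) > len(best_mnt):
            best_mnt, best_type = mnt, fstype
    return best_type

def queue_on_network_fs(mounts_text: str, queue_path: str) -> bool:
    """True if the LDM queue appears to live on a network filesystem."""
    return fs_type(mounts_text, queue_path) in NETWORK_FS
```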

re:
> Losing over 100k products has had quite an impact
> on our processing in Boulder, and we've set up cross feeding the LDM systems
> between College Park and Boulder to not lose any data.

Yes, losing that many products would make the ingest unusable.
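For reference, the kind of cross-feeding you describe is typically expressed
as REQUEST entries in each site's ldmd.conf.  A minimal sketch for a Boulder
host (the hostname and feedtype list are placeholders; adjust to what you
actually ingest):

```
# ldmd.conf on a Boulder LDM host: request NOAAPort feedtypes from a
# College Park peer as a fallback (hostname is a placeholder)
REQUEST IDS|DDPLUS ".*" ldm-cp.example.gov
REQUEST HDS        ".*" ldm-cp.example.gov
REQUEST NGRID      ".*" ldm-cp.example.gov
```

The LDM discards duplicate products by signature, so receiving the same
product from both the dish and the peer is harmless.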

By the way, we "power" the NOAAPort components of our Internet Data Distribution
(IDD) with geographically remote ingest sites:  2 machines here at UCAR (each
connected to a different Novra S300N but the same dish); 2 machines at the
Space Science and Engineering Center (SSEC) at the University of
Wisconsin-Madison (UW), each likewise connected to its own Novra S300N on a
shared dish; and one at the Southern Region Climate Center (SRCC) at
Louisiana State University (LSU).  All of these ingest sites greatly benefited
from measurably improved C/N in their NOAAPort ingest after their dishes were
moved to Galaxy 28.

Aside: we have offered IDD feeds to folks at GSD in the past, but we have
never had anybody express an interest in this free service.  We are also
making an IDD feed of all GOES-16 data available to interested users,
again free-of-charge.

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: MFT-584849
Department: Support NOAAPORT
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.