
Re: Thredds init error



Hi Bruce,

Looks like your gzipped catalog directory didn't get attached. Could you send one for the entire content/thredds directory rather than just the dodsC subdirectory?

The TDS keeps track of all datasetRoot/datasetScan elements with a unique path. If the TDS runs into duplicate paths, the later ones are dropped. So, the messages should be WARNings rather than ERRORs. What happens if you try using your server even with these messages?
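For example, the kind of duplication that triggers these messages looks roughly like this. This is only a sketch: the service names, file names, and namespace URIs here are illustrative, not taken from your actual catalogs, and only the datasetRoot/catalogRef pattern is the point:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<catalog name="Example root catalog"
         xmlns="http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0"
         xmlns:xlink="http://www.w3.org/1999/xlink">

  <!-- A compound service: each nested service exposes the same data
       through a different access method. -->
  <service name="all" serviceType="Compound" base="">
    <service name="odap" serviceType="OpenDAP"    base="/thredds/dodsC/"/>
    <service name="http" serviceType="HTTPServer" base="/thredds/fileServer/"/>
  </service>

  <!-- Declare the data root ONCE, at the top level. Re-declaring the
       same path in every sub-catalog is what produces the
       "already have dataRoot" messages; the later declarations are
       simply dropped. -->
  <datasetRoot path="Shelves" location="/Volumes/gulf1/Shelves/"/>

  <!-- Sub-catalogs referenced from here can then use the "Shelves"
       path without re-declaring the root. -->
  <catalogRef xlink:href="shelves/catalog.xml" xlink:title="Shelves data"/>
</catalog>
```

The idea is that a datasetRoot path is a server-wide mapping, so it only needs to be stated in one catalog; sub-catalogs reached through catalogRef elements can reference the path without repeating the declaration.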

Actually, are you sure you have the latest war file? We changed CatalogRootHandler to DataRootHandler a few versions back so I suspect given your error message that you still have an old version. The current war file is available at:

ftp://ftp.unidata.ucar.edu/pub/thredds/3.10/

Ethan

Bruce Flynn wrote:

I've spent some time trying to get my server running the way I need it to, and I'm so close. Most of my problems were solved by obtaining the correct, latest-version war file. There is still one problem I'm having... when Thredds is initialized I get the following errors:


2006-06-09T17:28:14.866 +0100 [ 8857][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;dodsC/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:14.869 +0100 [ 8860][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;fileServer/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:14.992 +0100 [ 8983][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;dodsC/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:14.995 +0100 [ 8986][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;fileServer/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:15.131 +0100 [ 9122][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;dodsC/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:15.134 +0100 [ 9125][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;fileServer/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:15.466 +0100 [ 9457][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;dodsC/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;
2006-06-09T17:28:15.469 +0100 [ 9460][ 1] ERROR - thredds.servlet.CatalogRootHandler - already have dataRoot =&lt;fileServer/Shelves&gt; mapped to directory= &lt;/Volumes/gulf1/Shelves/&gt;


This is just a sample; I believe I get one for every catalog I have. Each of my catalogs has a compound service that uses the same datasetRoot. I have included a gzipped copy of my catalog tree from $TOMCAT_ROOT/content/thredds/dodsC with this email. Are multiple identical datasetRoots not allowed? If not, how would I share one datasetRoot for all the services in my catalogRef tree, or propagate the services from my root catalog to the sub-catalogs via the catalogRefs?

Thanks
Bruce

===============================================================================

To unsubscribe thredds, visit:
http://www.unidata.ucar.edu/mailing-list-delete-form.html
===============================================================================




--
Ethan R. Davis                                Telephone: (303) 497-8155
Software Engineer                             Fax:       (303) 497-8690
UCAR Unidata Program Center                   E-mail:    address@hidden
P.O. Box 3000
Boulder, CO  80307-3000                       http://www.unidata.ucar.edu/
---------------------------------------------------------------------------




NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.