Re: [thredds] Large aggregated datasets, WMS memory issue

Hi all,

Thanks for the kind replies.

I've done further testing: with -Xmx1536m, the WMS service works fine on a 1GB 
dataset, but it never worked for the 1.3GB dataset.

I'm in the process of setting up an x64 VM with a lot of memory, and will try 
out the options suggested here.
It's good to know that it's the request size that matters, not the dataset size.

Thanks again and hopefully I'll be back with some detailed information.

Regards,
Lin

________________________________
From: John Caron [mailto:caron@xxxxxxxxxxxxxxxx]
Sent: Friday, 1 April 2011 1:33 AM
To: Lin, Xiangtan (CSIRO IM&T, Yarralumla)
Cc: thredds@xxxxxxxxxxxxxxxx
Subject: Re: [thredds] Large aggregated datasets, WMS memory issue

Hi Lin:

It's likely that you are at the edge of your heap memory, and "random" requests 
are pushing you over.

You might run a heap monitor like jvisualvm to watch the memory use.

A quick fix may be to reduce or eliminate file caching, e.g.:

<NetcdfFileCache>
  <minFiles>10</minFiles>
  <maxFiles>20</maxFiles>
  <scour>10 min</scour>
</NetcdfFileCache>

in threddsConfig.xml

see:

http://www.unidata.ucar.edu/projects/THREDDS/tech/tds4.2/reference/ThreddsConfigXMLFile.html

If possible, run jvisualvm before and after any change to verify.


On 3/30/2011 6:25 PM, Xiangtan.Lin@xxxxxxxx wrote:
Hi John,

Thanks for the information. My large datasets will eventually be hosted on a 
64-bit JVM. At the moment, I'm testing a 1.3GB dataset with -Xmx1536m on a dev 
machine. The following WMS request results in HTTP Status 500 - Internal Server 
Error (java.lang.OutOfMemoryError: Java heap space):

http://localhost:8090/thredds/wms/aggregated/magmap_V5_2010_s.nc?service=WMS&version=1.3.0&request=GetMap&Layers=magmap_V5_2010&CRS=EPSG:4326&BBOX=152,-39,154,-37&width=100&height=100&Styles=BOXFILL/occam_pastel-30&format=image/jpeg

The dataset has a bounding box:
    <westBoundLongitude>112.50461544244448</westBoundLongitude>
    <eastBoundLongitude>154.66376524558447</eastBoundLongitude>
    <southBoundLatitude>-39.560232620003625</southBoundLatitude>
    <northBoundLatitude>-35.19773436500001</northBoundLatitude>

I guess the request is fairly small (BBOX=152,-39,154,-37).

In addition, the NetCDF Subset Service works fine for the same dataset with 
the same amount of memory.

I'm wondering whether, in the general case, the maximum heap required by the 
THREDDS WMS service is determined by the dataset size, and whether there is a 
configuration option to limit the WMS request size.

Regards and thanks,

Lin

________________________________
From: thredds-bounces@xxxxxxxxxxxxxxxx On Behalf Of John Caron
Sent: Thursday, 31 March 2011 5:21 AM
To: thredds@xxxxxxxxxxxxxxxx
Subject: Re: [thredds] Large aggregated datasets, WMS memory issue

Hi Lin:

It doesn't (usually) matter how big the dataset is, just how big the request is.

Can you send a typical WMS request that causes this problem? Do you know what 
size of data you are requesting? What file format?
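For a rough sense of scale, here's a back-of-envelope sketch, assuming the server reads every native grid cell inside the requested BBOX; the grid resolution and cell size below are hypothetical placeholders, not values from the actual dataset:

```shell
# Hypothetical estimate of the data a GetMap request may pull into
# memory, assuming every native grid cell inside the BBOX is read.
# cells_per_degree and bytes_per_cell are placeholder assumptions.
cells_per_degree=1000   # hypothetical ~0.001-degree grid
lon_span=2              # degrees, from BBOX=152,-39,154,-37
lat_span=2
bytes_per_cell=4        # e.g. a 32-bit float variable
bytes=$((cells_per_degree * lon_span * cells_per_degree * lat_span * bytes_per_cell))
echo "~$((bytes / 1024 / 1024)) MiB read for this request"
```

The point of the arithmetic is that the memory cost scales with the native resolution inside the BBOX, not with the 100x100 output image size or the total dataset size.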

-Xmx1536m is around the max for a 32-bit JVM. I strongly advise you to use a 
64-bit JVM with something more like a 4 GB heap.
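As a sketch, one common way to set a larger heap for Tomcat is through $CATALINA_BASE/bin/setenv.sh, which Tomcat's startup scripts source automatically if present; the exact values are examples, not prescriptions:

```shell
# Sketch: raise the Tomcat heap for a 64-bit JVM via setenv.sh.
# Appending preserves any options already set elsewhere.
CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
export CATALINA_OPTS
echo "$CATALINA_OPTS"
```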

John



On 3/29/2011 7:19 PM, Xiangtan.Lin@xxxxxxxx wrote:
Hi all,

I'm in the process of serving out some large aggregated datasets (5-8 GB) with 
the THREDDS WMS. I consistently get "OutOfMemoryError: Java heap space" when 
making WMS requests. On my test machine I've allocated -Xmx1536m to Tomcat, 
and experimentally THREDDS is able to serve aggregated datasets of about 1GB.

By digging through the Dataset Aggregation and memory-related topics on the 
list, I've found that the general suggestion is to allocate more memory to 
Tomcat, e.g. -Xmx4g.

One post mentioned that "Thredds does everything in memory".

Can somebody please advise how much memory I should allocate to Tomcat for the 
8GB dataset? I'm afraid I would have to secure more than 8 GB of memory.
And what is the best practice for serving out large datasets with THREDDS in 
general?

Regards and thanks,


Xiangtan Lin

Technical Services Officer | CSIRO IM&T

xiangtan.lin@xxxxxxxx | www.csiro.au


_______________________________________________
thredds mailing list
thredds@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit:
http://www.unidata.ucar.edu/mailing_lists/

