RE: [GMLJP2] NetCDF <--> GML/JPEG2000


Thanks Mike,

Within the GML-JP2 activity, we have been mediating the tension between two goals. The first is 
"simplicity and ease of implementation." To satisfy this we have gone to some effort to 
ensure that an implementor can get basic coverage information using only a JP2 codec and a SAX 
parser. I think we have largely succeeded in this endeavor - a basic implementation needs only to 
scan the file for "RectifiedGrid", and from that can get the associated SRS, origin, 
extent, etc. This is enough information to plot the images on a map.
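To make the "JP2 codec plus SAX parser" claim concrete, here is a hypothetical sketch of that simple implementation path using only the Python standard-library SAX parser. The element and attribute names (gml:RectifiedGrid, gml:pos, gml:low/high, srsName) follow general GML 3 conventions and the sample fragment is invented; the actual GML-JP2 profile may differ in detail.

```python
import io
import xml.sax

class RectifiedGridHandler(xml.sax.ContentHandler):
    """Scan for a RectifiedGrid and collect SRS, origin, and grid extent."""
    def __init__(self):
        super().__init__()
        self.in_grid = False
        self.capture = None      # local name of the element whose text we are reading
        self.srs_name = None
        self.origin = None       # origin coordinates as a tuple of floats
        self.limits = None       # {"low": [...], "high": [...]} grid extent

    def startElement(self, name, attrs):
        local = name.split(":")[-1]
        if local == "RectifiedGrid":
            self.in_grid = True
        elif self.in_grid and local in ("pos", "low", "high"):
            self.capture = local
            self._buf = []
        elif self.in_grid and "srsName" in attrs:
            self.srs_name = attrs["srsName"]

    def characters(self, content):
        if self.capture:
            self._buf.append(content)

    def endElement(self, name):
        local = name.split(":")[-1]
        if self.capture == local:
            text = "".join(self._buf).strip()
            if local == "pos" and self.origin is None:
                # take the first <gml:pos> under the grid as the origin
                self.origin = tuple(float(v) for v in text.split())
            elif local in ("low", "high"):
                self.limits = self.limits or {}
                self.limits[local] = [int(v) for v in text.split()]
            self.capture = None
        if local == "RectifiedGrid":
            self.in_grid = False

# A made-up minimal fragment, just to exercise the handler:
FRAGMENT = """<gml:RectifiedGrid xmlns:gml="http://www.opengis.net/gml" dimension="2">
  <gml:limits><gml:GridEnvelope>
    <gml:low>0 0</gml:low><gml:high>499 499</gml:high>
  </gml:GridEnvelope></gml:limits>
  <gml:origin><gml:Point srsName="EPSG:4326"><gml:pos>42.0 -71.0</gml:pos></gml:Point></gml:origin>
</gml:RectifiedGrid>"""

handler = RectifiedGridHandler()
xml.sax.parse(io.StringIO(FRAGMENT), handler)
print(handler.srs_name, handler.origin, handler.limits)
# → EPSG:4326 (42.0, -71.0) {'low': [0, 0], 'high': [499, 499]}
```

That is the whole of what a basic client needs to plot the image on a map; everything richer in the file can be ignored.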

The other goal might be described as "richness of expression." To that end, we 
have made a place for annotations, custom CRS and UOM dictionaries, and features 
associated with the coverages. We have defined a GML profile that excludes topology and 
some other things that we think are not germane to this application. End users can use 
their own application schemas, with their own constructs, and as long as they obey the 
rules, the simple implementations should still be able to get the info they need.

I want to make sure that we (the GML-JP2 folks) do not define something that will lock us out of expressing O&M, or other valid use cases. Our base requirement is to allow one to associate a GML coverage with a JPEG2000 codestream, and to associate features and metadata/annotations with that coverage. We have done that.

The next question is "how do we define relationships between the coverages?" Currently we have grouped all related items in a GML FeatureCollection - is that rich enough? Is it too restrictive? Another question might be, "should GML-JP2 consider using O&M as its base schema?" When I ask that, I know I risk the ire of those who have been working on the current GML-JP2 schema. Also, it seems that O&M won't be fully fleshed out within the timeline in which we had hoped to produce a specification, and may not use GML.
Thanks for your input. We are certainly interested in the progress of O&M, and in making 
sure that our spec has immediate value in that context. Is there an email list or other 
resource which we can watch to keep up to date on O&M? Would you (or anyone from the 
O&M team) be willing to review our next draft specification and comment?

        -Sean

-----Original Message-----
From: Mike Botts [mailto:mike.botts@xxxxxxxxxxxxx]
Sent: Tuesday, May 10, 2005 2:21 PM
To: Sean Forde; caron@xxxxxxxxxxxxxxxx; galeon@xxxxxxxxxxxxxxxx;
gmljp2@xxxxxxxxxxx
Cc: 'Martin Daly'; ows-3-swe@xxxxxxxxxxxxxxxxxx
Subject: RE: [GMLJP2] NetCDF <--> GML/JPEG2000

Sean et al.

First of all, I'm coming from outside the GML-JP2 group, so I'm not sure
about what's actually going on with that activity. However, within the
OGC Sensor Web Enablement (SWE) activity, there is an effort going on
that may be relevant to both your question and the JP2 work.

As you may know, in earlier SWE efforts, an Observation & Measurement
schema was developed, mostly by Simon Cox, to support sensor
observations. He built this upon GML, added some extensions to support
basic scalar types, and developed application schemas for observation
and for measurements. From what I understand from Ron Lake, about 70% of
this has been incorporated into core GML.

The challenge that we have had in getting acceptance of O&M outside of
simple in-situ sensors has been a lack of decent support for
high-volume data sets (e.g. 5000x5000 resolution images, video,
500x500x100 resolution grids, 40,000 particles, etc.) that are common in
the remote sensing and modeling communities. In SWE, we are interested
in supporting those data sets, as well as real-time streaming data. I
believe GML is certainly starting to address these issues, but is
perhaps not fully mature yet in this regard.

Within SensorML development and particularly in response to
harmonization with TransducerML (streaming data), we have played around
with some encodings that are flexible and efficient for large ASCII or
binary data sets or streaming data. The key has been to describe the
data's reference system (and I don't just mean spatial or temporal
reference system) and structure within one XML element (class) and then
to provide the data encoding and data values separately within a
_DataProvider class. This ends up being very similar to a somewhat more limited data block structure defined in GML, but one that I haven't seen exercised much.

For example, I might describe within the data reference system that the
data coming from an aircraft sensor will output data clusters composed
of time, latitude, longitude, altitude, sensor mode, CO2, NOx, SO2,
etc. This essentially defines a tuple of data that might be updated at a rate of, say, 5 times per second. The reference description would in
essence define the order of these components, the type of measurement
(Quantity, Category, Boolean, etc), and their units of measure. This is
similar to defining a CRS, but is not confined to only spatial and
temporal coordinates. We are also describing arrays of values using this
scheme in order to support images, grids, volumes, etc.

The actual values of the observations can then be provided as an
XML-based tuple data block, binary images, video segments, flat-file
binary data, or a TML data stream. The values could be inline or linked
by URI to an online resource or perhaps a mime attachment with a SOAP
message.  It would seem that this would also work within the JPEG2000
structure.
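The "describe the structure once, ship the values separately" pattern above can be sketched in a few lines. The field names, types, and units below are invented for illustration (this is not the SensorML or TML schema), but the decoding step mirrors the idea: a bare data block is meaningless without the one-time reference description.

```python
# One-time description of the record structure - order, measurement
# type, and unit of each component (hypothetical aircraft-sensor tuple):
FIELDS = [
    ("time", float, "s"),
    ("lat",  float, "deg"),
    ("lon",  float, "deg"),
    ("alt",  float, "m"),
    ("mode", str,   None),    # a Category-style component, no unit
    ("co2",  float, "ppm"),
]

def decode_block(text, fields=FIELDS, tuple_sep="\n", token_sep=","):
    """Turn a compact text data block into a list of dict records,
    using the field description to assign names and types."""
    records = []
    for line in text.strip().split(tuple_sep):
        tokens = line.split(token_sep)
        if len(tokens) != len(fields):
            raise ValueError("tuple arity does not match description")
        records.append({name: typ(tok.strip())
                        for (name, typ, _unit), tok in zip(fields, tokens)})
    return records

# The values arrive separately as a bare block (inline, linked by URI,
# or as an attachment); only the description above gives them meaning:
block = ("0.0, 34.7, -86.6, 1200.0, survey, 410.2\n"
         "0.2, 34.7, -86.6, 1198.5, survey, 410.6")
recs = decode_block(block)
print(recs[0]["mode"], recs[1]["co2"])
# → survey 410.6
```

The same decoder works whether the block sits inline in the XML, behind a URI, or in a message attachment, which is what makes the separation attractive for high-volume data.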

As of 2 weeks ago at the OWS3 Kickoff meeting, we have initiated an
accelerated effort to define a Common Observation model that combines
lessons-learned from GML, O&M, SensorML, and TML and provides a single
schema capable of efficiently supporting simple observations, as well
as images, grids, streams, etc. In addition, the Observation model would
point to sensor descriptions (in SensorML) as well as procedures or
process chains describing the lineage of the data. More than likely, the
Observation model and encoding will be based somewhat on the current
Observation model used in O&M, possibly built upon GML as is the current O&M. We are also looking at possible enveloping mechanisms for combining
the Observation "header" description with binary attachments, using
protocols such as SOAP messaging with attachments, etc.

Those involved in this effort have scheduled another face-to-face
meeting in Huntsville, AL in 2 weeks. We should have the Common
Observation model pretty well defined at that time and could make that
available for review by the GML and GML-JP2 groups.

Seems like there is some possible synergy here between the GML-JP2 and
the SWE Common Observation activity.

I understand that even after this long email, I didn't address all of
your questions.

Thanks,
Mike Botts


