Re: [galeon] plan for establishing CF-netCDF as an OGC standard


Hi all,

Needless to say, I am overwhelmed by the wealth of good ideas that have
come forth in response to my brief draft plan for proceeding with this
standardization effort.  Rather than respond to each of them in detail, I've
read through them carefully and thus far haven't found anything in direct
conflict with the original plan.  Below I've tried to distill out the major
issues that have come to the fore and to give my take on each of them.

My overall sense is that, while there are several strong suggestions for
doing things (netCDF, CF, API, DAP of OPeNDAP) in a different order or
perhaps leaving some of these items off the list entirely, I have not yet
seen any arguments that persuade me to start with anything other than a ...

*NetCDF core standard.*

The netCDF classic spec as submitted to NASA is still the starting point.
 In my view, that really is the foundation for what we are attempting to
standardize -- for the same reasons we started with that spec for the NASA
standardization effort.  This really is the heart of the matter and forms a
solid foundation regardless of which options are taken for the ...

*Next step(s).*

There are some good arguments for moving forward with the APIs as extensions
to the core as the second step in the process.  On the other hand, there
appears to be consensus on the need for a more formal standardization of the
CF conventions -- although some would argue the CF standard should be
independent of netCDF.  Which direction to take in this area depends partly
on whether and where we can find resources to work on any of these
extensions.  It also depends on how the CF community reacts; we may not find
as solid a consensus there.

In any case, it's important that we proceed in a set of smaller,
manageable, achievable steps.  The core and extensions model serves us well
in that regard.

*Who makes changes?*

It's hard to imagine how changing the standards document (in a manner
inconsistent with the existing community of practice) could offset the
massive momentum of the existing, substantial and stable reference
implementations and the widespread community of use.  Practically speaking,
a non-exclusive right to change the document does not seem to be a major
concern.

*Access protocol: WCS, WFS, WMS, SOS, DAP*

Once we agree on a specification that carefully and completely describes
what is being delivered, I've heard good descriptions of how the CF-netCDF
payload fits into any of these data access protocols.  My personal plan
is to continue to focus on WCS where we started; Andrew and Bryan and the
BADC crew have made great strides in showing how these datasets fit into the
WFS realm;  the Oceans I.E. efforts have a place for an "out of band"
payload in SOS; the newly formed OGC Meteorology DWG is focusing on WMS; and
of course OPeNDAP (DODS at that time) was originally envisioned as a way to
subset and access netCDF datasets on remote servers.  This is all to the
good as long as we accomplish what we set out to do in terms of netCDF, CF
and the APIs.  I don't see that this group would benefit from trying to decide
definitively among those approaches at this time.

Please let me know if I've missed any of the key issues.

-- Ben



On Mon, Jul 20, 2009 at 9:52 AM, Carl Reed <creed@xxxxxxxxxxxxxxxxxx> wrote:

> All -
>
> A couple of thoughts (points of view)
>
> First, will the OGC Members change the document? By way of example, we
> did not change core KML when KML was submitted. Why? The huge number of
> existing implementations and KML files. However, we did, after consensus
> discussion, enhance the document. We added more informative sections,
> added more words on coordinate reference systems (the discussion of which
> was absent in the original KML submission), and added new elements to
> enable extensibility. So yes, we changed the document and KML itself, but
> did so in a way that allowed all existing applications to keep right on
> running. The only real change that the implementation community saw was
> the location of the schemas and the ability to add extensions. The value
> proposition for bringing KML into the OGC is that KML moved from being a
> single-vendor de facto standard to being an international standard "owned"
> by the community rather than by one vendor.
>
> That said, there is the possibility that KML 3.0 will have new OGC-related
> elements, such as additional geometry types, and other changes that will
> align KML with other OGC and ISO standards. But that is up to the community
> to decide. Further, even Google will tell you that major revisions (as
> indicated by the major version number) will break backwards compatibility.
> They have stated so publicly.
>
> So, any fear that the OGC will change the current NetCDF/CF specification
> in a manner that will break backwards compatibility is unfounded. However,
> I suspect the document will change, as more eyes on any standard tend to
> uncover ambiguities and to suggest additional examples, use cases, and
> other changes that enhance the value of the standard without breaking it.
>
> Of course, many of the users of NetCDF/CF are also OGC members . . .
>
> Second comment: Proliferation of profiles. I agree that having too many
> profiles of a given standard reduces interoperability. Now, in terms of the
> work of the OGC, a profile is a restricted subset of an existing standard
> (in ISO definition terms, type 1). The OGC Members have actually generated
> very few profiles. Typically, the Members and the community work on
> application schemas (in ISO definition terms, type 2). An application
> schema extends, say, GML with new elements, typically through the inclusion
> of additional namespaces. We have learned over the last several years that
> well-considered application schemas grounded in community consensus content
> models significantly increase interoperability within and often between
> communities of interest (domains). There are now numerous examples of this,
> such as CityGML, GeoSciML, and more recently AIXM/GML.
>
> As to WCS, if there are issues, please bring them into the OGC process so
> that they are addressed! At the end of the day, if a server cannot provide
> this information, there is not much that an OGC interface standard can do
> to solve the issue of content payload response size. Also, consider that
> the WFS interface definition does support giving the client information
> about the response size via the numberOfFeatures attribute: a client may
> obtain a count of the number of features that a query would return without
> having to incur the cost of transmitting the entire result set.
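>
> Purely as an illustrative sketch, a client-side request for that count
> might look like the following Python; the endpoint and feature type name
> are made up, while resultType="hits" and the numberOfFeatures attribute
> come from the WFS 1.1.0 GetFeature interface:
>
>     # Ask the server for a hit count only (no feature payload), then read
>     # numberOfFeatures from the FeatureCollection envelope.
>     import urllib.parse
>     import urllib.request
>     import xml.etree.ElementTree as ET
>
>     base = "http://example.org/wfs"   # hypothetical endpoint
>     params = {
>         "service": "WFS",
>         "version": "1.1.0",
>         "request": "GetFeature",
>         "typeName": "cf:GridSeries",  # hypothetical feature type
>         "resultType": "hits",         # count only, no features returned
>     }
>     url = base + "?" + urllib.parse.urlencode(params)
>     with urllib.request.urlopen(url) as resp:
>         root = ET.parse(resp).getroot()
>     print("numberOfFeatures =", root.attrib.get("numberOfFeatures"))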
>
> Finally, with regard to NetCDF/CF, OGC standards work is driven by the
> Members and the communities of interest they represent.
>
> Regards
>
> Carl
>
>
> ----- Original Message ----- From: "Bryan Lawrence" <
> bryan.lawrence@xxxxxxxxxx>
> To: "David Arctur" <darctur@xxxxxxxxxxxxxxxxxx>
> Cc: "Unidata GALEON" <galeon@xxxxxxxxxxxxxxxx>; "Woolf, A (Andrew)" <
> andrew.woolf@xxxxxxxxxx>; "Ben Domenico" <Ben@xxxxxxxxxxxxxxxx>; "Unidata
> Techies" <techies@xxxxxxxxxxxxxxxx>; "Mohan Ramamurthy" <mohan@xxxxxxxx>;
> "Meg McClellan" <mmcclell@xxxxxxxx>; "Carl Reed" <creed@xxxxxxxxxxxxxxxxxx>;
> "George Percivall" <gpercivall@xxxxxxxxxxxxxxxxxx>; "Jeff
> deLaBeaujardiere" <Jeff.deLaBeaujardiere@xxxxxxxx>; "Steve Hankin" <
> steven.c.hankin@xxxxxxxx>
> Sent: Monday, July 20, 2009 5:06 AM
> Subject: Re: [galeon] plan for establishing CF-netCDF as an OGC standard
>
>
>
>  Hi David
>>
>>  picture of the future of WCS, and what was horrific about it. Btw, WFS
>>> has the same deficiency as WCS when it comes to predicting how big the
>>> response will be; that's a function-point I'd sure like to see in
>>> those web services.
>>>
>>
>> ... and opendap has the same problem ... except that if you know enough
>> to use an opendap interface, you know enough to calculate the size of the
>> response ... but yes, I think this is a big issue!
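>>
>> To make that concrete with a rough sketch (the variable and its shape
>> below are invented; only the arithmetic matters): the DDS gives the type
>> and dimensions of each variable, so the uncompressed response size is a
>> simple multiplication.
>>
>>     # Hypothetical DDS entry: Float32 ta[time = 8760][lat = 145][lon = 192]
>>     time_n, lat_n, lon_n = 8760, 145, 192  # dimension sizes read from the DDS
>>     bytes_per_value = 4                    # Float32
>>     size_bytes = time_n * lat_n * lon_n * bytes_per_value
>>     print("~%.0f MB before any constraint is applied" % (size_bytes / 1e6))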
>>
>>  standards. OPeNDAP and CF/netCDF already qualify as mature, effective
>>> standards, so I wouldn't recommend changing them just to bring them
>>> into OGC.  .... As to this being "just publicity" as Bryan suggests, that
>>> seems to me
>>> to disregard the value of open community participation and cross-
>>> fertilization of ideas that take place within the OGC community and
>>> processes.
>>>
>>
>> The problem is this: either we have community participation in
>> *standardising* or we don't. If we don't, then what is the standardisation
>> *process* for? If one doesn't envisage allowing the community to modify
>> the candidates, then why have a community process?
>>
>> I think it's important for ALL standardisation communities to recognise
>> well characterised and governed "standards" (whatever that means) from other
>> communities, rather than take
>> on managing everything for themselves.
>>
>> So, to reiterate my point which was obviously less clear than it ought to
>> have been (given some private email I have received), and to give some
>> context to where I am coming from.
>>
>> - I clearly believe OGC standards and procedures have lots to add for some
>> tasks, but
>> - I think that NetCDF is well characterised, and via its dependency on
>> HDF (at V4.0) rather difficult to cast in stone as something that should be
>> *defined* (standardised) by an OGC process.
>> - I think the CF *interface* could be decoupled from its dependency on
>> netcdf and become a candidate for OGC governance.
>> - I think that a host of OGC protocols would benefit from allowing xlink
>> out to cf/netcdf binary content *whether or not OGC governs the definition
>> of cf/netcdf* (a sketch of what I mean follows below).
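>>
>> The element name, URL and media type in this sketch are invented (not
>> taken from any particular OGC schema); it just shows the shape of a
>> by-reference payload:
>>
>>     # Build an XML element that points at a CF-netCDF payload by reference
>>     # rather than embedding the bytes in the response document.
>>     import xml.etree.ElementTree as ET
>>
>>     XLINK = "http://www.w3.org/1999/xlink"
>>     ET.register_namespace("xlink", XLINK)
>>     result = ET.Element("result")
>>     result.set("{%s}href" % XLINK, "http://example.org/data/tas_monthly.nc")
>>     result.set("{%s}role" % XLINK, "application/x-netcdf")
>>     print(ET.tostring(result).decode())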
>>
>>   Perhaps you're concerned about the potential for reduced
>>> control over the interface definition, but that's not what will happen
>>> -- you won't lose control over it. There may be variations and
>>> profiles for special applications that emerge, but that wouldn't
>>> require you to change what already works.
>>>
>>
>> Hmmm. I think history demonstrates pretty conclusively that profile
>> proliferation reduces interoperability to the point that (depending on
>> application) it's notional, not rational. I would be concerned if we
>> profiled CF in the same way as, for example, one NEEDS to profile GML
>> (which is not to say I don't believe in GML for some applications; we're
>> heavily into doing exactly that on other fronts) ... but really, we have
>> to think of profile proliferation as standard proliferation ...
>>
>>  I apologize immediately if I've missed or misrepresented any of the
>>> issues with CF/netCDF or OPeNDAP. Please take this at face value. At
>>> the end of the day, I just want to see stronger relationships and
>>> stronger technology. And I think the relationships, personal and
>>> institutional, matter more than the technology, because having better
>>> relationships will lead to better solutions, whatever technology is
>>> chosen.
>>>
>>
>> I'm sure we're all on the same page here ... and we just need to spell out
>> the details to each other.
>>
>> Most folk know I'm in favour of exploring how an OGC relationship can help
>> CF. What I'm not in favour of is function creep so that we end up with OGC
>> taking on HDF and the netcdf bits/bytes storage etc. I jumped in here for
>> precisely that reason, and that reason alone. I may have muddied the waters
>> with some other stuff ...
>>
>> Cheers
>> Bryan
>>
>> p.s. we can have the WCS/WFS discussion another  day, I don't have time to
>> do it now ...
>>
>> --
>> Bryan Lawrence
>> Director of Environmental Archival and Associated Research
>> (NCAS/British Atmospheric Data Centre and NCEO/NERC NEODC)
>> STFC, Rutherford Appleton Laboratory
>> Phone +44 1235 445012; Fax ... 5848;
>> Web: home.badc.rl.ac.uk/lawrence
>>
>
>