Re: [galeon] plan for establishing CF-netCDF as an OGC standard

Ben, David, Bryan, Carl, et al.

Great discussions. A satisfying part of this discussion is the return to the foundations: what are the characteristics of a standards process that serves us well?

Regarding Bryan's point:

   The problem is either: we have community participation in
   *standardising* or we don't. If we don't, then what is the
   standardisation *process* for? If one doesn't envisage allowing the
   community to modify the candidates, then why have a community process?

Aren't the baby's head and shoulders heading out with the bathwater in this outlook? Overwhelmingly, the most vital element of the standards process is simply to publicly acknowledge and advertise that agreement has been reached. This is what guides new projects to doing <whatever> in the same (compatible) way. Boring. But the surest path to success. In that sense there is huge potential value in broader adoption of netCDF/CF/DAP even if no changes are made at all. And when, as Carl Reed says, the standards process serves to clarify and improve documentation, that is win-win value.

Regarding this ticklish issue of "profiles" -- well, it is an issue that we have bounced around in CF-internal discussions and (because we are all overly busy volunteers and do not meet as often as CF deserves) we have never sorted it out properly. In the current discussion we are looking at netCDF/CF/DAP as a package -- a bundle that provides both semantics and syntax. Maximum interoperability will be achieved if we can identify a core of CF that is viewed as indispensable and so designate it for standardization. Arguably this outcome could and should be a part of the standards process. (Recent demonstrations of CF server-side functionality show how we can enable emerging high-end CF datasets (e.g. gridspec) to be interoperable with a simpler core CF interoperability standard. libCF is also paving the road in this direction and, we hope, will provide a future reference library for doing so.)

   - Steve

P.S. (Opinion) When standards processes are working well they will be mostly boring. Too much technical excitement inside of standards committees is a symptom that they are no longer working on standardization; they are working on new technology. R&D should be scrupulously minimized in the standards process -- a lesson that we learn over and over, but it doesn't seem to stick.

==========================================

Carl Reed wrote:
All -

A couple of thoughts (points of view)

First, will the OGC Members change the document? By way of example, we did not change core KML when KML was submitted. Why? Huge number of implementations and KML files. However, we did, after consensus discussion, enhance the document. We added more informative sections, added more words on coordinate reference systems (the discussion of which was absent in the original KML submission), and added new elements to enable extensibility. So yes, we changed the document and KML itself, but did so in a way that allowed all existing applications to keep right on running. The only real change that the implementation community saw was the location of the schemas and the ability to add extensions. The value proposition for bringing KML into the OGC is that KML moved from being a single-vendor de facto standard to being an international standard "owned" by the community and not one vendor.

That said, there is the possibility that KML 3.0 will have new OGC-related elements, such as additional geometry types, and other changes that will align KML with other OGC and ISO standards. But that is up to the community to decide. Further, even Google will tell you that major revisions (as shown by the major version number) will break backwards compatibility. They have stated so publicly.

So, any fear that the OGC will change the current NetCDF/CF specification in a manner that will break backwards compatibility is unfounded. However, I suspect the document will change, as more eyes on any standard tend to uncover ambiguities, suggest additional examples and use cases, and propose other changes that enhance the value of the standard but do not break it.

Of course, many of the users of NetCDF/CF are also OGC members . . .

Second comment: Proliferation of profiles. I agree that having too many profiles of a given standard reduces interoperability. Now, in terms of the work of the OGC, a profile is a restricted subset of an existing standard (in ISO definition terms, type 1). The OGC Members have actually generated very few profiles. Typically, the Members and the community work on application schemas (in ISO definition terms, type 2). An application schema extends, say, GML with new elements, typically through the inclusion of additional namespaces. We have learned over the last several years that well-considered application schemas grounded in community consensus content models significantly increase interoperability within and often between communities of interest (domains). There are now numerous examples of this, such as CityGML, GeoSciML, and more recently AIXM/GML.

As to WCS, if there are issues, please bring these issues into the OGC process so that they are addressed!! At the end of the day, if a server cannot provide this information, there is not much that an OGC interface standard can do to solve the issue of content payload response size! Also, consider that the WFS interface definition does support the ability to provide the client information about the response size via the numberOfFeatures attribute. In this way a client may obtain a count of the number of features that a query would return without having to incur the cost of transmitting the entire result set.
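
As a concrete illustration, here is a minimal sketch in Python -- using a purely hypothetical WFS 1.1.0 endpoint and feature type name -- of how a client asks for that count up front with resultType=hits instead of fetching the features themselves:

    # Minimal sketch: ask a WFS 1.1.0 server how many features a query would
    # return, without transferring the features themselves.  The endpoint and
    # feature type name below are hypothetical.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": "ns:ObservationPoints",  # hypothetical feature type
        "resultType": "hits",                # count only, no feature payload
    }
    url = "http://example.org/wfs?" + urlencode(params)  # hypothetical endpoint

    with urlopen(url) as resp:
        root = ET.fromstring(resp.read())

    # The server responds with an empty wfs:FeatureCollection whose
    # numberOfFeatures attribute carries the count.
    print("features matched:", root.attrib.get("numberOfFeatures"))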

Finally, WRT to NetCDF/CF, OGC standards work is driven by the Members and the communities of interest they represent.

Regards

Carl


----- Original Message ----- From: "Bryan Lawrence" <bryan.lawrence@xxxxxxxxxx>
To: "David Arctur" <darctur@xxxxxxxxxxxxxxxxxx>
Cc: "Unidata GALEON" <galeon@xxxxxxxxxxxxxxxx>; "Woolf, A (Andrew)" <andrew.woolf@xxxxxxxxxx>; "Ben Domenico" <Ben@xxxxxxxxxxxxxxxx>; "Unidata Techies" <techies@xxxxxxxxxxxxxxxx>; "Mohan Ramamurthy" <mohan@xxxxxxxx>; "Meg McClellan" <mmcclell@xxxxxxxx>; "Carl Reed" <creed@xxxxxxxxxxxxxxxxxx>; "George Percivall" <gpercivall@xxxxxxxxxxxxxxxxxx>; "Jeff deLaBeaujardiere" <Jeff.deLaBeaujardiere@xxxxxxxx>; "Steve Hankin" <steven.c.hankin@xxxxxxxx>
Sent: Monday, July 20, 2009 5:06 AM
Subject: Re: [galeon] plan for establishing CF-netCDF as an OGC standard


Hi David

   picture of the future of WCS, and what was horrific about it. Btw, WFS
   has the same deficiency as WCS when it comes to predicting how big the
   response will be; that's a function-point I'd sure like to see in
   those web services.

... and opendap has the same problem ... except that, if you know enough to use an opendap interface, you know enough to calculate the size of the response ... but yes, I think this is a big issue!
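
(A rough sketch of that calculation, assuming the netCDF4-python client and a purely hypothetical OPeNDAP URL and variable name; opening the dataset fetches only the metadata, from which the uncompressed response size falls out directly.)

    # Rough sketch: estimate how large a full read of one variable would be,
    # using only the metadata that the OPeNDAP service exposes.  The URL and
    # variable name are hypothetical.
    from netCDF4 import Dataset

    url = "http://example.org/thredds/dodsC/model/output.nc"  # hypothetical
    ds = Dataset(url)                        # opens the dataset, metadata only

    var = ds.variables["temperature"]        # hypothetical variable name
    n_values = 1
    for dim_len in var.shape:                # dimension lengths from the DDS
        n_values *= dim_len
    approx_bytes = n_values * var.dtype.itemsize

    print("temperature: ~%.1f MB if requested in full" % (approx_bytes / 1e6))
    ds.close()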

   standards. OPeNDAP and CF/netCDF already qualify as mature, effective
   standards, so I wouldn't recommend changing them just to bring them
   into OGC. .... As to this being "just publicity" as Bryan suggests,
   that seems to me to disregard the value of open community participation
   and cross-fertilization of ideas that take place within the OGC
   community and processes.

The problem is either: we have community participation in *standardising* or we don't. If we don't, then what is the standardisation *process* for? If one doesn't envisage allowing the community to modify the candidates, then why have a community process?

I think it's important for ALL standardisation communities to recognise well characterised and governed "standards" (whatever that means) from other communities, rather than take
on managing everything for themselves.

So, to reiterate my point, which was obviously less clear than it ought to have been (given some private email I have received), and to give some context to where I am coming from:

- I clearly believe OGC standards and procedures have lots to add for some tasks, but
- I think that NetCDF is well characterised, and via its dependency on HDF (at V4.0) rather difficult to cast in stone as something that should be *defined* (standardised) by an OGC process.
- I think the CF *interface* could be decoupled from its dependency on netcdf and be a candidate for OGC governance.
- I think that a host of OGC protocols would benefit from allowing xlink out to cf/netcdf binary content *whether or not OGC governs the definition of cf/netcdf*.

   Perhaps you're concerned about the potential for reduced
   control over the interface definition, but that's not what will happen
   -- you won't lose control over it. There may be variations and
   profiles for special applications that emerge, but that wouldn't
   require you to change what already works.

Hmmm. I think history demonstrates pretty conclusively that profile proliferation reduces interoperability to the point that (depending on application) it's notional not rational. I would be concerned if we profiled CF in the same way as, for example, one NEEDS to profile GML (which is not to say I don't believe in GML for some applications; we're heavily into doing exactly that on other fronts) ... but really, we have to think of profile proliferation as standard proliferation ...

   I apologize immediately if I've missed or misrepresented any of the
   issues with CF/netCDF or OPeNDAP. Please take this at face value. At
   the end of the day, I just want to see stronger relationships and
   stronger technology. And I think the relationships, personal and
   institutional, matter more than the technology, because having better
   relationships will lead to better solutions, whatever technology is
   chosen.

I'm sure we're all on the same page here ... and we just need to spell out the details to each other.

Most folk know I'm in favour of exploring how an OGC relationship can help CF. What I'm not in favour of is function creep so that we end up with OGC taking on HDF and the netcdf bits/bytes storage etc. I jumped in here for precisely that reason, and that reason alone. I may have muddied the waters with some other stuff ...

Cheers
Bryan

p.s. we can have the WCS/WFS discussion another day, I don't have time to do it now ...

--
Bryan Lawrence
Director of Environmental Archival and Associated Research
(NCAS/British Atmospheric Data Centre and NCEO/NERC NEODC)
STFC, Rutherford Appleton Laboratory
Phone +44 1235 445012; Fax ... 5848;
Web: home.badc.rl.ac.uk/lawrence
