Overview of Zarr Support in netCDF-C

Beginning with netCDF version 4.8.0, the Unidata netCDF group has extended the netCDF-C library to provide access to cloud storage (e.g. Amazon S3 [1]) by providing a mapping from a subset of the full netCDF Enhanced (aka netCDF-4) data model to a variant of the Zarr [4] data model that already has mappings to key-value pair cloud storage systems. The netCDF version of this storage format is called NCZarr [2].

The NCZarr Data Model

NCZarr uses a data model [2] that is, by design, similar to, but not identical with, the Zarr Version 2 Specification [4]. Briefly, the data model supported by NCZarr is netCDF-4 minus the user-defined types and the String type. As with netCDF-4, it supports chunking. Eventually it will also support filters in a manner similar to the way filters are supported in netCDF-4.

Specifically, the model supports the following.

  • "Atomic" types: char, byte, ubyte, short, ushort, int, uint, int64, uint64.
  • Shared (named) dimensions
  • Attributes with specified types -- both global and per-variable
  • Chunking
  • Fill values
  • Groups
  • N-Dimensional variables
  • Per-variable endianness (big or little)

With respect to full netCDF-4, the following concepts are currently unsupported.

  • String type
  • User-defined types (enum, opaque, VLEN, and Compound)
  • Unlimited dimensions

Enabling NCZarr Support

NCZarr support is enabled if the --enable-nczarr option is used with './configure'. If NCZarr support is enabled, then a usable version of libcurl must be specified using the LDFLAGS environment variable (similar to the way that the HDF5 libraries are referenced). Refer to the installation manual for details. NCZarr support can be disabled using --disable-nczarr.
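
For example, a configure invocation might look like the following sketch; the library path here is an assumption, so adjust the -L directory for your system.

    ./configure --enable-nczarr LDFLAGS="-L/usr/local/lib -lcurl"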

Accessing Data Using the NCZarr Protocol

In order to access an NCZarr data source through the netCDF API, the file name normally used is replaced with a URL with a specific format.

URL Format

The URL has the usual scheme://host:port/path?query#fragment format. There are some details that are important.

  • Scheme: this should be https, s3, or file. The s3 scheme is equivalent to https plus setting mode=nczarr (see below). Specifying file is mostly used for testing.
  • Host: Amazon S3 defines two forms: Virtual and Path.
    • Virtual: the host includes the bucket name as in
      bucket.s3.<region>.amazonaws.com
    • Path: the host does not include the bucket name, but rather the bucket name is the first segment of the path. For example s3.<region>.amazonaws.com/bucket
    • Other: It is possible to use other non-Amazon cloud storage, but that is cloud library dependent.
  • Query: currently not used.
  • Fragment: the fragment is of the form key=value&key=value&.... Depending on the key, the =value part may be left out and some default value will be used.
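
For illustration, here are some URLs in the accepted forms; the bucket, region, and path names are hypothetical.

    https://mybucket.s3.us-east-1.amazonaws.com/datasets/data.nzf#mode=nczarr,s3
    https://s3.us-east-1.amazonaws.com/mybucket/datasets/data.nzf#mode=nczarr,s3
    s3://mybucket/datasets/data.nzf
    file:///home/user/dataset.nzf#mode=nczarr,nzf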

Client Parameters

The fragment part of a URL is used to specify which data format is to be used, as well as additional controls for that format. For NCZarr support, the following key=value pairs are allowed.

  • mode=nczarr|zarr|s3|nz4|nzf... -- The mode key specifies the particular format to be used by the netCDF-C library for interpreting the dataset specified by the URL. Using mode=nczarr causes the URL to be interpreted as a reference to a dataset stored in NCZarr format. The modes s3, nz4, and nzf tell the library which storage driver to use. The s3 mode is the default and indicates using Amazon S3 or some equivalent. The other two, nz4 and nzf, are again mostly for testing. The zarr mode tells the library to use NCZarr, but to restrict its operation to pure Zarr Version 2 datasets.
  • log=<output-stream> -- this control turns on logging output, which is useful for debugging and testing. If just log is used, it is equivalent to log=stderr.
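
As a minimal sketch of opening such a URL through the standard netCDF-C API (the bucket, region, and path below are hypothetical):

    #include <stdio.h>
    #include <netcdf.h>

    int main(void) {
        int ncid;
        /* The URL fragment selects the NCZarr format and the s3 storage driver. */
        int stat = nc_open(
            "https://s3.us-east-1.amazonaws.com/mybucket/data.nzf#mode=nczarr,s3",
            NC_NOWRITE, &ncid);
        if (stat != NC_NOERR) {
            fprintf(stderr, "nc_open failed: %s\n", nc_strerror(stat));
            return 1;
        }
        /* ... use the usual netCDF inquiry and read calls here ... */
        nc_close(ncid);
        return 0;
    }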

NCZarr Map Implementation

Internally, the NCZarr implementation has a map abstraction that allows different storage formats to be used. This is closely patterned on the same approach used in the Python Zarr implementation, which relies on the Python MutableMapping [3] class. In NCZarr, the corresponding type is called zmap.

The zmap model is a set of keys where each key maps to an object that can hold arbitrary data. The keys are assumed to have the following BNF grammar.

key:   '/' segment
     | key '/' segment
     ;

This key structure induces a tree structure where each segment matches a node in the tree. This key/tree duality deliberately matches that of a typical file system path in, for example, Linux. The key '/' is the root of the tree.
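
For example, the hypothetical key /g1/v1/.zarray consists of the segments g1, v1, and .zarray, and matches this chain of nodes in the induced tree:

    /                  (root)
    /g1                (first segment)
    /g1/v1             (second segment)
    /g1/v1/.zarray     (third segment)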

Datasets

Within the key-induced tree, each dataset (in the netCDF sense) has a root that is specified by a specific key. All objects making up the dataset (see the section on Zarr vs NCZarr) reside in objects (keys) below that dataset root key.

One restriction is that datasets cannot be nested: no dataset root key can be a prefix of another dataset root key.

Zmap Implementations

The primary zmap implementation is s3 (i.e. mode=nczarr,s3), which indicates that Amazon S3 cloud storage is to be used. Other storage formats use a structured netCDF-4 file (mode=nczarr,nz4) or a directory tree (mode=nczarr,nzf). The latter two are used mostly for debugging and testing. However, the nzf format is important because it is intended to match a corresponding storage format used by the Python Zarr implementation. Hence it should serve to provide interoperability between NCZarr and Python Zarr.

NCZarr versus Pure Zarr

The NCZarr format extends the pure Zarr format by adding extra objects such as .nczarr and .nczvar. It is possible to suppress the use of these extensions so that the netCDF library can read and write a pure Zarr formatted dataset. This is controlled by using the mode=nczarr,zarr combination.

Notes on Debugging NCZarr Access

The NCZarr support has a logging facility. Turning on this logging can sometimes give important information. Logging can be enabled by using the client parameter log or log=filename, or by setting the environment variable NCLOGGING. The first case sends log output to standard error and the second sends log output to the specified file. Setting the environment variable is equivalent to log.

Amazon S3 Storage

The Amazon AWS S3 storage driver currently uses the Amazon AWS Software Development Kit for C++ (aws-sdk-cpp). In order to use it, the client must provide some configuration information. Specifically, the ~/.aws/config file should contain something like this.

[default]
output = json
aws_access_key_id=XXXX...
aws_secret_access_key=YYYY...

Addressing Style

The notion of "addressing style" may need some expansion. Amazon S3 accepts two forms for specifying the endpoint for accessing the data.

  1. Virtual -- the virtual addressing style places the bucket in the host part of a URL. For example:
    https://<bucketname>.s3.<region>.amazonaws.com/<path>
  2. Path -- the path addressing style places the bucket at the front of the path part of a URL. For example:
    https://s3.<region>.amazonaws.com/<bucketname>/<path>

The NCZarr code will accept either form, although internally, it is standardized on path style.
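
For example, the virtual-style URL https://mybucket.s3.us-east-1.amazonaws.com/data.nzf (bucket and region hypothetical) is treated internally as the path-style form https://s3.us-east-1.amazonaws.com/mybucket/data.nzf.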

Zarr vs NCZarr

Data Model

The NCZarr storage format is almost identical to that of the standard Zarr version 2 format. The data model differs as follows.

  1. Zarr supports filters -- NCZarr as yet does not
  2. Zarr only supports anonymous dimensions -- NCZarr supports only shared (named) dimensions.
  3. Zarr attributes are untyped -- or perhaps more correctly characterized as of type string.

Storage Format

Consider both NCZarr and Zarr, and assume S3 notions of bucket and object. In both systems, Groups and Variables (Arrays in Zarr) map to S3 objects. Containment is modelled using the fact that the container's key is a prefix of the variable's key. So, for example, if variable v1 is contained in top-level group g1 -- /g1 -- then the key for v1 is /g1/v1. Additional information is stored in special objects whose names start with ".z". In Zarr, the following special objects exist.

  1. Information about a group is kept in a special object named .zgroup; so for example the object /g1/.zgroup.
  2. Information about an array is kept as a special object named .zarray; so for example the object /g1/v1/.zarray.
  3. Group-level attributes and variable-level attributes are stored in a special object named .zattr; so for example the objects /g1/.zattr and /g1/v1/.zattr.
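
Putting these together, a pure Zarr dataset holding a single array v1 inside a single group g1 (names hypothetical) would comprise object keys like the following:

    /.zgroup
    /g1/.zgroup
    /g1/.zattr
    /g1/v1/.zarray
    /g1/v1/.zattr
    /g1/v1/0.0        (one chunk of array data)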

The NCZarr format uses the same group and variable (array) objects as Zarr. It also uses the Zarr special .zXXX objects.

However, NCZarr adds some additional special objects.

  1. .NCZarr -- this is in the top level group -- key /.NCZarr. It is in effect the "superblock" for the dataset and contains any netCDF specific dataset level information. It is also used to verify that a given key is the root of a dataset.
  2. .nczgroup -- this is a parallel object to .zgroup and contains any netCDF specific group information. Specifically it contains the following.

    • dims -- the names and sizes of the shared dimensions defined in this group.
    • vars -- the names of the variables defined in this group.
    • groups -- the names of the sub-groups defined in this group.

    These lists allow walking the NCZarr dataset without having to use the potentially costly S3 list operation.

  3. .nczvar -- this is a parallel object to .zarray and contains netCDF specific information. Specifically it contains the following.
    • dimrefs -- the names of the shared dimensions referenced by the variable.
    • storage -- indicates whether the variable is chunked or contiguous in the netCDF sense.
  4. .nczattr -- this is a parallel object to the .zattr objects and stores the attribute type information.
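
The exact encoding of these objects is defined by the NCZarr specification [2]; purely as an illustrative sketch (the layout below is an assumption, not the normative format), a .nczgroup object for a group defining one dimension, one variable, and one sub-group might hold JSON along these lines:

    {
      "dims":   { "lat": 10 },
      "vars":   [ "v1" ],
      "groups": [ "g2" ]
    }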

Translation

With some constraints, it is possible for an NCZarr library to read Zarr and for a Zarr library to read the NCZarr format.

The latter case, Zarr reading NCZarr, is possible if the Zarr library is willing to ignore objects whose names it does not recognize: specifically anything beginning with .ncz.

The former case, NCZarr reading Zarr, is also possible if the NCZarr library can simulate or infer the contents of the missing .nczXXX objects. As a rule this can be done as follows.

  1. .nczgroup -- The list of contained variables and sub-groups can be computed using the S3 list operation to list the keys "contained" in the key for a group. Occurrences of .zgroup, .zattr, and .zarray are used to infer the keys for the contained groups, attribute sets, and arrays (variables). Constructing the set of "shared dimensions" is carried out by walking all the variables in the whole dataset and collecting the set of unique integer shapes for the variables. For each such dimension length, a top-level dimension is created named ".zdim_<len>", where len is the integer length (see the example after this list). The name is subject to change.

  2. .nczvar -- The dimrefs are inferred by using the shape in .zarray and creating references to the simulated shared dimensions.

  3. .nczattr -- The type of each attribute is inferred by trying to parse the first attribute value string.
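
To make the dimension simulation concrete: a pure Zarr array with shape [10, 20] (a hypothetical example) would lead NCZarr to create two simulated top-level dimensions named .zdim_10 and .zdim_20, and the inferred dimrefs for that array would reference those two names in order.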

Compatibility

In order to accommodate existing implementations, certain mode tags are provided to tell the NCZarr code to look for information used by specific implementations.

Examples

Here are some examples using the ncgen and ncdump utilities.

  1. Create an NCZarr file using a local directory tree as storage.
    ncgen -4 -lb -o "file:///home/user/dataset.nzf#mode=nczarr,nzf" dataset.cdl
    
  2. Display the content of an NCZarr file using a local directory tree as storage.
    ncdump "file:///home/user/dataset.nzf#mode=nczarr,nzf"
    
  3. Create an NCZarr file using S3 as storage.
    ncgen -4 -lb -o "s3://datasetbucket" dataset.cdl
    
  4. Create an NCZarr file using S3 as storage and keeping to the pure zarr format.
    ncgen -4 -lb -o "s3://datasetbucket#mode=zarr" dataset.cdl
    

References

[1] Amazon Simple Storage Service Documentation
[2] netCDF ZARR Data Model Specification
[3] Python Documentation: 8.3. collections — High-performance container datatypes
[4] Zarr Version 2 Specification
[5] XArray Zarr Encoding Specification

Appendix A. Building NCZarr Support

Currently only the following build cases are supported.

Operating System    Supported Build Systems
Linux               Automake, CMake
OS-X                Automake, CMake
Visual Studio       N.A.

There are several options relevant to NCZarr support and to Amazon S3 support. These are as follows.

  1. --enable-nczarr -- enable NCZarr support. If disabled, then all of the following options are disabled or irrelevant.
  2. aws-c-common, aws-cpp-sdk-s3, and aws-cpp-sdk-core -- if these libraries are available, then Amazon S3 support is enabled for NCZarr.
  3. --disable-s3 -- even if the aws libraries are available, this option will forcibly disable Amazon S3 support.

The CMake equivalents are as follows:

  • --enable-nczarr => ENABLE_NCZARR=ON
  • --disable-s3 => ENABLE_S3=OFF

If S3 support is desired, then LDFLAGS should be set appropriately, namely as follows.

LDFLAGS="$LDFLAGS -L/usr/local/lib -laws-cpp-sdk-s3 -laws-cpp-sdk-core"

The above assumes that these libraries were installed in /usr/local/lib, so the above requires modification if they were installed elsewhere.

Note also that if S3 support is enabled, then you need to have a C++ compiler installed because part of the S3 support code is written in C++.

Appendix B. Building aws-sdk-cpp

In order to use the S3 storage driver, it is necessary to install the Amazon aws-sdk-cpp library.

As a starting point, here are the CMake options used by Unidata to build that library. It assumes execution in a build directory, say build, and that build/../CMakeLists.txt exists.

cmake -DFORCE_CURL=ON -DBUILD_ONLY=s3 -DMINIMIZE_SIZE=ON -DBUILD_DEPS=OFF -DCMAKE_CXX_STANDARD=14 ..
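
Assuming configuration succeeds, the usual follow-up (sketched here; the install step and required privileges are assumptions that depend on your system) is:

    make all
    sudo make install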

The expected set of installed libraries is as follows:

  • aws-cpp-sdk-s3
  • aws-cpp-sdk-core

Appendix C. Amazon S3 Imposed Limits

The Amazon S3 cloud storage imposes some significant limits that are inherited by NCZarr (and Zarr also, for that matter).

Some of the relevant limits are as follows:

  1. The maximum size of an object uploaded in a single operation is 5 Gigabytes; using multipart upload, a single object can be as large as 5 Terabytes.
  2. S3 key names can be any Unicode string with a maximum length of 1024 bytes. Note that the limit is defined in terms of bytes and not (Unicode) characters. This affects the depth to which groups can be nested because the key encodes the full path name of a group.

Point of Contact

Author: Dennis Heimbigner
Email: dmh at ucar dot edu
Initial Version: 4/10/2020
Last Revised: 6/8/2020
