The Unidata Program Center is pleased to announce its 2013 Software Training Workshop. The workshop features our display and analysis packages GEMPAK (with an introduction to AWIPS II) and the IDV, as well as our data access and management tools: the Local Data Manager (LDM), the Network Common Data Form (netCDF), and the THREDDS Data Server (TDS).
The workshop will be held July 23 - August 8, 2013. Individual courses last from one to four days.
Version 4.3.0 of the netCDF Operators (NCO) has been released. NCO is an open-source package of a dozen standalone command-line programs that take netCDF files as input, operate on them (e.g., deriving new data, averaging, printing, hyperslabbing, or manipulating metadata), and output the results to the screen or to files in text, binary, or netCDF formats.
Unidata is currently evaluating AWIPS II release 13.1.2, which includes a new unified GRIB decoder to supply both D2D and the National Centers Perspective with gridded data. With this decoder upgrade and the UPC's improved decoding of the high-volume NEXRAD3 and CONDUIT feeds, the last major addition to AWIPS II before release is GEMPAK functionality. GEMPAK 7 will be released as an add-on to AWIPS II for the National Centers and the Unidata community.
The Unidata Program Center wishes a fond farewell to Russ Rew, who is retiring after 37 years at Unidata. Russ was the first employee of Unidata, and was notable for never missing a day of work in Unidata's 25 years of existence. During that time, he personally answered over 100 million netCDF questions, without ever even making a spelling or grammeratical mistake.
The Unidata Program Center is pleased to announce the completion today of its newest community service facility in Boulder, Colorado. The Unidata Data Hallway provides modern, efficient, 24/7 data services to the university community -- while also making a stylish interior design statement.
"It's great to have such a lavish facility for our 'big iron'," says Unidata Program Center systems administrator Mike Schmidt. "We were excited to be able to showcase our data-delivery infrastructure like this -- it really helps the public understand the level of technical sophistication involved in what we're doing here at Unidata."
In part 1, we explained what data chunking is about in the context of scientific data access libraries such as netCDF-4 and HDF5, presented a 38 GB 3-dimensional dataset as a motivating example, discussed benefits of chunking, and showed with some benchmarks what a huge difference chunk shapes can make in balancing read times for data that will be accessed in multiple ways.
In this post, I'll continue with that example dataset to show how to derive good chunk shapes, generalize the approach to other datasets, measure how long it can take to rechunk a multidimensional dataset, and examine the use of Solid State Disk (SSD) for both accessing and rechunking data.
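To make the idea of balancing access patterns concrete, here is a minimal sketch of how a balanced chunk shape might be derived for a 3-dimensional (time, y, x) variable. The function name, the rounding strategy, and the example dimensions below are illustrative assumptions, not the exact method or dataset dimensions from these posts:

```python
import math

def chunk_shape_3d(var_shape, values_per_chunk):
    """Sketch: balance time-series vs. horizontal-slice reads for a (T, Y, X) variable.

    Reading a full time series at one grid point touches T/t chunks, while
    reading a full horizontal slice touches (Y/y) * (X/x) chunks.  Setting
    those equal, with the constraint t * y * x = values_per_chunk, gives
    r = sqrt(T * Y * X / values_per_chunk) chunk reads per access, so
    t = T / r,  y = Y / sqrt(r),  x = X / sqrt(r).
    """
    T, Y, X = var_shape
    r = math.sqrt(T * Y * X / values_per_chunk)
    t = max(1, round(T / r))
    y = max(1, round(Y / math.sqrt(r)))
    x = max(1, round(X / math.sqrt(r)))
    return (t, y, x)
```

For hypothetical dimensions of (1024, 256, 256) with 1024 values per chunk, this yields a chunk shape of (4, 16, 16), so a full time series reads 1024/4 = 256 chunks and a full horizontal slice reads (256/16) * (256/16) = 256 chunks: the two access patterns are balanced rather than one being thousands of times slower than the other.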