
Re: C++ notes



Hi Tomas,

> I saw a note from you the other day about the NcFile destructor.
> I see that the change you made following my suggestion has caused
> some problems, so you need to find another way to make the C++
> interface HP-compatible. The change you suggested will certainly
> work, but there is a simpler way to do this without changing
> the class structure (which really does not need to be changed).
> The problem is that you want to define an abstract base class
> NcFile, but there is no function which can be made pure virtual
> by declaring it "=0" because all the functions that are defined
> in the base class can be implemented there. Declaring the destructor
> or constructors pure virtual or protected is either illegal or
> has its own problems. The simple solution I suggest is to define
> the private function "virtual void abstract() =0;" in NcFile and
> redefine it as "virtual void abstract() {}" in NcNewFile and
> NcOldFile, thereby creating the missing function that is unimplemented
> in the base and implemented in the children.
> This makes NcFile abstract and Nc*File concrete.
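
For reference, the idiom being described amounts to roughly the following
sketch (this is just the shape of the trick, not actual interface code):

    // The base class declares a private pure virtual member, so it cannot
    // be instantiated; each concrete child supplies a trivial override.
    class NcFile
    {
        // ... existing NcFile members unchanged ...
      private:
        virtual void abstract() = 0;    // makes NcFile abstract
    };

    class NcNewFile : public NcFile
    {
        // ... existing NcNewFile members unchanged ...
      private:
        virtual void abstract() {}      // makes NcNewFile concrete
    };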

Thanks for the suggestion.  It turns out there is another reason, first
suggested by Dan Schmitt, for changing the class structure.

If a user wants to use inheritance to make a new type of netCDF file (for
example a time series file or a geo-referenced file) that has some required
dimensions, variables, or attributes, it is convenient to be able to either
read old files of this type or create new files.  The current class
structure makes this awkward.  The C++ code for creating these new file
types is very short if you can just derive a new class from NcFile and use
its methods for creating new files or opening old files, rather than trying
to inherit from both NcOldFile and NcNewFile.
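
For example, something like the following sketch becomes possible.  The
NcTimeSeriesFile class and its "time" dimension are purely hypothetical,
and add_dim()/get_dim() just stand for whatever the interface provides for
defining and looking up dimensions:

    // Hypothetical specialized file type derived directly from NcFile.
    class NcTimeSeriesFile : public NcFile
    {
      public:
        NcTimeSeriesFile( const char* path, FileMode mode = ReadOnly )
            : NcFile( path, mode )
        {
            if ( mode == Replace || mode == New )
                add_dim( "time" );   // new file: define the required dimension
            // for ReadOnly or Write, a real class would verify that the
            // required dimension is already present, e.g. via get_dim("time")
        }
    };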

To support the elimination of the NcOldFile and NcNewFile classes, I've
added NcFile::FileMode:

    enum FileMode {
        ReadOnly = NC_NOWRITE,  // file exists, open read-only
        Write = NC_WRITE,       // file exists, open for writing
        Replace = NC_CLOBBER,   // create new file, even if already exists
        New = NC_NOCLOBBER      // create new file, fail if already exists
    };

and added the mode to the NcFile constructor:

    NcFile( const char * path, FileMode = ReadOnly );

Backward compatibility with the old NcOldFile and NcNewFile classes is
supported with

    /*
     * For backward compatibility.  We used to derive NcOldFile and NcNewFile
     * from NcFile, but that was over-zealous inheritance.
     */
    #define NcOldFile NcFile
    #define NcNewFile NcFile
    #define Clobber Replace
    #define NoClobber New

and an NcFile constructor that does the right thing with the mode.
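
In other words, old code using NcOldFile or NcNewFile still compiles via the
#defines above, and new code can say something like this (the file names are
just for illustration):

    NcFile reader( "existing.nc" );                    // open read-only (default)
    NcFile writer( "existing.nc", NcFile::Write );     // open an existing file for writing
    NcFile creator( "brandnew.nc", NcFile::Replace );  // create, clobbering any old file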

> I have tried compiling NetCDF as a DLL for Borland C++ 4.5.
> I finally succeeded, but had to change a fair amount of code
> where NetCDF uses non-standard pointer arithmetic.
> That is, there are many cases of do loops in the NetCDF code
> where a pointer is decremented and the loop terminates when the
> decremented pointer becomes LOWER than a base pointer.
> This is illegal because pointers can only be compared
> WITHIN the memory of the variable or array which they refer to.
> If x is defined as "double x[100]" and y is defined as "double* y=x"
> then the expression "y--" should decrement the pointer y and one would
> expect that y would be less than x. This is NOT guaranteed to be
> the case and on a 16 bit system (especially in DLLs) the variable
> x may start at the initial location of a 64K segment and the
> variable y may have (and very often has) an undefined value
> after it is decremented. Therefore, all do loops with pointer
> arithmetic have to be recoded with this in mind if NetCDF is
> to be ported to MS-Windows as a DLL.
> This is also advisable anyway because the pointer arithmetic
> is non-standard as it is. An example of NetCDF code of this
> kind is (code from putget.c, orig. code commented out,
> int ii defined and loop rephrased):
> 
>       if( IS_RECVAR(vp) )
>               boundary = coords + 1 ;
>       else
>               boundary = coords ;
> 
>       up = vp->dsizes + vp->assoc->count - 1 ;
>       ip = coords + vp->assoc->count - 1 ;
> 
>       /* for(offset = 0 ; ip >= boundary ; ip--, up--) */
>       ii = ip - boundary ;    /* 25.05.1995, tj */
>       for(offset = 0 ; ii >= 0 ; ip--, up--, ii--)
>               offset += *up * *ip ;
> 
> If you wish I can send you a diff file with the changes that I
> made. They are not direct suggestions for changes in the NetCDF code
> because I have not put any effort into verifying or optimizing
> the modified code. However, the changes locate many places where
> the current NetCDF code should perhaps be modified (there could be other
> places too) to make it more robust.

You are exactly right about this, and we would appreciate getting your diffs
for incorporating into the netCDF 2.4 release.  We thought one of the code
checkers like ObjectCenter or Purify would catch such non-standard address
arithmetic, but neither provides a warning or error for this run-time
violation.  That makes your identification of these even more valuable,
since we don't know of any tools that find them.
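
For anyone else following this, the pattern and one conforming rewrite look
roughly like the following sketch (the array and variable names here are
illustrative, not the actual putget.c code):

    double x[100];
    double *base = x;
    double *p = x + 99;
    double sum = 0;
    int ii;

    /* Non-portable pattern: on a segmented architecture the decremented
     * pointer may wrap, so once p moves below the start of x both the
     * decrement and the comparison "p >= base" are undefined:
     *
     *     for ( ; p >= base ; p-- )
     *         sum += *p;
     */

    /* Conforming rewrite: let an integer index control termination, so the
     * loop never depends on comparing a pointer below the array. */
    for ( ii = (int)(p - base) ; ii >= 0 ; ii-- )
        sum += base[ii];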

Lots of users have requested a netCDF DLL, and will be grateful for your
efforts.

> By the way, have you found the explanation for the size differences
> between the NetCDF files generated by the C interface and the C++ interface
> which I reported some time ago?

No, sorry, it's on my list but I haven't gotten to it yet.  Maybe next week
...

--Russ


______________________________________________________________________________

Russ Rew                                           UCAR Unidata Program
address@hidden                              http://www.unidata.ucar.edu