
Re: netcdf-2.4.3 on SGI r10000



> I have tried to build the libraries several ways and can't get
> things to work right.  If I do a default installation (just leaving
> FFLAGS and CFLAGS as -O) I get the following message when I try to
> compile my code that uses the netCDF commands:

>        f77 -col120 -64 -r4 -mips4 -r10000 -O3 ... stretchz.o -lnetcdf -o cfd.x
> ld64: FATAL 12: Expecting /usr/local/lib64/libnetcdf.a objects:
> ld64: Segmentation fault.  Removing output file...
> f77 ERROR:  /usr/lib64/cmplrs/ld64 returned non-zero status 1
> *** Error code 1 (bu21)

I don't see why the loader (ld) should ever dump core. I'm wondering if your
compiler install is correct. Did the OS get upgraded to 6.2 without upgrading
the compilers? You can discover what versions of compiler products you have
installed using the 'showprods' command.
%  showprods c_eoe c_dev c++_dev compiler_dev ftn77_eoe ftn77_dev
The products should all be version 6.2 or above.

The second question is, what, if anything, is /usr/local/lib64/libnetcdf.a?
% file /usr/local/lib64/libnetcdf.a
As Russ mentions, you should see something like
 /usr/local/lib64/libnetcdf.a:    current ar archive containing 64-bit objects
If you don't, add a -Lpath_to_netcdf_lib_directory option to your link line so
the loader finds the right library.


> Trying to compile my source as new 32bit code made me think the
> library was old 32bit output.

Old 32-bit libs should link okay with new 32-bit libs.

> So I tried specifying the flags to
> force compilation to be 64 bit, mips4, on an r10000 (see env output
> below).

I believe the compiler defaults to -64 on your hardware. The other flags aren't
relevant to this problem.

> Then the make for netcdf bombs.

> output from configure:
> loading cache ./config.cache

One has to be careful, when rebuilding netcdf after changing compilers or
flags, to delete the cache first: `rm config.cache` or `make distclean`.
Values like sizeof(long) are stored in the cache. A careful reading of the
configure output shows that the "change" to -64 did not change the memory
model from the previous build; the previous build already had 64-bit longs:
>  checking size of long... (cached) 8
So, the fact that you didn't delete the cache wasn't a problem here.
(But, you didn't need to rebuild, either.)

> cc -o nctest -64 -mips4 -r10000 varget.o vargetg.o varput.o varputg.o
> vardef.o vartests.o vputget.o vputgetg.o driver.o cdftests.o dimtests.o rec.o
> atttests.o misctest.o add.o error.o emalloc.o val.o slabs.o
> ../libsrc/libnetcdf.a

Note that this link, like all the previous 'cc' links, was successful.

>       CC  nctst.o -L. -lnetcdf_c++ -L../libsrc -lnetcdf  -o nctst
> ld:
> Archive: ../libsrc/libnetcdf.a has no table of contents (not searched)
>         add one with 'ar ts'

This makes me think you are accessing an old version (maybe version 5) of CC.
It can't even read 64-bit archive headers. The fact that you can't build the
CC (C++) test is not on the critical path to building a Fortran application.
To get this out of your face, set the environment variable CXX to the empty
string, re-run `configure`, then `make all` and `make test`.

I'll wager that the C language tests will succeed and the Fortran jackets test
(fortran/ftest) will fail in a similar way to your application. If that is the
case, the problem is that you have an old (I'm guessing version 6.0) Fortran
compiler. If ftest succeeds, then your application link line is just not
finding the library.


-----

> Error on line 311 of write3D_nc.f: Declaration error for kdimvalues:
> adjustable automatic array
> Error on line 368 of write3D_nc.f: Declaration error for hold2d:
> adjustable automatic array

These appear to be genuine Fortran errors. The compiler is telling you
something about the source code that needs to be fixed. (Perhaps this is CRAY
source, which allows dynamic array sizing but is non-standard?)

I don't know why you don't see these errors when you use -n32. Take that (and
the meaningless "Expecting /usr/local/lib64/libnetcdf.a objects" diagnostic)
up with SGI support. The "new" (-64) compiler is supposed to be better in some
sense, so maybe it is stricter where the old one would happily generate code
from invalid source and dump core at run time.

-glenn