To search the cig-short email archive, do a Google search with
site:http://www.geodynamics.org/pipermail/cig-short/ MY_SEARCH_STRING
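For example, to look for past threads about nondimensionalization (the search term here is only an illustration), you would search for:
site:http://www.geodynamics.org/pipermail/cig-short/ nondimensionalization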
Issues related to installing PyLith
'import site' failed; use -v for traceback --LOTS OF OUTPUT-- TypeError: stat() argument 1 must be encoded string without NULL bytes, not str
We have seen this error on Darwin systems running OS X 10.5 and later. Files downloaded from the web are marked with an extra attribute that can prevent Python from starting up properly. The best solution in these cases is to download PyLith using command-line tools:
mkdir pylith
cd pylith
curl -O http://www.geodynamics.org/cig/software/pylith/pylith-1.6.2-darwin-10.6.8.tgz
tar -zxf pylith-1.6.2-darwin-10.6.8.tgz
ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
  File "/usr/bin/pylith", line 37, in <module>
    from pylith.apps.PyLithApp import PyLithApp
  File "/usr/lib/python2.6/site-packages/pylith/apps/PyLithApp.py", line 26, in <module>
    class PyLithApp(PetscApplication):
  File "/usr/lib/python2.6/site-packages/pylith/apps/PyLithApp.py", line 33, in PyLithApp
    class Inventory(PetscApplication.Inventory):
  File "/usr/lib/python2.6/site-packages/pylith/apps/PyLithApp.py", line 51, in Inventory
    from pylith.topology.MeshImporter import MeshImporter
  File "/usr/lib/python2.6/site-packages/pylith/topology/MeshImporter.py", line 28, in <module>
    class MeshImporter(MeshGenerator):
  File "/usr/lib/python2.6/site-packages/pylith/topology/MeshImporter.py", line 37, in MeshImporter
    class Inventory(MeshGenerator.Inventory):
  File "/usr/lib/python2.6/site-packages/pylith/topology/MeshImporter.py", line 58, in Inventory
    from pylith.meshio.MeshIOAscii import MeshIOAscii
  File "/usr/lib/python2.6/site-packages/pylith/meshio/MeshIOAscii.py", line 26, in <module>
    from MeshIOObj import MeshIOObj
  File "/usr/lib/python2.6/site-packages/pylith/meshio/MeshIOObj.py", line 26, in <module>
    from meshio import MeshIO as ModuleMeshIO
  File "/usr/lib/python2.6/site-packages/pylith/meshio/meshio.py", line 25, in <module>
    _meshio = swig_import_helper()
  File "/usr/lib/python2.6/site-packages/pylith/meshio/meshio.py", line 21, in swig_import_helper
    _mod = imp.load_module('_meshio', fp, pathname, description)
ImportError: numpy.core.multiarray failed to import
This error arises from having another version of Python installed that interferes with the Python included with PyLith. The solution is to set your environment variables so that the shell doesn't see the existing Python when you run PyLith.
unset PYTHON
unset PYTHON26
unset PYTHON27
unset PYTHONPATH
PATH=/usr/bin:/bin:/lib:/lib/lapack
In general, you will need to unset these environment variables and reset PATH every time you run PyLith. To avoid typing the commands each time, add them to the bottom of the pylithrc file included in the PyLith distribution, usually found in Program Files (x86)/PyLith. This shell script is run every time PyLith starts up.
See the INSTALL file included with the PyLith installer utility for directions and example configuration parameters. Troubleshooting tips are also included at the end of the INSTALL file.
Issues related to generating a mesh to use as input for PyLith.
See examples/2d/subduction and Sessions III and IV of the 2011 Crustal Deformation Modeling tutorial.
See examples/2d/subduction and examples/meshing/cubit_cellsize.
Using surface meshes to identify the fault surface is a development feature that is fragile and untested. Use with extreme caution. Identifying faults using psets is a much more thoroughly tested feature.
General issues related to running PyLith
It is VERY IMPORTANT to make sure that the scales used in the nondimensionalization are appropriate for your problem. PyLith can solve problems across an extremely wide range of spatial and temporal scales if the appropriate scales are used in the nondimensionalization.
Due to roundoff errors and convergence tolerances in the iterative solvers, PyLith relies on reasonable scales for the solution when constructing the friction criterion and preconditioning the system. Failure to set appropriate scales in the nondimensionalization will cause the solution to be garbage.
Default values:
  relaxation_time = 1.0*year
  length_scale = 1.0*km
  pressure_scale = 3.0e+10*Pa

Recommended values:
  relaxation_time = TIME_STEP
  length_scale = DISCRETIZATION_SIZE or DISPLACEMENT_MAGNITUDE
  pressure_scale = SHEAR_MODULUS
Default values:
  shear_wave_speed = 3.0*km/s
  density = 3000.0*kg/m**3
  wave_period = 1.0*s

Recommended values:
  shear_wave_speed = MINIMUM_SHEAR_WAVE_SPEED
  density = DENSITY
  wave_period = MINIMUM_WAVE_PERIOD
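These scales are set in your parameter file. The sketch below is only illustrative: the facility path [pylithapp.timedependent.normalizer] and the numerical values are assumptions for a hypothetical quasistatic problem, and the property names follow the list above (depending on your PyLith version, the quasistatic normalizer may expose shear_modulus instead of pressure_scale; check the manual for your release).

# Illustrative scales for a hypothetical problem with ~500 m cells,
# ~10 year time steps, and a ~30 GPa shear modulus; use values
# appropriate for YOUR problem.
[pylithapp.timedependent.normalizer]
length_scale = 500.0*m
relaxation_time = 10.0*year
pressure_scale = 3.0e+10*Pa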
Slip and traction vectors are output in the fault coordinate system (along strike, up-dip, and opening). The direction of the slip vector corresponds to the direction of motion on the “negative” side of the fault, which is defined by the origin of the fault normal vector. To convert to the global coordinate system, request that the fault orientation be included in the fault output via:
vertex_info_fields = [strike_dir,dip_dir,normal_dir]
With this information it is easy to rotate the slip or traction vector from the fault coordinate system to the global coordinate system. This is usually done in a Python script with HDF5 output or within ParaView using the calculator. The expression for the slip in global coordinates is:
(slip_X*strike_dir_X+slip_Y*dip_dir_X)*iHat+(slip_X*strike_dir_Y+slip_Y*dip_dir_Y)*jHat+(slip_X*strike_dir_Z+slip_Y*dip_dir_Z)*kHat
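If you prefer to do the rotation outside ParaView, the following is a minimal Python sketch of the same computation using h5py. The file name and dataset paths (vertex_fields/slip, vertex_fields/strike_dir, vertex_fields/dip_dir) are assumptions based on typical PyLith HDF5 fault output; inspect your file (for example with h5dump) and adjust the names to match.

import h5py

# Assumed fault output file and dataset layout; verify against your output.
with h5py.File("output/fault.h5", "r") as h5:
    slip = h5["vertex_fields/slip"][:]              # (ntimesteps, nvertices, ncomponents)
    strike_dir = h5["vertex_fields/strike_dir"][0]  # (nvertices, 3); info fields have a single "time" entry
    dip_dir = h5["vertex_fields/dip_dir"][0]        # (nvertices, 3)

# Rotate the along-strike (slip_X) and up-dip (slip_Y) components into the
# global coordinate system, matching the calculator expression above.
slip_global = (slip[:, :, 0:1] * strike_dir[None, :, :]
               + slip[:, :, 1:2] * dip_dir[None, :, :])
print(slip_global.shape)  # (ntimesteps, nvertices, 3)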
Use of the FaultCohesiveDyn object for spontaneous (dynamic) rupture in quasistatic simulations requires careful selection of solver parameters. See Session V of the 2013 Crustal Deformation Modeling tutorial for a detailed discussion.
Errors when running PyLith
RuntimeError: Error occurred while reading spatial database file 'FILENAME'. I/O error while reading SimpleDB data.
Make sure the num-locs value in the header matches the number of lines of data and that the last line of data includes an end-of-line character.
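For reference, here is a minimal sketch of a well-formed SimpleDB file with two values varying along a single direction in 2-D space; the value names, units, and coordinates are made up for illustration. Note that num-locs matches the three data lines and that the final line ends with a newline.

// Hypothetical spatial database for illustration only.
#SPATIAL.ascii 1
SimpleDB {
  num-values = 2
  value-names = density vs
  value-units = kg/m**3 m/s
  num-locs = 3
  data-dim = 1
  space-dim = 2
  cs-data = cartesian {
    to-meters = 1.0e+3
    space-dim = 2
  }
}
// Columns: x(km) y(km) density vs
0.0    0.0  2500.0  3000.0
0.0  -10.0  2800.0  3300.0
0.0  -40.0  3300.0  4500.0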
Issues related to running PyLith on a cluster or other parallel computer.
PETSC ERROR: ------------------------------------------------------------------------
PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
PETSC ERROR: to get more information on the crash.
PETSC ERROR: --------------------- Error Message ------------------------------------
PETSC ERROR: Signal received!
PETSC ERROR: ------------------------------------------------------------------------
PETSC ERROR: Petsc Development HG revision: 78eda070d9530a3e6c403cf54d9873c76e711d49  HG Date: Wed Oct 24 00:04:09 2012 -0400
PETSC ERROR: See docs/changes/index.html for recent updates.
PETSC ERROR: See docs/faq.html for hints about trouble shooting.
PETSC ERROR: See docs/index.html for manual pages.
PETSC ERROR: ------------------------------------------------------------------------
PETSC ERROR: /home/brad/pylith-1.8.0/bin/mpinemesis on a arch-linu named des-compute11.des by brad Tue Nov 13 10:44:06 2012
PETSC ERROR: Libraries linked from /home/brad/pylith-1.8.0/lib
PETSC ERROR: Configure run at Wed Nov 7 16:42:26 2012
PETSC ERROR: Configure options --prefix=/home/brad/pylith-1.8.0 --with-c2html=0 --with-x=0 --with-clanguage=C++ --with-mpicompilers=1 --with-debugging=0 --with-shared-libraries=1 --with-sieve=1 --download-boost=1 --download-chaco=1 --download-ml=1 --download-f-blas-lapack=1 --with-hdf5=1 --with-hdf5-include=/home/brad/pylith-1.8.0/include --with-hdf5-lib=/home/brad/pylith-1.8.0/lib/libhdf5.dylib --LIBS=-lz CPPFLAGS="-I/home/brad/pylith-1.8.0/include " LDFLAGS="-L/home/brad/pylith-1.8.0/lib " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS="-g -O2" PETSC_DIR=/home/brad/build/pylith_installer/petsc-dev
PETSC ERROR: ------------------------------------------------------------------------
PETSC ERROR: User provided function() line 0 in unknown directory unknown file
This appears to be associated with how OpenMPI interprets calls to fork() when PyLith starts up. Set your environment (these can also be set on the command line like other OpenMPI parameters) to turn off Infiniband support for fork so that a normal fork call is made:
export OMPI_MCA_mpi_warn_on_fork=0
export OMPI_MCA_btl_openib_want_fork_support=0
Use the --bind-to-core command line argument for mpirun.

[pylithapp]
scheduler = pbs

[pylithapp.pbs]
shell = /bin/bash
qsub-options = -V -m bea -M johndoe@university.edu

[pylithapp.launcher]
command = mpirun -np ${nodes} -machinefile ${PBS_NODEFILE}
Command line arguments:
--nodes=NPROCS --scheduler.ppn=N --job.name=NAME --job.stdout=LOG_FILE
# NPROCS = total number of processes
# N = number of processes per compute node
# NAME = name of job in queue
# LOG_FILE = name of file where stdout will be written
[pylithapp]
scheduler = sge

[pylithapp.sge]
shell = /bin/bash
pe-name = orte
qsub-options = -V -m bea -M johndoe@university.edu -j y

[pylithapp.launcher]
command = mpirun -np ${nodes}
# Use the options below if not using the OpenMPI ORTE Parallel Environment
#command = mpirun -np ${nodes} -machinefile ${PE_HOSTFILE} -n ${NSLOTS}
Command line arguments:
--nodes=NPROCS --job.name=NAME --job.stdout=LOG_FILE
# NPROCS = total number of processes
# NAME = name of job in queue
# LOG_FILE = name of file where stdout will be written
The PyLith HDF5 data writers (DataWriterHDF5Mesh, etc.) use HDF5 parallel I/O to write files in parallel. As noted in the PyLith manual, this is not nearly as robust as the HDF5Ext data writers (DataWriterHDF5ExtMesh, etc.), which write raw binary files using MPI I/O accompanied by an HDF5 metadata file. If jobs running on multiple compute nodes mysteriously hang, with or without HDF5 error messages, switching from the DataWriterHDF5 data writers to the DataWriterHDF5Ext data writers may fix the problem (if HDF5 parallel I/O is the source of the problem). This produces one raw binary file per HDF5 dataset, so many more files must be kept together.
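As a hedged example of making this switch in a parameter file, assuming a time-dependent problem whose domain output is written to HDF5 (the facility path and filename below are illustrative; adjust them to match your simulation and PyLith version):

[pylithapp.timedependent.formulation.output.output]
# Switch from the parallel HDF5 writer to the MPI I/O writer with an HDF5 metadata file.
#writer = pylith.meshio.DataWriterHDF5Mesh
writer = pylith.meshio.DataWriterHDF5ExtMesh
writer.filename = output/mysim.h5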