I am currently working with the ThermoRichardsMechanics process and I would like to speed up my simulations on a multi-core machine. I was checking the documentation and it says I could build my configuration with PETSc and MPI, which I did. It also says that I need to have a structured mesh and create partitions, which I also did, but this changed my initial mesh (which I didn’t want). Moreover, when I tried to run it, the model failed and showed me this message (I also tried to run the model without the MPI configuration and it worked):
Instead of PETSc, you can use the PardisoLU Eigen solver; by specifying the number of cores for your analyses, you can benefit from parallel computing, which parallelizes the assembly process of your problem. For more information, please go through the following link: OpenGeoSys 6.5.0
Please note that to configure and compile OGS, you have to use the following command:
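(Presumably the command meant here is the CMake configuration that enables Intel MKL, which provides the Pardiso solver; the preset name is an assumption and may differ in your setup:)

    # enable MKL so that Eigen's PardisoLU becomes available
    cmake --preset release -DOGS_USE_MKL=ON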
I have been checking some previous posts with similar problems, but I don’t really understand what I should do.
Also, do you know exactly where I have to specify the number of cores that I would like to use?
Did you set BUILD_SHARED_LIBS=ON during the configuration of OGS? You can simply set the environment variable OGS_ASM_THREADS="number of cores" to parallelize your simulation.
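For example (a minimal sketch; the core count and the project file name are placeholders):

    # run OGS with 8 assembly threads
    OGS_ASM_THREADS=8 ogs my_project.prj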
Since the direct linear solver fails, using more threads won’t help.
I’d assume the assembled Jacobian is singular. This could result from a zero or wrong stiffness (i.e., a problem with the solid material model) or from a completely incompressible condition, such that the system cannot react properly to external forces (this could be a problem with the Biot coefficient and the solid and fluid compressibilities). There might be other reasons as well.
Thank you for your response. I will check the assembled matrices. Nevertheless, I feel the problem is related to the Pardiso solver; I have run this simulation before with BiCGSTAB and it was working alright. This was my solver configuration:
<nonlinear_solvers>
    <nonlinear_solver>
        <name>basic_newton</name>
        <type>Newton</type>
        <max_iter>50</max_iter>
        <linear_solver>general_linear_solver</linear_solver>
    </nonlinear_solver>
</nonlinear_solvers>
<linear_solvers>
    <linear_solver>
        <name>general_linear_solver</name>
        <eigen>
            <solver_type>BiCGSTAB</solver_type>
            <precon_type>DIAGONAL</precon_type> <!-- ILUT is also possible -->
            <max_iteration_step>10000</max_iteration_step> <!-- 10000 is the default value -->
            <error_tolerance>1e-16</error_tolerance> <!-- 1e-16 is the default value -->
            <scaling>1</scaling>
        </eigen>
        <petsc>
            <parameters>-ksp_type tfqmr
                        -pc_type jacobi
                        -ksp_rtol 1.e-10 -ksp_atol 1.e-7
                        -ksp_max_it 4000</parameters>
        </petsc>
    </linear_solver>
</linear_solvers>
I also tried running a simpler simulation that was working with BiCGSTAB, and the same problem occurs. Maybe I am mistaken about something in the solver setup? This is how I set PardisoLU:
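(For reference, a minimal sketch of a PardisoLU setup in the Eigen solver block, assuming a build with MKL support; as a direct solver it needs no preconditioner or iteration settings:)

    <linear_solvers>
        <linear_solver>
            <name>general_linear_solver</name>
            <eigen>
                <solver_type>PardisoLU</solver_type>
            </eigen>
        </linear_solver>
    </linear_solvers>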
I now have another question regarding the same topic. Is there anything else I need to do to run the program on several cores?
I used the following preset for building the program:
The presets are only for the build. You need to set OGS_ASM_THREADS before execution or in your .bashrc/.zshrc.
There is also OMP_NUM_THREADS, which controls the number of threads Pardiso uses. However, if this variable is not set, Pardiso usually already runs with more than one thread by default.
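A short sketch of how both variables could be set before a run (the core counts and the project file name are placeholders; the exports can also go into ~/.bashrc or ~/.zshrc):

    export OGS_ASM_THREADS=4   # threads used for the assembly in OGS
    export OMP_NUM_THREADS=4   # threads used by the MKL/Pardiso solver
    ogs my_project.prj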