How to improve the computing speed of a THM model on OGS6?

Dear all,

I am a newbie to OGS6, and I would like to improve the computing speed of a THM (THERMO_HYDRO_MECHANICS) model in OGS6 6.4.0. I use a linear mesh in my model, and the time step is 100 s. Could anyone give me some advice? I attach my files here,
model.zip (1.0 MB)

The situation is that when I tried to increase the time step to save computing time, the model did not converge in the later steps. So I found that 100 s is a suitable time step to keep the calculation going. However, this way the computing time is too long: for example, the model time is set to 3600 s, and I need 3200 s of wall-clock time to complete the simulation.

Best,
Rui

By the way, I would also like to ask how to use all CPU cores (or even the GPU) for computing, because I found that not all CPU cores took part. I guess that if all CPU cores participated in the computation, the speed would increase.

Hi @Rui_Feng,
from looking at the input file, there are two main things you can do:

  • Time stepping: I am not sure about the behavior of your problem, but having a look at how many iterations the nonlinear solver needs to converge over time could give some insight. An adaptive time stepping scheme like iteration-number-based time stepping could improve the time discretization in the right manner. With adaptive time stepping you can also reduce the maximum number of nonlinear iterations further.
  • The linear solver: Here you have a lot of options in theory; nevertheless, most of them require some effort to get working. If you want to stick with Eigen as the library, SparseLU is the only direct solver that works out of the box. Its main disadvantage is that it is not parallelized. If you compile OGS with MKL, you can use PardisoLU, which is a parallelized direct solver (here are some hints if you run into trouble: PardisoLU fails with unspecific message - #13 by joergbuchwald; I would also suggest switching off scaling for Pardiso, as it sometimes leads to more trouble). This should work quite well for your problem size (~30000 nodes). Alternatively, you could try an iterative solver like BiCGSTAB, which also works out of the box and is parallelized, together with DIAGONAL preconditioning. However, whether the solver converges depends very much on the problem. You could also try preconditioning BiCGSTAB with ILUT and scaling switched on, which sometimes achieves better convergence.
    Unfortunately, ILUT is not parallelized, but it might still be faster than SparseLU.
    With the PETSc or LIS libraries you have even more possibilities, and you could also make use of MPI parallelization (however, I suspect no big advantage for your problem size).
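For illustration, an iteration-number-based time stepping block in the `.prj` file could look roughly like the following sketch. The concrete step sizes, iteration thresholds, and multipliers are made-up placeholders that need tuning against how your nonlinear solver actually behaves:

```xml
<time_stepping>
    <type>IterationNumberBasedTimeStepping</type>
    <t_initial>0</t_initial>
    <t_end>3600</t_end>
    <initial_dt>100</initial_dt>
    <minimum_dt>1</minimum_dt>
    <maximum_dt>600</maximum_dt>
    <!-- if the previous step converged in fewer nonlinear iterations than a
         threshold, the corresponding multiplier scales the next step size:
         few iterations -> grow the step, many iterations -> shrink it -->
    <number_iterations>1 4 10 20</number_iterations>
    <multiplier>1.2 1.0 0.9 0.5</multiplier>
</time_stepping>
```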
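As a sketch of the solver side, switching the Eigen linear solver in the `.prj` file might look like this (the solver name, iteration limit, and tolerance are placeholders to adapt; PardisoLU additionally requires an MKL-enabled OGS build):

```xml
<linear_solvers>
    <linear_solver>
        <name>general_linear_solver</name>
        <eigen>
            <!-- parallelized iterative solver; convergence is problem-dependent -->
            <solver_type>BiCGSTAB</solver_type>
            <precon_type>DIAGONAL</precon_type>
            <max_iteration_step>10000</max_iteration_step>
            <error_tolerance>1e-16</error_tolerance>
            <!-- alternatively, a direct solver:
                 SparseLU  (serial, works out of the box)
                 PardisoLU (parallelized, needs an MKL build) -->
        </eigen>
    </linear_solver>
</linear_solvers>
```

If OGS was built with OpenMP support (or with MKL for Pardiso), the number of threads used by the parallelized solvers can usually be controlled via the `OMP_NUM_THREADS` environment variable, which may also answer the question about not all CPU cores being used.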

Dear @joergbuchwald ,

Thank you for your constant kind help! I have no words to thank you enough. I will try the methods you mentioned here. My model is for investigating and testing fault stability under EGS stimulation, like this

Happy new year in advance!

All the best,
Rui