ran out of memory in OpenSeesSP
Moderator: selimgunay
ran out of memory in OpenSeesSP
Hi. When I run out of memory (even when using system Mumps -ICNTL14 20), can I try something like system Mumps -ICNTL14 80? If not, what should I do? Thank you.
Re: ran out of memory in OpenSeesSP
As far as I know, you can use -ICNTL14 80 or larger; remember to use the 64-bit OpenSees. Have you checked whether there is a memory leak in your model?
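For reference, a minimal sketch of the command in question (the value 80 is just an example): in MUMPS, ICNTL(14) is the percentage of extra working memory the solver is allowed to allocate, so larger values simply request more headroom.
# request 80% extra working memory for the MUMPS solver (example value)
system Mumps -ICNTL14 80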
Re: ran out of memory in OpenSeesSP
Thank you, mo_zarrin. Maybe the problem is that my laptop has only 6 GB of RAM! It doesn't work even with -ICNTL14 100! Then again, the problem may stem from something else.
Could you please tell me: are the only differences between a regular OpenSees tcl file and an OpenSeesSP one that:
1- we simply change the system command for the analysis (system Mumps -ICNTL14 100), and
2- we use -xml instead of -file in node/element recorders (see the example below)?
Or should I alter anything else in my tcl file as well? Thanks.
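For example (the node tag and file names below are just placeholders, not from my actual model):
# regular OpenSees script
recorder Node -file node3Disp.out -time -node 3 -dof 1 2 3 disp
# what I am changing for the OpenSeesSP script
system Mumps -ICNTL14 100
recorder Node -xml node3Disp.xml -time -node 3 -dof 1 2 3 disp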
Re: ran out of memory in OpenSeesSP
The Mumps -ICNTL14 100 is going to fail if you are having memory problems, as that is requesting an N x N matrix (do the math: N x N x sizeof(double) is how much memory you need).
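To make that arithmetic concrete (the N below is only a made-up example, not taken from this model):
# rough memory estimate for a dense N x N matrix of doubles
set N 100000                          ;# hypothetical number of equations
set bytes [expr {double($N) * $N * 8}]
puts "approx [format %.1f [expr {$bytes / pow(1024,3)}]] GB"
For N = 100,000 that is roughly 75 GB, far beyond a 6 GB laptop.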
you don't need -xml
Re: ran out of memory in OpenSeesSP
I've changed to another system (64 GB RAM) and it seems there is no memory problem anymore. However, when I run OpenSeesSP I get the following error:
special ele: 0
VERTEX ONE: 1
Slave Process Running 3
StaticDomainDecompositionAnalysis::recvSelfSlave Process Running 2
StaticDomainDecompositionAnalysis::recvSelf - failed to get the Solver
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(186).....................: MPI_Recv(buf=0000000003F7FDB0, count=4, MPI_
INT, src=0, tag=0, MPI_COMM_WORLD, status=000000000308F3B0) failed
MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag 0 truncated; 24
bytes received but buffer size is 16
Slave Process Running 1
StaticDomainDecompositionAnalysis::recvSelf - failed to get the Solver
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(186).....................: MPI_Recv(buf=000000000406FDB0, count=4, MPI_
INT, src=0, tag=0, MPI_COMM_WORLD, status=000000000319F9E0) failed
MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag 0 truncated; 24
bytes received but buffer size is 16
- failed to get the Solver
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(186).....................: MPI_Recv(buf=0000000003F0FDB0, count=4, MPI_
INT, src=0, tag=0, MPI_COMM_WORLD, status=00000000030EF9B0) failed
MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag 0 truncated; 24
bytes received but buffer size is 16
job aborted:
rank: node: exit code[: error message]
0: Luisa-PC: 123
1: Luisa-PC: 1: process 1 exited without calling finalize
2: Luisa-PC: 1: process 2 exited without calling finalize
3: Luisa-PC: 1: process 3 exited without calling finalize
Could you tell me what the reason is?!
Re: ran out of memory in OpenSeesSP
Sorry for the delay ... what solver were you using?
Re: ran out of memory in OpenSeesSP
Thanks for your reply, Dr. McKenna ... The system is Mumps, and here are the analysis commands:
constraints Lagrange;
numberer RCM;
system Mumps -ICNTL14 80;
test EnergyIncr 1.E-6 10;
algorithm Newton;
integrator Newmark $gamma $beta;
analysis Transient;
analyze 500 0.01;
Re: ran out of memory in OpenSeesSP
Is this by any chance occurring on a second analysis and working for the first?
Re: ran out of memory in OpenSeesSP
I tested some other systems and still get the same error! Is there any solution? Why does this happen again and again? Is the problem with Mumps?
Re: ran out of memory in OpenSeesSP
It is Mumps.
Re: ran out of memory in OpenSeesSP
So ... isn't it possible to solve the Mumps problem? Should I forget about parallel processing?