poor memory management in sparse linear direct solver

Forum for OpenSees users to post questions, comments, etc. on the use of the OpenSees interpreter, OpenSees.exe

Moderators: silvia, selimgunay, Moderators

autumnboy
Posts: 5
Joined: Thu Jun 04, 2009 11:59 pm
Location: china

poor memory management in sparse linear direct solver

Post by autumnboy »

I wrote a simple test below:

wipe;

model basic -ndm 3 -ndf 3

set Econ [expr 2.55*1e10]

nDMaterial ElasticIsotropic 1 $Econ 0.25 1.27

set eleArgs "1"
set element stdBrick
set nx 10
set ny 10
set nz 300

set eleNum [expr $nx*$ny*$nz]
set nn [expr ($nz+1)*($nx+1)*($ny+1)]

block3D $nx $ny $nz 1 1 $element $eleArgs {
    1 0 0 0
    2 0.5 0 0
    3 0.5 0.5 0
    4 0 0.5 0
    5 0 0 5
    6 0.5 0 5
    7 0.5 0.5 5
    8 0 0.5 5
}

set load [expr -10000]

pattern Plain 1 Linear {
    load $nn 0.0 $load 0.0
}

# boundary conditions
fixZ 0.0 1 1 1

numberer Plain
system SparseSPD 3

set NstepGravity 1; # apply gravity in a single step
set DGravity [expr 1./$NstepGravity]; # load increment
integrator LoadControl $DGravity; # determine the next time step for the analysis
test NormUnbalance 1.0e-7 20 1
algorithm Linear
constraints Transformation
analysis Static; # define type of analysis: static or transient

set startT [clock seconds]
analyze $NstepGravity; # apply gravity
set endT [clock seconds]
puts "Execution time: [expr $endT-$startT] seconds."
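For reference, the element, node, and equation counts this mesh produces can be checked with a few lines of arithmetic (a Python sanity check, independent of OpenSees; these numbers are not printed by the script itself):

```python
# Model-size arithmetic implied by the Tcl script above.
nx, ny, nz = 10, 10, 300

# stdBrick elements in the block3D mesh
elements = nx * ny * nz                     # 30000

# nodes in the structured grid
nodes = (nx + 1) * (ny + 1) * (nz + 1)      # 36421

# fixZ 0.0 restrains all 3 DOFs of every node in the base plane
fixed_nodes = (nx + 1) * (ny + 1)           # 121

# free equations the solver actually sees
equations = 3 * (nodes - fixed_nodes)       # 108900

print(elements, nodes, equations)
```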

Post by autumnboy »

System / ordering                      Elements  Nodes   Equations  Nonzero entries
ProfileSPD, numberer Plain             30000     36421   108900     43180731
ProfileSPD, numberer RCM               30000     36421   108900     44036667
SparseSPD, nested dissection ordering  30000     36421   108900     9803302
SparseSPD, minimum degree ordering     30000     36421   108900     5719914 (fewest, but memory use is largest)
SparseSPD, general RCM ordering        30000     36421   108900     not run
Last edited by autumnboy on Tue May 18, 2010 12:19 am, edited 2 times in total.

Post by autumnboy »

The number of nonzero entries in the matrix for each method is shown above. Although the SymSparse solver stores far fewer nonzero entries, its memory consumption is even larger than ProfileSPD's, due to the integration of the Fortran and C code. It causes poor memory management.
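As a rough lower bound, the stored-entry counts reported earlier in this thread translate into memory as follows (a Python sketch; it assumes 8-byte doubles and counts only matrix entries, ignoring index arrays and any internal work buffers, which is exactly where extra consumption would hide):

```python
# Lower-bound memory for the stored matrix entries reported above,
# assuming 8-byte doubles; index arrays and solver buffers are extra.
BYTES_PER_DOUBLE = 8

entries = {
    "ProfileSPD, Plain numberer":   43_180_731,
    "ProfileSPD, RCM numberer":     44_036_667,
    "SparseSPD, nested dissection":  9_803_302,
    "SparseSPD, minimum degree":     5_719_914,
}

for name, n in entries.items():
    mib = n * BYTES_PER_DOUBLE / 2**20
    print(f"{name}: {mib:.1f} MiB")
```

So the entries alone account for roughly 330 MiB in the profile solver versus under 50 MiB in the sparse solver; any gap beyond that must come from the solver's internal allocations.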
fmk
Site Admin
Posts: 5884
Joined: Fri Jun 11, 2004 2:33 pm
Location: UC Berkeley
Contact:

Post by fmk »

You cannot make a statement like that because it is model dependent. The fill-in that sparse matrix solvers deal with depends greatly on the model and the element connectivity. Typically, when students generate large models to look at solver performance and structure, the models are not very representative of actual large models. If you want to look at fill-in, etc., use some of the sparse matrices that have been collected from real-world situations and are used to test and compare sparse solvers:

[url]
http://people.sc.fsu.edu/~burkardt/data ... hbsmc.html
[/url]

I suggest you also look at the other sparse solvers.
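The collection linked above is distributed in Harwell-Boeing format, and many of the same matrices also circulate in the simpler Matrix Market format (in practice one would reach for scipy.io.hb_read or scipy.io.mmread). As a sketch of how such a test matrix can be pulled into a script for ordering/fill-in experiments, here is a minimal Matrix Market coordinate reader (an illustration only, handling just real general/symmetric matrices):

```python
def read_matrix_market(text):
    """Minimal reader for Matrix Market 'coordinate' data (a sketch:
    real general/symmetric matrices only, '%' comment lines skipped)."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header = lines[0].lower().split()          # %%MatrixMarket banner
    symmetric = "symmetric" in header
    body = [ln for ln in lines[1:] if not ln.startswith("%")]
    rows, cols, nnz = (int(tok) for tok in body[0].split())
    entries = {}
    for ln in body[1:1 + nnz]:
        i, j, v = ln.split()
        i, j, v = int(i) - 1, int(j) - 1, float(v)   # 1-based -> 0-based
        entries[(i, j)] = v
        if symmetric and i != j:                      # mirror lower triangle
            entries[(j, i)] = v
    return rows, cols, entries

sample = """%%MatrixMarket matrix coordinate real symmetric
% tiny 3x3 SPD example (hypothetical data, not from the collection)
3 3 4
1 1 4.0
2 2 4.0
3 3 4.0
2 1 -1.0
"""
rows, cols, a = read_matrix_market(sample)
print(rows, cols, len(a))   # 3 3 5  (symmetric entry mirrored)
```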

Post by autumnboy »

I do not mean the SymSparse solver is bad. On the contrary, in my test it performs better than ProfileSPD. What I question is the integration of the solvers into OpenSees: it allocates a lot of buffer space and uses up the memory, not the nonzero storage itself.