
Hi. Does Math.NET support multicore vector/matrix operations (without a native wrapper)?
I'm converting my program from C++ with Boost uBLAS to C# with Math.NET, and I want to accelerate the calculations with a multicore CPU.
If Math.NET supports multicore CPUs at the library level, my conversion work will be much simpler.


Nov 14, 2011 at 7:26 AM
Edited Nov 14, 2011 at 7:27 AM

I found MathNet.Numerics.Control.DisableParallelization and set it to true, but the calculation got faster! That's odd; my CPU is an i7 with 4 cores.
I repeated a 50-element DenseVector times 50x50 dense matrix multiplication 100,000 times. In the default state the calculation takes 4251 ms, but with MathNet.Numerics.Control.DisableParallelization set to true it takes 1429 ms. Is that correct? Why is disabling parallelization faster?
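The timing loop described above can be sketched roughly like this (a sketch assuming the Math.NET Numerics v2-era API, where `Control.DisableParallelization` and the value-filling `DenseMatrix`/`DenseVector` constructors are available; exact names and signatures may differ in other versions):

```csharp
using System;
using System.Diagnostics;
using MathNet.Numerics;
using MathNet.Numerics.LinearAlgebra.Double;

class Benchmark
{
    static void Main()
    {
        var matrix = new DenseMatrix(50, 50, 1.0);  // 50x50 matrix, every entry 1.0
        var vector = new DenseVector(50, 1.0);      // 50-element vector, every entry 1.0

        // Time the loop once with parallelization enabled (the default) ...
        Console.WriteLine("parallel:   " + Time(matrix, vector) + " ms");

        // ... and once with it disabled, as in the experiment above.
        Control.DisableParallelization = true;
        Console.WriteLine("sequential: " + Time(matrix, vector) + " ms");
    }

    static long Time(DenseMatrix m, DenseVector v)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++)
        {
            var result = m * v;  // 50x50 matrix times 50-element vector
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```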



Hi,
Yes, we do parallelize several (but not all) managed linear algebra operations on dense matrices/vectors.
It is expected that managed parallelization actually degrades performance on small data because it introduces a lot of overhead; that's why we disable it automatically on small matrices and vectors. It seems that in this case our threshold for disabling parallelization may be too low, though; we should check that.
Btw, we recently got a contribution for a new managed GEMM implementation with a very impressive speedup over the current implementation, but I believe it has not made it into the codebase yet. This should speed up such managed operations remarkably.
Thanks,
Chris



Thanks for the reply, Chris. So, is parallelization supported for sparse vectors/matrices? I'm using Math.NET for nonlinear optimization with Newton approximations, so I need to work with matrices of about 10000x10000 or 20000x20000 and vectors of order 20000 (or larger, and likely sparse). In that case it looks like parallelization would be more efficient :)
When will beta 3 be released? I have high hopes for Math.NET!



I'm currently completely reworking the managed sparse linear algebra algorithms. Unfortunately, for now they are less parallelized than the dense algorithms because their irregular structure makes them much harder to partition properly. But they'll still be much faster than dense vectors/matrices if the data is very sparse.
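As a rough sketch of why sparse types pay off for mostly-empty data (assuming the `SparseMatrix`/`SparseVector` types in `MathNet.Numerics.LinearAlgebra.Double`; only nonzero entries are stored, so a large, nearly empty matrix stays cheap):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseDemo
{
    static void Main()
    {
        // A large, mostly-empty matrix: only the diagonal entries are stored.
        int n = 1000;
        var a = new SparseMatrix(n, n);
        for (int i = 0; i < n; i++)
        {
            a[i, i] = 2.0;  // only the nonzeros consume memory
        }

        // A sparse vector with just two nonzero entries.
        var x = new SparseVector(n);
        x[0] = 1.0;
        x[n - 1] = 3.0;

        var b = a * x;  // sparse matrix-vector product
        Console.WriteLine(b[0]);      // 2
        Console.WriteLine(b[n - 1]);  // 6
    }
}
```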
Concerning beta 3: we've recently changed the versioning scheme (using semantic versioning instead of date-based versioning, see semver.org); the current version as of today is v2.1.2. As such there are no plans for a "beta 3"; instead we just continue with normal releases. The next release will be v2.2.
Thanks,
Chris

