
Math.NET Numerics SVD vs. LAPACK DGESVD

Dec 15, 2012 at 9:06 AM
Edited Dec 15, 2012 at 9:14 AM

Hi,

It may be a dumb question, but I have a program written in C that uses the dgesvd function from netlib (http://www.netlib.org/lapack/double/dgesvd.f).

Now I want to rewrite what the C program does in C#. I tried Math.NET Numerics' Double.DenseSvd, but I don't get the same results.

Can someone tell me what the difference between the two implementations is?

I used the following input matrix:

DenseMatrix A = new DenseMatrix(24, 3);
A[0, 0] = 0.00853749999998854;    A[0, 1] = 89.2522666666667;    A[0, 2] = 0;
A[1, 0] = -0.0175625000000537;    A[1, 1] = 55.1682666666667;    A[1, 2] = 0;
A[2, 0] = -0.0174625000000788;    A[2, 1] = 50.8299666666667;    A[2, 2] = 0;
A[3, 0] = 0.000137499999937063;   A[3, 1] = 32.6130666666667;    A[3, 2] = 0;
A[4, 0] = 0.025837499999966;      A[4, 1] = 2.34146666666666;    A[4, 2] = 0;
A[5, 0] = 0.0247375000000147;     A[5, 1] = 0.0117666666666638;  A[5, 2] = 0;
A[6, 0] = 0.00273749999996653;    A[6, 1] = -17.3512333333333;   A[6, 2] = 0;
A[7, 0] = -0.0143625000000611;    A[7, 1] = -27.1400333333333;   A[7, 2] = 0;
A[8, 0] = -0.0251625000000786;    A[8, 1] = -35.9106333333333;   A[8, 2] = 0;
A[9, 0] = -0.026762500000018;     A[9, 1] = -39.7074333333333;   A[9, 2] = 0;
A[10, 0] = -0.0261625000000549;   A[10, 1] = -43.1843333333333;  A[10, 2] = 0;
A[11, 0] = -0.010362500000042;    A[11, 1] = -53.1814333333333;  A[11, 2] = 0;
A[12, 0] = 0.0203374999999824;    A[12, 1] = -62.1617333333333;  A[12, 2] = 0;
A[13, 0] = 0.0272374999999556;    A[13, 1] = -63.9696333333333;  A[13, 2] = 0;
A[14, 0] = -0.0147625000000744;   A[14, 1] = -72.0996333333333;  A[14, 2] = 0;
A[15, 0] = -0.0130625000000464;   A[15, 1] = -69.6783333333333;  A[15, 2] = 0;
A[16, 0] = -0.00576250000005984;  A[16, 1] = -67.0790333333333;  A[16, 2] = 0;
A[17, 0] = 0.00063749999992524;   A[17, 1] = -0.530333333333337; A[17, 2] = 0;
A[18, 0] = -0.000762500000064392; A[18, 1] = 4.58916666666666;   A[18, 2] = 0;
A[19, 0] = 0.0177374999999529;    A[19, 1] = 27.9494666666667;   A[19, 2] = 0;
A[20, 0] = 0.0226374999999734;    A[20, 1] = 31.8685666666667;   A[20, 2] = 0;
A[21, 0] = 0.00643749999994725;   A[21, 1] = 81.9918666666667;   A[21, 2] = 0;
A[22, 0] = 0.00663750000001073;   A[22, 1] = 86.5695666666667;   A[22, 2] = 0;
A[23, 0] = 0.00853749999998854;   A[23, 1] = 88.8083666666667;   A[23, 2] = 0;

If I let Math.NET Numerics compute the SVD and output the VT matrix, I get the following result:

-0.000077323216972426711  -0.99999999701056008          0.0
-0.99999999701056008       0.000077323216972426711      0.0
 0.0                       0.0                          1.0

If I use the netlib implementation I get the following for VT:

-0.000077323216972426589  -0.99999999701056019          0.0
 0.99999999701056019      -0.000077323216972426589      0.0
 0.0                       0.0                          1.0
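
The two VT results above actually agree to roughly 14 decimal places once the sign of the second row is flipped. A quick check, sketched in plain Python with the values copied from the two matrices above:

```python
# VT as reported by Math.NET Numerics (values copied from the post above)
vt_mathnet = [
    [-0.000077323216972426711, -0.99999999701056008,     0.0],
    [-0.99999999701056008,      0.000077323216972426711, 0.0],
    [ 0.0,                      0.0,                     1.0],
]

# VT as reported by netlib's DGESVD
vt_netlib = [
    [-0.000077323216972426589, -0.99999999701056019,     0.0],
    [ 0.99999999701056019,     -0.000077323216972426589, 0.0],
    [ 0.0,                      0.0,                     1.0],
]

# Negate the second row of the netlib result, then compare elementwise.
flipped = [row[:] for row in vt_netlib]
flipped[1] = [-v for v in flipped[1]]

max_diff = max(abs(a - b)
               for ra, rb in zip(vt_mathnet, flipped)
               for a, b in zip(ra, rb))
print(max_diff)  # on the order of 1e-16
```

So the only substantive difference between the two outputs is the sign convention, not the values themselves.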

The U matrix is also different, but it is too large to post here...

Dec 16, 2012 at 9:10 AM

If I remember correctly, the managed Math.NET code is a port of the LINPACK SVD routine. I don't know the differences between the LAPACK and LINPACK implementations. The native provider uses LAPACK's DGESVD.  Is the U matrix as close as the VT matrix (which seems to be the same within 14 decimal places)?

Regards,
Marcus 

Dec 16, 2012 at 6:10 PM
Edited Dec 16, 2012 at 6:20 PM

The U matrix is as close as the VT matrix if you look at the absolute values, but the first column of the U matrix is negated.

The same holds for the VT matrix, except that there it is the second row that is negated. Now that I know Math.NET uses the LINPACK SVD instead of LAPACK's DGESVD routine, I'll try to find something with Google. I had thought Math.NET used LAPACK's DGESVD and was wondering why the results were different..

Thanks for that info!

Dec 18, 2012 at 9:27 PM

OK, after looking at the Math.NET Numerics code I found a comment saying that its SVD is equivalent to the GESVD LAPACK routine.

What is the difference between LAPACK GESVD and LAPACK DGESVD? This may be a stupid question, but I couldn't find anything on Google, and I'm still wondering why I get different results (in case GESVD and DGESVD are the same).

Thanks

Dec 19, 2012 at 4:59 AM
Edited Dec 19, 2012 at 5:10 AM

>that its SVD is equivalent to the GESVD LAPACK routine.

The comment was meant for the ILinearAlgebra interface and probably shouldn't have been copied to the managed code implementation. I'll remove it in both places to prevent future confusion.

>What is the difference between LAPACK GESVD and LAPACK DGESVD?

DGESVD is the double precision version of GESVD, SGESVD is the single precision, and so on.


The SVD is only unique up to sign (for real matrices; I think it is a little more complicated for complex ones): flipping the sign of a column of U together with the corresponding row of VT yields another valid decomposition of the same matrix. From a quick search it seems that LAPACK uses a QR-based algorithm to compute the SVD while LINPACK uses Lanczos iterations. That probably explains the differences in signs.
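
To make the sign ambiguity concrete, here is a small self-contained sketch (plain Python with hand-built factors rather than Math.NET or LAPACK, so the matrices are illustrative): if A = U·S·VT, then negating a column of U together with the matching row of VT reconstructs exactly the same A, because the two sign flips cancel.

```python
import math

def matmul(A, B):
    """Naive dense matrix product, enough for this 2x2 demonstration."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hand-built SVD factors: U orthogonal (a rotation), S diagonal, VT orthogonal.
t = 0.3
U  = [[math.cos(t), -math.sin(t)],
      [math.sin(t),  math.cos(t)]]
S  = [[3.0, 0.0],
      [0.0, 1.0]]
VT = [[1.0, 0.0],
      [0.0, 1.0]]

A = matmul(matmul(U, S), VT)

# Flip the sign of column 0 of U and, correspondingly, of row 0 of VT.
U2  = [[-row[0], row[1]] for row in U]
VT2 = [[-v for v in VT[0]], VT[1]]

A2 = matmul(matmul(U2, S), VT2)

# Both factorizations reconstruct the same A: the sign flips cancel.
max_diff = max(abs(A[i][j] - A2[i][j]) for i in range(2) for j in range(2))
print(max_diff)  # 0.0
```

Both (U, S, VT) and (U2, S, VT2) are therefore equally valid SVDs of A, which is why two correct implementations can legitimately disagree on the signs of individual columns of U and rows of VT.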

Dec 19, 2012 at 1:17 PM
Edited Dec 19, 2012 at 1:18 PM

Well, that would explain it. Thanks for your help, and sorry for some possibly stupid questions..

At least the GESVD vs. DGESVD question I should have been able to answer myself..