LargestEigenValueFinder bugs #3

Open
cdavidshaffer opened this issue May 4, 2023 · 5 comments

cdavidshaffer commented May 4, 2023

The LargestEigenValueFinder class has a bug that causes it to produce incorrect eigenvectors in some cases. When the iteration converges on the eigenvector quickly, the eigenvector of the transpose matrix may not have converged yet. This causes the #nextLargestEigenValueFinder method to pass an inaccurate matrix to the next finder. Normally this has no impact on the eigenvalues themselves, but it can significantly affect the eigenvectors. The solution is simple: add an instance variable transposeResult (the name was chosen to be symmetric with result, though transposeEigenvalue might be a better name) and modify #evaluateIteration so that it requires convergence of both the eigenvalue and the transposeEigenvalue:

```smalltalk
evaluateIteration
	"Iterate the product of the matrix of the eigen vector and the transpose.
	 (c) Copyrights Didier BESSET, 1999, all rights reserved.
	 Initial code: 6/1/99"

	| oldEigenvalue oldTransposeEigenvalue |
	oldEigenvalue := result.
	oldTransposeEigenvalue := transposeResult.
	transposeEigenvector := transposeEigenvector * matrix.
	transposeResult := transposeEigenvector at: 1.
	transposeEigenvector := transposeEigenvector * (1 / transposeResult).
	eigenvector := matrix * eigenvector.
	result := eigenvector at: 1.
	eigenvector := eigenvector * (1 / result).
	^oldEigenvalue isNil
		ifTrue: [2 * desiredPrecision]
		ifFalse: [((result - oldEigenvalue) abs
				+ (transposeResult - oldTransposeEigenvalue) abs) / 2]
```
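To see why an unconverged transpose eigenvector matters downstream: the next finder is built on a deflated matrix that uses transposeEigenvector directly, so any error in it is baked into every subsequent eigenpair. Here is a sketch of that deflation step, written as it would appear inside the finder (it uses the instance variables matrix, eigenvector and transposeEigenvector, and the same expression as #nextLargestEigenValueFinder, quoted in full below):

```smalltalk
"Sketch of the deflation used when chaining finders:
 deflated := matrix * (I - (v tensorProduct: w) / (w . v)),
 with v = eigenvector (right) and w = transposeEigenvector (left).
 If w has not converged, the deflated matrix -- and therefore every
 later eigenvector -- is inaccurate."
| norm deflated |
norm := 1 / (eigenvector * transposeEigenvector).
deflated := matrix * ((SymmetricMatrix identity: eigenvector size)
				- (eigenvector * norm tensorProduct: transposeEigenvector)).
```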

Here's a matrix which shows the difference:

```smalltalk
m := Matrix rows: #((0.5d 0.4d 0.1d) (0.3d 0.4d 0.3d) (0.2d 0.3d 0.5d)).
```

Any accurate eigenvalue finder will list (value -> vector) pairs -- note that many normalize the eigenvectors so that their last element is 1:

```
1          -> (1 1 1)
0.34142136 -> (-1.03099482 0.15873440 1)
0.05857864 -> (2.79570070 -3.33520499 1)
```

whereas the old algorithm gives:

```
1.0d->#(1.0d 1.0d 1.0d)
0.34142135624741d->#(-1.0609563302115d 0.14632393655588d 1.0d)
0.05857864376269d->#(2.8119936749989d -3.3745397045445d 1.0d)
```

with the supplied change we get:

```
1.0d->#(1.0d 1.0d 1.0d)
0.34142135625393d->#(-1.0309948197844d 0.15873440051694d 1.0d)
0.058578643762691d->#(2.7957007035743d -3.33520499022d 1.0d)
```
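For anyone who wants to reproduce the comparison, here is a minimal sketch using the same protocol as the unit test further down (the precision and iteration limit are illustrative; the printed vectors are normalized to a norm of 1, so rescale them so their last element is 1 before comparing with the lists above):

```smalltalk
"Minimal reproduction sketch. Each row of m sums to 1, so (1 1 1) is an
 exact right eigenvector and the first finder converges in one iteration,
 while the left (transpose) eigenvector has not converged yet -- exactly
 the situation that triggers the bug."
| m f |
m := Matrix rows: #(#(0.5d 0.4d 0.1d) #(0.3d 0.4d 0.3d) #(0.2d 0.3d 0.5d)).
f := (LargestEigenValueFinder matrix: m precision: 1.0d-10)
		maximumIterations: 200;
		yourself.
3 timesRepeat: [
	f evaluate.
	Transcript show: f eigenvalue printString , ' -> ' , f eigenvector printString; cr.
	f := f nextLargestEigenValueFinder ].
```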

I don't use Pharo (I use this code under VisualWorks), so I'm not going to go to the effort of a PR.

BTW, this bug will show up a lot in MCMC algorithms!

cdavidshaffer (Author) commented:
I also recommend the following change (which makes #nextLargestEigenValueFinder return a finder configured with the same value for maximumIterations as the original):

```smalltalk
nextLargestEigenValueFinder
	"Return an eigen value finder for the same eigen values of the receiver except the one found.
	 (c) Copyrights Didier BESSET, 1999, all rights reserved.
	 Initial code: 11/2/99"

	| norm |
	norm := 1 / (eigenvector * transposeEigenvector).
	^(self class
		matrix: matrix * ((SymmetricMatrix identity: eigenvector size)
					- (eigenvector * norm tensorProduct: transposeEigenvector))
		precision: desiredPrecision)
			maximumIterations: maximumIterations;
			yourself
```
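A quick sketch of the intended effect, assuming the protocol shown in the test below: without the change the chained finder presumably falls back to the default iteration limit of the iterative-process superclass; with it, the budget set on the first finder carries over.

```smalltalk
"Sketch: the second finder now keeps the 500-iteration budget of the
 first one instead of reverting to the default. m is the example matrix
 from the first comment."
| m f next |
m := Matrix rows: #(#(0.5d 0.4d 0.1d) #(0.3d 0.4d 0.3d) #(0.2d 0.3d 0.5d)).
f := (LargestEigenValueFinder matrix: m precision: 1.0d-10)
		maximumIterations: 500;
		yourself.
f evaluate.
next := f nextLargestEigenValueFinder.	"also limited to 500 iterations"
next evaluate.
```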

jordanmontt (Collaborator) commented:
Thanks for the feedback, we will investigate this

cdavidshaffer (Author) commented:
Here's a unit test from VisualWorks:

```smalltalk
rapidFirstEigenvalue
	"The first eigenvalue will be found very quickly (1 iteration) but the original
	 version of this code had a bug where the eigenvector of the transpose (the 'left'
	 eigenvector), which is needed to find the next eigenvalue, had not converged yet.
	 The eigenvalues and eigenvectors used here were produced with R. The eigenvectors
	 are normalized so that their last element is 1, which means that some transformation
	 is required to compare them to the eigenvectors produced by LargestEigenValueFinder,
	 which are normalized to a norm of 1."

	| m f firstExpectedEigenvalue firstExpectedEigenvector secondExpectedEigenvalue secondExpectedEigenvector thirdExpectedEigenvalue thirdExpectedEigenvector |
	m := Matrix rows: #(#(0.5d 0.4d 0.1d) #(0.3d 0.4d 0.3d) #(0.2d 0.3d 0.5d)).
	f := (LargestEigenValueFinder matrix: m precision: 1.0d-6)
			maximumIterations: 200;
			yourself.
	firstExpectedEigenvalue := 1.0d0.
	firstExpectedEigenvector := #(1.0d0 1.0d0 1.0d0) asVector / 3 sqrt.
	f evaluate.
	self
		assert: f eigenvalue
		equals: firstExpectedEigenvalue
		tolerance: 1.0d-6.
	self
		assertVector: f eigenvector
		equals: firstExpectedEigenvector
		tolerance: 1.0d-4.
	secondExpectedEigenvalue := 0.34142136d0.
	secondExpectedEigenvector := #(-1.03099482d0 0.15873440d0 1.0d0) asVector
			negated normalized.
	f := f nextLargestEigenValueFinder.
	f evaluate.
	self
		assert: f eigenvalue
		equals: secondExpectedEigenvalue
		tolerance: 1.0d-6.
	self
		assertVector: f eigenvector
		equals: secondExpectedEigenvector
		tolerance: 1.0d-4.
	thirdExpectedEigenvalue := 0.05857864d0.
	thirdExpectedEigenvector := #(2.79570070d0 -3.33520499d0 1.0d0) asVector
			normalized.
	f := f nextLargestEigenValueFinder.
	f evaluate.
	self
		assert: f eigenvalue
		equals: thirdExpectedEigenvalue
		tolerance: 1.0d-6.
	self
		assertVector: f eigenvector
		equals: thirdExpectedEigenvector
		tolerance: 1.0d-4
```
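assertVector:equals:tolerance: (and assert:equals:tolerance:) are not standard SUnit assertions; a sketch of the vector helper the test assumes, comparing element-wise within the tolerance:

```smalltalk
assertVector: actualVector equals: expectedVector tolerance: eps
	"Hypothetical helper assumed by the test above: both vectors must have
	 the same size and agree element-wise within eps."
	self assert: actualVector size equals: expectedVector size.
	1 to: expectedVector size do: [ :i |
		self assert: ((actualVector at: i) - (expectedVector at: i)) abs < eps ]
```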

Ducasse commented May 5, 2023

Thanks for the tests!
This is strange because we originally took the same code from Didier.

cdavidshaffer (Author) commented:
Yes, the original code had this bug. It only shows up on matrices where the algorithm converges rapidly on the eigenvalue, so it is likely Didier didn't notice it.
