Support of Exponentiator in Lambda Lanczos #2

Open
zhouyou-gu opened this issue Nov 11, 2023 · 9 comments

@zhouyou-gu

Does this Python interface support the Exponentiator from Lambda Lanczos? Thank you in advance!

@mrcdr
Owner

mrcdr commented Nov 13, 2023

Thank you for your comment! The interface is not currently supported, but it should be.
I have just started porting it, so please wait a while!

@zhouyou-gu
Author

Hi @mrcdr , thank you for the reply. Also, could you point me to some reference papers or books on this algorithm? I need to compute the exponential of a sparse matrix (~500x500, ~10 connections per row) as fast as possible (maybe using some randomized approximation, I am not sure). I am from wireless engineering and do not have much experience in linear-algebra computation (normally I just call a package function, but those do not seem fast enough this time). Thank you so much for your help!

@mrcdr
Owner

mrcdr commented Nov 15, 2023

I'm not a linear-algebra specialist either (sorry, my background is physics). But my library is based on the so-called Krylov subspace method, whose characteristics are summarized in Section 10 of the article https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Moler03.pdf . There are many Krylov-method articles across various fields, so I believe some can be found in yours.

By the way, does scipy.sparse.linalg.expm work fast enough in your case? It may be effective if you need the matrix exponential itself.
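
For example, a minimal sketch with a random sparse matrix of roughly the size mentioned above (the matrix values below are arbitrary placeholders) could look like:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm

# Random sparse matrix roughly matching the case above:
# 500 x 500 with about 10 nonzero entries per row (values are arbitrary).
rng = np.random.default_rng(0)
n, nnz_per_row = 500, 10
rows = np.repeat(np.arange(n), nnz_per_row)
cols = rng.integers(0, n, size=n * nnz_per_row)
vals = rng.standard_normal(n * nnz_per_row)
A = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

E = expm(A)  # full matrix exponential, returned as a sparse matrix
print(E.shape)
```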

@zhouyou-gu
Author

@mrcdr I am trying to implement the matrix multiplicative weights method [1], where a matrix exponential (expm) is needed in each iteration. As I wish to use this method in some real-time applications, I want to figure out how to design the expm step so that I can perhaps accelerate it with CUDA or something similar when the matrix is large (I am not sure whether that is possible). Nevertheless, I will try scipy.sparse.linalg.expm or your code first, and I will also read the paper you suggested. Thank you so much for the help.

[1] Arora, Sanjeev, and Satyen Kale. "A combinatorial, primal-dual approach to semidefinite programs." Proceedings of the thirty-ninth annual ACM symposium on Theory of computing. 2007.
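
For concreteness, a minimal sketch of the update at the core of [1] (a density matrix proportional to $\exp(-\varepsilon \sum_\tau M_\tau)$, normalized to unit trace; the function name, step size, and toy matrices below are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm  # dense expm; fine for small symmetric matrices

def mmw_density(loss_matrices, eps):
    """Matrix multiplicative weights update: rho proportional to
    exp(-eps * sum of observed symmetric loss matrices), unit trace."""
    S = -eps * sum(loss_matrices)
    W = expm(S)
    return W / np.trace(W)

# Toy usage with random symmetric "loss" matrices (illustrative only).
rng = np.random.default_rng(1)
losses = []
for t in range(3):
    M = rng.standard_normal((5, 5))
    losses.append((M + M.T) / 2)
    rho = mmw_density(losses, eps=0.1)
    print(t, np.trace(rho))  # trace stays ~1 at every iteration
```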

@mrcdr
Owner

mrcdr commented Nov 16, 2023

@zhouyou-gu Thank you for waiting! I just released the exponentiator-included version pylanczos==2.1.0. Though no samples of the exponentiator interface are available for now, its test code would be helpful.
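
Until a sample is added, the same action $\exp(A)v$ can also be sketched with SciPy for comparison; the snippet below uses scipy.sparse.linalg.expm_multiply (this is not the pylanczos interface, and the random symmetric matrix is only a placeholder):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Small random sparse symmetric matrix and a vector (illustrative only).
A = sp.random(500, 500, density=10 / 500, format="csr", random_state=2)
A = (A + A.T) / 2
v = np.random.default_rng(2).standard_normal(500)

w = expm_multiply(A, v)  # exp(A) @ v without forming exp(A) explicitly
print(w[:5])
```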

@zhouyou-gu
Author

Thank you so much for the update! I will try those test codes.

@zhouyou-gu
Author

Hi @mrcdr , I was trying to use the Exponentiator class to compute e^A @ v. Following the paper you shared, it seems we should be able to configure the number of iterations of the Lanczos method (the "m" in the Krylov subspace {v, Av, A^2 v, ..., A^m v}). However, I did not find an interface to set it up. Do you think it can be configured somehow? Also, is the output of the Lanczos iteration available somewhere in the Python interface (the orthonormal columns V and the tridiagonal real symmetric T)? Thank you so much!

@mrcdr
Owner

mrcdr commented Nov 28, 2023

@zhouyou-gu Thank you for your question.

You don't have to configure $m$ yourself. Lanczos exponentiation determines it automatically through the following iteration:

  1. Calculate $\lbrace v, Av, A^2v, \cdots, A^kv \rbrace$ and express an approximated vector $u_k \sim \exp(\alpha A)v$ as a linear combination of them.
  2. If the approximation is "good enough", that is, the most recently added Krylov vector $A^kv$ does not change $u_k$, exit the algorithm (now $m=k$). This can be checked by comparing $u_k$ with $u_{k-1}$, which was obtained in the previous iteration.
  3. If the approximation is not good enough, $k\to k+1$ and continue the iteration.

This approximation can be understood intuitively from the fact that the Taylor expansion

$$ v' = \exp(\alpha A) v = \sum_{k=0}^{\infty} \frac{\alpha^k}{k!} A^k v $$

gets more accurate as $k$ increases. Note that the Krylov subspace algorithm is much more efficient than directly calculating the powers $A^k$.

Due to this auto-determination, the library does not provide the internal data $V$ and $T$ (they are deeply embedded in the C++ implementation).
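
As a rough illustration of the iteration described above (not the library's C++ implementation; the function name, tolerance, and cap on $m$ are arbitrary), a self-contained Lanczos sketch that builds $V$ and $T$ explicitly and stops once $u_k$ stops changing might look like:

```python
import numpy as np
from scipy.linalg import expm  # dense expm of the small tridiagonal T only

def lanczos_expv(A, v, alpha=1.0, tol=1e-12, max_m=100):
    """Approximate exp(alpha * A) @ v for real symmetric A: grow the Krylov
    basis V and the tridiagonal projection T, and stop once the
    approximation u_k no longer changes."""
    beta0 = np.linalg.norm(v)
    V = [v / beta0]                      # orthonormal Krylov vectors v_1, v_2, ...
    alphas, betas = [], []               # diagonal / off-diagonal entries of T
    u_prev = None
    for k in range(1, max_m + 1):
        w = A @ V[-1]
        a = np.dot(V[-1], w)
        w = w - a * V[-1] - (betas[-1] * V[-2] if betas else 0.0)
        b = np.linalg.norm(w)
        alphas.append(a)
        betas.append(b)
        # k x k tridiagonal T_k and the approximation
        # u_k = beta0 * V_k @ expm(alpha * T_k) @ e_1.
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        u_k = beta0 * (np.column_stack(V) @ expm(alpha * T)[:, 0])
        if u_prev is not None and np.linalg.norm(u_k - u_prev) < tol:
            return u_k                   # converged: m = k
        if b < tol:
            return u_k                   # Krylov subspace exhausted
        u_prev = u_k
        V.append(w / b)
    return u_prev                        # hit max_m without meeting tol
```

Under these assumptions, the result of `lanczos_expv(A, v, alpha)` for a small symmetric `A` can be checked against `scipy.linalg.expm(alpha * A) @ v`.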

@zhouyou-gu
Author

Thank you so much for your explanation!
