If you have a basis of Chebyshev polynomials, for example, whose coefficient matrix looks like this:

```
[T00 T01 T02;
 T10 T11 0 ;
 T20 0   0 ]
```

these are stored like so:

```
[T00, T01, T10, T02, T11, T20]
```
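That flattened ordering walks the anti-diagonals (blocks of constant total degree) of the coefficient matrix. A minimal Python sketch of this (a hypothetical helper, not part of any library) reproduces the ordering above:

```python
def blocked_order(coeffs):
    """Flatten an upper-left-triangular coefficient matrix by anti-diagonals,
    i.e. by blocks of constant total degree d = i + j."""
    n = len(coeffs)
    return [coeffs[i][d - i] for d in range(n) for i in range(d + 1)]

C = [["T00", "T01", "T02"],
     ["T10", "T11", None],
     ["T20", None, None]]

print(blocked_order(C))  # ['T00', 'T01', 'T10', 'T02', 'T11', 'T20']
```

Each block `d` collects all coefficients of total degree `d`, which is why a degree-`N` truncation fills a triangle rather than a square.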
My question is: why? Would it not be more natural to simply keep using the matrix? Or perhaps reshape the matrix to a vector, so columns are concatenated together?
In my case, using the matrix directly means my differentiation operator has ~1e6 entries rather than ~1e12, which is the difference between feasible and impossible. (The reduction occurs because, in the matrix way of doing things, I can apply the 1D operators to each column/row, which amounts to a simple matrix-matrix multiplication.)
Of course, this comes at the cost of doubling the memory, but that seems more than worth it at this stage if it saves many orders of magnitude in memory and time when building operators and applying them.
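The size argument can be checked with a small NumPy sketch (illustrative only; `Dx` and `Dy` are random stand-ins for the 1D differentiation operators, not the library's actual ones). It verifies the identity vec(Dx C Dyᵀ) = (Dy ⊗ Dx) vec(C): acting on the coefficient matrix with the two small 1D operators gives the same result as applying one huge Kronecker-product operator to the flattened coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5                       # sizes of the two 1D bases
Dx = rng.standard_normal((m, m))  # stand-in 1D operator acting on columns
Dy = rng.standard_normal((n, n))  # stand-in 1D operator acting on rows
C = rng.standard_normal((m, n))   # coefficient matrix

# Matrix form: two small matrix-matrix products.
matrix_form = Dx @ C @ Dy.T

# Vector form: one large Kronecker operator on the column-major flattening.
vec = lambda A: A.flatten(order="F")
vector_form = np.kron(Dy, Dx) @ vec(C)

assert np.allclose(vec(matrix_form), vector_form)

# Storage: the Kronecker operator has (m*n)^2 entries, the 1D pair m^2 + n^2.
# At m = n = 1000 that is 1e12 vs 2e6 entries.
print((m * n) ** 2, m ** 2 + n ** 2)
```

This is the usual dense-storage comparison; sparse or banded operators shrink both counts, but the gap between the two forms remains.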
Luapulu changed the title from "Why are coefficients in tensor product bases represented in the way they are?" to "Why are coefficients in tensor product bases represented the way they are?" on Oct 5, 2021.
ClassicalOrthogonalPolynomials.jl will eventually have better support for working with matrices of coefficients, but for now I'd suggest doing it by hand.