Add regularization to Laplacian and preconditioners for ConjugateGradientPoissonSolver
#3848
base: main
Conversation
I'll add that I also tried zeroing the mean of the RHS of the Poisson equation, by inserting some code after this line in ConjugateGradientPoissonSolver.
Hi @ali-ramadhan @xkykai @jagoosw @Yixiao-Zhang this PR is ready to be viewed. I think we can merge it as is, since it implements a number of improvements, and continue the work in a new PR.

In short, this PR implements an optional "regularization" of the Laplace operator,

    L_δ p = ∇²p + δ p̄ ,    where p̄ = mean(p),

which has the property that the mean pressure is pinned by the right-hand side: taking the mean of L_δ p = R gives δ p̄ = mean(R). This PR also modifies the preconditioners for the solver to account for the regularization, so the preconditioners provide an approximate solution to the regularized Laplacian equation. This provides an additional benefit over regularizing the Laplacian equation alone.

I'm calling it with

    pressure_solver = ConjugateGradientPoissonSolver(grid; preconditioner, regularization=1)

Probably we can come up with a better name than "regularization". Note also that, as formulated, regularization is dimensional (it scales like 1/L²), so the choice regularization=1 is ad hoc. The regularization seems to stabilize the CG iteration, so that we can iterate all the way to the iteration cap without blowing up:

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|δ|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info: ... simulation initialization complete (2.976 seconds)
[ Info: Executing initial time step...
[ Info: ... initial time step complete (9.965 seconds).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 4.17e-04, mean(p): -5.98e-03, solver iterations: 4096, max|r|: 6.97e-03
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 6.88e-04, mean(p): 3.33e-03, solver iterations: 4096, max|r|: 2.21e-02
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 5.67e-04, mean(p): -4.26e-03, solver iterations: 4096, max|r|: 1.12e-02

Above, every solve runs to the iteration cap (solver iterations: 4096). Capping the iterations at 10 instead:

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|d|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info: ... simulation initialization complete (5.899 seconds)
[ Info: Executing initial time step...
[ Info: ... initial time step complete (29.592 ms).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 5.91e-04, mean(p): -9.52e-04, solver iterations: 10, max|r|: 1.52e-02
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 2.45e-04, mean(p): -7.63e-04, solver iterations: 10, max|r|: 5.91e-03
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 2.65e-04, mean(p): -5.79e-06, solver iterations: 10, max|r|: 7.22e-03

Turning regularization off with regularization = nothing:

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|d|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info: ... simulation initialization complete (2.973 seconds)
[ Info: Executing initial time step...
[ Info: ... initial time step complete (17.534 ms).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 4.30e-04, mean(p): -1.22e-16, solver iterations: 10, max|r|: 1.17e-02
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 2.24e-04, mean(p): 3.78e-17, solver iterations: 10, max|r|: 6.10e-03
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 1.16e-04, mean(p): 8.26e-17, solver iterations: 10, max|r|: 3.15e-03

So, in conclusion:
I think there is probably more work to do. First, I am not sure that the regularization is applied correctly; the rather bland results suggest there is room for improvement. Second, and separately, I wonder whether there is another way to stabilize the non-regularized CG iteration by applying the linear constraint within the CG solver (e.g. to the search direction). The solver may be more efficient if we apply the linear constraint directly (setting mean(p) = 0) rather than modifying the linear operator. This makes sense to me: when we modify the linear operator, the CG algorithm is also tasked with finding mean(p), whereas when we apply the constraint, we insert our desired solution for mean(p) directly. Feedback welcome; I think we should merge this PR and continue work in future PRs.
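To make the second idea concrete, here is a minimal sketch (plain Python, not Oceananigans code; all names are invented for illustration) of applying the constraint inside the CG iteration: the nullspace of a 1D periodic Laplacian (the constants) is projected out of the iterates, so mean(p) = 0 is enforced directly rather than by modifying the operator.

```python
def neg_laplacian(x):
    # Negative 1D periodic Laplacian: positive semi-definite, nullspace = constants
    n = len(x)
    return [2*x[i] - x[(i-1) % n] - x[(i+1) % n] for i in range(n)]

def remove_mean(x):
    m = sum(x) / len(x)
    return [xi - m for xi in x]

def cg(apply_A, b, maxiter=200, tol=1e-12, project=None):
    # Textbook CG with an optional projection applied to the iterates
    x = [0.0] * len(b)
    r = list(b)
    if project:
        r = project(r)          # keep the residual orthogonal to the nullspace
    p = list(r)
    rs = sum(ri*ri for ri in r)
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / sum(pi*Api for pi, Api in zip(p, Ap))
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*Api for ri, Api in zip(r, Ap)]
        if project:
            x = project(x)      # pin mean(x) = 0 directly
            r = project(r)      # guard against round-off drift into the nullspace
        rs_new = sum(ri*ri for ri in r)
        if rs_new < tol*tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = remove_mean([float(i % 3) for i in range(16)])   # compatible (mean-zero) RHS
p = cg(neg_laplacian, b, project=remove_mean)
```

In exact arithmetic the projections are no-ops once the RHS is mean-zero; their role is to keep round-off from exciting the nullspace, which is one hypothesis for why the unconstrained iteration drifts.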
Great job! Though I have not looked at the code, I have two comments:
using LinearAlgebra
using Oceananigans
using Oceananigans.Models.NonhydrostaticModels: ImmersedPoissonSolver
using Oceananigans.ImmersedBoundaries: active_cells_map, immersed_cell, mask_immersed_field!
using Oceananigans.Solvers: solve!
using Statistics: norm, mean
using Oceananigans.Solvers: precondition!
ENV["JULIA_DEBUG"] = "Solvers"
# ---------------------------------------------------------------------- #
# Define Parameters
# Numerics
const arch = CPU()
# Grid
const Nx = 10
const Ny = 10
const Nz = 10
const Lx = 1.0
const Ly = 1.0
const Lz = 1.0
const Δz = Lz / 2 # elevation difference at the top
# ---------------------------------------------------------------------- #
# Define Utils
# Height at Top
@inline function z_top(y::R) where {R<:Real}
# return Lz - Δz * sin(π/2 * y/Ly) - Δz * 0.2
return Lz - Δz
end
# ---------------------------------------------------------------------- #
# Define the Simulation
# Grid
ib_grid = begin
underlying_grid = RectilinearGrid(
arch,
size = (Nx, Ny, Nz),
x = (-Lx / 2, Lx / 2),
y = (0.0, Ly),
z = (0.0, Lz),
topology = (Periodic, Bounded, Bounded),
halo = (4, 4, 4),
)
@inline function is_ib(x::R, y::R, z::R) where {R<:Real}
return z > z_top(y)
# return false
end
ImmersedBoundaryGrid(
underlying_grid,
GridFittedBoundary(is_ib)
)
end
# pressure solver
pressure_solver = ImmersedPoissonSolver(
ib_grid,
preconditioner = :FFT,
solver_method = :PreconditionedConjugateGradient,
reltol = 0,
abstol = 0,
maxiter = 20,
)
# ---------------------------------------------------------------------- #
# test the solver
rhs = Field((Center, Center, Center), ib_grid)
output = Field((Center, Center, Center), ib_grid)
set!(output, 0.0)
output[1, 1, 1] = 1.0 # use a point source as the reference solution
pressure_solver.pcg_solver.linear_operation!(rhs, output)
rhs ./= norm(rhs)
set!(output, 0.0)
solve!(output, pressure_solver.pcg_solver, rhs)
# ---------------------------------------------------------------------- #
# calculate the eigenvalues
const c0 = 0.1
const do_gauge_fixing = true
const active_cells = active_cells_map(ib_grid, ib_grid.immersed_boundary)
const n_active_cells = length(active_cells)
if do_gauge_fixing
@assert c0 > 0
end
function apply_linear_operation!(w, v, solver)
solver.pcg_solver.linear_operation!(w, v)
if do_gauge_fixing
w .-= c0 * mean(v)
end
end
function apply_preconditioner!(w, v, solver)
precondition!(w, solver.pcg_solver.preconditioner, v)
mask_immersed_field!(w)
if do_gauge_fixing
w .-= inv(c0) * mean(v)
end
end
function get_matrix(f!, ib_grid, pressure_solver)
# define fields
rhs = Field((Center, Center, Center), ib_grid)
p = Field((Center, Center, Center), ib_grid)
# use a matrix to represent the preconditioner
matrix = zeros((n_active_cells, n_active_cells))
for i in 1:n_active_cells
rhs .= 0.
rhs.data[active_cells[i]...] = 1.0
f!(p, rhs, pressure_solver)
for j in 1:n_active_cells
v = p.data[active_cells[j]...]
matrix[i, j] = v
end
end
return -Symmetric(matrix)
end
const A = get_matrix(apply_linear_operation!, ib_grid, pressure_solver)
const M⁻¹ = get_matrix(apply_preconditioner!, ib_grid, pressure_solver)
C = cholesky(M⁻¹)
A1 = Symmetric(C.U * A * C.L)
@info "Distribution of eigenvalues" eigvals(A1)
# should be the same as A1
# @info "Distribution of eigenvalues" eigvals(A * M⁻¹)
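A side note on the last two (commented) lines of the script: with M⁻¹ = L·Lᵀ from the Cholesky factorization, Lᵀ·A·L is symmetric and similar to M⁻¹·A, so the two spectra agree while the symmetric form is cheaper and more robust to diagonalize. A tiny pure-Python check of that similarity, using made-up 2×2 matrices (not the script's actual operators) and power iteration for the dominant eigenvalue:

```python
import math

# Made-up SPD test matrices: A is the operator, Minv = L * L^T the "preconditioner"
A    = [[2.0, -1.0], [-1.0, 1.0]]
Minv = [[1.0,  0.5], [ 0.5, 1.0]]

# Hand-rolled 2x2 Cholesky factor L of Minv (lower triangular)
l00 = math.sqrt(Minv[0][0])
l10 = Minv[1][0] / l00
l11 = math.sqrt(Minv[1][1] - l10*l10)
L  = [[l00, 0.0], [l10, l11]]
LT = [[l00, l10], [0.0, l11]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dominant_eigval(M, iters=200):
    # Plain power iteration, returning the Rayleigh quotient at the end
    v = [1.0, 0.7]
    for _ in range(iters):
        w = [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]
        n = math.sqrt(w[0]**2 + w[1]**2)
        v = [w[0]/n, w[1]/n]
    w = [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]
    return v[0]*w[0] + v[1]*w[1]

MinvA = matmul(Minv, A)           # generally nonsymmetric, but similar to ...
LTAL  = matmul(LT, matmul(A, L))  # ... this symmetric matrix

print(dominant_eigval(MinvA), dominant_eigval(LTAL))
```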
mean_p = mean(p)

if !isnothing(δ)
    mean_rhs = mean(rhs)
In principle, the preconditioner solves the Poisson equation on the underlying grid. Following this logic, it is the mean of rhs calculated on the underlying grid that should be subtracted. I have not tested which is better, but I feel the mean on the underlying grid is more consistent mathematically. (I assume mean(rhs) here is the mean of rhs calculated on the immersed boundary grid.)
Correct, mean(rhs) accounts for the immersed boundary. My objective with this was to define a preconditioner that (approximately) solves the same problem as the CG iteration.

Put another way: if we use mean over the whole grid, then wouldn't the mean pressure produced by the preconditioner be different from the mean pressure that would minimize the conjugate gradient residual?

It's worth testing, though. To implement this properly, we would need to know the volume of the immersed region, which will require an additional computation.
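For what it's worth, the bookkeeping for that extra computation is simple in principle. A hypothetical sketch with invented toy data (uniform cells, immersed cells masked to zero), relating the mean over the whole underlying grid to the mean over the fluid cells via the fluid volume fraction:

```python
# Toy data: values on the underlying grid; the last two cells are immersed (solid)
values   = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0]
immersed = [False, False, False, False, True, True]

n_total = len(values)
n_fluid = sum(1 for m in immersed if not m)   # fluid "volume", in cell counts

full_mean  = sum(values) / n_total            # mean over the underlying grid
fluid_mean = sum(v for v, m in zip(values, immersed) if not m) / n_fluid

# With immersed cells masked to zero, the two means differ only by the fluid
# volume fraction, so knowing n_fluid converts one into the other:
recovered = full_mean * n_total / n_fluid
print(fluid_mean, recovered)
```

On a stretched grid the cell counts would be replaced by actual cell volumes, but the conversion has the same shape.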
if !isnothing(L.δ)
    # Add regularization
    ϕ̄ = mean(ϕ)
    parent(Lϕ) .+= L.δ * ϕ̄
Do we need to mask Lϕ after this operation? It seems to me that the masked region also gets a non-zero value after this line.
Oh, good catch. While dot and norm should ignore immersed regions, I'm actually not sure they do right now, so this may matter. I will test it.
    end
end

rhs = CenterField(grid)
operator = RegularizedLaplacian(regularization)
It might be a stupid question, but it seems to me that regularization and δ are set to a positive number, 1 for example in the demo. However, the Laplacian operator is actually negative semi-definite (although our previous conversations were about positive semi-definite operators), and thus regularization (or, say, δ) should be negative as well. Am I missing something?
Hmm, right... I played around with this and found that the solver "doesn't blow up" only if we use positive regularization. But this may actually be a clue that there's a bug somewhere.
I'm now convinced that there is a bug or some conceptual error given that this doesn't work with
From the experiment with a regularizer, but comparing Looking at divergence as well it even seems to me that
Actually I didn't look into that, I just implemented the regularization and ran a few simulations. I think it would be useful to investigate whether the residual is decreasing. It does seem like there may be a problem with the implementation, both given this and @Yixiao-Zhang's comment about the sign of the regularization.
After the considerations yesterday I made the following changes:
I'm no longer convinced that this PR actually makes progress on #3831. But it could be the basis for further investigation. I also think that
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@glwagner and @Yixiao-Zhang Thank you for looking into all of this!
I'm no longer convinced that this PR actually makes progress on #3831. But it could be the basis for further investigation.

If merging this PR means we don't need to manually play around with maxiter to avoid blowups, isn't that an improvement? Even if regularization doesn't improve the rate of convergence.
I'm curious to play around with this next week! I think I have a few more tests in mind beyond the MWE from #3831.
    return nothing
end

struct DefaultPreconditioner end

function ConjugateGradientPoissonSolver(grid;
                                        regularization = nothing,
I think it would be clearer to call this the regularization_constant. What do you think?
How about regularizer?
Works for me!
function regularize_poisson_solution!(p, regularized)
    δ = regularized.regularization
    rhs = regularized.rhs
    mean_p = mean(p)
I may have found why the solver blows up for positive regularization.

Here, p is the solution on the whole grid produced by the FFT Poisson solver, but mean_p calculates the mean of p on the immersed boundary grid. In my test, mean_p is usually of order 1e-5, which is much larger than round-off error. I also calculated the eigenvalues of the preconditioner with regularization = 0, and I found both negative and positive eigenvalues.

Removing the subtraction of mean_p enables convergence for positive regularization, according to my test.
That is interesting! I'll test it. I don't completely understand why this is what's needed, but let's think about it.
I'm still getting NaNs. What regularization did you use? Did you make any other changes?

EDIT: the regularization does seem to matter; now I get non-blow-up with regularization ~ 1/L^2.
I tested the code in a simpler case without FlatExtrapolationOpenBoundaryCondition. That may explain why the result is different. Besides, I usually check the distribution of eigenvalues to judge whether the solver is numerically stable, rather than running the simulation. I am looking into the case with FlatExtrapolationOpenBoundaryCondition and aim to understand how it changes the distribution of eigenvalues.

I just ran your demo and got NaNs as well.
Just to write out the idea: we are trying to solve a "regularized" Poisson equation,

    ∇²p + δ p̄ = R ,    where p̄ = mean(p) over the immersed boundary grid.

We're proposing to use a preconditioner that directly inverts (with an FFT --- fast) the "non-immersed" (but regularized) Laplace operator, where the mean inside the preconditioner is taken over the whole underlying grid.
By the way @Yixiao-Zhang I think this example really meaningfully exposes the effect of regularization, because it's only with this kind of open boundary condition that
I had not understood FlatExtrapolationOpenBoundaryCondition properly. As far as I understand, the pressure solver is unaware of the open boundary condition, but the open boundary condition should affect how fill_halo_region! works for rhs. The original boundary condition for rhs is

Right now, rhs is defined in this way:

rhs = CenterField(grid)

The good news is that the Laplace operator becomes negative definite if we take the open boundary into account. So, there is no need to regularize the Poisson solver. (The preconditioner is still semi-definite, though.)
"the pressure solver is unaware of the open boundary condition"

This isn't quite true. The inhomogeneous part of any boundary condition is incorporated into the Poisson equation RHS. This happens implicitly by (1) filling halo regions on the predictor velocity prior to the pressure solve, and then (2) taking the divergence of the predictor velocity. This is why the operator / FFT solve must use homogeneous Neumann conditions for the algorithm to work.
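To illustrate the mechanism, here is a 1D finite-volume sketch in plain Python (manufactured solution, invented names; none of this is Oceananigans code): an inhomogeneous Neumann flux can be absorbed into the RHS, after which the operator itself uses homogeneous Neumann conditions.

```python
# Manufactured solution u(x) = x^2/2 on [0, 1], so u'' = 1 with Neumann
# fluxes u'(0) = 0 and u'(1) = 1.
n = 16
h = 1.0 / n

# Exact cell averages of u over cell i = [i*h, (i+1)*h]
u = [(3*i*i + 3*i + 1) * h*h / 6 for i in range(n)]

def homogeneous_neumann_laplacian(u):
    # Divergence of face fluxes with ZERO flux imposed at both boundaries
    flux = [0.0] + [(u[i+1] - u[i]) / h for i in range(n-1)] + [0.0]
    return [(flux[i+1] - flux[i]) / h for i in range(n)]

# Inhomogeneous boundary fluxes
g0, g1 = 0.0, 1.0

# RHS: f = u'' = 1 everywhere, with the boundary fluxes absorbed into the
# first and last cells (this is the "inhomogeneous part goes into the RHS" step)
b = [1.0] * n
b[0] += g0 / h
b[-1] -= g1 / h

residual = max(abs(l - r) for l, r in zip(homogeneous_neumann_laplacian(u), b))
print(residual)
```

The residual vanishes: the homogeneous-Neumann operator applied to the exact solution reproduces the modified RHS, which is the 1D analogue of folding open-boundary information into the pressure RHS via the predictor-velocity divergence.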
This PR experiments with "regularizing" the Poisson operator in ConjugateGradientPoissonSolver in an attempt to resolve #3831. The issue there seems specific to FlatExtrapolationOpenBoundaryCondition. So far, the regularization in this PR doesn't seem to help. However, there might be some details in the implementation that need tweaking, which is why I'm opening this PR. There's related discussion on #3802.
cc @ali-ramadhan @jagoosw @Yixiao-Zhang @xkykai
Here is a script that I've been working with on this journey: