Recording primal/dual iterates #791
Comments
This is, in fact, exactly what @jarandh did 😄
It's one reason that we didn't really report computation times in the paper, because there was so much other stuff going on just so we could make this one pretty picture.
Haha :) understood! Do you think it might be worth it to create a CapExForwardPass that records iterates if the first stage is deterministic? Otherwise, I will see how the approach above works for me. Would evaluating a decision rule for the first stage be quicker than simulating a (dummy) trajectory?
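For reference, evaluating the first-stage decision rule that the question mentions might look like the following. This is only a sketch based on SDDP.jl's `DecisionRule`/`evaluate` API; the state name `:capacity` and its incoming value are illustrative placeholders.

```julia
# A minimal sketch, assuming SDDP.jl's DecisionRule/evaluate API.
# The state name :capacity and its incoming value are placeholders.
rule = SDDP.DecisionRule(model; node = 1)
solution = SDDP.evaluate(rule; incoming_state = Dict(:capacity => 0.0))
# `solution` should contain the stage objective and the outgoing (first-stage)
# state; check the SDDP.evaluate docstring for the exact fields returned.
```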
I just remembered that there is the `forward_pass_callback` option (SDDP.jl, line 1026 in 2fd1e8d). You could try:
```julia
trajectories = Any[]
SDDP.train(model; forward_pass_callback = trajectory -> push!(trajectories, trajectory))
```
Works like a charm! I used the following to obtain a DataFrame of first-stage states by iteration:
```julia
using DataFrames

capacities = Any[]
# This depends on the first node being deterministic.
SDDP.train(model; forward_pass_callback = trajectory -> push!(capacities, trajectory.sampled_states[1]))
capacity_df = reduce(vcat, [DataFrame(keys(c) .=> values(c)) for c in capacities])
```
Thanks!
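To turn `capacity_df` into the kind of convergence plot discussed above, a minimal sketch (assuming Plots.jl is installed and every column of `capacity_df` is numeric, one row per iteration and one column per state variable) could be:

```julia
# A hedged sketch: one line per first-stage state variable, x-axis = iteration.
using Plots
plot(
    Matrix(capacity_df);
    label = permutedims(names(capacity_df)),
    xlabel = "Iteration",
    ylabel = "First-stage state value",
)
```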
Hi, sorry to reopen this. I struggle to do this in a parallel version of my code. I've tried using SharedArrays.jl, but they do not admit Dict elements in the capacities vector. Alternatively, I could store the iterates locally on each worker and combine them later; I'm just not sure how to do that. I know, this is rather a question on distributed computing in general than one on SDDP.jl. Sorry about that :)
What parallel scheme? I'd advise against using the `Asynchronous` scheme. Start Julia with multiple threads (for example, `julia --threads=8`) and use:
```julia
capacities = Any[]
my_lock = Threads.ReentrantLock()

function callback(trajectory)
    # Guard the shared vector with a lock so that concurrent forward passes
    # cannot push! to it at the same time.
    Threads.lock(my_lock) do
        push!(capacities, trajectory.sampled_states[1])
    end
    return
end

SDDP.train(model; forward_pass_callback = callback, parallel_scheme = SDDP.Threaded())
```
Great, I will try this. Indeed, I've been using the asynchronous mode and it's been working quite well, but I'll make the switch to Threaded then.
I don't have an easy way to use
Note that `Threaded` works best when the number of nodes is >> the number of threads.
Thank you! That is nodes in a policy graph, right? Does that mean that you also parallelise the backward pass?
Yes, nodes in the graph. The forward and backward passes are conducted asynchronously in parallel across the threads.
Very cool! Thank you! First test run on my problem with Threaded looks very promising!
I'll point you to the docstring: https://sddp.dev/stable/apireference/#SDDP.Threaded It's still somewhat experimental. It should work for most standard use cases, but if you've written any custom plugins, you need to be very careful that they are themselves thread-safe. But yeah, assuming it works and you're running on a single machine, it is much, much better than before.
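To illustrate the earlier remark that `SDDP.Threaded()` works best when the number of nodes is much larger than the number of threads, here is a minimal, self-contained sketch; the 52-stage graph, the trivial subproblem, and the HiGHS optimizer are illustrative placeholders, not details from this discussion.

```julia
# Illustrative only: a linear policy graph with many more nodes (stages) than
# threads, trained with the threaded parallel scheme.
using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(;
    stages = 52,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 1.0)
    @stageobjective(sp, x.out)
end

SDDP.train(model; iteration_limit = 10, parallel_scheme = SDDP.Threaded())
```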
Thank you! It looks like the Threaded option does not work with a RegularizedForwardPass, right? As the latter leads to significant performance gains compared to a vanilla ForwardPass in my case, I'd be interested in fixing this.
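For context, a regularized forward pass is requested via the `forward_pass` keyword of `SDDP.train`. A minimal sketch with the default plugin options (run serially, since the plugin is the part that is not yet thread-safe) would be:

```julia
# A minimal sketch: request the regularized forward pass with default options.
# No parallel_scheme is passed, so this runs serially.
SDDP.train(
    model;
    forward_pass = SDDP.RegularizedForwardPass(),
    iteration_limit = 100,
)
```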
I assume we need to make it thread-safe.
The issue is that we modify the first stage for the forward pass: see SDDP.jl/src/plugins/forward_passes.jl, lines 356 to 370 in aa0cf1b. The fix might not be trivial.
I see! Well, it seems like the Threaded version without regularization is still much faster than serial with regularization, at least in my case. Thank you!
Hi Oscar,
in this working paper you show a very nice plot of how the first-stage states converge with the number of iterations. Is there a quick way to record the current (first-stage) choices for a given iteration?
I was going to do it like this but it seems a little impractical (I have not run this yet):
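(The code snippet from the original post was not captured here. A hypothetical sketch of the kind of approach described, training one iteration at a time and recording the first-stage decision by simulating a single dummy trajectory, assuming a deterministic first stage and an illustrative state named `:capacity`, might be:)

```julia
# Hypothetical reconstruction, not the original snippet: train one iteration at
# a time and record the first-stage state after each iteration by simulating a
# single (dummy) trajectory. The state name :capacity is illustrative.
first_stage_states = Dict{Symbol,Float64}[]
for iteration in 1:100
    SDDP.train(model; iteration_limit = 1, add_to_existing_cuts = iteration > 1)
    simulations = SDDP.simulate(model, 1, [:capacity])
    push!(first_stage_states, Dict(:capacity => simulations[1][1][:capacity].out))
end
```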
Thank you,
Felix