Replies: 2 comments 6 replies
-
I think this will be solved with the next release. Possibly, the memory inflation comes when JAGS transfers the result to R (via …
5 replies
-
Oh, I see now: it's in add_loglik(), but it requires edits in multiple places
and thorough testing to confirm it's correct.
…On Fri, Mar 19, 2021, 20:11 mattmoo ***@***.***> wrote:
I haven't got the loo solution sorted yet. By modifying add_loo, do you
mean modifying (or maybe masking) it in the brms package?
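For anyone landing here later, the "masking" route mentioned above could look roughly like the sketch below. It is only an illustration of the general technique: the function name add_loglik() is taken from this thread, and the namespace is assumed to be mcp (the thread also mentions brms), so neither is verified against the actual source.

```r
## Minimal sketch: temporarily replace a package-internal function in its namespace
## so a patched version is used without reinstalling the package.
## Assumption: the internal function is called add_loglik() and lives in the "mcp"
## namespace (names taken from this thread, not checked against the source).

library(utils)

# Grab the original so it can be inspected or restored later.
original_add_loglik <- getFromNamespace("add_loglik", ns = "mcp")

# A patched copy; here it just wraps the original and could be edited freely.
patched_add_loglik <- function(...) {
  message("patched add_loglik() called")
  original_add_loglik(...)
}

# Inject the patched version into the package namespace.
assignInNamespace("add_loglik", patched_add_loglik, ns = "mcp")

# ...run the model / loo code as usual, then restore the original:
assignInNamespace("add_loglik", original_add_loglik, ns = "mcp")
```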
1 reply
-
I hate to spam the discussions with my big data problems, but I wonder whether this is expected behaviour.
I am struggling to increase the number of chains in an analysis. N is about 60,000, and I am currently running 1,500 iterations (but that's too low; Rhat ~ 1.8 on some parameters) with 8 chains, in parallel on a 16-core Linux machine. The size of the resulting mcpfit is about as expected: 5.2 GB (~60,000 * 1,500 * 8 chains * 8 bytes). What I didn't expect is that each chain spawns a worker of about 5 GB, which builds quite rapidly from a few hundred MB towards the end of the analysis (so it fails right at the end of the hour-long run!).
That means I can't increase the number of iterations without significantly cutting down the number of chains. Of course this may be a JAGS problem, but do you think there would be a way to address it? (Or, alternatively, do some rough calculations and warn the user if memory is likely to be an issue.)
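A minimal sketch of what that rough calculation/warning could look like, using the same 8-bytes-per-double arithmetic as above. The function names, the 0.5 threshold, and the 64 GB figure are illustrative assumptions, not anything taken from mcp or JAGS.

```r
## Estimate the size of the stored samples from N, iterations, and chains
## (8 bytes per double, as in the calculation above) and warn if it looks
## large relative to the RAM the user says is available.

estimate_samples_gb <- function(n_obs, n_iter, n_chains, bytes_per_value = 8) {
  n_obs * n_iter * n_chains * bytes_per_value / 1e9
}

warn_if_memory_heavy <- function(n_obs, n_iter, n_chains, available_gb) {
  est_gb <- estimate_samples_gb(n_obs, n_iter, n_chains)
  # In the situation described above, each parallel worker also grew to roughly
  # the size of the full fit, so the true peak can be several times this estimate.
  if (est_gb > 0.5 * available_gb) {
    warning(sprintf(
      "Stored samples alone need ~%.1f GB; with %d parallel workers, peak memory may exceed the %.0f GB available.",
      est_gb, n_chains, available_gb
    ))
  }
  invisible(est_gb)
}

# Numbers from this comment: ~60,000 observations, 1,500 iterations, 8 chains.
warn_if_memory_heavy(n_obs = 60000, n_iter = 1500, n_chains = 8, available_gb = 64)
```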