Hey man,
I used your train function in my project because of its optimization. It runs the fit function in one batch and accelerates training quite a bit, thanks for that.
Problem is that your Bellman equation is slightly wrong. The original Bellman equation states that the best policy is the one that leads to the next state that yields the highest possible return.
Check out this blog or their sources: Deeplizard
Basically, instead of adding reward and next_qs[i], you want to add reward and max(next_qs).
I copy-pasted this from my code, so the variable names are different, but I think you get the point.
https://github.com/nuno-faria/tetris-ai/blob/4d01877100870e2a6a1ef84dc955354e534589ae/dqn_agent.py#L132C64-L132C64
Again, thanks for this cool optimization!
Keep up the good work.
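A minimal sketch of the two target computations under discussion, purely for illustration (the reward and next_qs names come from the post above; the discount value, the function names, and the rest are assumptions, not the repository's actual code):

```python
# Current behavior as described in the post: each sample bootstraps from the
# predicted value of its own next state.
def targets_per_sample(rewards, next_qs, discount=0.95):
    return [r + discount * q for r, q in zip(rewards, next_qs)]

# Change proposed in this issue: every sample bootstraps from the single
# highest predicted value in the batch.
def targets_batch_max(rewards, next_qs, discount=0.95):
    max_next_q = max(next_qs)
    return [r + discount * max_next_q for r in rewards]
```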
Thanks for the tip. I tested it, but the agent was not able to achieve higher scores. I don't think that change is correct, since each state ends up with the same new_q, which in this case is the new_q of the best play. As a result, the agent cannot really differentiate between the different plays, since their targets end up essentially equal.
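A quick numeric illustration of that point, using made-up numbers and an assumed discount of 0.95: when every sample bootstraps from max(next_qs), the targets differ only by their immediate rewards.

```python
rewards = [1.0, 4.0, 1.0]    # hypothetical rewards for three different plays
next_qs = [2.0, 10.0, 3.0]   # hypothetical predicted values of their next states
discount = 0.95

per_sample = [r + discount * q for r, q in zip(rewards, next_qs)]  # current behavior
batch_max = [r + discount * max(next_qs) for r in rewards]         # proposed change

print(per_sample)  # roughly [2.9, 13.5, 3.85]: each target reflects its own next state
print(batch_max)   # roughly [10.5, 13.5, 10.5]: the bootstrap term is identical for every play
```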
I also did not achieve significantly different results, so the impact of the change seems to be minor in practice.
The value of new_q still differs between plays in the reward component.
The Bellman equation states that, for any state-action pair at time t, the expected return is the immediate reward plus the discounted maximum expected return achievable from the next state over all possible actions. That is why max_next_q_value should be used.
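For reference, the target that comment describes is the standard Q-learning / Bellman optimality update (textbook form, not code from this repository), where the maximum is taken over the actions available in the next state:

$$Q(s_t, a_t) \leftarrow r_t + \gamma \max_{a'} Q(s_{t+1}, a')$$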