I run into trouble when trying to train on the suggested RGBD dataset using a GTX 1080 Ti:
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCStorage.cu:58
I have tried Python 3.6.4 with torch 1.0.1, 0.4.0, and 0.4.1; none of them works for training, although all of them are fine for testing. The weird thing is that both 0.4.0 and 0.4.1 take about 0.5 s per image, while 1.0.1 takes about 4 s per image. In any case, should I modify some command-line parameters to make training work on a GTX 1080 Ti?
(1) Reduce the batch size.
(2) If you downloaded my code, which was implemented for the 0.3.0 version, you'd better put `with torch.no_grad():` before the psnet() line (see the sketch below), or you can download from a different branch.
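A minimal sketch of point (2), assuming `psnet` is the model's forward call mentioned above and that the loader yields RGB/depth pairs (these names are placeholders, not the repo's exact API). In PyTorch 0.4+ the old 0.3.0-style `volatile=True` flag no longer works, so an evaluation loop left unwrapped keeps building the autograd graph and its activations, which is what exhausts the 11 GB on a GTX 1080 Ti:

```python
import torch

def evaluate(psnet, loader, device="cuda"):
    # Wrapping the forward pass in torch.no_grad() disables graph
    # construction, so intermediate activations are freed immediately
    # instead of being kept for a backward pass that never happens.
    psnet.eval()
    outputs = []
    with torch.no_grad():
        for rgb, depth in loader:
            rgb, depth = rgb.to(device), depth.to(device)
            pred = psnet(rgb, depth)   # inference only, no gradients stored
            outputs.append(pred.cpu())
    return outputs
```

For the training loop itself, `torch.no_grad()` cannot be used (gradients are needed), so reducing the batch size as in point (1) is the main lever there.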