Add observation term history support to Observation Manager #1439
Conversation
Commits:

- add history cfg option to observation manager
- Revert "add history cfg option to observation manager" (this reverts commit 4b00994b8c9ce6e58779150dc42a707b0bc6701f)
- add flatten history option to observation manager
- run formatter
- fix docstring
From the discussion in #1208, this PR introduces observation history tracking per-term. It was discussed that per-group tracking should also be implemented. In theory, the same effect can be had by giving all obs terms the same history length with the config provided in this PR. However, I want to reiterate what I think are the two best arguments for implementing per-group obs histories as well, and would be happy to hear feedback:
The PR as-is resolves per-term observation history tracking. If it is decided that Isaac Lab should only support per-term observation histories, then I am happy to close my comment/proposal with this PR. Otherwise, I'm happy to help implement per-group history tracking as well. Thanks for putting this together James!
I agree @KyleM73, the current implementation only covers the term level that we talked about (in a slightly different implementation). I think having the option for group level is still worthwhile. I also agree we should be able to apply group-level history via an overall group `history_length`. I was thinking, if set, the group-level history length would override the term history length within that group.
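A hypothetical sketch of that override semantics (nothing below is in this PR; the group-level `history_length` attribute and the helper name are assumptions for illustration):

```python
# Hypothetical: a group-level history_length, if set, wins over the per-term value.
def resolve_history_length(group_cfg, term_cfg) -> int:
    """Return the effective history length for a term within a group."""
    group_history = getattr(group_cfg, "history_length", None)
    if group_history is not None:
        # the group-level setting overrides whatever the term requested
        return group_history
    return term_cfg.history_length
```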
```diff
@@ -109,7 +121,7 @@ def append(self, data: torch.Tensor):
         # at the first call, initialize the buffer
         if self._buffer is None:
             self._pointer = -1
-            self._buffer = torch.empty((self.max_length, *data.shape), dtype=data.dtype, device=self._device)
+            self._buffer = torch.zeros((self.max_length, *data.shape), dtype=data.dtype, device=self._device)
```
@jtigue-bdai Isn't it better to initialize the buffer to the latest data on the first append, or right after a reset?
Otherwise, we assume that zero is a valid value for the observation, which may not always be true.
For example, if we are gathering a history of, let's say, the gravitational force, which may be defined as a strictly negative value, a buffer filled with zeros as previous observations may not be within the expected distribution.
I would suggest that, during reset or at first initialization, all the indices in the history are initialized to the most recent data, possibly the one being passed into `append()`.
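A minimal sketch of that suggestion, assuming the buffer layout from the diff above (a `(max_length, batch, ...)` tensor with a rolling pointer; the pointer-advance lines are assumed from context and are not part of the quoted diff):

```python
import torch

def append(self, data: torch.Tensor):
    """Sketch of append() with the proposed first-call initialization."""
    if self._buffer is None:
        self._pointer = -1
        # replicate the first sample across the whole history so that early
        # reads stay within the observed data distribution instead of zeros
        self._buffer = data.unsqueeze(0).repeat(self.max_length, *([1] * data.dim()))
    # advance the ring pointer and overwrite the oldest slot
    self._pointer = (self._pointer + 1) % self.max_length
    self._buffer[self._pointer] = data
```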
```diff
@@ -14,6 +14,7 @@
 from typing import TYPE_CHECKING

 from omni.isaac.lab.utils import modifiers
+from omni.isaac.lab.utils.buffers import CircularBuffer

 from .manager_base import ManagerBase, ManagerTermBase
 from .manager_term_cfg import ObservationGroupCfg, ObservationTermCfg
```
I cannot find how to provide a suggestion on lines that are not part of the changelist, so here we go.
In the method `def __str__(self) -> str:`, you may need to re-compute the observation dimensions depending on whether flattening is enabled or not. I see two deficiencies which can cause confusion:

- The printed summary does not account for the history length when computing the observation dimension, so the shape of the observation group shown in the summary is wrong.
- `self._group_obs_dim` is wrong and does not correspond to the actual observation dimension if history is used. This is quite critical to fix, IMO.

Hope this helps!
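A sketch of the kind of recomputation being asked for (the helper name and the choice to exclude the batch dimension are assumptions for illustration):

```python
import numpy as np

def effective_term_dims(obs_dims: tuple[int, ...], history_length: int, flatten: bool) -> tuple[int, ...]:
    """Per-env shape of a term once history is applied; obs_dims excludes num_envs."""
    if history_length > 0:
        # history samples are stacked along the first feature dimension
        obs_dims = (history_length * obs_dims[0], *obs_dims[1:])
        if flatten:
            obs_dims = (int(np.prod(obs_dims)),)
    return obs_dims

# e.g. a 3-dim term with a 4-step flattened history reports shape (12,)
assert effective_term_dims((3,), 4, True) == (12,)
```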
```python
# create history buffers
if term_cfg.history_length > 0:
    group_entry_history_buffer[term_name] = CircularBuffer(
        max_len=term_cfg.history_length, batch_size=self._env.num_envs, device=self._env.device
    )
# call function the first time to fill up dimensions
obs_dims = tuple(term_cfg.func(self._env, **term_cfg.params).shape)
```
Suggested change:

```python
# call function the first time to fill up dimensions
obs_dims = tuple(term_cfg.func(self._env, **term_cfg.params).shape)
# create history buffers
if term_cfg.history_length > 0:
    group_entry_history_buffer[term_name] = CircularBuffer(
        max_len=term_cfg.history_length,
        batch_size=self._env.num_envs,
        device=self._env.device,
    )
    obs_dims = (obs_dims[0], term_cfg.history_length * obs_dims[1], *obs_dims[2:])
    if term_cfg.flatten_history_dim:
        obs_dims = (obs_dims[0], np.prod(obs_dims[1:]))
```
This populates the correct `_group_obs_term_dim` into the dictionary. Later, this is used to compute `_group_obs_dim`; however, I assume that its computation does not need any changes.
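To make the bookkeeping concrete, a worked example with hypothetical shapes (a height-scan-like term of shape `(num_envs, 4, 3)` and a 5-step history):

```python
import numpy as np

num_envs = 1024
obs_dims = (num_envs, 4, 3)  # hypothetical term output shape
history_length = 5

# history stacks along dim 1: (1024, 20, 3)
obs_dims = (obs_dims[0], history_length * obs_dims[1], *obs_dims[2:])
# with flatten_history_dim=True the per-env history collapses: (1024, 60)
obs_dims = (obs_dims[0], int(np.prod(obs_dims[1:])))
```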
Description
This PR adds observation history by adding configuration parameters to observation terms and having the `ObservationManager` handle the collection and storage of the histories via `CircularBuffer`s.
Fixes #1208
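A minimal usage sketch under the config introduced here (the group/term names and the `mdp` functions are illustrative; `history_length` and `flatten_history_dim` are the new fields):

```python
from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
from omni.isaac.lab.utils import configclass

import omni.isaac.lab.envs.mdp as mdp  # standard observation functions


@configclass
class PolicyCfg(ObsGroup):
    """Example policy observations; joint_pos keeps a 5-step flattened history."""

    joint_pos = ObsTerm(func=mdp.joint_pos_rel, history_length=5, flatten_history_dim=True)
    joint_vel = ObsTerm(func=mdp.joint_vel_rel)  # no history for this term
```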
Type of change
Checklist

- I have run the `pre-commit` checks with `./isaaclab.sh --format`
- I have updated the changelog and the corresponding version in the extension's `config/extension.toml` file
- I have added my name to `CONTRIBUTORS.md` or my name already exists there