Hey, great that you came back to this issue. I have since switched to Linux, so I opted for GPU simulation. I don't know whether the same issue still exists on CPU.
Hello,
I am trying to train RL models using vectorized environments on CPU on Windows. Is this feasible? When I try, I get `RuntimeError: Some environments have an observation space different`.
I am using a custom environment, but I get the same error even with an existing environment like PushCube.
For reference, here is the section of code where I create the vectorized environments:
```python
import gymnasium as gym
# Import path may differ across ManiSkill versions:
from mani_skill.utils.wrappers import CPUGymWrapper

# Define the environment
env_id = "PickCube-v1"            # Your chosen environment
obs_mode = "rgbd"                 # Observation mode
control_mode = "pd_ee_delta_pos"  # Control mode
reward_mode = "normalized_dense"  # Reward mode
robot_uids = "panda"              # Robot type

def cpu_make_env(env_id, env_kwargs=dict()):
    def thunk():
        env = gym.make(env_id,
                       obs_mode=obs_mode,
                       reward_mode=reward_mode,
                       control_mode=control_mode,
                       robot_uids=robot_uids,
                       **env_kwargs)  # forward any extra kwargs
        env = CPUGymWrapper(env)
        return env
    return thunk
```
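For context, thunks like the ones above are handed to a vector environment, which raises the reported RuntimeError when the per-environment observation spaces do not all compare equal. Below is a minimal pure-Python sketch of that consistency check; `DummyEnv`, `make_env`, and `make_vec` are illustrative stand-ins, not ManiSkill or Gymnasium APIs:

```python
class DummyEnv:
    """Stand-in for a real env; a plain tuple plays the role of
    a gymnasium.spaces.Space for the equality check."""
    def __init__(self, obs_shape):
        self.observation_space = obs_shape

def make_env(obs_shape):
    """Return a thunk, mirroring cpu_make_env in the snippet above."""
    def thunk():
        return DummyEnv(obs_shape)
    return thunk

def make_vec(env_fns):
    """Instantiate all thunks and require identical observation spaces,
    as gymnasium-style vector envs do."""
    envs = [fn() for fn in env_fns]
    first = envs[0].observation_space
    if any(e.observation_space != first for e in envs):
        raise RuntimeError("Some environments have an observation space different")
    return envs

# Identical spaces across sub-envs: vectorization succeeds.
envs = make_vec([make_env((4,)) for _ in range(4)])
assert len(envs) == 4

# Mismatched spaces: reproduces the error from the report.
try:
    make_vec([make_env((4,)), make_env((5,))])
except RuntimeError as e:
    print("raised:", e)
```

This is why a custom environment whose observation layout varies between instances (or between resets) trips the same check as a stock task would if its kwargs differed per sub-environment.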
I followed the example shown in https://maniskill.readthedocs.io/en/latest/user_guide/reinforcement_learning/setup.html#evaluation and made some changes as required.
Thanks