added igibson environment files #992
base: pytorch
Conversation
Good job! Some preliminary comments. I haven't looked at igibson_tasks.py or run the test yet; will do those later.
@@ -0,0 +1,76 @@
# scene
The name of this yaml file is confusing; the bot config actually has nothing to do with "clip".
elif self.config['task'] == 'room_rearrangement':
    self.task = RoomRearrangementTask(self)
elif self.config['task'] == 'visual_object_nav':
    self.task = VisualObjectNavTask(self)
If the only difference from the base class is VisualObjectNavTask, then you can simply override this function as:
def load_task_setup(self):
    super().load_task_setup()
    if self.task is None and self.config['task'] == 'visual_object_nav':
        self.task = VisualObjectNavTask(self)
from alf.environments.igibson.igibson_tasks import VisualObjectNavTask


class iGibsonCustomEnv(iGibsonEnv):
Probably we should decorate this class with @alf.configurable, so that we can configure the env in an ALF config file.
filename = os.path.join(object_dir, f'{object_name}.urdf')
super(iGibsonObject, self).__init__(filename, scale)

def load(self):
Why do you want to have this function? It's exactly the same as load() of the base class Object:
https://github.com/StanfordVL/iGibson/blob/afe24c6aa95c8d4e74ce6d6b5eae2f479f2807ae/igibson/objects/object_base.py#L26
from igibson.objects.articulated_object import ArticulatedObject


class iGibsonObject(ArticulatedObject):
Seems to me that this class is unnecessary; it only does some preprocessing to convert a name to a path. You can just do that in your local script.
@alf.configurable()
class StackRGBDWrapper(gym.ObservationWrapper):
Probably we don't need this wrapper. We can instead define a network preprocessor NestConcat(dim=1) to concatenate the two input tensors along the channel dim (assuming it's dim 1).
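To illustrate the idea (this is not ALF's actual implementation of NestConcat), here is a minimal pure-Python sketch of concatenating per-key observation channels; concat_channels and the toy shapes are hypothetical:

```python
def concat_channels(obs_dict):
    """Concatenate all entries of a flat dict observation along the
    leading 'channel' axis (dict keys are visited in sorted order so
    the result is deterministic). A 3-channel rgb plus a 1-channel
    depth becomes a single 4-channel observation."""
    channels = []
    for key in sorted(obs_dict):
        channels.extend(obs_dict[key])
    return channels


# Tiny 1x2 "images": rgb has 3 channels, depth has 1.
rgb = [[1, 2], [3, 4], [5, 6]]
depth = [[9, 9]]
stacked = concat_channels({'rgb': rgb, 'depth': depth})
```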
return observation_space

def transform_observation(self, observation):
    return (observation * 255).astype(np.uint8)
This is not "one-hot". A one-hot vector is a zero vector of length N except that exactly one dim is 1, and you can only convert an integer 0 <= n < N (not a floating-point number) to a one-hot vector of length N.
If the input task observation is an integer, then we don't need a wrapper for this. Just use EmbeddingPreprocessor to convert it to an embedding when configuring the network.
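For reference, the definition given above can be written as a short sketch (one_hot is an illustrative helper, not an ALF API):

```python
def one_hot(n, num_classes):
    """Convert an integer 0 <= n < num_classes into a one-hot vector:
    a zero vector of length num_classes with exactly one dim set to 1.
    Floating-point inputs are rejected, per the definition above."""
    if not isinstance(n, int) or not 0 <= n < num_classes:
        raise ValueError('one-hot needs an integer in [0, num_classes)')
    vec = [0] * num_classes
    vec[n] = 1
    return vec
```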
@alf.configurable()
class ImageIntWrapper(BaseObservationWrapper):
Probably we should set the image type to uint8 in the first place, when it's generated by the simulator, instead of using a wrapper to do the conversion (which is inefficient).
vision_obs = self.sensors['vision'].get_obs(self)
if 'rgb' in vision_obs:
    rgb_obs = vision_obs['rgb']
    rgb_obs = (rgb_obs * 255).astype(np.uint8)
If both 'rgb' and 'depth' are generated as uint8 in the first place by the simulator, then the conversions here can be skipped.
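The conversion being discussed maps a float image in [0, 1] to uint8 in [0, 255]. A minimal per-pixel sketch of what `(rgb_obs * 255).astype(np.uint8)` computes (numpy truncates toward zero on the cast), assuming inputs are already clipped to [0, 1]:

```python
def float_to_uint8(x):
    """Map a float pixel in [0.0, 1.0] to an integer in [0, 255],
    truncating like numpy's astype(np.uint8). Emitting uint8 from the
    simulator avoids re-scanning every frame in a wrapper."""
    return min(255, max(0, int(x * 255)))
```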
@alf.configurable()
class GFTTupleInputWrapper(gym.ObservationWrapper):
Seems not quite necessary? Before calling GFT, if time_step.observation is a nest (any structure, including dict), you can first do alf.nest.flatten(time_step.observation) to convert it to a list.
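The flattening mentioned here can be sketched in plain Python (flatten_nest is illustrative; the real helper is alf.nest.flatten, whose exact leaf ordering should be checked against ALF's documentation):

```python
def flatten_nest(nest):
    """Depth-first flatten of an arbitrarily nested structure of
    dicts/lists/tuples into a flat list of leaves. Dict entries are
    visited in sorted-key order so the result is deterministic."""
    if isinstance(nest, dict):
        return [leaf for key in sorted(nest)
                for leaf in flatten_nest(nest[key])]
    if isinstance(nest, (list, tuple)):
        return [leaf for item in nest for leaf in flatten_nest(item)]
    return [nest]
```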
    at the target position
"""
super(VisualPointNavFixedTask, self).__init__(env)
self.reward_type = self.config.get('reward_type', 'l2')
According to ALF's naming convention, please prefix all class members with '_'.
from alf.environments.igibson.igibson_object import iGibsonObject


class VisualPointNavFixedTask(BaseTask):
It's better to derive this class from PointNavFixedTask, because most functions have already been implemented there; it's unnecessary to redo the work here.
super(VisualPointNavRandomTask, self).reset_agent(env)


class VisualObjectNavTask(BaseTask):
This class can be merged with VisualPointNavFixedTask and VisualPointNavRandomTask. All three tasks can be formulated as randomizing (or fixing) a set of target locations (number >= 1) on reset and asking the robot to reach the target location. In the case where there is no object concept, the object id can be an arbitrary integer (e.g., 0).
It's best to reduce redundant code as much as possible, so it's less error-prone and easier to maintain in the future.
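A hedged sketch of this unification (class and attribute names are hypothetical, not the PR's API): one task holds a set of candidate target locations, either fixes or randomizes the choice on reset, and uses a dummy object id when there is no object concept:

```python
import random


class UnifiedNavTask:
    """Single task covering fixed point-nav, random point-nav and
    object-nav: reset() picks a target from a candidate set, either
    always the first one (fixed) or uniformly at random."""

    def __init__(self, candidate_locations, randomize=True, object_id=0):
        assert len(candidate_locations) >= 1
        self._candidates = list(candidate_locations)
        self._randomize = randomize
        self._object_id = object_id  # arbitrary integer (e.g. 0) if no object
        self._target = self._candidates[0]

    def reset(self, rng=random):
        if self._randomize:
            self._target = rng.choice(self._candidates)
        else:
            self._target = self._candidates[0]
        return self._object_id, self._target
```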