
Question regarding the replay buffers and the Critic networks. (duplicates in the state) #43

Open · opt12 opened this issue Apr 3, 2020 · 0 comments

opt12 commented Apr 3, 2020

Hello everybody!

As far as I can see from the code, each agent maintains its own replay buffer.

In the training step, when the minibatch is sampled, the observations of all agents are collected and concatenated:

obs_n = []
obs_next_n = []
act_n = []
# 'index' holds the minibatch indices sampled beforehand (elided here).
for i in range(self.n):
    obs, act, rew, obs_next, done = agents[i].replay_buffer.sample_index(index)
    obs_n.append(obs)
    obs_next_n.append(obs_next)
    act_n.append(act)

As far as I can see, this would lead to duplicates in the state input to the agent's critic. If there are components of the environment state that are part of every agent's observation, these components would be contained in the critic's input multiple times.
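
To make the concern concrete, here is a toy sketch (my own illustration, not code from this repository) with two agents whose observations both contain the same global landmark position; names and dimensions are made up:

import numpy as np

landmark = np.array([0.3, -0.7])                      # shared environment component
obs_agent_0 = np.concatenate([landmark, [0.1, 0.0]])  # landmark + agent 0's own velocity
obs_agent_1 = np.concatenate([landmark, [0.0, 0.2]])  # landmark + agent 1's own velocity

# Centralized critic input: concatenation of all agents' observations.
critic_input = np.concatenate([obs_agent_0, obs_agent_1])
print(critic_input)
# [ 0.3 -0.7  0.1  0.   0.3 -0.7  0.   0.2]  -> the landmark coordinates appear twice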

Is this correct, or am I missing something?

Does this (artificial) state expansion have any adverse effects on the critic, or can we safely assume that the critic will quickly learn that the input values at some input nodes are always identical and can hence be treated jointly?

Are there any memory issues due to the shared state components being stored multiple times, once in each agent's replay buffer? (Memory is probably not a concern for RL people, but I have a background in embedded systems.)
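
For scale, a rough back-of-the-envelope calculation (all numbers are assumptions, just to show the multiplication factor for the shared components):

n_agents = 3
buffer_size = 1_000_000      # transitions per agent buffer (assumed)
shared_dim = 10              # shared environment components per observation (assumed)
bytes_per_float = 4          # float32

# Each agent stores its full observation, so the shared part is kept n_agents times.
stored = n_agents * buffer_size * shared_dim * bytes_per_float
needed = buffer_size * shared_dim * bytes_per_float
print(stored / 1e6, "MB stored vs.", needed / 1e6, "MB strictly needed")
# 120.0 MB stored vs. 40.0 MB strictly needed
# (storing obs_next as well doubles both figures; the ratio stays n_agents:1)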

I would be very grateful for some more insight on this topic.

Regards,
Felix
