Confusion about the intention #14

Open
zichunxx opened this issue Jul 11, 2023 · 0 comments
zichunxx commented Jul 11, 2023

Hi! @andrew-j-levy

I'm new to reinforcement learning and interested in your work.

After reading your article thoroughly, I'm confused about the intention behind solving long-horizon tasks with a goal-conditioned reward scheme.

In my opinion, a goal-conditioned reward can be treated as a sparse reward, which tends to perform poorly on long-horizon tasks.

So why not use a dense reward built from differentiable functions, which can guide training toward convergence (see the sketch below)? Besides, some tasks don't require many goals.
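
For concreteness, here is a minimal sketch of the two reward styles I have in mind. The function names, the distance threshold, and the NumPy-array goal representation are just my illustrative assumptions, not anything from your code:

```python
import numpy as np

def sparse_goal_reward(achieved_goal, desired_goal, threshold=0.05):
    """Goal-conditioned sparse reward: 0 when the goal is reached, -1 otherwise."""
    distance = np.linalg.norm(achieved_goal - desired_goal)
    return 0.0 if distance < threshold else -1.0

def dense_distance_reward(achieved_goal, desired_goal):
    """Dense shaped reward: negative Euclidean distance to the goal."""
    return -np.linalg.norm(achieved_goal - desired_goal)
```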

I don't know whether I'm making a valid point, and this may seem trivial to you, but I'd appreciate a response.

Thanks!
