Evaluation in WA differs from the official implementation #304

Open

hysyyds opened this issue Jan 14, 2025 · 2 comments

hysyyds commented Jan 14, 2025

Why is the evaluator called at every step? In the official WebArena implementation, evaluation happens only at the final step.

https://github.com/ServiceNow/BrowserGym/blob/ec6b802cd655f2c6a84ebd66a22a4435d8147272/browsergym/webarena/src/browsergym/webarena/task.py#L185C9-L185C11

https://github.com/web-arena-x/webarena/blob/df352854eef255b007110948f6d4f539af039717/run.py#L330

hysyyds closed this as completed Jan 14, 2025
hysyyds reopened this Jan 15, 2025
gasse (Collaborator) commented Feb 5, 2025

BrowserGym was originally designed without a stop action, which is why we call the evaluator at every step to detect, from the environment side, whether the task is done or not. It could be nice to have an (optional) stop action so that agents can run faster on WebArena.
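For context, here is a minimal sketch of that design, using assumed class and method names rather than the actual BrowserGym code: since the agent has no way to declare it is finished, the environment's `step()` asks the task to validate after every action and uses the result to set the done flag.

```python
# Minimal sketch of per-step evaluation, with assumed names
# (not the actual BrowserGym implementation).

class BrowserEnvSketch:
    def __init__(self, task, page):
        self.task = task  # assumed: exposes validate(page, chat_messages) -> (reward, done, msg, info)
        self.page = page  # current browser page handle

    def apply(self, action):
        # placeholder for executing the agent's action on the page
        pass

    def step(self, action):
        self.apply(action)
        # No stop action exists, so the environment asks the task evaluator
        # after every step whether the goal has been reached.
        reward, done, message, info = self.task.validate(self.page, chat_messages=[])
        return self.page, reward, done, info
```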

hysyyds (Author) commented Feb 6, 2025

> BrowserGym was originally designed without a stop action, which is why we call the evaluator at every step to detect, from the environment side, whether the task is done or not. It could be nice to have an (optional) stop action so that agents can run faster on WebArena.

I fully understand the idea of separating the environment implementation from specific task logic. However, there are two potential issues with this approach:

  1. Evaluating at every step might lead to the model stopping prematurely, such as when it simply lands on the answer page, which could inadvertently leak labels.

  2. Some evaluations in WebArena involve navigating away from the current page, which could result in losing unfinished operations on that page. The current implementation of page preservation in WebArena is ineffective (and I suspect it might be ineffective for most environments, as pages cannot be easily copied).

A possible solution would be to modify the evaluation implementation in the WebArena environment, ensuring that the actual evaluator is only invoked after the termination conditions are met.
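One way that proposal could look, sketched below with assumed names (the final-answer signal, the `run_official_evaluator` helper, and the `validate` signature are all hypothetical, not the actual BrowserGym/WebArena code): defer the real evaluator until the agent has explicitly signalled termination, and return a neutral "not done" result on intermediate steps so evaluation can neither leak labels nor navigate away from the current page.

```python
# Sketch of deferred evaluation, with assumed names and signatures
# (not the actual BrowserGym/WebArena code).

def validate(self, page, chat_messages):
    # Hypothetical termination check: the agent posted a final answer / stop message.
    agent_is_done = any(
        msg.get("role") == "assistant" and msg.get("message_type") == "final_answer"
        for msg in chat_messages
    )
    if not agent_is_done:
        # Intermediate step: skip the evaluator entirely, so no labels can leak
        # and no evaluation-time navigation disturbs the current page.
        return 0.0, False, "", {}

    # Termination condition met: invoke the official WebArena evaluator once.
    score = self.run_official_evaluator(page)  # hypothetical helper
    return score, True, "", {"score": score}
```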
