Update sampler in server configuration #33

Merged · 5 commits · Apr 10, 2024
21 changes: 19 additions & 2 deletions README.md
@@ -55,9 +55,26 @@ mkdir logs
 ```
 
 ## Easy Sample Server
-You can view an episode demo with default parameters using the following command:
+You can view an episode demo with default parameters with the following:
 ```python
-python -m sotopia_conf.server --gin_file="sotopia_conf/server_conf/server.gin" --gin_file="sotopia_conf/generation_utils_conf/generate.gin"
+import asyncio
+from sotopia.samplers import UniformSampler
+from sotopia.server import run_async_server
+
+asyncio.run(
+    run_async_server(
+        model_dict={
+            "env": "gpt-4",
+            "agent1": "gpt-3.5-turbo",
+            "agent2": "gpt-3.5-turbo",
+        },
+        sampler=UniformSampler(),
+    )
+)
 ```
+or run
+```bash
+python examples/minimalist_demo.py
+```
 
 ## Contribution
23 changes: 23 additions & 0 deletions examples/minimalist_demo.py
@@ -0,0 +1,23 @@
+# This demo serves as a minimal example of how to use the sotopia library.
+
+# 1. Import the sotopia library
+# 1.1. Import the `run_async_server` function: In sotopia, we use Python's
+#      async API to optimize throughput.
+import asyncio
+
+# 1.2. Import the `UniformSampler` class: In sotopia, we use samplers to
+#      sample the social tasks.
+from sotopia.samplers import UniformSampler
+from sotopia.server import run_async_server
+
+# 2. Run the server
+asyncio.run(
+    run_async_server(
+        model_dict={
+            "env": "gpt-4",
+            "agent1": "gpt-3.5-turbo",
+            "agent2": "gpt-3.5-turbo",
+        },
+        sampler=UniformSampler(),
+    )
+)
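The structure of the demo can be mirrored with toy stand-ins to show why the async API matters here: an async server can await both agents' model calls concurrently instead of serially. Everything below (`agent_turn`, the toy `UniformSampler` and `run_async_server`) is a hypothetical simplification for illustration, not sotopia's real implementation:

```python
import asyncio


class UniformSampler:
    """Toy sampler: in sotopia, samplers pick the task/agent combos."""

    def sample(self) -> tuple:
        return ("agent1", "agent2")


async def agent_turn(model: str) -> str:
    await asyncio.sleep(0)  # stands in for a slow, awaited model call
    return f"{model} says hi"


async def run_async_server(model_dict: dict, sampler: UniformSampler) -> list:
    a1, a2 = sampler.sample()
    # Both agent calls are awaited concurrently -- the throughput win of async.
    return list(
        await asyncio.gather(
            agent_turn(model_dict[a1]),
            agent_turn(model_dict[a2]),
        )
    )


messages = asyncio.run(
    run_async_server(
        model_dict={
            "env": "gpt-4",
            "agent1": "gpt-3.5-turbo",
            "agent2": "gpt-3.5-turbo",
        },
        sampler=UniformSampler(),
    )
)
```

With serial awaits the two calls would take the sum of their latencies; `asyncio.gather` overlaps them.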
2 changes: 1 addition & 1 deletion sotopia/generation_utils/generate.py
@@ -321,7 +321,7 @@ def obtain_chain(
     chat = ChatLiteLLM(
         model=model_name,
         temperature=temperature,
-        max_tokens=3072,  # tweak as needed
+        max_tokens=2700,  # tweak as needed
         max_retries=max_retries,
     )
     human_message_prompt = HumanMessagePromptTemplate(
2 changes: 1 addition & 1 deletion sotopia/server.py
@@ -317,7 +317,7 @@ def get_agent_class(
     if env_agent_combo_list:
         assert (
             type(sampler) is BaseSampler
-        ), "No sampler should be used when `env_agent_combo_list` is empty"
+        ), "No sampler should be used when `env_agent_combo_list` is not empty"
         env_agent_combo_iter = iter(env_agent_combo_list)
     else:
         env_agent_combo_iter = sampler.sample(
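The corrected message reflects the intended contract: when a nonempty `env_agent_combo_list` is supplied, the combos are iterated directly, and only the no-op `BaseSampler` placeholder is acceptable; otherwise the sampler produces the combos. A toy sketch of that dispatch, with simplified classes rather than sotopia's real ones:

```python
from typing import Iterator, List, Tuple

Combo = Tuple[str, str, str]  # simplified (env, agent1, agent2) triple


class BaseSampler:
    """Placeholder: sampling is an error; combos must come from an explicit list."""

    def sample(self, n: int) -> Iterator[Combo]:
        raise NotImplementedError


class UniformSampler(BaseSampler):
    def sample(self, n: int) -> Iterator[Combo]:
        return iter([("env", "a1", "a2")] * n)


def combo_iter(sampler: BaseSampler, env_agent_combo_list: List[Combo]) -> Iterator[Combo]:
    if env_agent_combo_list:
        # A real sampler alongside an explicit list would be silently ignored,
        # so the server treats that combination as a caller error.
        assert type(sampler) is BaseSampler, (
            "No sampler should be used when `env_agent_combo_list` is not empty"
        )
        return iter(env_agent_combo_list)
    return sampler.sample(1)


explicit = list(combo_iter(BaseSampler(), [("env", "x", "y")]))
sampled = list(combo_iter(UniformSampler(), []))
```

Note the exact-type check (`type(sampler) is BaseSampler`): a subclass such as `UniformSampler` is rejected, since any subclass implies the caller expected sampling to happen.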
2 changes: 2 additions & 0 deletions sotopia_conf/server_conf/server.gin
@@ -1,6 +1,7 @@
 from __gin__ import dynamic_registration
 import __main__ as train_script
 import sotopia.server as server
+import sotopia.samplers as samplers
 
 ACTION_ORDER="round-robin"
 TAG=None
@@ -14,3 +15,4 @@ server.run_async_server:
   using_async=True
   tag=%TAG
   omniscient=%OMNISCIENT
+  sampler=@samplers.UniformSampler()
ProKil marked this conversation as resolved.