Squashed commit of the following:
commit 742aaaf
Merge: f2d21e7 fe87faa
Author: biobootloader <[email protected]>
Date:   Fri Apr 14 15:44:12 2023 -0700

    Merge pull request biobootloader#13 from fsboehme/main

    more robust parsing of JSON (+ indentation)

commit fe87faa
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 17:49:48 2023 -0400

    cleanup

commit 4db9d1b
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 17:49:09 2023 -0400

    more cleanup

commit e1d0a79
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 17:46:18 2023 -0400

    cleanup

commit b044882
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 17:37:27 2023 -0400

    remove duplicate code from rebase

commit dd174cf
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 17:15:07 2023 -0400

    add DEFAULT_MODEL to .env.sample

    + fix typo

commit 2497fb8
Author: Felix Boehme <[email protected]>
Date:   Fri Apr 14 16:29:45 2023 -0400

    move json_validated_response to standalone function

commit 923f705
Author: Felix Boehme <[email protected]>
Date:   Thu Apr 13 11:35:24 2023 -0400

    update readme

    - updated readme to mention .env
    - added model arg back

commit 0656a83
Author: Felix Boehme <[email protected]>
Date:   Thu Apr 13 11:29:06 2023 -0400

    recursive calls if not json parsable

    - makes recursive calls to API (with a comment about it not being parsable) if response was not parsable
    - pass prompt.txt as system prompt
    - use env var for `DEFAULT_MODEL`
    - use env var for OPENAI_API_KEY

commit 7c072fb
Author: Felix Boehme <[email protected]>
Date:   Thu Apr 13 11:24:41 2023 -0400

    update prompt to make it pay attention to indentation

commit c62f91e
Author: Felix Boehme <[email protected]>
Date:   Thu Apr 13 11:23:44 2023 -0400

    Update .gitignore

commit f2d21e7
Merge: 0420860 6343f6f
Author: biobootloader <[email protected]>
Date:   Fri Apr 14 13:59:44 2023 -0700

    Merge pull request biobootloader#12 from chriscarrollsmith/main

    Implemented .env file API key storage

commit 6343f6f
Author: biobootloader <[email protected]>
Date:   Fri Apr 14 13:59:31 2023 -0700

    Apply suggestions from code review

commit d87ebfa
Merge: 9af5480 75f08e2
Author: Christopher Carroll Smith <[email protected]>
Date:   Fri Apr 14 16:53:25 2023 -0400

    Merge branch 'main' of https://github.com/chriscarrollsmith/wolverine

commit 9af5480
Author: Christopher Carroll Smith <[email protected]>
Date:   Fri Apr 14 16:53:02 2023 -0400

    Added python-dotenv to requirements.txt

commit 75f08e2
Merge: e8a8931 0420860
Author: Christopher Carroll Smith <[email protected]>
Date:   Fri Apr 14 16:50:29 2023 -0400

    Merge pull request biobootloader#1 from biobootloader/main

    Reconcile with master branch

commit 0420860
Merge: d547822 6afb4db
Author: biobootloader <[email protected]>
Date:   Fri Apr 14 13:22:53 2023 -0700

    Merge pull request biobootloader#20 from eltociear/patch-1

    fix typo in README.md

commit d547822
Merge: 1b9649e 4863df6
Author: biobootloader <[email protected]>
Date:   Fri Apr 14 13:19:43 2023 -0700

    Merge pull request biobootloader#17 from hemangjoshi37a/main

    added `star-history` ⭐⭐⭐⭐⭐

commit 6afb4db
Author: Ikko Eltociear Ashimine <[email protected]>
Date:   Fri Apr 14 16:37:05 2023 +0900

    fix typo in README.md

    reliablity -> reliability

commit 4863df6
Author: Hemang Joshi <[email protected]>
Date:   Fri Apr 14 10:27:32 2023 +0530

    added `star-history`

    added `star-history`

commit e8a8931
Author: Christopher Carroll Smith <[email protected]>
Date:   Wed Apr 12 13:45:54 2023 -0400

    Implemented .env file API key storage
prayagnshah committed Apr 15, 2023
1 parent e1c413f commit 946e15f
Showing 6 changed files with 94 additions and 31 deletions.
2 changes: 2 additions & 0 deletions .env.sample
@@ -0,0 +1,2 @@
OPENAI_API_KEY=your_api_key
#DEFAULT_MODEL=gpt-3.5-turbo
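
For reference, a minimal sketch of how these two variables are consumed — it simply mirrors the `load_dotenv()` / `os.getenv()` calls added to wolverine.py later in this diff; nothing here is new API:

# Sketch only: mirrors the loading code added to wolverine.py below.
import os

import openai
from dotenv import load_dotenv

load_dotenv()  # reads key/value pairs from .env into the environment
openai.api_key = os.getenv("OPENAI_API_KEY")

# DEFAULT_MODEL is optional; while the line in .env stays commented out,
# the code falls back to "gpt-4".
DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "gpt-4")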
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,2 +1,5 @@
venv
openai_key.txt
.venv
.env
env/
.vscode/
15 changes: 13 additions & 2 deletions README.md
@@ -13,8 +13,11 @@ For a quick demonstration see my [demo video on twitter](https://twitter.com/bio
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.sample .env

Add your openAI api key to `openai_key.txt` - _warning!_ by default this uses GPT-4 and may make many repeated calls to the api.
Add your OpenAI API key to `.env`

_warning!_ By default Wolverine uses GPT-4 and may make many repeated calls to the API.

## Example Usage

@@ -30,13 +33,21 @@ You can also use flag `--confirm=True` which will ask you `yes or no` before mak

python wolverine.py buggy_script.py "subtract" 20 3 --confirm=True

If you want to use GPT-3.5 by default instead of GPT-4 uncomment the default model line in `.env`:

DEFAULT_MODEL=gpt-3.5-turbo

## Future Plans

This is just a quick prototype I threw together in a few hours. There are many possible extensions and contributions are welcome:

- add flags to customize usage, such as asking for user confirmation before running changed code
- further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
- a suite of example buggy files that we can test prompts on to ensure reliablity and measure improvement
- a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
- multiple files / codebases: send GPT everything that appears in the stacktrace
- graceful handling of large files - should we just send GPT relevant classes / functions?
- extension to languages other than python

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=biobootloader/wolverine&type=Date)](https://star-history.com/#biobootloader/wolverine)
5 changes: 4 additions & 1 deletion prompt.txt
@@ -4,10 +4,13 @@ Because you are part of an automated system, the format you respond in is very s

In addition to the changes, please also provide short explanations of what went wrong. A single explanation is required, but if you think it's helpful, feel free to provide more explanations for groups of more complicated changes. Be careful to use proper indentation and spacing in your changes. An example response could be:

Be ABSOLUTELY SURE to include the CORRECT INDENTATION when making replacements.

example response:
[
{"explanation": "this is just an example, this would usually be a brief explanation of what went wrong"},
{"operation": "InsertAfter", "line": 10, "content": "x = 1\ny = 2\nz = x * y"},
{"operation": "Delete", "line": 15, "content": ""},
{"operation": "Replace", "line": 18, "content": "x += 1"},
{"operation": "Replace", "line": 18, "content": " x += 1"},
{"operation": "Delete", "line": 20, "content": ""}
]
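
As a rough illustration of how a response in this format gets consumed, here is a minimal sketch of applying the operations to a list of file lines. The helper name is made up for illustration; the real logic lives in `apply_changes()` in wolverine.py, which applies the Replace, Delete, and InsertAfter operations against the numbered lines it was sent. Working from the bottom of the file upward keeps earlier line numbers valid.

# Illustrative sketch (the helper name is made up); wolverine.py's
# apply_changes() implements the real version of this logic.
import json

def apply_operations_sketch(file_lines, response_text):
    changes = json.loads(response_text)
    operations = [c for c in changes if "operation" in c]
    # work from the bottom of the file up so line numbers stay valid
    operations.sort(key=lambda c: c["line"], reverse=True)
    for change in operations:
        op, line, content = change["operation"], change["line"], change["content"]
        if op == "Replace":
            file_lines[line - 1] = content + "\n"
        elif op == "Delete":
            del file_lines[line - 1]
        elif op == "InsertAfter":
            file_lines[line:line] = [part + "\n" for part in content.split("\n")]
    return file_lines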
1 change: 1 addition & 0 deletions requirements.txt
@@ -13,6 +13,7 @@ multidict==6.0.4
openai==0.27.2
pycodestyle==2.10.0
pyflakes==3.0.1
python-dotenv==1.0.0
requests==2.28.2
six==1.16.0
termcolor==2.2.0
97 changes: 70 additions & 27 deletions wolverine.py
@@ -5,13 +5,20 @@
import shutil
import subprocess
import sys

import openai
from termcolor import cprint
from dotenv import load_dotenv


# Set up the OpenAI API
with open("openai_key.txt") as f:
    openai.api_key = f.read().strip()
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "gpt-4")


with open("prompt.txt") as f:
    SYSTEM_PROMPT = f.read()


def run_script(script_name, script_args):
@@ -25,7 +32,49 @@ def run_script(script_name, script_args):
    return result.decode("utf-8"), 0


def send_error_to_gpt(file_path, args, error_message, model):
def json_validated_response(model, messages):
    """
    This function is needed because the API can return a non-json response.
    This will run recursively until a valid json response is returned.
    todo: might want to stop after a certain number of retries
    """
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0.5,
    )
    messages.append(response.choices[0].message)
    content = response.choices[0].message.content
    # see if json can be parsed
    try:
        json_start_index = content.index(
            "["
        ) # find the starting position of the JSON data
        json_data = content[
            json_start_index:
        ] # extract the JSON data from the response string
        json_response = json.loads(json_data)
    except (json.decoder.JSONDecodeError, ValueError) as e:
        cprint(f"{e}. Re-running the query.", "red")
        # debug
        cprint(f"\nGPT RESPONSE:\n\n{content}\n\n", "yellow")
        # append a user message that says the json is invalid
        messages.append(
            {
                "role": "user",
                "content": "Your response could not be parsed by json.loads. Please restate your last message as pure JSON.",
            }
        )
        # rerun the api call
        return json_validated_response(model, messages)
    except Exception as e:
        cprint(f"Unknown error: {e}", "red")
        cprint(f"\nGPT RESPONSE:\n\n{content}\n\n", "yellow")
        raise e
    return json_response


def send_error_to_gpt(file_path, args, error_message, model=DEFAULT_MODEL):
    with open(file_path, "r") as f:
        file_lines = f.readlines()

@@ -34,12 +83,7 @@ def send_error_to_gpt(file_path, args, error_message, model):
        file_with_lines.append(str(i + 1) + ": " + line)
    file_with_lines = "".join(file_with_lines)

with open("prompt.txt") as f:
initial_prompt_text = f.read()

prompt = (
initial_prompt_text +
"\n\n"
"Here is the script that needs fixing:\n\n"
f"{file_with_lines}\n\n"
"Here are the arguments it was provided:\n\n"
@@ -51,28 +95,27 @@ def send_error_to_gpt(file_path, args, error_message, model):
    )

    # print(prompt)
    messages = [
        {
            "role": "system",
            "content": SYSTEM_PROMPT,
        },
        {
            "role": "user",
            "content": prompt,
        },
    ]

    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],
        temperature=1.0,
    )

    return response.choices[0].message.content.strip()
    return json_validated_response(model, messages)


# Added the confirm flag. When the user passes the confirm flag, the script asks for confirmation before applying the changes.
def apply_changes(file_path, changes_json, confirm=False):
def apply_changes(file_path, changes: list, confirm=False):
"""
Pass changes as loaded json (list of dicts)
"""
with open(file_path, "r") as f:
original_file_lines = f.readlines()

changes = json.loads(changes_json)

# Filter out explanation elements
operation_changes = [change for change in changes if "operation" in change]
explanations = [
@@ -137,8 +180,7 @@ def apply_changes(file_path, changes_json, confirm=False):
print("Changes applied.")


# Added the confirm flag. When the user passes the confirm flag, the script asks for confirmation before applying the changes.
def main(script_name, *script_args, revert=False, model="gpt-4", confirm=False):
def main(script_name, *script_args, revert=False, model=DEFAULT_MODEL, confirm=False):
    if revert:
        backup_file = script_name + ".bak"
        if os.path.exists(backup_file):
@@ -169,6 +211,7 @@ def main(script_name, *script_args, revert=False, model="gpt-4", confirm=False):
                error_message=output,
                model=model,
            )

            apply_changes(script_name, json_response, confirm=confirm)
            cprint("Changes applied. Rerunning...", "blue")

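The docstring of json_validated_response() above leaves a TODO about stopping after a certain number of retries. One possible follow-up, not part of this commit, is a bounded-retry variant; the sketch below reuses the same openai.ChatCompletion call and JSON extraction shown in the diff, and is only an assumption about how that TODO might be addressed.

# Sketch of a possible bounded-retry variant (not part of this commit);
# it uses the same imports already present in wolverine.py.
import json

import openai
from termcolor import cprint

def json_validated_response_with_limit(model, messages, max_retries=3):
    for attempt in range(max_retries):
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0.5,
        )
        messages.append(response.choices[0].message)
        content = response.choices[0].message.content
        try:
            json_start_index = content.index("[")
            return json.loads(content[json_start_index:])
        except (json.decoder.JSONDecodeError, ValueError) as e:
            cprint(f"{e}. Retrying ({attempt + 1}/{max_retries}).", "red")
            messages.append(
                {
                    "role": "user",
                    "content": "Your response could not be parsed by json.loads. "
                    "Please restate your last message as pure JSON.",
                }
            )
    raise ValueError(f"No valid JSON response after {max_retries} attempts")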
