
Develop #34

Merged
merged 13 commits into from
Aug 25, 2024

Conversation

abhishek9sharma
Owner

No description provided.

abhishek9sharma and others added 9 commits May 16, 2024 23:28
Retrieve API key from OPENAI_API_KEY in case SARATHI_OPENAI_API_KEY is undefined for backward compatibility
* `retrieve_api_key()` function to retrieve API key from environment variable SARATHI_OPENAI_API_KEY or OPENAI_API_KEY (#54)
* Added new functions `retrieve_llm_url()`, `retrieve_model_name()`, and `call_llm_model()` to retrieve the OpenAI API endpoint URL, L…ling, and clean up code. Add default values for missing variables. Update print statements and remove unnecessary prints. Improve readability and maintainability.
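The backward-compatible key lookup described in these commits can be sketched roughly as follows. The function name `retrieve_api_key()` and the two environment variable names come from the PR; the exact body below is an assumption, not the repository's actual implementation:

```python
import os

def retrieve_api_key():
    """Return the OpenAI API key.

    Prefers SARATHI_OPENAI_API_KEY; falls back to OPENAI_API_KEY for
    backward compatibility when the Sarathi-specific variable is unset.
    """
    # os.environ.get returns None when the variable is undefined,
    # so `or` moves on to the legacy variable in that case.
    return os.environ.get("SARATHI_OPENAI_API_KEY") or os.environ.get("OPENAI_API_KEY")
```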
- Updated the `retrieve_model_name` function. This change ensures the use of a more advanced model when the environment variable is not set.
- Updated the `call_llm_model` function: if the model is not found in `prompt_info`, it falls back to using a default model retrieved from `retrieve_model_name()`. This improves robustness against missing model information.
- …retrieval, enhancing flexibility. Adjustments ensure that the model can be fetched directly from the prompt info dictionary if available, otherwise falling back to environment variables.
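Taken together, the fallback chain these commits describe might look like the sketch below. The names `retrieve_model_name` and `call_llm_model` appear in the PR, and `OPENAI_MODEL_NAME` is mentioned in a later commit; the default model string, the `prompt_info["model"]` key, and the overall body are assumptions for illustration:

```python
import os

DEFAULT_MODEL_NAME = "gpt-4"  # assumed default; the real value lives in the repo

def retrieve_model_name(prompt_info=None):
    """Resolve the model name: env var first, then prompt_info, then a default."""
    env_model = os.environ.get("OPENAI_MODEL_NAME")
    if env_model:
        return env_model
    if prompt_info and prompt_info.get("model"):
        return prompt_info["model"]
    return DEFAULT_MODEL_NAME

def call_llm_model(prompt_info):
    """Use the model from prompt_info if present, otherwise fall back."""
    model = prompt_info.get("model") or retrieve_model_name(prompt_info)
    # ... the actual endpoint call (via retrieve_llm_url()) is omitted here
    return model
```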
…r the CLI coding assistant, including support for local LLM model endpoints.

- Refactored `call_llm.py` to improve model name retrieval and handling.
…Isort.

- Modify `retrieve_model_name` function to use the model from `prompt_info` as default if `OPENAI_MODEL_NAME` is not set, ensuring better model retrieval.
@@ -71,6 +71,7 @@ def execute_cmd(args):
elif args.git_sub_cmd == "autocommit":
generated_commit_msg = generate_commit_message()
if generated_commit_msg:
print("**Below is the generated commit messaged **\n")
Collaborator

@abhishek9sharma I think `messaged` here should be `message`.

- Updated message from "messaged" to "message" for clarity and correctness.
@abhishek9sharma abhishek9sharma merged commit d5f53af into main Aug 25, 2024
2 participants