Danai is a utility package designed to streamline interactions with OpenAI models. It offers tools for token counting, pricing calculations, quick queries, directory summarisation, and JSON response management.
Danai simplifies moving between different output types – particularly structured data – and helps you manage and analyse the token usage and costs associated with OpenAI's language models.
- Token Counting: Count the number of tokens in text files or strings using specified OpenAI models.
- Pricing Calculation: Calculate the cost of tokens generated by OpenAI models based on pricing data, which is routinely updated from OpenAI.
- Quick Queries: Quickly generate responses from OpenAI models and print them to the console or to timestamped files.
- Directory Summarisation: Print directory contents to collated markdown files and generate directory trees, while ignoring certain files and directories.
- JSON Response Saving: Save OpenAI API responses as JSON files with optional pretty-printing and cost calculation.
To install Danai, use pip:
```bash
pip install danai
```
Count tokens in a text file:
```python
from danai import tokencount_file

token_count = tokencount_file("path/to/textfile.txt", model="gpt-4o")
print(f"Token count: {token_count}")
```
Count tokens in a text string:
```python
from danai import tokencount_text

text = "Your text here"
token_count = tokencount_text(text, model="gpt-4o")
print(f"Token count: {token_count}")
```
Calculate the cost of tokens generated by an OpenAI model:
```python
from danai import pricecheck

response = ...  # Your OpenAI API response object
cost_details = pricecheck(response)
print(cost_details)
```
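For reference, a response object like the one above would typically come from the official openai Python client. The sketch below is illustrative only: the model name and prompt are placeholders, and it assumes pricecheck accepts a chat completion response object as shown in the snippet above.

```python
from openai import OpenAI
from danai import pricecheck

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Make an ordinary chat completion request...
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise the plot of Hamlet in one sentence."}],
)

# ...and pass the response straight to pricecheck to get its cost details.
print(pricecheck(response))
```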
Generate a quick response from an OpenAI model and print it:
```python
from danai import quickprint

prompt = "Your prompt here"
quickprint(prompt, model="gpt-4o-mini")
```
Print the contents of a directory to collated markdown files, ignoring specified directories, files, and extensions:
```python
from danai import print_directory_contents

print_directory_contents(
    directory="path/to/directory",
    output_dir="path/to/output",
    ignore_dirs=[".git", "__pycache__"],
    ignore_files=[".DS_Store"],
    ignore_extensions=[".pyc"]
)
```
Generate a directory tree:
```python
from danai import print_directory_tree

print_directory_tree(
    directory="path/to/directory",
    output_dir="path/to/output",
    ignore_dirs=[".git", "__pycache__"]
)
```
Join summaries of directory contents and tree:
```python
from danai import join_summaries

join_summaries(output_directory="path/to/output")
```
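The three helpers above can be chained into a single summarisation pass. The sketch below simply reuses the calls shown earlier and assumes the contents and tree summaries share one output directory before being joined.

```python
from danai import print_directory_contents, print_directory_tree, join_summaries

output_dir = "path/to/output"

# Dump the file contents and the directory tree to the same output directory,
# then merge the resulting summaries into a single collated file.
print_directory_contents(
    directory="path/to/directory",
    output_dir=output_dir,
    ignore_dirs=[".git", "__pycache__"],
    ignore_files=[".DS_Store"],
    ignore_extensions=[".pyc"],
)
print_directory_tree(
    directory="path/to/directory",
    output_dir=output_dir,
    ignore_dirs=[".git", "__pycache__"],
)
join_summaries(output_directory=output_dir)
```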
Save an OpenAI API response as a JSON file:
```python
from danai import jsonsave

response = ...  # Your OpenAI API response object
jsonsave(response, filename="response", directory="outputs", overwrite=False, pretty=True, price=True)
```
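As with pricecheck, the response would normally come from the openai client. The snippet below is a sketch under that assumption; the filename and prompt are placeholders, and the exact layout of the saved JSON is determined by Danai.

```python
from openai import OpenAI
from danai import jsonsave

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "List three uses for a paperclip."}],
)

# Save the response as pretty-printed JSON, including the calculated cost.
jsonsave(response, filename="paperclips", directory="outputs", pretty=True, price=True)
```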
Danai provides several command-line interface (CLI) tools for convenience:
- danai: The main CLI tool.
- tcount: A tool for counting tokens in text files or strings.
- printsetup: A tool for printing setup information.
To count tokens in a text file using the CLI:
```bash
tcount path/to/textfile.txt --model gpt-4o
```
To print setup information using the CLI:
```bash
danai printsetup
```
- Python: Requires Python 3.7 or higher.
- Environment Variables: Ensure that `OPENAI_API_KEY` is set as an environment variable. This key is necessary for authenticating with the OpenAI API.
To set the `OPENAI_API_KEY` environment variable on a Mac, you can add the following line to your `.bash_profile` or `.zshrc` file:
```bash
export OPENAI_API_KEY="your_openai_api_key"
```
Then, reload your profile:
```bash
source ~/.bash_profile  # or source ~/.zshrc
```
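To confirm the key is visible to Python before calling Danai, a quick standard-library check is enough; this snippet is a convenience and not part of Danai itself.

```python
import os

# Danai (via the OpenAI API) reads OPENAI_API_KEY from the environment.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; export it before using Danai.")
print("OPENAI_API_KEY is set.")
```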
We welcome contributions from the community! If you would like to contribute to Danai, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Make your changes and commit them with clear and concise messages.
- Push your changes to your fork.
- Submit a pull request to the main repository.
Please ensure that your code adheres to the project's coding standards and includes appropriate tests.
This project is licensed under the MIT License. See the LICENSE file for details.
Aidan Coughlan - [email protected]