This project sets up a simple Flask server that exposes an API for transcribing audio files with OpenAI's Whisper model. Users upload an audio file to an API endpoint, Whisper processes it, and the transcription is returned in the response.
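For orientation, here is a minimal sketch of what the server in `app.py` might look like. It is an illustration, not the exact implementation in this repository: the `/transcribe` route and the `file` form field are taken from the usage example below, while the `base` model size and the JSON response shape are assumptions.

```python
# Minimal sketch of a Whisper transcription endpoint (assumptions noted above).
import os
import tempfile

import whisper
from flask import Flask, jsonify, request

app = Flask(__name__)
model = whisper.load_model("base")  # assumed model size; downloads weights on first use


@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Expect the audio under the multipart form field "file" (matches the curl example below).
    uploaded = request.files.get("file")
    if uploaded is None:
        return jsonify({"error": "no file provided"}), 400

    # Whisper reads from a path, so persist the upload to a temporary file first.
    suffix = os.path.splitext(uploaded.filename or "")[1]
    with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as tmp:
        uploaded.save(tmp)
        path = tmp.name

    try:
        result = model.transcribe(path)
    finally:
        os.remove(path)

    return jsonify({"text": result["text"]})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```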
Before you begin, ensure you have met the following requirements:
- Python 3.6 or higher
- Pip (Python package manager)
- FFmpeg (for audio processing)
To install the necessary dependencies for this project, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/sauravpanda/whisper-service.git
  cd whisper-service
  ```
- Install the required Python packages:

  ```bash
  pip install -r requirements.txt
  ```

  This installs Flask and Whisper.
- Make sure `ffmpeg` is installed on your system (a quick way to verify the setup is sketched after this list):
  - For Ubuntu/Debian: `sudo apt-get install ffmpeg`
  - For Fedora: `sudo dnf install ffmpeg`
  - For macOS (with Homebrew): `brew install ffmpeg`
  - For Windows, download and install from [FFmpeg's official site](https://ffmpeg.org/download.html).
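Once the dependencies are installed, a quick sanity check like the one below can confirm that both `ffmpeg` and the Whisper package are usable. This snippet is not part of the project; the `base` model name is an assumed default and downloads weights on first run.

```python
# Optional environment check (not part of the project code).
import shutil

import whisper  # provided by the Whisper package installed above

# ffmpeg must be on PATH for Whisper to decode audio files.
if shutil.which("ffmpeg") is None:
    raise SystemExit("ffmpeg not found on PATH")

# Loading a model verifies the Whisper install; "base" is just an assumed default.
model = whisper.load_model("base")
print("Whisper and ffmpeg look ready.")
```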
To run the server, execute:

```bash
python app.py
```

This will start the Flask server on http://localhost:5000.
To transcribe an audio file, send a POST request to the `/transcribe` endpoint with the audio file. For example, using `curl`:

```bash
curl -X POST -F 'file=@sounds/southpark-the-coon.mp3' http://localhost:5000/transcribe
```

Replace `sounds/southpark-the-coon.mp3` with the path to your audio file.
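The same request can be made from Python with the `requests` library. The sketch below assumes the server responds with JSON containing the transcription (for example a `text` field, as in the hypothetical server sketch above); adjust the response handling to whatever `app.py` actually returns.

```python
# Hypothetical client for the /transcribe endpoint; the response fields are assumptions.
import requests

audio_path = "sounds/southpark-the-coon.mp3"  # replace with your audio file

with open(audio_path, "rb") as f:
    # The form field name "file" matches the curl example above.
    response = requests.post(
        "http://localhost:5000/transcribe",
        files={"file": f},
    )

response.raise_for_status()
print(response.json())  # e.g. {"text": "..."} if the server returns JSON
```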
Contributions to this project are welcome. To contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -am 'Add some feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a new Pull Request.
This project is licensed under the MIT License.
If you have any questions or feedback, please contact me at [[email protected]].