A small utility for batch file processing using AI.
- Install Ollama
- Download a model for processing:

```shell
ollama pull mistral-small
```

(you can change the model in `~/.donkey.toml`)
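As a sketch, the config might look like the snippet below. The key name `model` is an assumption for illustration; check the project documentation for the actual configuration schema.

```toml
# ~/.donkey.toml — hypothetical example; the "model" key name is an assumption
model = "mistral-small"
```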
Then you can install the application in one of the following ways:
### Homebrew (macOS | Linux)

```shell
brew install evg4b/tap/donkey
```
### Scoop (Windows)

```shell
scoop bucket add evg4b https://github.com/evg4b/scoop-bucket.git
scoop install evg4b/donkey
```
### NPM (Cross-platform)

```shell
npx -y @evg4b/donkey ...
```
### Stew (Cross-platform)

```shell
stew install evg4b/donkey
```
Download the appropriate version for your platform from the donkey releases page. Once downloaded, the binary can be run from anywhere; you don't need to install it into a global location. This works well on shared hosts and other systems where you don't have a privileged account. Ideally, you should install it somewhere in your `PATH` for easy use; `/usr/local/bin` is the most common location.
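If `/usr/local/bin` requires root access, a user-local directory on your `PATH` works just as well. A minimal sketch (the binary filename `donkey` and the install paths are assumptions; adjust to what you actually downloaded):

```shell
# Hypothetical: move the downloaded binary onto your PATH.
# install -m 0755 ./donkey /usr/local/bin/donkey        # system-wide (needs sudo)

# Without root, use a user-local bin directory instead:
mkdir -p "$HOME/.local/bin"
# install -m 0755 ./donkey "$HOME/.local/bin/donkey"

# Check whether that directory is actually on your PATH:
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "already on PATH" ;;
  *) echo 'add export PATH="$HOME/.local/bin:$PATH" to your shell profile' ;;
esac
```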
Caution
This program is a simple utility for batch processing of files using AI. The final result depends on the model used and on your request. By running it, you accept responsibility for any changes made to your file system.