chore: update readme & shorten search flag descriptions
japelsin committed Feb 25, 2024
1 parent bdeb92b commit c37f3db
Showing 3 changed files with 29 additions and 24 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
 dist/
+.DS_Store
44 changes: 24 additions & 20 deletions README.md
@@ -1,16 +1,17 @@
 # Perplexity CLI
 
-CLI for interfacing with [Perplexity](https://www.perplexity.ai/)'s API. Can also be used as a chatbot.
+[![Go](https://github.com/japelsin/pplx/actions/workflows/release.yml/badge.svg)](https://github.com/japelsin/pplx/actions/workflows/release.yml)
+[![License](https://img.shields.io/badge/license-MIT-blue)](https://github.com/japelsin/pplx/blob/main/LICENSE)
+
+CLI for searching with [Perplexity](https://www.perplexity.ai/)'s API. Can also be used as a chatbot.
 
 ## Prerequisites
 
 - Perplexity account and API key. You'll be prompted for the API key the first time you run `pplx`.
 
 ## Installation
 
-### With Homebrew
-
-If you're using [Homebrew](https://brew.sh/):
+### With [Homebrew](https://brew.sh)
 
 ```bash
 brew install japelsin/tap/pplx
@@ -24,45 +25,48 @@ If you have [go](https://go.dev/) installed:
 go install github.com/japelsin/pplx@latest
 ```
 
-Otherwise you could grab the appropriate executable from releases.
+You could also grab the appropriate executable from [releases](https://github.com/japelsin/pplx/releases).
 
 ## Usage
 
 ### Search
 
-The response is always streamed, all other [parameters](https://docs.perplexity.ai/reference/post_chat_completions) are available. The model is set through the config (see below).
+Search command. Most parameters allowed by `pplx-api` are available as options. The model is set through the config (see below).
 
 ```
 Usage:
   pplx search [flags]
 
 Flags:
-  -f, --frequency_penalty int   How much to penalize token reuse. 1 is no penalty. Between 0 and 1.
   -m, --max_tokens int          Maximum number of tokens to be used per request. Defaults to config value. (default 1000)
-  -p, --presence_penalty int    How much to penalize existing tokens. Between -2 and 2.
+  -f, --frequency_penalty int   How much to penalize token frequency.
+  -p, --presence_penalty int    How much to penalize token presence. Between -2 and 2.
   -t, --temperature int         The amount of randomness in the response. Between 0 and 2.
-  -K, --top_k int               Number of tokens to consider when generating tokens, lower values result in higher probability tokens being used. Between 0 and 2048.
-  -P, --top_p int               Nucleus sampling. Probability cutoff for token selection, lower values result in higher probability tokens being used. Between 0 and 1.
+  -K, --top_k int               Number of tokens to consider when generating tokens. Between 0 and 2048.
+  -P, --top_p int               Nucleus sampling. Probability cutoff for token selection. Between 0 and 1.
 ```
 
-### Config
+The API reference can be found [here](https://docs.perplexity.ai/reference/post_chat_completions).
 
-Configure `pplx`.
+### Config
 
 ```
 Usage:
-pplx config [command]
+  pplx config [command]
 
 Available Commands:
-path Get configuration file path
-reset Reset config
-set Set config value
+  path  Get configuration file path
+  reset Reset config
+  set   Set config value
 ```
 
-#### Config values
+#### Config Values
 
-- **Additional instructions**: Instructions appended to queries. If you intend to use `pplx` as a search engine it's recommended to instruct it to always provide its sources.
+##### Additional Instructions
+Instructions appended to queries. If you intend to use `pplx` as a search engine it's recommended to instruct it to always provide its sources.
 
-- **Model**: Available models are: `pplx-7b-chat`, `pplx-70b-chat`, `pplx-7b-online`, `pplx-70b-online`, `llama-2-70b-chat`, `codellama-34b-instruct`, `codellama-70b-instruct`, `mistral-7b-instruct`, and `mixtral-8x7b-instruct`.
+##### Model
+Model to use. The available models are: `pplx-7b-chat`, `pplx-70b-chat`, `pplx-7b-online`, `pplx-70b-online`, `llama-2-70b-chat`, `codellama-34b-instruct`, `codellama-70b-instruct`, `mistral-7b-instruct`, and `mixtral-8x7b-instruct`.
 
-- **API Key**: Self explanatory
+##### API Key
+Self explanatory.
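
For orientation, here is a minimal sketch of a search invocation using the flags documented in the README diff above. Passing the query as a positional argument and the specific flag values are assumptions for illustration; the diff only documents the flags themselves.

```bash
# Hypothetical invocation using the documented flags (-t, -K, -P);
# the query-as-argument form is an assumption, not confirmed by this diff.
pplx search "What is nucleus sampling?" -t 1 -K 40 -P 1
```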
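The config subcommands can be sketched the same way. `path` and `reset` appear verbatim in the help text above, while the key/value argument form and the `model` key name passed to `set` are assumptions:

```bash
# Print the configuration file path (subcommand shown in the help text)
pplx config path

# Set the model; the "model" key name and key/value form are assumed here
pplx config set model pplx-7b-online
```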
8 changes: 4 additions & 4 deletions cmd/search.go
@@ -130,8 +130,8 @@ func init() {

 	searchCmd.Flags().IntP(utils.MaxTokensKey, "m", 1000, "Maximum number of tokens to be used per request. Defaults to config value.")
 	searchCmd.Flags().IntP(utils.TemperatureKey, "t", 0, "The amount of randomness in the response. Between 0 and 2.")
-	searchCmd.Flags().IntP(utils.TopKKey, "K", 0, "Number of tokens to consider when generating tokens, lower values result in higher probability tokens being used. Between 0 and 2048.")
-	searchCmd.Flags().IntP(utils.TopPKey, "P", 0, "Nucleus sampling. Probability cutoff for token selection, lower values result in higher probability tokens being used. Between 0 and 1.")
-	searchCmd.Flags().IntP(utils.FrequencyPenaltyKey, "f", 0, "How much to penalize token reuse. 1 is no penalty. Between 0 and 1.")
-	searchCmd.Flags().IntP(utils.PresencePenaltyKey, "p", 0, "How much to penalize existing tokens. Between -2 and 2.")
+	searchCmd.Flags().IntP(utils.TopKKey, "K", 0, "Number of tokens to consider when generating tokens. Between 0 and 2048.")
+	searchCmd.Flags().IntP(utils.TopPKey, "P", 0, "Nucleus sampling. Probability cutoff for token selection. Between 0 and 1.")
+	searchCmd.Flags().IntP(utils.FrequencyPenaltyKey, "f", 0, "How much to penalize token frequency.")
+	searchCmd.Flags().IntP(utils.PresencePenaltyKey, "p", 0, "How much to penalize token presence. Between -2 and 2.")
 }
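
Since these are standard Cobra flag registrations, the shortened descriptions can be sanity-checked from the generated help output (`--help` is provided automatically by Cobra):

```bash
# The Flags section of the output should match the block in the README diff above.
pplx search --help
```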
