Commit c0ee3b9 ("ensure valid link to docs with latest/ in url")
Author: lapp0, committed Oct 7, 2024
Parent: d1b8728
Showing 4 changed files with 9 additions and 9 deletions.
README.md (5 additions, 5 deletions)

@@ -21,11 +21,11 @@ Made with ❤👷️ by the team at [.txt](https://dottxt.co).
 pip install outlines
 ```

-First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/welcome)
+First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/latest/welcome/)

 ## Features

-- [x] 🤖 [Multiple model integrations](https://dottxt-ai.github.io/outlines/installation): OpenAI, transformers, llama.cpp, exllama2, mamba
+- [x] 🤖 [Multiple model integrations](https://dottxt-ai.github.io/outlines/latest/installation): OpenAI, transformers, llama.cpp, exllama2, mamba
 - [x] 🖍️ Simple and powerful prompting primitives based on the [Jinja templating engine](https://jinja.palletsprojects.com/)
 - [x] 🚄 [Multiple choices](#multiple-choices), [type constraints](#type-constraint) and dynamic stopping
 - [x] ⚡ Fast [regex-structured generation](#efficient-regex-structured-generation)

@@ -35,7 +35,7 @@ First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/we
 - [x] 💾 Caching of generations
 - [x] 🗂️ Batch inference
 - [x] 🎲 Sample with the greedy, multinomial and beam search algorithms (and more to come!)
-- [x] 🚀 [Serve with vLLM](https://dottxt-ai.github.io/outlines/reference/serve/vllm), with official Docker image, [`outlinesdev/outlines`](https://hub.docker.com/r/outlinesdev/outlines)!
+- [x] 🚀 [Serve with vLLM](https://dottxt-ai.github.io/outlines/latest/reference/serve/vllm), with official Docker image, [`outlinesdev/outlines`](https://hub.docker.com/r/outlinesdev/outlines)!


 Outlines has new releases and features coming every week. Make sure to ⭐ star and 👀 watch this repository, follow [@dottxtai][dottxt-twitter] to stay up to date!

@@ -338,7 +338,7 @@ answer = outlines.generate.text(model)(prompt, max_tokens=100)
 ## Join us

 - 💡 **Have an idea?** Come chat with us on [Discord][discord]
-- 🔨 **Want to contribute?** Consult our [contribution guide](https://dottxt-ai.github.io/outlines/community/contribute/).
+- 🔨 **Want to contribute?** Consult our [contribution guide](https://dottxt-ai.github.io/outlines/latest/community/contribute/).
 - 🐞 **Found a bug?** Open an [issue](https://github.com/dottxt-ai/outlines/issues)

@@ -353,7 +353,7 @@ answer = outlines.generate.text(model)(prompt, max_tokens=100)
 }
 ```

-[documentation]: https://dottxt-ai.github.io/outlines/welcome/
+[documentation]: https://dottxt-ai.github.io/outlines/latest/welcome/
 [documentation-badge]: https://img.shields.io/readthedocs/outlines
 [contributors]: https://github.com/dottxt-ai/outlines/graphs/contributors
 [contributors-badge]: https://img.shields.io/github/contributors/dottxt-ai/outlines?style=flat-square&logo=github&logoColor=white&color=ECEFF4
docs/cookbook/simtom.md (2 additions, 2 deletions)

@@ -17,9 +17,9 @@ SimToM calls an LLM with two consecutive prompts:

 To implement SimToM with Outlines, we will need to:

-1. Write the prompts with [prompt functions](https://dottxt-ai.github.io/outlines/reference/prompting/).
+1. Write the prompts with [prompt functions](https://dottxt-ai.github.io/outlines/latest/reference/prompting/).
 2. Define the JSON object each prompt will return using Pydantic.
-3. Generate responses with a Mistral model using the [transformers integration](https://dottxt-ai.github.io/outlines/reference/models/transformers/).
+3. Generate responses with a Mistral model using the [transformers integration](https://dottxt-ai.github.io/outlines/latest/reference/models/transformers/).

 Let's dive into it!
outlines/fsm/guide.py (1 addition, 1 deletion)

@@ -107,7 +107,7 @@ def __init__(self, cfg_string: str, tokenizer):
         """
         warnings.warn(
             "Outlines' public *community-contributed* CFG structured generation is experimental. "
-            "Please review https://dottxt-ai.github.io/outlines/reference/cfg#disclaimer"
+            "Please review https://dottxt-ai.github.io/outlines/latest/reference/generation/cfg#disclaimer"
         )

         self.cfg_string = cfg_string
outlines/models/exllamav2.py (1 addition, 1 deletion)

@@ -302,7 +302,7 @@ def exl2(
         raise ImportError(
             "The `exllamav2`, `transformers` and `torch` libraries needs to be installed in order to use `exllamav2` models. "
             "Please run `pip install transformers torch git+https://github.com/lapp0/exllamav2@sampler-logits-processor` "
-            "Documentation: https://dottxt-ai.github.io/outlines/reference/models/exllamav2/"
+            "Documentation: https://dottxt-ai.github.io/outlines/latest/reference/models/exllamav2/"
         )
     config = ExLlamaV2Config(model_path)
     if max_chunk_size is not None:
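The change is mechanical: each docs URL gains a `latest/` path segment so links resolve against the versioned documentation site. A small audit script can catch any URL a commit like this misses. This is a hypothetical sketch, not part of the repository; the `stale_docs_links` helper, its regex, and the `scan` function are assumptions for illustration.

```python
import re
from pathlib import Path

DOCS_PREFIX = "https://dottxt-ai.github.io/outlines/"
# Match any Outlines docs URL up to whitespace or common markdown/code delimiters.
URL_RE = re.compile(r"https://dottxt-ai\.github\.io/outlines/[^\s)\"'`]*")

def stale_docs_links(text: str) -> list[str]:
    """Return docs URLs that are missing the versioned `latest/` segment."""
    return [
        url
        for url in URL_RE.findall(text)
        if not url.startswith(DOCS_PREFIX + "latest/")
    ]

def scan(paths) -> dict[str, list[str]]:
    # Map each file to the stale links it contains, skipping clean files.
    report = {}
    for path in paths:
        stale = stale_docs_links(Path(path).read_text(encoding="utf-8"))
        if stale:
            report[str(path)] = stale
    return report
```

Run over the four files touched here, such a check would flag the five old-style URLs before the fix and none after it.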
