feat(ci): automatic release semver + git archival installation #143

Merged · 5 commits · Jul 25, 2023
Changes from 4 commits
4 changes: 4 additions & 0 deletions .git_archival.txt
@@ -0,0 +1,4 @@
node: $Format:%H$
node-date: $Format:%cI$
describe-name: $Format:%(describe:tags=true,match=*[0-9]*)$
ref-names: $Format:%D$
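When `git archive` (or GitHub's tarball endpoint) packages the repository, it substitutes these `$Format:…$` placeholders for any file marked `export-subst` in `.gitattributes`; setuptools-scm can then recover a version from the substituted file even though the tarball has no `.git` directory. A minimal sketch of that recovery step — the hash, date, and tag below are hypothetical substituted values, and the parsing is a simplification of what setuptools-scm actually does:

```shell
#!/usr/bin/env bash
# Sketch: parse a .git_archival.txt AFTER git archive has substituted the
# placeholders. All values below are hypothetical.
archival=$(mktemp)
cat > "$archival" <<'EOF'
node: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b
node-date: 2023-07-25T00:00:00+00:00
describe-name: v0.2.0-3-g1a2b3c4
ref-names: HEAD -> main, tag: v0.2.0
EOF

# describe-name carries the nearest tag plus distance, like `git describe`
describe=$(sed -n 's/^describe-name: //p' "$archival")
tag=${describe%%-*}   # strip "-<distance>-g<hash>" -> "v0.2.0"
version=${tag#v}      # strip the leading "v"       -> "0.2.0"
echo "$version"
rm -f "$archival"
```

Without the `export-subst` attribute the placeholders would ship verbatim, which is why the `.gitattributes` change below accompanies this file.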
2 changes: 2 additions & 0 deletions .gitattributes
@@ -3,3 +3,5 @@ nightly-requirements-gpu.txt linguist-generated=true
tests/models/__snapshots__/* linguist-generated=true
typings/**/*.pyi linguist-generated=true
* text=auto eol=lf
# Needed for setuptools-scm-git-archive
.git_archival.txt export-subst
52 changes: 27 additions & 25 deletions .github/actions/release.sh
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

set -ex
set -e

# Function to print script usage
print_usage() {
@@ -61,41 +61,43 @@ fi
release_package() {
local version="$1"
echo "Releasing version ${version}..."

jq --arg release_version "${version}" '.version = $release_version' < package.json > package.json.tmp && mv package.json.tmp package.json

if [[ $release == 'patch' ]]; then
hatch version "${version}"
fi

towncrier build --yes --version "${version}"
git add CHANGELOG.md changelog.d src/openllm/__about__.py package.json
git add CHANGELOG.md changelog.d package.json
git commit -S -sm "infra: prepare for release ${version} [generated] [skip ci]"
git push origin main

echo "Releasing tag ${version}..." && git tag -a "v${version}" -sm "Release ${version} [generated by GitHub Actions]"
git push origin "v${version}"

echo "Finish releasing version ${version}"
}

echo "Cleaning previously built artifacts..." && hatch clean
#get highest tags across all branches, not just the current branch
version="$(git describe --tags "$(git rev-list --tags --max-count=1)")"
VERSION="${version#v}"
# Save the current value of IFS to restore it later
OLD_IFS=$IFS
IFS='.'
# split into array
read -ra VERSION_BITS <<< "$VERSION"
# Restore the original value of IFS
IFS=$OLD_IFS
VNUM1=${VERSION_BITS[0]}
VNUM2=${VERSION_BITS[1]}
VNUM3=${VERSION_BITS[2]}

if [[ $release == 'major' ]]; then
hatch version major
CURRENT_VERSION=$(hatch version)
release_package "${CURRENT_VERSION}"
VNUM1=$((VNUM1+1))
VNUM2=0
VNUM3=0
elif [[ $release == 'minor' ]]; then
hatch version minor
CURRENT_VERSION="$(hatch version)"
release_package "${CURRENT_VERSION}"
VNUM2=$((VNUM2+1))
VNUM3=0
else
CURRENT_VERSION=$(hatch version)

if [[ "$CURRENT_VERSION" =~ \.dev ]]; then
release_package "${CURRENT_VERSION%%.dev*}"
else
echo "Current version is not properly setup as dev version. Aborting..."
exit 1
fi
VNUM3=$((VNUM3+1))
fi

echo "Commit count: $(git rev-list --count HEAD)"

#create new tag
RELEASE_VERSION="$VNUM1.$VNUM2.$VNUM3"
release_package "${RELEASE_VERSION}"
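The IFS split-and-bump pattern the script uses can be exercised on its own; a minimal sketch, where `bump_patch` is a hypothetical helper name rather than anything defined in `release.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the IFS-based semver patch bump used by release.sh.
# bump_patch is a hypothetical helper, not part of the script.
bump_patch() {
  local version="${1#v}"        # tolerate a leading "v" as in git tags
  local OLD_IFS=$IFS
  IFS='.'
  read -ra bits <<< "$version"  # split "MAJOR.MINOR.PATCH" on dots
  IFS=$OLD_IFS
  echo "${bits[0]}.${bits[1]}.$((bits[2]+1))"
}

bump_patch "v0.2.5"   # prints 0.2.6
```

Saving and restoring `IFS` around the `read` keeps the word-splitting change from leaking into the rest of the script, which is why the original does the same dance.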
20 changes: 17 additions & 3 deletions .github/workflows/create-releases.yml
@@ -90,8 +90,11 @@ jobs:
run: python -m build
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
print-hash: true
prepare-next-dev-cycle:
needs:
- release
- publish-python
- binary-distribution
runs-on: ubuntu-latest
@@ -126,16 +129,27 @@ jobs:
GIT_COMMITTER_EMAIL: ${{ steps.import-gpg-key.outputs.email }}
run: |
git pull --autostash --no-edit --gpg-sign --ff origin main
echo "Bumping version to dev..." && hatch version patch && hatch version dev
jq --arg release_version "$(hatch version)" '.version = $release_version' < package.json > package.json.tmp && mv package.json.tmp package.json
git add src/openllm/__about__.py package.json && git commit -S -sm "infra: bump to dev version of $(hatch version) [generated] [skip ci]"
SEMVER="${{ needs.release.outputs.version }}"
OLD_IFS=$IFS
IFS='.'
read -ra VERSION_BITS <<< "$SEMVER"
IFS=$OLD_IFS
VNUM1=${VERSION_BITS[0]}
VNUM2=${VERSION_BITS[1]}
VNUM3=${VERSION_BITS[2]}
VNUM3=$((VNUM3+1))
DEV_VERSION="$VNUM1.$VNUM2.$VNUM3.dev0"
echo "Bumping version to ${DEV_VERSION}..."
jq --arg release_version "${DEV_VERSION}" '.version = $release_version' < package.json > package.json.tmp && mv package.json.tmp package.json
git add package.json && git commit -S -sm "infra: bump to dev version of ${DEV_VERSION} [generated] [skip ci]"
git push origin HEAD:main
binary-distribution:
if: github.repository_owner == 'bentoml'
needs: release
name: Create binary/wheels distribution
uses: bentoml/OpenLLM/.github/workflows/binary-releases.yml@main
release-notes:
if: github.repository_owner == 'bentoml'
needs:
- release
- publish-python
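The workflow's prepare-next-dev-cycle step derives the next `.dev0` version from the tag that was just released. That arithmetic can be sketched in isolation — the input version here is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of the next-dev-cycle computation from create-releases.yml.
SEMVER="0.2.0"    # hypothetical just-released version
IFS='.' read -r major minor patch <<< "$SEMVER"
DEV_VERSION="${major}.${minor}.$((patch+1)).dev0"
echo "$DEV_VERSION"   # prints 0.2.1.dev0
```

Prefixing the assignment (`IFS='.' read …`) scopes the changed separator to the single `read`, a slightly tidier alternative to the save/restore of `IFS` the workflow performs by hand.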
1 change: 1 addition & 0 deletions .gitignore
@@ -141,3 +141,4 @@ pyapp
/target

.pdm-python
/src/openllm/_version.py
49 changes: 22 additions & 27 deletions README.md
@@ -3,14 +3,22 @@
<div align="center">
<h1 align="center">🦾 OpenLLM</h1>
<a href="https://pypi.org/project/openllm">
<img src="https://img.shields.io/pypi/v/openllm.svg" alt="pypi_status" />
<img src="https://img.shields.io/pypi/v/openllm.svg?logo=pypi&label=PyPI&logoColor=gold" alt="pypi_status" />
</a><a href="https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml">
<img src="https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml/badge.svg?branch=main" alt="ci" />
</a><a href="https://twitter.com/bentomlai">
<img src="https://badgen.net/badge/icon/@bentomlai/1DA1F2?icon=twitter&label=Follow%20Us" alt="Twitter" />
</a><a href="https://l.bentoml.com/join-openllm-discord">
<img src="https://badgen.net/badge/icon/OpenLLM/7289da?icon=discord&label=Join%20Us" alt="Discord" />
</a><br>
</a><a href="https://pypi.org/project/openllm">
<img src="https://img.shields.io/pypi/pyversions/openllm.svg?logo=python&label=Python&logoColor=gold" alt="python_version" />
</a><a href="https://github.com/pypa/hatch">
<img src="https://img.shields.io/badge/%F0%9F%A5%9A-Hatch-4051b5.svg" alt="Hatch" />
</a><br>
</a><a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json" alt="Ruff" />
</a><br>
<p>An open platform for operating large language models (LLMs) in production.<br/>
Fine-tune, serve, deploy, and monitor any LLMs with ease.</p>
<i></i>
@@ -39,10 +47,14 @@ Images or deploy as serverless endpoint via
🤖️ **Bring your own LLM**: Fine-tune any LLM to suit your needs with
`LLM.tuning()`. (Coming soon)

<!-- hatch-fancy-pypi-readme intro stop -->

![Gif showing OpenLLM Intro](/assets/output.gif)
<br/>

## 🏃‍ Getting Started
<!-- hatch-fancy-pypi-readme interim start -->

## 🏃 Getting Started

To use OpenLLM, you need to have Python 3.8 (or newer) and `pip` installed on
your system. We highly recommend using a Virtual Environment to prevent package
@@ -105,13 +117,18 @@ openllm query 'Explain to me the difference between "further" and "farther"'

Visit `http://localhost:3000/docs.json` for OpenLLM's API specification.

OpenLLM seamlessly supports many models and different variants of models.
Users can also specify different variants of the model to be served, by
providing the `--model-id` argument, e.g.:

```bash
openllm start flan-t5 --model-id google/flan-t5-large
```

> **Note** that `openllm` also supports all variants of fine-tuning weights, custom model path
> as well as quantized weights for any of the supported models as long as it can be loaded with
> the model architecture. Refer to [supported models](https://github.com/bentoml/OpenLLM/tree/main#-supported-models) section for models' architecture.

Use the `openllm models` command to see the list of models and their variants
supported in OpenLLM.

@@ -127,17 +144,13 @@ dependencies can be installed with the instructions below:
<tr>
<th>Model</th>
<th>Architecture</th>
<th>CPU</th>
<th>GPU</th>
<th>Model Ids</th>
<th>Installation</th>
</tr>
<tr>

<td><a href=https://github.com/THUDM/ChatGLM-6B>chatglm</a></td>
<td><a href=https://github.com/THUDM/ChatGLM-6B><code>ChatGLMForConditionalGeneration</code></a></td>
<td>❌</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/thudm/chatglm-6b><code>thudm/chatglm-6b</code></a></li>
@@ -159,8 +172,6 @@ pip install "openllm[chatglm]"

<td><a href=https://github.com/databrickslabs/dolly>dolly-v2</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/gpt_neox#transformers.GPTNeoXForCausalLM><code>GPTNeoXForCausalLM</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/databricks/dolly-v2-3b><code>databricks/dolly-v2-3b</code></a></li>
@@ -180,8 +191,6 @@ pip install openllm

<td><a href=https://falconllm.tii.ae/>falcon</a></td>
<td><a href=https://falconllm.tii.ae/><code>FalconForCausalLM</code></a></td>
<td>❌</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/tiiuae/falcon-7b><code>tiiuae/falcon-7b</code></a></li>
@@ -202,8 +211,6 @@ pip install "openllm[falcon]"

<td><a href=https://huggingface.co/docs/transformers/model_doc/flan-t5>flan-t5</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/t5#transformers.T5ForConditionalGeneration><code>T5ForConditionalGeneration</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/google/flan-t5-small><code>google/flan-t5-small</code></a></li>
@@ -225,8 +232,6 @@ pip install "openllm[flan-t5]"

<td><a href=https://github.com/EleutherAI/gpt-neox>gpt-neox</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/gpt_neox#transformers.GPTNeoXForCausalLM><code>GPTNeoXForCausalLM</code></a></td>
<td>❌</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/eleutherai/gpt-neox-20b><code>eleutherai/gpt-neox-20b</code></a></li></ul>
@@ -244,8 +249,6 @@ pip install openllm

<td><a href=https://github.com/facebookresearch/llama>llama</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/llama#transformers.LlamaForCausalLM><code>LlamaForCausalLM</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/meta-llama/llama-2-70b-chat-hf><code>meta-llama/llama-2-70b-chat-hf</code></a></li>
@@ -275,8 +278,6 @@ pip install "openllm[llama]"

<td><a href=https://huggingface.co/mosaicml>mpt</a></td>
<td><a href=https://huggingface.co/mosaicml><code>MPTForCausalLM</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/mosaicml/mpt-7b><code>mosaicml/mpt-7b</code></a></li>
@@ -300,8 +301,6 @@ pip install "openllm[mpt]"

<td><a href=https://huggingface.co/docs/transformers/model_doc/opt>opt</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/opt#transformers.OPTForCausalLM><code>OPTForCausalLM</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/facebook/opt-125m><code>facebook/opt-125m</code></a></li>
@@ -324,8 +323,6 @@ pip install "openllm[opt]"

<td><a href=https://github.com/Stability-AI/StableLM>stablelm</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/gpt_neox#transformers.GPTNeoXForCausalLM><code>GPTNeoXForCausalLM</code></a></td>
<td>✅</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/stabilityai/stablelm-tuned-alpha-3b><code>stabilityai/stablelm-tuned-alpha-3b</code></a></li>
@@ -346,8 +343,6 @@ pip install openllm

<td><a href=https://github.com/bigcode-project/starcoder>starcoder</a></td>
<td><a href=https://huggingface.co/docs/transformers/main/model_doc/gpt_bigcode#transformers.GPTBigCodeForCausalLM><code>GPTBigCodeForCausalLM</code></a></td>
<td>❌</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/bigcode/starcoder><code>bigcode/starcoder</code></a></li>
@@ -366,8 +361,6 @@ pip install "openllm[starcoder]"

<td><a href=https://github.com/baichuan-inc/Baichuan-7B>baichuan</a></td>
<td><a href=https://github.com/baichuan-inc/Baichuan-7B><code>BaiChuanForCausalLM</code></a></td>
<td>❌</td>
<td>✅</td>
<td>

<ul><li><a href=https://huggingface.co/baichuan-inc/baichuan-7b><code>baichuan-inc/baichuan-7b</code></a></li>
@@ -596,9 +589,12 @@ client.ask_agent(
)
```

<!-- hatch-fancy-pypi-readme interim stop -->

![Gif showing Agent integration](/assets/agent.gif)
<br/>

<!-- hatch-fancy-pypi-readme meta start -->

## 🚀 Deploying to Production

@@ -664,7 +660,6 @@ the serverless cloud for shipping and scaling AI applications.
[deployment instructions](https://docs.bentoml.com/en/latest/reference/cli.html#bentoml-deployment-create).



## 👥 Community

Engage with like-minded individuals passionate about LLMs, AI, and more on our
5 changes: 5 additions & 0 deletions changelog.d/143.feature.md
@@ -0,0 +1,5 @@
Added support for installing from a git archive

```bash
pip install "https://github.com/bentoml/openllm/archive/main.tar.gz"
```