Publish to PyPI #82

Merged · 12 commits · Nov 13, 2024
3 changes: 2 additions & 1 deletion .github/workflows/build-and-test.yml
@@ -11,7 +11,7 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-latest]
-        python-version: [3.9]
+        python-version: [3.8, "3.11"]

     runs-on: ${{ matrix.os }}

@@ -51,3 +51,4 @@ jobs:
       run: |
         cd main
         make test
+        make notebooks
36 changes: 36 additions & 0 deletions .github/workflows/pypi-publish.yml
@@ -0,0 +1,36 @@
+# This workflow will upload a Python Package using Twine when a release is created
+# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
+
+# This workflow uses actions that are not certified by GitHub.
+# They are provided by a third-party and are governed by
+# separate terms of service, privacy policy, and support
+# documentation.
+
+name: Upload Python Package
+
+on:
+  release:
+    types: [published]
+
+jobs:
+  deploy:
+
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/checkout@v2
+    - name: Set up Python
+      uses: actions/setup-python@v2
+      with:
+        python-version: '3.9'
+    - name: Install dependencies
+      run: |
+        python -m pip install --upgrade pip
+        pip install build
+    - name: Build package
+      run: python -m build
+    - name: Publish package
+      uses: pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29
+      with:
+        user: __token__
+        password: ${{ secrets.PYPI_TOKEN }}
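
The workflow above is GitHub's stock publish template: on a published release it checks out the repo, builds an sdist and wheel with `python -m build`, and uploads them via the pinned `pypa/gh-action-pypi-publish` action, authenticating as `__token__` with the repository's `PYPI_TOKEN` secret. A hedged sketch of reproducing just the build step locally with stdlib `subprocess` (the project path is illustrative, not from this PR):

```
# Local dry run of the "Build package" step; the project path is illustrative.
import subprocess
import sys

subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "pip", "build"], check=True)
subprocess.run([sys.executable, "-m", "build"], check=True, cwd="/path/to/netam")
# Artifacts appear in dist/: an sdist (netam-<version>.tar.gz) and a wheel;
# <version> comes from setuptools-scm (see the pyproject.toml change below).
```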
2 changes: 1 addition & 1 deletion Makefile
@@ -24,7 +24,7 @@ docs:
 notebooks:
 	mkdir -p notebooks/_ignore
 	for nb in notebooks/*.ipynb; do \
-		jupyter nbconvert --to notebook --execute "$$nb" --output notebooks/_ignore/"$$(basename $$nb)"; \
+		jupyter nbconvert --to notebook --execute "$$nb" --output _ignore/"$$(basename $$nb)"; \
 	done

 .PHONY: install test notebooks format lint docs
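
The one-line fix here drops the `notebooks/` prefix from `--output`, evidently because nbconvert resolves a relative output path against the notebook's own directory, so the old command nested its output under `notebooks/notebooks/_ignore/`. For reference, a rough Python-API equivalent of the loop (a sketch, not part of this PR):

```
# Execute each notebook and save the result under notebooks/_ignore/,
# mirroring the Makefile loop via nbconvert's programmatic API.
from pathlib import Path

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

out_dir = Path("notebooks/_ignore")
out_dir.mkdir(parents=True, exist_ok=True)
for nb_path in Path("notebooks").glob("*.ipynb"):
    nb = nbformat.read(nb_path, as_version=4)
    # Run with notebooks/ as the working directory, as the CLI does.
    ExecutePreprocessor(timeout=600).preprocess(nb, {"metadata": {"path": "notebooks"}})
    nbformat.write(nb, out_dir / nb_path.name)
```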
8 changes: 6 additions & 2 deletions README.md
@@ -4,7 +4,11 @@ Neural NETworks for antibody Affinity Maturation.

 ## pip installation

-TODO
+Netam is available on PyPI, and works with Python 3.8 through 3.11.
+
+```
+pip install netam
+```

 This will allow you to use the models.

@@ -55,4 +59,4 @@ If you are running one of the experiment repos, such as:
 * [thrifty-experiments-1](https://github.com/matsengrp/thrifty-experiments-1/)
 * [dnsm-experiments-1](https://github.com/matsengrp/dnsm-experiments-1/)

-you will want to visit those repos and follow the installation instructions there.
\ No newline at end of file
+you will want to visit those repos and follow the installation instructions there.
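
The new install section pairs naturally with a minimal usage sketch. This mirrors the call pattern in `tests/test_molevol.py` at the bottom of this diff; the input sequence is a hypothetical placeholder, and the `from netam import pretrained` import path is assumed:

```
# Load a pretrained crepe and get per-site rates for a nucleotide sequence.
import torch
from netam import pretrained

crepe = pretrained.load("ThriftyHumV0.2-45")   # fetched and cached on first use
[rates], [subs] = crepe(["ATGGCCCTGTGG"])      # hypothetical parent sequence
mut_probs = 1.0 - torch.exp(-rates.squeeze())  # per-site mutation probabilities
```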
14 changes: 7 additions & 7 deletions netam/multihit.py
@@ -14,7 +14,7 @@
 from torch.utils.data import Dataset
 from tqdm import tqdm
 import pandas as pd
-from typing import Sequence
+from typing import Sequence, List, Tuple

 from netam.molevol import (
     reshape_for_codons,
@@ -30,8 +30,8 @@


 def _trim_to_codon_boundary_and_max_len(
-    seqs: list[Sequence], max_len: int = None
-) -> list[Sequence]:
+    seqs: List[Sequence], max_len: int = None
+) -> List[Sequence]:
     """Trims sequences to codon boundary and maximum length.

     No assumption is made about the data of a sequence, other than that it is
@@ -55,7 +55,7 @@ def _observed_hit_classes(parents: Sequence[str], children: Sequence[str]):
     children (Sequence[str]): A list of the corresponding child sequences.

     Returns:
-        list[torch.Tensor]: A list of tensors, each containing the observed
+        List[torch.Tensor]: A list of tensors, each containing the observed
         hit classes for each codon in the parent sequence. At any codon position
         where the parent or child sequence contains an N, the corresponding tensor
         element will be -100.
@@ -96,8 +96,8 @@ def __init__(
         self,
         nt_parents: Sequence[str],
         nt_children: Sequence[str],
-        nt_ratess: Sequence[list[float]],
-        nt_cspss: Sequence[list[list[float]]],
+        nt_ratess: Sequence[List[float]],
+        nt_cspss: Sequence[List[List[float]]],
         branch_length_multiplier: float = 1.0,
     ):
         trimmed_parents = _trim_to_codon_boundary_and_max_len(nt_parents)
@@ -452,7 +452,7 @@ def hit_class_dataset_from_pcp_df(

 def train_test_datasets_of_pcp_df(
     pcp_df: pd.DataFrame, train_frac: float = 0.8, branch_length_multiplier: float = 1.0
-) -> tuple[HitClassDataset, HitClassDataset]:
+) -> Tuple[HitClassDataset, HitClassDataset]:
     """Splits a pcp_df prepared by `prepare_pcp_df` into a training and testing
     HitClassDataset."""
     nt_parents = pcp_df["parent"].reset_index(drop=True)
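
The typing changes in this file exist because subscripted built-in generics like `list[float]` and `tuple[...]` are a Python 3.9+ feature (PEP 585), and this PR extends support down to 3.8; `typing.List` and `typing.Tuple` work on both. For instance:

```
from typing import List

def lengths(seqs: List[str]) -> List[int]:  # fine on Python 3.8 through 3.11
    return [len(s) for s in seqs]

# def lengths(seqs: list[str]) -> list[int]:  # on 3.8 this fails at import
#     ...                                     # time with TypeError:
#                                             # 'type' object is not subscriptable
```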
26 changes: 15 additions & 11 deletions netam/pretrained.py
@@ -11,22 +11,24 @@

 from netam.framework import load_crepe

+# This throws a deprecation warning. It could also be done by looking at
+# __file__, or by using importlib.resources.
 PRETRAINED_DIR = pkg_resources.resource_filename(__name__, "_pretrained")

 PACKAGE_LOCATIONS_AND_CONTENTS = (
     # Order of entries:
     # * Local file name
     # * Remote URL
-    # * Directory in which the models appear after extraction
+    # * Directory in which the models appear after extraction (must match path determined by archive)
     # * List of models in the package
     [
-        "thrifty-1.0.zip",
-        "https://github.com/matsengrp/thrifty-models/archive/refs/heads/release/1.0.zip",
-        "thrifty-models-release-1.0/models",
+        "thrifty-0.2.0.zip",
+        "https://github.com/matsengrp/thrifty-models/archive/refs/tags/v0.2.0.zip",
+        "thrifty-models-0.2.0/models",
         [
-            "ThriftyHumV1.0-20",
-            "ThriftyHumV1.0-45",
-            "ThriftyHumV1.0-59",
+            "ThriftyHumV0.2-20",
+            "ThriftyHumV0.2-45",
+            "ThriftyHumV0.2-59",
         ],
     ],
 )
@@ -39,7 +41,7 @@
     LOCAL_TO_REMOTE[local_file] = remote

     for model in models:
-        MODEL_TO_LOCAL[model] = local_file
+        MODEL_TO_LOCAL[model] = (local_file, models_dir)


 def local_path_for_model(model_name: str):
@@ -50,7 +52,7 @@ def local_path_for_model(model_name: str):

     os.makedirs(PRETRAINED_DIR, exist_ok=True)

-    local_package = MODEL_TO_LOCAL[model_name]
+    local_package, models_dir = MODEL_TO_LOCAL[model_name]
     local_package_path = os.path.join(PRETRAINED_DIR, local_package)

     if not os.path.exists(local_package_path):
@@ -62,14 +64,16 @@ def local_path_for_model(model_name: str):
             f.write(response.content)
         if local_package.endswith(".zip"):
             with zipfile.ZipFile(local_package_path, "r") as zip_ref:
-                zip_ref.extractall(PRETRAINED_DIR)
+                zip_ref.extractall(path=PRETRAINED_DIR)
         else:
             raise ValueError(f"Unknown file type for {local_package}")
     else:
         print(f"Using cached models: {local_package_path}")

+    local_crepe_path = os.path.join(PRETRAINED_DIR, models_dir, model_name)
+
     if not os.path.exists(local_crepe_path + ".yml"):
-        raise ValueError(f"Model {model_name} not found in pre-trained models.")
+        raise ValueError(f"Model {local_crepe_path} not found in pre-trained models.")
     if not os.path.exists(local_crepe_path + ".pth"):
         raise ValueError(f"Model {model_name} missing model weights.")

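
With `MODEL_TO_LOCAL` now mapping each model name to a `(local_file, models_dir)` pair, `local_path_for_model` can locate the extracted crepe files regardless of which directory name the archive unpacks to. A sketch of the resulting path arithmetic for one model (the `PRETRAINED_DIR` value here is illustrative):

```
# Illustrative resolution of local_path_for_model("ThriftyHumV0.2-45").
import os

PRETRAINED_DIR = "/illustrative/site-packages/netam/_pretrained"
local_package, models_dir = ("thrifty-0.2.0.zip", "thrifty-models-0.2.0/models")
local_crepe_path = os.path.join(PRETRAINED_DIR, models_dir, "ThriftyHumV0.2-45")
# The loader then expects local_crepe_path + ".yml" (config)
# and local_crepe_path + ".pth" (weights) to exist side by side.
```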
20 changes: 14 additions & 6 deletions notebooks/thrifty_demo.ipynb

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion pyproject.toml
@@ -1,3 +1,5 @@
 [build-system]
-requires = ["setuptools>=64", "wheel"]
+requires = ["setuptools>=64", "wheel", "setuptools-scm>=8"]
 build-backend = "setuptools.build_meta"
+
+[tool.setuptools_scm]
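
Adding `setuptools-scm` to the build requirements and declaring an (empty) `[tool.setuptools_scm]` table switches the package to git-tag-derived versioning, which is why the hard-coded `version=` disappears from `setup.py` below. At runtime the installed version can be read back, e.g.:

```
# Read the setuptools-scm-derived version of an installed netam.
from importlib.metadata import version  # stdlib on Python 3.8+

print(version("netam"))  # e.g. "0.1.0" when built from a v0.1.0 tag (illustrative)
```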
3 changes: 3 additions & 0 deletions requirements.txt
@@ -2,7 +2,10 @@ black
 docformatter
 fire
 nbconvert
+ipython
+ipykernel
 pytest
 snakemake
 tensorboardX
 typing_extensions
+seaborn
4 changes: 3 additions & 1 deletion setup.py
@@ -1,15 +1,17 @@
 from setuptools import setup, find_packages

+# Version is determined by setuptools-scm in pyproject.toml according to git
+# tag numbering.
 setup(
     name="netam",
-    version="0.1.0",
     url="https://github.com/matsengrp/netam.git",
     author="Matsen Group",
     author_email="ematsen@gmail.com",
     description="Neural network models for antibody affinity maturation",
     long_description=open("README.md").read(),
     long_description_content_type="text/markdown",
     packages=find_packages(),
+    python_requires=">=3.8,<3.12",
     install_requires=[
         "biopython",
         "natsort",
2 changes: 1 addition & 1 deletion tests/test_molevol.py
@@ -144,7 +144,7 @@ def iterative_aaprob_of_mut_and_sub(parent_codon, mut_probs, csps):


 def test_aaprob_of_mut_and_sub():
-    crepe = pretrained.load("ThriftyHumV1.0-45")
+    crepe = pretrained.load("ThriftyHumV0.2-45")
     [rates], [subs] = crepe([parent_nt_seq])
     mut_probs = 1.0 - torch.exp(-rates.squeeze().clone().detach())
     parent_codon = parent_nt_seq[0:3]
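
The `mut_probs` line in this test converts per-site rates into mutation probabilities: under the exponential waiting-time model apparently assumed here, a site with rate r mutates with probability 1 - exp(-r) over a unit branch length. A quick check of the arithmetic:

```
# Verify the rate-to-probability conversion used in the test above.
import torch

rates = torch.tensor([0.0, 0.1, 2.0])
mut_probs = 1.0 - torch.exp(-rates)
print(mut_probs)  # tensor([0.0000, 0.0952, 0.8647])
```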