
Add question generator to example #1308


Merged 12 commits on Sep 9, 2020
10 changes: 10 additions & 0 deletions examples/pytorch/question-generator/cortex.yaml
@@ -0,0 +1,10 @@
# WARNING: you are on the master branch; please refer to examples on the branch corresponding to your `cortex version` (e.g. for version 0.19.*, run `git checkout 0.19` or switch to the `0.19` branch on GitHub)

- name: question-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    cpu: 1
    mem: 6G
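
As a usage sketch (assuming the Cortex CLI matching this repo's version is installed and a cluster is configured): running `cortex deploy` from this directory would create the `question-generator` RealtimeAPI defined above, and `cortex get question-generator` would report its status and endpoint.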
4 changes: 4 additions & 0 deletions examples/pytorch/question-generator/dependencies.sh
@@ -0,0 +1,4 @@
# WARNING: you are on the master branch; please refer to examples on the branch corresponding to your `cortex version` (e.g. for version 0.19.*, run `git checkout 0.19` or switch to the `0.19` branch on GitHub)

# torchvision isn’t required for this example, and pip was throwing warnings with it installed
pip uninstall torchvision -y
36 changes: 36 additions & 0 deletions examples/pytorch/question-generator/predictor.py
@@ -0,0 +1,36 @@
# WARNING: you are on the master branch; please refer to examples on the branch corresponding to your `cortex version` (e.g. for version 0.19.*, run `git checkout 0.19` or switch to the `0.19` branch on GitHub)

from transformers import AutoModelWithLMHead, AutoTokenizer
import spacy
import subprocess
import json


class PythonPredictor:
    def __init__(self, config):
        # Download the small English spaCy model at startup, then import it.
        subprocess.call("python -m spacy download en_core_web_sm".split(" "))
        import en_core_web_sm

        self.tokenizer = AutoTokenizer.from_pretrained(
            "mrm8488/t5-base-finetuned-question-generation-ap"
        )
        self.model = AutoModelWithLMHead.from_pretrained(
            "mrm8488/t5-base-finetuned-question-generation-ap"
        )
        self.nlp = en_core_web_sm.load()

    def predict(self, payload):
        context = payload["context"]
        answer = payload["answer"]
        max_length = int(payload.get("max_length", 64))

        # The fine-tuned T5 model expects "answer: ... context: ..." as input.
        input_text = "answer: {} context: {} </s>".format(answer, context)
        features = self.tokenizer([input_text], return_tensors="pt")

        output = self.model.generate(
            input_ids=features["input_ids"],
            attention_mask=features["attention_mask"],
            max_length=max_length,
        )

        return {"result": self.tokenizer.decode(output[0])}
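
For a quick local smoke test outside of Cortex, the predictor class can be exercised directly. A minimal sketch, assuming the packages from requirements.txt and dependencies.sh are installed; the `config` argument is unused by this predictor:

from predictor import PythonPredictor

# Sketch: instantiate the predictor and run one prediction locally.
predictor = PythonPredictor(config={})
payload = {
    "context": "Sarah works as a software engineer in London",
    "answer": "London",
}
print(predictor.predict(payload))  # prints a dict like {"result": "<generated question>"}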
4 changes: 4 additions & 0 deletions examples/pytorch/question-generator/requirements.txt
@@ -0,0 +1,4 @@
spacy==2.1.8
-e git+https://github.com/huggingface/transformers.git#egg=transformers
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
4 changes: 4 additions & 0 deletions examples/pytorch/question-generator/sample.json
@@ -0,0 +1,4 @@
{
  "context": "Sarah works as a software engineer in London",
  "answer": "London"
}
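
Once the API is deployed, this payload can be posted to it. A minimal sketch, assuming `requests` is installed and that `<endpoint>` is replaced with the URL reported by `cortex get question-generator`:

import requests

# Sketch: query the deployed question-generator API with the sample payload.
endpoint = "<endpoint>"  # placeholder; use the endpoint reported by the Cortex CLI
payload = {
    "context": "Sarah works as a software engineer in London",
    "answer": "London",
}
response = requests.post(endpoint, json=payload)
print(response.json())  # a dict like {"result": "<generated question>"}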