Generate js and curl snippets using templates #1291
Conversation
I haven't reviewed all the snippets yet, but conceptually this looks really good 🔥
I've left some comments.
const client = new InferenceClient("api_token");

const data = fs.readFileSync("sample1.flac");
Maybe let's add an import statement for this.
Side note: I think the fs API is only available in a NodeJS context 😅
The equivalent in the browser is the FileReader API.
I think it's fine if the snippets are only compatible with Node, for simplicity - thoughts @julien-c @coyotte508?
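For illustration, a minimal sketch of the two environments (the browser variant is an assumption for comparison, not something this PR generates):

// Node.js: fs must be imported explicitly
import fs from "node:fs";
const data = fs.readFileSync("sample1.flac");

// Browser (hypothetical equivalent): no fs module; read a File/Blob via FileReader
// const reader = new FileReader();
// reader.onload = () => query(reader.result);
// reader.readAsArrayBuffer(fileInput.files[0]);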
In the README.md we have these:

await hf.automaticSpeechRecognition({
    model: 'facebook/wav2vec2-large-960h-lv60-self',
    data: readFileSync('test/sample1.flac')
})

await hf.imageToImage({
    inputs: new Blob([readFileSync("test/stormtrooper_depth.png")]),
    parameters: {
        prompt: "elmo's lecture",
    },
    model: "lllyasviel/sd-controlnet-depth",
});

await hf.zeroShotImageClassification({
    model: 'openai/clip-vit-large-patch14-336',
    inputs: {
        image: await (await fetch('https://placekitten.com/300/300')).blob()
    },
    parameters: {
        candidate_labels: ['cat', 'dog']
    }
})
Note that for now I've only reproduced what we already have in https://huggingface.co/openai/whisper-large-v3-turbo?inference_api=true&inference_provider=hf-inference&language=js. I'm fine with changing this but prefer to do it in a follow-up PR.
My personal preference would be to align with the Python snippet:

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="hf-inference",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)
output = client.automatic_speech_recognition("sample1.flac", model="openai/whisper-large-v3-turbo")
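For comparison, an aligned JS snippet could look roughly like this (a sketch assembled from the client and call shapes used elsewhere in this PR, not a snippet the PR currently generates):

import fs from "node:fs";
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient("hf_xxxxxxxxxxxxxxxxxxxxxxxx");

// Hypothetical JS counterpart of the Python snippet above
const output = await client.automaticSpeechRecognition({
    provider: "hf-inference",
    model: "openai/whisper-large-v3-turbo",
    data: fs.readFileSync("sample1.flac"),
});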
opened #1294 as a follow-up issue
    return result;
}

query({ inputs: "My name is Sarah Jessica Parker but you can call me Jessica" }).then((response) => {
😄 (not a change request)
Suggested change:
- query({ inputs: "My name is Sarah Jessica Parker but you can call me Jessica" }).then((response) => {
+ query({ inputs: "My name is Giovanni Giorgio, but everybody calls me Giorgio" }).then((response) => {
const data = fs.readFileSync("sample1.flac");

const output = await client.automaticSpeechRecognition({
More general remark: the stand-alone methods (automaticSpeechRecognition) have correct typing, while methods on the InferenceClient class (client.automaticSpeechRecognition) do not.
Until this is fixed, I would advocate using the stand-alone functions for a better user experience.
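A sketch of the two call styles being compared (arg shapes taken from the suggestion further down; token and model are placeholders):

import { automaticSpeechRecognition, InferenceClient } from "@huggingface/inference";

// Stand-alone function: correctly typed today
const output = await automaticSpeechRecognition({
    accessToken: "hf_xxx",
    model: "openai/whisper-large-v3-turbo",
    data,
});

// Same call as a method on the InferenceClient class: typing not yet correct
const client = new InferenceClient("hf_xxx");
const output2 = await client.automaticSpeechRecognition({
    model: "openai/whisper-large-v3-turbo",
    data,
});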
opened #1294 as a follow-up issue
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient("{{ accessToken }}");

const output = await client.{{ methodName }}({
    model: "{{ model.id }}",
    inputs: {{ inputs.asObj.inputs }},
    provider: "{{ provider }}",
});

console.log(output);
Following my remark about InferenceClient types - this applies to all huggingface.js snippets, I think:
Suggested change:
- import { InferenceClient } from "@huggingface/inference";
-
- const client = new InferenceClient("{{ accessToken }}");
-
- const output = await client.{{ methodName }}({
-     model: "{{ model.id }}",
-     inputs: {{ inputs.asObj.inputs }},
-     provider: "{{ provider }}",
- });
-
- console.log(output);
+ import { {{ methodName }} } from "@huggingface/inference";
+
+ const output = await {{ methodName }}({
+     model: "{{ model.id }}",
+     inputs: {{ inputs.asObj.inputs }},
+     provider: "{{ provider }}",
+     accessToken: "{{ accessToken }}"
+ });
+
+ console.log(output);
opened #1294 as a follow-up issue
@@ -0,0 +1,21 @@
{% if provider == "hf-inference" %}
What's the reason for not outputting a snippet when the provider is external?
hmm, I don't remember the reason 🤔 I reused what existed before (see https://huggingface.co/black-forest-labs/FLUX.1-schnell?inference_api=true&inference_provider=hf-inference&language=js vs https://huggingface.co/black-forest-labs/FLUX.1-schnell?inference_api=true&inference_provider=together&language=js).
Probably because inputs are not exactly the same depending on the provider. But now that we use makeRequestOptions it shouldn't be an issue anymore.
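A rough sketch of the idea - one helper resolves the provider-specific URL and payload, so the templates no longer need per-provider branching (buildRequest is a hypothetical stand-in; the real makeRequestOptions signature may differ):

// Conceptual sketch only, not the actual makeRequestOptions API
const { url, headers, body } = buildRequest({
    provider: "together", // or "hf-inference", ...
    model: "black-forest-labs/FLUX.1-schnell",
    inputs: "Astronaut riding a horse",
    accessToken: "hf_xxx",
});
const response = await fetch(url, { method: "POST", headers, body: JSON.stringify(body) });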
fixed in 18d87cd
Thanks for the review @SBrandeis and @coyotte508! Given how big this PR is, I'd rather not update the current snippets too much. They are mainly based on what existed in …
PR built on top of #1273.
This is supposed to be the last PR refactoring inference snippets 🙉
python.ts, curl.ts and js.ts have been merged into a single getInferenceSnippets.ts which handles snippet generation for all languages and all providers at once. Here is how to use it: it returns a list InferenceSnippet[] - see the sketch below.
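A sketch of the intended usage (the import path, exact signature and the fields of InferenceSnippet are assumptions for illustration, not taken verbatim from this PR):

// Hypothetical usage of the new single entry point
import { snippets } from "@huggingface/tasks"; // import path assumed

const generated = snippets.getInferenceSnippets(model, "hf_xxx", "hf-inference");

// Assumed shape of each returned item:
// interface InferenceSnippet {
//     client: string;   // e.g. "huggingface.js", "fetch", "curl"
//     content: string;  // the generated snippet source
// }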
How to review?
It's hard to track all the atomic changes made to the inference snippets, but IMO the best way to review this PR is to check the generated snippets in the tests. Many inconsistencies in the URLs, sent parameters and indentation have been fixed.
What's next?
- Get "Use makeRequestOptions to generate inference snippets" #1273 approved and merged.
- Switch callers to snippets.getInferenceSnippets (instead of python.getPythonSnippets, etc.)