
Support for guidance/structured output with prompt API #35

Open
sushraja-msft opened this issue Aug 22, 2024 · 2 comments
Labels
enhancement New feature or request interop Potential concerns about interoperability among multiple implementations of the API

Comments

@sushraja-msft

sushraja-msft commented Aug 22, 2024

To aid programmability, reduce the compatibility risk of the API returning different results across browsers, and avoid the challenges of updating a shipping model in the browser (e.g. Google Model V1 to Google Model V2), please consider adding techniques like guidance/structured outputs as an integral part of the Prompt API.

Problem Illustration

Consider the following web developer scenarios, where a developer is:

  1. Classifying a product review as the user types, in order to ask follow-up questions.
  2. Building a chat bot and wanting to programmatically detect whether a question should be routed a particular way.
  3. Building a reading-comprehension assistive extension that poses questions based on the web page content.

Web developers who attempt to parse the response are going to have a hard time writing code that is model/browser agnostic.

Constraining Output

One way to solve this problem is to use guidance or techniques like it. At a high level, these techniques work by restricting the next allowed token from the LLM so that the output conforms to a grammar. Guidance works on top of a model, is model agnostic, and only changes the logits from the last layer of the model before sampling. There is an additional implementation detail within guidance: information about all possible tokens prefixed with the next possible token is required for it to function (explanation).
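As an illustrative sketch only (not the guidance library's actual API), the logit-masking idea can be reduced to: at each decoding step, tokens that cannot legally extend the grammar get their logits forced to negative infinity before sampling. The names `maskLogits` and `isAllowed` below are made up for the example.

```javascript
// Minimal sketch of grammar-constrained decoding. Tokens the grammar
// disallows are masked out so sampling can never pick them.
function maskLogits(logits, vocab, isAllowed) {
  return logits.map((logit, i) => (isAllowed(vocab[i]) ? logit : -Infinity));
}

// Toy example: suppose the grammar says that after emitting `{"rating":`
// only digit tokens may follow.
const vocab = ["1", "2", "hello", "}", "5"];
const logits = [0.3, 1.2, 9.9, 0.1, 2.0];
const onlyDigits = (tok) => /^[0-9]+$/.test(tok);

const masked = maskLogits(logits, vocab, onlyDigits);
// Greedy sampling over the masked logits now picks among digits only,
// even though "hello" had the highest raw logit.
const next = vocab[masked.indexOf(Math.max(...masked))];
```

The key property is that the model itself is untouched; only the final sampling distribution is filtered, which is why the technique is model agnostic.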

With guidance (demo) we get better consistency across models, and responses that are immediately parseable with JavaScript.


Proposal

The proposal is to add `responseJsonSchema` to `AIAssistantPromptOptions`:

```webidl
dictionary AIAssistantPromptOptions {
  AbortSignal signal;
  DOMString? responseJsonSchema;
};
```

JSON Schema is familiar to web developers. However, JSON Schema is a superset of what techniques like guidance can achieve today. For example, constraints like dependentRequired cannot be enforced.
Either the API can state that only property names, value types, enums, and arrays are enforced, or the Prompt API should validate the response with a JSON Schema validator and indicate when the response is non-conformant. Slight preference for the first option because of its simplicity.
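To make the "enforced subset" concrete, here is a rough sketch of what validating only property names, value types, enums, and arrays could look like. The function `validateSubset` and its shape are hypothetical, not part of any spec; constraints outside the subset (such as dependentRequired) are simply ignored.

```javascript
// Hypothetical validator for the proposed enforceable subset of JSON Schema:
// property names, value types, enums, and arrays. Anything else passes.
function validateSubset(value, schema) {
  if (schema.enum) return schema.enum.includes(value);
  switch (schema.type) {
    case "object":
      if (typeof value !== "object" || value === null || Array.isArray(value)) return false;
      return Object.entries(schema.properties ?? {}).every(
        ([key, sub]) => key in value && validateSubset(value[key], sub)
      );
    case "array":
      return Array.isArray(value) && value.every((item) => validateSubset(item, schema.items ?? {}));
    case "string": return typeof value === "string";
    case "number": return typeof value === "number";
    case "boolean": return typeof value === "boolean";
    default: return true; // constraints outside the subset are not enforced
  }
}

const reviewSchema = {
  type: "object",
  properties: { sentiment: { enum: ["positive", "negative"] } }
};
const ok = validateSubset({ sentiment: "positive" }, reviewSchema);   // conforms
const bad = validateSubset({ sentiment: "meh" }, reviewSchema);      // enum violated
```

Under option one this subset would be guaranteed by construction during decoding; under option two the same check would run after generation, with the API surfacing a non-conformance signal.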

Other Approaches

@domenic
Collaborator

domenic commented Aug 30, 2024

In general we're excited about exploring this. Minor API surface nitpicks:

  • There's no need to have it be nullable; all dictionary entries are already optional.
  • Per https://w3ctag.github.io/design-principles/#casing-rules it should be something like responseJSONSchema, not responseJsonSchema
  • I think providing JSON as a string is pretty unusual, even though I understand it makes sense theoretically. I would suggest we take it as an object and then post-process it. Probably we would do the equivalent of: JSON.stringify(providedObject) -> pass the resulting JSON string to some JSON schema library. This feels a bit roundabout but I suspect for developer ergonomics it's way better.

So to summarize: object responseJSONSchema in the dictionary.
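The suggested object-then-serialize round-trip might look roughly like this; the handoff target (an internal schema library) is hypothetical:

```javascript
// Developer passes a plain object rather than a JSON string.
const providedObject = {
  type: "object",
  properties: { rating: { type: "number" } }
};

// Equivalent of the suggested post-processing: JSON.stringify the object,
// then hand the resulting string to whatever JSON Schema machinery the
// implementation uses internally.
const serialized = JSON.stringify(providedObject);
```

This keeps the ergonomic object-literal surface for developers while letting implementations consume a canonical JSON string.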

@domenic domenic added enhancement New feature or request interop Potential concerns about interoperability among multiple implementations of the API labels Oct 9, 2024
@rhys101

rhys101 commented Nov 5, 2024

Agree with @sushraja-msft that having structured output really helps with ensuring a correct response.

For reference, we've built a parallel implementation of the Prompt API as an extension based on llama.cpp, available on GitHub. We've exposed a grammar object within the implementation that can be passed into the create function.

An example use:

```js
const sess = await window.aibrow.languageModel.create({
  grammar: {
    "type": "object",
    "properties": {
      "first_name": { "type": "string" },
      "last_name": { "type": "string" },
      "country": { "type": "string" }
    }
  }
})

const stream = await sess.promptStreaming("Extract data from the following text: 'John Doe is an innovative software developer with a passion for creating intuitive user experiences. Based in the heart of England, John has spent the past decade refining his craft, working with both startups and established tech companies. His deep commitment to quality and creativity is evident in the numerous award-winning apps he has developed, which continue to enrich the digital lives of users worldwide. Beyond his technical skills, John is admired for his collaborative spirit and mentorship, always eager to share his knowledge and inspire the next generation of tech enthusiasts.'")
for await (const chunk of stream) {
  console.log(chunk)
}
```

Having experienced quite a few inconsistencies when trying to "plead with the prompt" to get it to output only JSON (it often tries to wrap the result in markdown), constrained structured output seems like the best approach.

@domenic domenic mentioned this issue Dec 3, 2024