I'm not reporting a problem, but thought I'd add some discussion that might be useful to put into the explainer:
Typically, LLMs make use of a random number generator. Some LLMs might be deterministic given a random seed. It could be useful to do procedural text generation from a user-provided seed: for example, a sharable URL could contain a seed and automatically generate the same text for everyone the URL is shared with. Text generation from a fixed seed could also be very useful for debugging.
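To make the sharable-URL idea concrete, here is a minimal sketch in TypeScript. The URL shape, the `seed` parameter name, and the `generate` function are all hypothetical illustrations; a tiny seeded PRNG (mulberry32) stands in for an LLM's sampler, since the point is only that the same seed yields the same text:

```typescript
// Mulberry32: a tiny seeded PRNG, standing in here for an LLM's sampler.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Hypothetical sharable URL carrying a seed, e.g. https://example.com/story?seed=42
function seedFromUrl(url: string): number {
  const parsed = new URL(url);
  return Number(parsed.searchParams.get("seed") ?? 0);
}

// "Generate" text deterministically from the seed.
function generate(seed: number, words: string[], length: number): string {
  const rand = mulberry32(seed);
  return Array.from({ length }, () => words[Math.floor(rand() * words.length)]).join(" ");
}

const vocab = ["once", "upon", "a", "time", "the", "fox", "ran"];
const url = "https://example.com/story?seed=42";
const a = generate(seedFromUrl(url), vocab, 5);
const b = generate(seedFromUrl(url), vocab, 5);
// Same seed, same text for everyone the URL is shared with.
console.log(a === b); // true
```

With a real model the generation step would be far more involved, but the contract the comment describes is the same: seed in, reproducible text out.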
But that would invite a dependency on a specific implementation: if an API took a model name and a random seed, and it became popular, it could "rust shut", so that that exact LLM always needs to be available.
So, while it would be nice, it seems wise that the API not accept these as input. But maybe it should provide some kind of {model, seed} object as output, which can be included in logs and plugged into the browser's debugger? Otherwise, when a user reports an issue, it will be hard to reproduce and debug.
The model name is a bit sensitive, though, since it would also be useful to anyone doing browser fingerprinting. Perhaps an opaque debugging key is better.
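One hypothetical shape for that opaque key, sketched in TypeScript. Nothing here is part of any proposed API; the `ReproInfo` interface and `makeDebugKey` function are assumptions for illustration. The idea is that the page only ever sees a hash-derived token, while the browser alone can map it back to the {model, seed} pair:

```typescript
// Hypothetical debug info a session might expose. Names are illustrative only.
interface ReproInfo {
  // Opaque key; only the browser can map it back to {model, seed}.
  debugKey: string;
}

// Sketch of how a browser might derive the opaque key internally: hash the
// model name and seed together so the page learns nothing fingerprintable.
// FNV-1a is used here for brevity; a real implementation would use a
// stronger, per-profile-keyed hash.
function makeDebugKey(model: string, seed: number): ReproInfo {
  const input = `${model}:${seed}`;
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h = Math.imul(h ^ input.charCodeAt(i), 0x01000193) >>> 0;
  }
  return { debugKey: h.toString(16).padStart(8, "0") };
}

const info = makeDebugKey("model-a", 42);
// The same {model, seed} always yields the same key, so a key pasted into a
// bug report lets the vendor reproduce the run without exposing the model name.
console.log(info.debugKey);
```

A page would log `debugKey` alongside user bug reports; the vendor's debugger, holding the reverse mapping, could then replay the exact model and seed.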
+1 to starting a discussion here. Interop is an important risk factor for a prompt API.
Web APIs are designed to work across products and to endure the passage of time. Language models pose a fundamental challenge here in that models are non-deterministic. Different models respond differently to the same prompt and require different prompt engineering techniques. The results that a browser/model combination (e.g. Gemini XS in Chrome Canary) generates now may be different tomorrow as the model evolves, and will be different in other browser/model combinations.
That said, the explainer does mention the only-for-experiment nature of the API:
Even more so than many other behind-a-flag APIs, the prompt API is an experiment, designed to help us understand web developers' use cases to inform a roadmap of purpose-built APIs.