As the POC iterated, we settled on capturing runtime parameters in
a PipelineContext object available to all blocks in the pipeline.
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
````diff
     logger.warning("The custom flow file may have new features that will be ignored.")
     ```

-### Flow Params
+### Pipeline Context

-The following parameters are supplied to a Flow constructor and used by the render() method:
+The following runtime parameters will no longer be part of the pipeline configuration definition and will instead be available to blocks via a `PipelineContext` object:

 - client - an OpenAI completions API client for talking to the teacher model via the serving backend (i.e. llama-cpp or vLLM)
 - model_family - e.g. mixtral or merlinite
 - model_id - a path name for the specific teacher model being used
 - num_instructions_to_generate - how many samples to generate
-- batched - whether the model serving backend supports batched mode

-For now, we assume that the `Block` classes know how to use these parameters and there is no need to use any sort of templating in the custom flows that could use these parameters.
+For now, we assume there is no need to do any sort of templating in the custom pipelines based on these runtime parameters.
````
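To make the change concrete, here is a minimal sketch of what such a context object and a block consuming it might look like. The field names come from the list above; the class shape, the `Block` constructor signature, and the example values are assumptions for illustration, not the actual SDG implementation.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class PipelineContext:
    """Runtime parameters shared by all blocks in a pipeline (hypothetical sketch)."""

    client: Any  # OpenAI completions API client for the teacher model (llama-cpp or vLLM backend)
    model_family: str  # e.g. "mixtral" or "merlinite"
    model_id: str  # path name for the specific teacher model being used
    num_instructions_to_generate: int  # how many samples to generate


class Block:
    """A block receives the shared context at construction time (hypothetical sketch)."""

    def __init__(self, ctx: PipelineContext, block_name: str) -> None:
        self.ctx = ctx
        self.block_name = block_name


# Example: the pipeline builds one context and passes it to every block,
# so runtime parameters never need to appear in the pipeline configuration.
ctx = PipelineContext(
    client=None,  # a real OpenAI client instance in practice
    model_family="merlinite",
    model_id="models/merlinite-7b",
    num_instructions_to_generate=30,
)
block = Block(ctx, "gen_knowledge")
```

Because every block reaches its runtime parameters through `self.ctx`, custom pipeline definitions stay purely declarative and need no templating of these values.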