The `haystack-experimental` package gives Haystack users access to experimental features without an immediate commitment to their official release. The main goal is to gather user feedback and iterate on new features quickly.
For simplicity, every release of `haystack-experimental` ships all the experiments available at that time. To install the latest experimental features, run:

```shell
pip install -U haystack-experimental
```
> **Important:** The latest version of the experimental package is only tested against the latest version of Haystack. Compatibility with older versions of Haystack is not guaranteed.
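If your project pins an older Haystack release, pinning a matching experimental version is the safer option. A minimal sketch (the version numbers below are hypothetical; pick the pair that matches your setup):

```shell
# Hypothetical version pins, for illustration only; check the release notes
# for a haystack-ai / haystack-experimental pairing that works together.
pip install "haystack-ai==2.6.0" "haystack-experimental==0.3.0"
```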
Each experimental feature has a default lifespan of three months, starting from the date of the first non-pre-release build that includes it. Once it reaches the end of its lifespan, the experiment will be:
- Merged into Haystack core and published in the next minor release,
- Released as a Core Integration, or
- Dropped.
The latest version of the package contains the following experiments:
| Name | Type | Expected End Date | Dependencies | Cookbook | Discussion |
| --- | --- | --- | --- | --- | --- |
| `EvaluationHarness` | Evaluation orchestrator | October 2024 | None | | Discuss |
| `OpenAIFunctionCaller` | Function Calling Component | October 2024 | None | 📓 | |
| `OpenAPITool` | OpenAPITool component | October 2024 | `jsonref` | | Discuss |
| Support for Tools: refactored `ChatMessage` dataclass, `Tool` dataclass, refactored `OpenAIChatGenerator`, `OllamaChatGenerator`, `HuggingFaceAPIChatGenerator`, `AnthropicChatGenerator`, `ToolInvoker` component | Tool Calling support | November 2024 | `jsonschema` | | Discuss |
| `ChatMessageWriter` | Memory Component | December 2024 | None | | Discuss |
| `ChatMessageRetriever` | Memory Component | December 2024 | None | | Discuss |
| `InMemoryChatMessageStore` | Memory Store | December 2024 | None | | Discuss |
| Auto-Merging Retriever & `HierarchicalDocumentSplitter` | Document Splitting & Retrieval Technique | December 2024 | None | | Discuss |
| `LLMMetadataExtractor` | Metadata extraction with LLM | December 2024 | None | 📓 | Discuss |
New experimental features can be imported like any other Haystack integration package:

```python
from haystack.dataclasses import ChatMessage
from haystack_experimental.components.generators import FoobarGenerator

c = FoobarGenerator()
c.run([ChatMessage.from_user("What's an experiment? Be brief.")])
```
Experiments can also override existing Haystack features. For example, users can opt into an experimental type of `Pipeline` by just changing the usual import:

```python
# from haystack import Pipeline
from haystack_experimental import Pipeline

pipe = Pipeline()
# ...
pipe.run(...)
```
Some experimental features come with example notebooks and resources that can be found in the `examples` folder. Documentation for `haystack-experimental` can be found here.
Experiments should replicate the namespace of the core package. For example, a new generator:

```python
# in haystack_experimental/components/generators/foobar.py
from haystack import component


@component
class FoobarGenerator:
    ...
```
When an experiment overrides an existing feature, the new symbol should be created at the same path in the experimental package. This new symbol will override the original in `haystack-ai`: for classes, with a subclass; for bare functions, with a wrapper. For example:
```python
# in haystack_experimental/core/pipeline/pipeline.py
from typing import Any, Dict

from haystack.core.pipeline import Pipeline as HaystackPipeline


class Pipeline(HaystackPipeline):
    # Any new experimental method that doesn't exist in the original class
    def run_async(self, inputs) -> Dict[str, Dict[str, Any]]:
        ...

    # Existing methods with breaking changes to their signature,
    # like adding a new mandatory parameter
    def to_dict(self, new_param: str) -> Dict[str, Any]:
        # do something with the new parameter
        print(new_param)
        # call the original method
        return super().to_dict()
```
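Because the override is a subclass, code written against the core class keeps working. A minimal sketch of what that buys you, assuming the experimental `Pipeline` shown above:

```python
from haystack import Pipeline as HaystackPipeline
from haystack_experimental import Pipeline

pipe = Pipeline()
# The experimental class is a drop-in replacement: any code that
# expects a core Pipeline still accepts the experimental one.
assert isinstance(pipe, HaystackPipeline)
```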
Direct contributions to `haystack-experimental` are not expected, but Haystack maintainers might ask contributors to move pull requests that target the core repository to this repository.
As with the Haystack core package, we rely on anonymous usage statistics to determine the impact and usefulness of the experimental features. For more information on what we collect and how we use the data, as well as instructions to opt-out, please refer to our documentation.
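For instance, at the time of writing Haystack honors the `HAYSTACK_TELEMETRY_ENABLED` environment variable, so a quick way to opt out looks like the sketch below; check the telemetry documentation for the current mechanism:

```python
import os

# Disable anonymous usage statistics before importing Haystack;
# the experimental package relies on the same telemetry machinery.
os.environ["HAYSTACK_TELEMETRY_ENABLED"] = "False"

from haystack import Pipeline  # imported with telemetry disabled
```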