Releases: ittia-research/check
v.0.0.3
v.0.0.2
Highlights
- Fully integrated with DSPy, using the MIPROv2 optimizer
- Changed default embedding and rerank inference to an Infinity API server, which is faster and more stable
- Fixed Cloudflare 524 timeout errors
- Changed the search backend to https://search.ittia.net
- Added examples showing how to create a wiki_dpr index and start a retriever server
What's Changed
- Add DSPy pipeline in #7
- Add API key support to LlamaIndex OllamaEmbedding in #9
- Change all LLM calls to DSPy and increase the citation token limit in #10
- Change the base image to CUDA and switch to dspy.Retrieve in #11
- Add a script to create a dataset based on HotPotQA, and update infra in #12
- Add a wiki_dpr retriever for DSPy compilation in #15
- Change to multi-source mode in #17
- Change the API response to streaming in #18
- Move pipelines into a single class, and change to a streaming search backend in #19
- Change the default search backend and update the endpoint in #20
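Streaming the API response (as in #18) means yielding partial results as each pipeline stage completes, rather than returning one response at the end. A minimal stdlib-only sketch of the idea follows; the stage names and payload fields are hypothetical illustrations, not the project's actual event format:

```python
import json
from typing import Iterator

def stream_check(statement: str) -> Iterator[str]:
    """Yield newline-delimited JSON events as each pipeline stage finishes.

    Stage names and payloads here are invented for illustration only.
    """
    yield json.dumps({"stage": "search", "statement": statement}) + "\n"
    # ... search, retrieval, and verdict generation would run here ...
    yield json.dumps({"stage": "verdict", "final": "true"}) + "\n"

# A client reads events incrementally instead of waiting for the full result.
for line in stream_check("The Earth orbits the Sun."):
    event = json.loads(line)
```

Beyond responsiveness, incremental output keeps the connection active during long-running checks, which is one common way to avoid proxy timeouts such as the Cloudflare 524 error mentioned above.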
v.0.0.1
Features:
- For each statement, generate multiple verdicts, one against each web search result page.
- Weight the verdicts into a single final verdict, and show the related context and sources.
- Retrieval: LlamaIndex auto-merging retriever, with embedding and reranking.
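The verdict-weighting step above could be sketched roughly as follows; the `Verdict` structure and the additive weighting scheme are illustrative assumptions, not the project's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # e.g. "true", "false", or "irrelevant" (assumed labels)
    weight: float   # e.g. derived from the rerank score of the source page
    source: str     # URL of the web page the verdict was generated against

def aggregate(verdicts: list[Verdict]) -> tuple[str, list[str]]:
    """Weight per-page verdicts into one final verdict (illustration only)."""
    scores: dict[str, float] = {}
    for v in verdicts:
        if v.label == "irrelevant":  # pages with no signal carry no weight
            continue
        scores[v.label] = scores.get(v.label, 0.0) + v.weight
    if not scores:
        return "unverifiable", []
    final = max(scores, key=scores.get)
    # Surface the sources that support the winning verdict.
    sources = [v.source for v in verdicts if v.label == final]
    return final, sources
```

For example, two "true" verdicts with weights 0.9 and 0.5 outweigh one "false" verdict with weight 0.3, so `aggregate` returns `"true"` along with the two supporting source URLs.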
What's next:
- A more sophisticated pipeline using DSPy and similar tools.
- Implement LLM techniques such as multi-shot prompting and chain of thought.