[BUG] Knowledge doesn't read from the knowledge source using StringSource #2315
Comments
Hi @chadsly, I think there is an issue with the documentation. Can you try passing an embedder to the crew itself? Meanwhile, I will raise a PR for this.
Can you try this once? I am facing some other issue, which is why I'm not able to test it myself. Let me know whether this solves the entire thing or not.
I changed the crew to include an embedder. It's the same model as the LLM; I could have used something else, but I didn't. I also checked to see if I could include a base_url, but it didn't seem to make any difference. I'll open a separate ticket if I ever run into that issue.
@chadsly use an embedding model: with Ollama, download one. Embedding models are supported here instead of a text completion model.
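For readers hitting the same thing, here is a minimal sketch of what that embedder configuration can look like with Ollama. The model name nomic-embed-text is only an example of a local embedding model (it is not from the original report), and the exact config keys should be checked against the knowledge docs for your crewai version.

# Assumes you have already pulled an embedding model locally, e.g.:
#   ollama pull nomic-embed-text
ollama_embedder = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",  # example embedding model; any pulled embedding model should work
        # key names here are illustrative; confirm them against the crewai knowledge docs
    },
}

You would then pass this dict as the embedder argument when constructing the Crew (see the corrected snippet after the reproduction code below).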
@lorenzejay, well now I'm stuck in an awkward situation where I can't change the embedding model, which sounds like an environment issue. In the crewai project that I'm working with I've tried to
Clearly I'm not using any OpenAI keys. So I simply started a new environment to work in. This worked. I did not know that I couldn't use a regular chat model in place of an embedding model. Is that a crewai limitation? A library limitation? Or just how embedding works? Summary: my original request has been answered.
Please upgrade to the latest version; that should've been fixed. And for anything that stores to a vector DB, we require embedding models to turn text into a representation that can be searched semantically in the vector DB. That's just how it works compared to a chat completion.
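To make that distinction concrete: an embedding model maps text to a fixed-length vector, and the vector store retrieves knowledge by comparing those vectors rather than by generating text. The toy sketch below uses made-up three-dimensional vectors purely to illustrate the similarity search; real embedding models produce vectors with hundreds or thousands of dimensions.

import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings standing in for what a real embedding model would return.
knowledge_chunks = {
    "Users name is John. He is 30 years old and lives in San Francisco.": [0.9, 0.1, 0.2],
    "The weather in Paris is mild in spring.": [0.1, 0.8, 0.3],
}
query_vector = [0.85, 0.15, 0.25]  # pretend embedding of "Where does John live?"

# The vector store returns the chunk whose embedding is closest to the query.
best_chunk = max(knowledge_chunks, key=lambda text: cosine_similarity(knowledge_chunks[text], query_vector))
print(best_chunk)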
Description
I've attempted running the example "knowledge" code that is in the docs: https://docs.crewai.com/concepts/knowledge
The first example uses the StringKnowledgeSource. Once I can get this one working, I'll move on to the more complicated options.
I used the code as is (except for changing the LLM) and ran it as a simple Python script. I've also rewritten it to run as a crewai project.
Steps to Reproduce
Expected behavior
Screenshots/Code snippets
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create a knowledge source
content = "Users name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content,
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
# llm = LLM(model="gpt-4o-mini", temperature=0)  # LLM used in the docs example
llm = LLM(model="ollama/llama3.2:1b", base_url="http://localhost:11434")

# Create an agent with the knowledge store
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="""You are a master at understanding people and their preferences.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)

task = Task(
    description="Answer the following questions about the user: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source],  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
)

result = crew.kickoff(inputs={"question": "What city does John Albertson live in and how old is he?"})
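Based on the discussion in the comments above, the change that resolved this was giving the Crew an explicit embedder alongside the knowledge sources. Below is a minimal sketch of the corrected Crew construction, assuming a local Ollama embedding model such as nomic-embed-text (the model name is an example, not something from the original report).

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source],
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},  # example embedding model; use any embedding model you have pulled
    },
)

result = crew.kickoff(inputs={"question": "What city does John Albertson live in and how old is he?"})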
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
0.98.0
crewAI Tools Version
Virtual Environment
Venv
Evidence
The answer told me that John lived in New York City and was 38 years old. In other words, it didn't use the knowledge at all.
Possible Solution
None
Additional context