
/memory causes memgpt to crash when using a local LLM. NotImplementedError: ref_doc_info not supported for an empty index. #220

Closed
cevatkerim opened this issue Oct 31, 2023 · 2 comments
Assignees

Comments

@cevatkerim

```
Created new agent agent_1.
Available functions:
 ['send_message', 'pause_heartbeats', 'core_memory_append', 'core_memory_replace', 'conversation_search', 'conversation_search_date', 'archival_memory_insert', 'archival_memory_search']
AgentAsync initialized, self.messages_total=3
Initializing InMemoryStateManager with agent object
InMemoryStateManager.all_messages.len = 4
InMemoryStateManager.messages.len = 4
Hit enter to begin (will request first MemGPT message)

This is the first message. Running extra verifier on AI response.
💭 Persona activated for user cs_phd. Greetings and welcome, how may I assist you today?
🤖 Greetings and welcome, how may I assist you today?
⚡🟢 [function] Success: None
last response total_tokens (0) < 7000
InMemoryStateManager.append_to_messages
> Enter your message: hi
🧑 {'message': 'hi', 'time': '2023-10-31 08:56:50 AM PDT-0700'}
💭 Beginning conversation with user cs_phd. Preparing opening greeting...
🤖 Hello there!
⚡🟢 [function] Success: None
last response total_tokens (0) < 7000
InMemoryStateManager.append_to_messages
> Enter your message: /save
> Enter your message: /memory

Dumping memory contents:


### CORE MEMORY ###
=== Persona ===
sam

=== Human ===
cs_phd
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/john/MemGPT/memgpt/cli/cli.py:148 in run                                                  │
│                                                                                                  │
│   145 │   from memgpt.main import run_agent_loop                                                 │
│   146 │                                                                                          │
│   147 │   loop = asyncio.get_event_loop()                                                        │
│ ❱ 148 │   loop.run_until_complete(run_agent_loop(memgpt_agent, first, no_verify, config))  # T   │
│   149                                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │               agent = None                                                                   │ │
│ │        agent_config = AgentConfig()                                                          │ │
│ │         agent_files = []                                                                     │ │
│ │              agents = []                                                                     │ │
│ │              config = MemGPTConfig(                                                          │ │
│ │                       │   config_path='/home/john/.memgpt/config',                          │ │
│ │                       │   anon_clientid='00000000000000000000f6aecac3218d',                  │ │
│ │                       │   preset='memgpt_chat',                                              │ │
│ │                       │   model_endpoint='http://192.168.0.109:9090',                        │ │
│ │                       │   model='gpt-4',                                                     │ │
│ │                       │   openai_key='test',                                                 │ │
│ │                       │   azure_key=None,                                                    │ │
│ │                       │   azure_endpoint=None,                                               │ │
│ │                       │   azure_version=None,                                                │ │
│ │                       │   azure_deployment=None,                                             │ │
│ │                       │   azure_embedding_deployment=None,                                   │ │
│ │                       │   default_persona='sam',                                             │ │
│ │                       │   default_human='cs_phd',                                            │ │
│ │                       │   default_agent=None,                                                │ │
│ │                       │   embedding_model='openai',                                          │ │
│ │                       │   embedding_dim=768,                                                 │ │
│ │                       │   embedding_chunk_size=300,                                          │ │
│ │                       │   archival_storage_type='local',                                     │ │
│ │                       │   archival_storage_path=None,                                        │ │
│ │                       │   archival_storage_uri=None,                                         │ │
│ │                       │   recall_storage_type='local',                                       │ │
│ │                       │   recall_storage_path=None,                                          │ │
│ │                       │   recall_storage_uri=None,                                           │ │
│ │                       │   persistence_manager_type=None,                                     │ │
│ │                       │   persistence_manager_save_file=None,                                │ │
│ │                       │   persistence_manager_uri=None                                       │ │
│ │                       )                                                                      │ │
│ │         data_source = None                                                                   │ │
│ │               debug = True                                                                   │ │
│ │         embed_model = HuggingFaceEmbedding(                                                  │ │
│ │                       │   model_name='BAAI/bge-small-en-v1.5',                               │ │
│ │                       │   embed_batch_size=10,                                               │ │
│ │                       │   callback_manager=<llama_index.callbacks.base.CallbackManager       │ │
│ │                       object at 0x7fc32035e4d0>,                                             │ │
│ │                       │   tokenizer_name='BAAI/bge-small-en-v1.5',                           │ │
│ │                       │   max_length=512,                                                    │ │
│ │                       │   pooling=<Pooling.CLS: 'cls'>,                                      │ │
│ │                       │   normalize='True',                                                  │ │
│ │                       │   query_instruction=None,                                            │ │
│ │                       │   text_instruction=None,                                             │ │
│ │                       │   cache_folder=None                                                  │ │
│ │                       )                                                                      │ │
│ │               first = False                                                                  │ │
│ │               human = None                                                                   │ │
│ │                loop = <_UnixSelectorEventLoop running=False closed=False debug=False>        │ │
│ │        memgpt_agent = <memgpt.agent.AgentAsync object at 0x7fc31f6ae550>                     │ │
│ │               model = 'airoboros-l2-70b-2.1'                                                 │ │
│ │           no_verify = False                                                                  │ │
│ │     original_stdout = <colorama.ansitowin32.StreamWrapper object at 0x7fc3f389e250>          │ │
│ │ persistence_manager = <memgpt.persistence_manager.LocalStateManager object at                │ │
│ │                       0x7fc317b67a90>                                                        │ │
│ │             persona = None                                                                   │ │
│ │              preset = None                                                                   │ │
│ │      run_agent_loop = <function run_agent_loop at 0x7fc3cfdae5c0>                            │ │
│ │     service_context = ServiceContext(                                                        │ │
│ │                       │   llm_predictor=LLMPredictor(                                        │ │
│ │                       │   │   system_prompt=None,                                            │ │
│ │                       │   │   query_wrapper_prompt=None,                                     │ │
│ │                       │   │   pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'> │ │
│ │                       │   ),                                                                 │ │
│ │                       │   prompt_helper=PromptHelper(                                        │ │
│ │                       │   │   context_window=3900,                                           │ │
│ │                       │   │   num_output=256,                                                │ │
│ │                       │   │   chunk_overlap_ratio=0.1,                                       │ │
│ │                       │   │   chunk_size_limit=None,                                         │ │
│ │                       │   │   separator=' '                                                  │ │
│ │                       │   ),                                                                 │ │
│ │                       │   embed_model=HuggingFaceEmbedding(                                  │ │
│ │                       │   │   model_name='BAAI/bge-small-en-v1.5',                           │ │
│ │                       │   │   embed_batch_size=10,                                           │ │
│ │                       │   │   callback_manager=<llama_index.callbacks.base.CallbackManager   │ │
│ │                       object at 0x7fc32035e4d0>,                                             │ │
│ │                       │   │   tokenizer_name='BAAI/bge-small-en-v1.5',                       │ │
│ │                       │   │   max_length=512,                                                │ │
│ │                       │   │   pooling=<Pooling.CLS: 'cls'>,                                  │ │
│ │                       │   │   normalize='True',                                              │ │
│ │                       │   │   query_instruction=None,                                        │ │
│ │                       │   │   text_instruction=None,                                         │ │
│ │                       │   │   cache_folder=None                                              │ │
│ │                       │   ),                                                                 │ │
│ │                       │   node_parser=SimpleNodeParser(                                      │ │
│ │                       │   │   text_splitter=SentenceSplitter(                                │ │
│ │                       │   │   │   chunk_size=300,                                            │ │
│ │                       │   │   │   chunk_overlap=20,                                          │ │
│ │                       │   │   │   separator=' ',                                             │ │
│ │                       │   │   │   paragraph_separator='\n\n\n',                              │ │
│ │                       │   │   │   secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?',      │ │
│ │                       │   │   │   chunking_tokenizer_fn=<function                            │ │
│ │                       split_by_sentence_tokenizer.<locals>.split at 0x7fc31f549800>,         │ │
│ │                       │   │   │                                                              │ │
│ │                       callback_manager=<llama_index.callbacks.base.CallbackManager object at │ │
│ │                       0x7fc32035e4d0>,                                                       │ │
│ │                       │   │   │   tokenizer=functools.partial(<bound method Encoding.encode  │ │
│ │                       of <Encoding 'gpt2'>>, allowed_special='all')                          │ │
│ │                       │   │   ),                                                             │ │
│ │                       │   │   include_metadata=True,                                         │ │
│ │                       │   │   include_prev_next_rel=True,                                    │ │
│ │                       │   │   metadata_extractor=None,                                       │ │
│ │                       │   │   callback_manager=<llama_index.callbacks.base.CallbackManager   │ │
│ │                       object at 0x7fc32035e4d0>                                              │ │
│ │                       │   ),                                                                 │ │
│ │                       │   llama_logger=<llama_index.logger.base.LlamaLogger object at        │ │
│ │                       0x7fc31f642290>,                                                       │ │
│ │                       │   callback_manager=<llama_index.callbacks.base.CallbackManager       │ │
│ │                       object at 0x7fc32035e4d0>                                              │ │
│ │                       )                                                                      │ │
│ │                 yes = False                                                                  │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │
│ /home/john/miniconda3/envs/memgpt/lib/python3.11/asyncio/base_events.py:653 in                  │
│ run_until_complete                                                                               │
│                                                                                                  │
│    650 │   │   if not future.done():                                                             │
│    651 │   │   │   raise RuntimeError('Event loop stopped before Future completed.')             │
│    652 │   │                                                                                     │
│ ❱  653 │   │   return future.result()                                                            │
│    654 │                                                                                         │
│    655 │   def stop(self):                                                                       │
│    656 │   │   """Stop running the event loop.                                                   │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │   future = <Task finished name='Task-1' coro=<run_agent_loop() done, defined at              │ │
│ │            /home/john/MemGPT/memgpt/main.py:361>                                            │ │
│ │            exception=NotImplementedError('ref_doc_info not supported for an empty index.')>  │ │
│ │ new_task = True                                                                              │ │
│ │     self = <_UnixSelectorEventLoop running=False closed=False debug=False>                   │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │
│ /home/john/MemGPT/memgpt/main.py:461 in run_agent_loop                                          │
│                                                                                                  │
│   458 │   │   │   │   elif user_input.lower() == "/memory":                                      │
│   459 │   │   │   │   │   print(f"\nDumping memory contents:\n")                                 │
│   460 │   │   │   │   │   print(f"{str(memgpt_agent.memory)}")                                   │
│ ❱ 461 │   │   │   │   │   print(f"{str(memgpt_agent.persistence_manager.archival_memory)}")      │
│   462 │   │   │   │   │   print(f"{str(memgpt_agent.persistence_manager.recall_memory)}")        │
│   463 │   │   │   │   │   continue                                                               │
│   464                                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │                  cfg = MemGPTConfig(                                                         │ │
│ │                        │   config_path='/home/john/.memgpt/config',                         │ │
│ │                        │   anon_clientid='00000000000000000000f6aecac3218d',                 │ │
│ │                        │   preset='memgpt_chat',                                             │ │
│ │                        │   model_endpoint='http://192.168.0.109:9090',                       │ │
│ │                        │   model='gpt-4',                                                    │ │
│ │                        │   openai_key='test',                                                │ │
│ │                        │   azure_key=None,                                                   │ │
│ │                        │   azure_endpoint=None,                                              │ │
│ │                        │   azure_version=None,                                               │ │
│ │                        │   azure_deployment=None,                                            │ │
│ │                        │   azure_embedding_deployment=None,                                  │ │
│ │                        │   default_persona='sam',                                            │ │
│ │                        │   default_human='cs_phd',                                           │ │
│ │                        │   default_agent=None,                                               │ │
│ │                        │   embedding_model='openai',                                         │ │
│ │                        │   embedding_dim=768,                                                │ │
│ │                        │   embedding_chunk_size=300,                                         │ │
│ │                        │   archival_storage_type='local',                                    │ │
│ │                        │   archival_storage_path=None,                                       │ │
│ │                        │   archival_storage_uri=None,                                        │ │
│ │                        │   recall_storage_type='local',                                      │ │
│ │                        │   recall_storage_path=None,                                         │ │
│ │                        │   recall_storage_uri=None,                                          │ │
│ │                        │   persistence_manager_type=None,                                    │ │
│ │                        │   persistence_manager_save_file=None,                               │ │
│ │                        │   persistence_manager_uri=None                                      │ │
│ │                        )                                                                     │ │
│ │              counter = 2                                                                     │ │
│ │                first = False                                                                 │ │
│ │      function_failed = False                                                                 │ │
│ │    heartbeat_request = None                                                                  │ │
│ │               legacy = False                                                                 │ │
│ │         memgpt_agent = <memgpt.agent.AgentAsync object at 0x7fc31f6ae550>                    │ │
│ │      multiline_input = False                                                                 │ │
│ │         new_messages = [                                                                     │ │
│ │                        │   {                                                                 │ │
│ │                        │   │   'role': 'user',                                               │ │
│ │                        │   │   'content': '{"type": "user_message", "message": "hi", "time": │ │
│ │                        "2023-10-31 08:56:50 AM PDT-07'+4                                     │ │
│ │                        │   },                                                                │ │
│ │                        │   {                                                                 │ │
│ │                        │   │   'role': 'assistant',                                          │ │
│ │                        │   │   'content': 'Beginning conversation with user cs_phd.          │ │
│ │                        Preparing opening greeting... ',                                      │ │
│ │                        │   │   'function_call': {                                            │ │
│ │                        │   │   │   'name': 'send_message',                                   │ │
│ │                        │   │   │   'arguments': '{"message": "Hello there!"}'                │ │
│ │                        │   │   }                                                             │ │
│ │                        │   },                                                                │ │
│ │                        │   {                                                                 │ │
│ │                        │   │   'role': 'function',                                           │ │
│ │                        │   │   'name': 'send_message',                                       │ │
│ │                        │   │   'content': '{"status": "OK", "message": null, "time":         │ │
│ │                        "2023-10-31 08:56:56 AM PDT-0700"}'                                   │ │
│ │                        │   }                                                                 │ │
│ │                        ]                                                                     │ │
│ │            no_verify = False                                                                 │ │
│ │ skip_next_user_input = False                                                                 │ │
│ │               status = <rich.status.Status object at 0x7fc317ad7fd0>                         │ │
│ │        token_warning = False                                                                 │ │
│ │      USER_GOES_FIRST = False                                                                 │ │
│ │           user_input = '/memory'                                                             │ │
│ │         user_message = '{"type": "user_message", "message": "hi", "time": "2023-10-31        │ │
│ │                        08:56:50 AM PDT-07'+4                                                 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │
│ /home/john/MemGPT/memgpt/memory.py:725 in __repr__                                              │
│                                                                                                  │
│   722 │   │   return self.search(query_string, count, start)                                     │
│   723 │                                                                                          │
│   724 │   def __repr__(self) -> str:                                                             │
│ ❱ 725 │   │   print(self.index.ref_doc_info)                                                     │
│   726 │   │   return ""                                                                          │
│   727                                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────── locals ───────────────────────────────╮                         │
│ │ self = <repr-error 'ref_doc_info not supported for an empty index.'> │                         │
│ ╰──────────────────────────────────────────────────────────────────────╯                         │
│                                                                                                  │
│ /home/john/miniconda3/envs/memgpt/lib/python3.11/site-packages/llama_index/indices/empty/base.p │
│ y:85 in ref_doc_info                                                                             │
│                                                                                                  │
│   82 │   @property                                                                               │
│   83 │   def ref_doc_info(self) -> Dict[str, RefDocInfo]:                                        │
│   84 │   │   """Retrieve a dict mapping of ingested documents and their nodes+metadata."""       │
│ ❱ 85 │   │   raise NotImplementedError("ref_doc_info not supported for an empty index.")         │
│   86                                                                                             │
│   87                                                                                             │
│   88 # legacy                                                                                    │
│                                                                                                  │
│ ╭────────────────────────────────── locals ───────────────────────────────────╮                  │
│ │ self = <llama_index.indices.empty.base.EmptyIndex object at 0x7fc31704a810> │                  │
│ ╰─────────────────────────────────────────────────────────────────────────────╯                  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NotImplementedError: ref_doc_info not supported for an empty index.
```
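For reference, the crash path in the traceback (`memory.py:725`, where `__repr__` accesses `self.index.ref_doc_info` on a llama_index `EmptyIndex`) can be reproduced and guarded with a small sketch. The class names below are minimal stand-ins, not MemGPT's actual classes, and this is only an illustration of the failure mode, not the fix that landed in #240:

```python
class EmptyIndex:
    """Stand-in for llama_index's EmptyIndex: ref_doc_info always raises."""

    @property
    def ref_doc_info(self):
        raise NotImplementedError("ref_doc_info not supported for an empty index.")


class ArchivalMemory:
    """Stand-in for the archival memory whose __repr__ crashed on /memory."""

    def __init__(self, index):
        self.index = index

    def __repr__(self) -> str:
        # Guard the property access so printing an empty archival memory
        # degrades gracefully instead of propagating NotImplementedError.
        try:
            docs = self.index.ref_doc_info
        except NotImplementedError:
            return "<archival memory: empty>"
        return f"<archival memory: {len(docs)} documents>"


print(repr(ArchivalMemory(EmptyIndex())))  # -> <archival memory: empty>
```

The key point is that `ref_doc_info` is a property that raises rather than returning an empty dict, so any unguarded access in `__repr__` turns a simple `/memory` dump into a crash.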
@cpacker
Collaborator

cpacker commented Nov 1, 2023

@sarahwooders llama index bug?

@sarahwooders
Collaborator

Addressed by #240

@cpacker closed this as completed Nov 12, 2023