
Fix: Ensure entity_or_relation_name is a string in _handle_entity_relation_summary #415

Merged · 360 commits · Dec 9, 2024

Conversation

SaujanyaV
Contributor

Description

I encountered an issue where the function _handle_entity_relation_summary raised the following error:

Argument 'entity_or_relation_name' has incorrect type (expected str, got tuple)

This error occurred because the function was being called with the tuple (src_id, tgt_id) instead of a string. To resolve the issue, I made the following changes:

Converted the Tuple to a Formatted String

Replaced the call to _handle_entity_relation_summary with:

description = await _handle_entity_relation_summary(
    f"({src_id}, {tgt_id})", description, global_config
)

This ensures that the entity_or_relation_name argument is always passed as a string.

Maintained Consistency

Updated the code to use formatted strings throughout, for better readability and alignment with the function's expected argument types.


Steps to Reproduce the Issue

  1. Call the _merge_edges_then_upsert function with a graph edge, so that the tuple (src_id, tgt_id) is passed to _handle_entity_relation_summary.
  2. Observe the type mismatch error when the function executes.

Solution

Ensure entity_or_relation_name is a string by using a formatted string f"({src_id}, {tgt_id})" instead of the tuple.

tpoisonooo and others added 30 commits October 23, 2024 11:24
fix(lightrag_siliconcloud_demo.py): max_token_size
[FIX] fix infinite loading hf model bug that cause oom
[hotfix-HKUDS#75][embedding] Fix the potential embedding problem
typo(lightrag/lightrag.py): typo
Error Handling:

Handled potential FileNotFoundError for README.md and requirements.txt.
Checked for missing required metadata and raised an informative error if any are missing.
Automated Package Discovery:

Replaced packages=["lightrag"] with setuptools.find_packages() to automatically find sub-packages and exclude test or documentation directories.
Additional Metadata:

Added Development Status in classifiers to indicate a "Beta" release (modify based on the project's maturity).
Used project_urls to link documentation, source code, and an issue tracker, which are standard for open-source projects.
Compatibility:

Included include_package_data=True to include additional files specified in MANIFEST.in.
These changes enhance the readability, reliability, and openness of the code, making it more contributor-friendly and ensuring it’s ready for open-source distribution.
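The error-handling and package-discovery points above can be sketched as a setup.py fragment. This is a hedged illustration of the described pattern, not the project's actual setup.py; the helper name read_long_description and the exclude patterns are assumptions.

```python
import setuptools

def read_long_description(path: str = "README.md") -> str:
    # Handle a missing README gracefully instead of crashing the build.
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        return ""

# find_packages() discovers sub-packages automatically and can exclude
# test or documentation directories, replacing a hard-coded
# packages=["lightrag"] list.
packages = setuptools.find_packages(exclude=("tests*", "docs*"))
```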
[FIX] fix hf output bug (current output contain user prompt which cause logical error in entity extraction phase)
LarFii and others added 26 commits December 5, 2024 18:21
- Add an embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating embedding cache usage
Add embedding cache support at query time
Fix a bug where args_hash was computed only when the regular cache was used, so it was never computed for the embedding cache
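The quantization idea mentioned in the commits above can be sketched as follows. This is an illustrative linear 8-bit scheme, not the project's actual implementation: floats are mapped onto integers in [0, 255] and restored approximately for similarity lookup.

```python
def quantize_embedding(emb, bits=8):
    # Map floats linearly onto integers in [0, 2**bits - 1].
    lo, hi = min(emb), max(emb)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid division by zero for constant vectors
    return [round((x - lo) / scale) for x in emb], lo, scale

def dequantize_embedding(q, lo, scale):
    # Restore approximate float values from the quantized integers.
    return [v * scale + lo for v in q]

emb = [0.1, -0.4, 0.9, 0.0]
q, lo, scale = quantize_embedding(emb)
restored = dequantize_embedding(q, lo, scale)  # within one quantization step of emb
```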
Fixed typing error in python3.9
Add support for Ollama streaming output and integrate Open-WebUI as the chat UI demo
- Extract shared cache-handling logic into new handle_cache and save_to_cache functions
- Unify the cache data structure with a CacheData class
- Streamline the handling of embedding-based and regular caches
- Add a mode parameter to support per-query-mode cache strategies
- Refactor get_best_cached_response to make cache lookups more efficient
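The embedding-similarity lookup named in these commits can be sketched as below. The CacheData fields and the threshold value are assumptions for illustration; only the function names follow the commit messages.

```python
from dataclasses import dataclass

@dataclass
class CacheData:
    args_hash: str
    prompt: str
    response: str
    embedding: list

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def get_best_cached_response(query_emb, cache, threshold=0.95):
    # Return the cached response whose embedding is most similar to the
    # query, provided it clears the similarity threshold.
    best = max(cache, key=lambda c: cosine_similarity(query_emb, c.embedding), default=None)
    if best and cosine_similarity(query_emb, best.embedding) >= threshold:
        return best.response
    return None
```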
- Add ensure_ascii=False to json.dumps calls to support non-ASCII character encoding
- This change ensures that log messages containing Chinese and other non-ASCII characters are processed and displayed correctly
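The ensure_ascii change can be shown with a minimal example: by default json.dumps escapes non-ASCII characters to \uXXXX sequences, while ensure_ascii=False preserves them verbatim.

```python
import json

record = {"message": "缓存命中"}  # "cache hit" in Chinese

escaped = json.dumps(record)                       # non-ASCII escaped to \uXXXX
readable = json.dumps(record, ensure_ascii=False)  # Chinese text preserved verbatim
```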
# Conflicts:
#	lightrag/llm.py
#	lightrag/operate.py
@LarFii
Collaborator

LarFii commented Dec 9, 2024

Thanks!

@LarFii LarFii merged commit e083854 into HKUDS:main Dec 9, 2024
1 check failed