[Question] RocksDB disk usage issue #2689
Comments
I found a similar issue that was previously discussed and closed; it was about RocksDB's memory usage being out of control. For optimizing the RocksDB configuration to reduce the metadata footprint and save disk space, consider adjustments along the lines of the sketch below.
These changes should help reduce the metadata footprint and save disk space when using RocksDB with HugeGraph. Adjust the settings based on your specific use case and test their impact on performance and disk usage.
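For illustration only, a minimal sketch of such adjustments in hugegraph.properties might look like the following. The rocksdb.* option names and values are assumptions based on HugeGraph's RocksDB backend options rather than settings confirmed in this thread; verify the exact keys against the RocksDBOptions of your HugeGraph version before applying them.

```properties
# Hypothetical hugegraph.properties snippet for a disk-footprint-oriented setup.
# Option names and values are assumptions for illustration; verify them against
# your HugeGraph version and benchmark before adopting.

# Smaller memtables shrink the per-column-family write buffers and the files
# pre-allocated around them.
rocksdb.write_buffer_size=16777216
rocksdb.max_write_buffer_number=2

# Smaller SST files and level targets suit graphs that stay small.
rocksdb.target_file_size_base=8388608
rocksdb.max_bytes_for_level_base=67108864
```

Smaller buffers trade write throughput for disk footprint, so measure both before settling on values.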
@dosu How can I reduce the number of levels through configuration?
To reduce the number of RocksDB levels, you can adjust the num_levels option in the RocksDB configuration file, as shown in the sketch below.
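Assuming the option is exposed as rocksdb.num_levels in hugegraph.properties (the exact key should be checked against your version's RocksDBOptions), a minimal sketch would be:

```properties
# Hypothetical snippet: lower the LSM-tree depth from the usual 7 levels.
rocksdb.num_levels=4

# If compression is configured per level, keep the list length consistent with
# num_levels (a colon-separated list is assumed here).
rocksdb.compression_per_level=none:snappy:snappy:snappy
```

The level count is fixed when a database directory is created, so reducing it for existing graphs may require rebuilding or migrating the data; test on a freshly created graph first.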
Due to the lack of activity, this issue has been marked as stale and will be closed after 20 days; any update will remove the stale label.
Problem Type
configs (configuration / documentation related)
Before submit
Environment
Your Question
When I use RocksDB as the data storage for HugeGraph, I observe the following: every time I create a knowledge graph, I dynamically use the graph name as the storage directory for the database. Even when the graph has no vertices or edges, the disk usage is nearly 400MB. My business scenario is similar to a consumer-facing application, so disk usage will become a bottleneck. Could you please advise on how to optimize the RocksDB configuration to reduce the metadata footprint of the graph and save disk space? Thank you. I have set rocksdb.compression_per_level to SNAPPY, but the effect is not significant.
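For reference, that setting would look roughly like this in hugegraph.properties; the colon-separated per-level list and lowercase value names are assumptions about the expected format:

```properties
# Hypothetical snippet: apply SNAPPY compression on every level, since default
# configurations often leave the first level(s) uncompressed.
rocksdb.compression_per_level=snappy:snappy:snappy:snappy:snappy:snappy:snappy
```

Compression only shrinks SST files that actually hold data, so it does little for the fixed per-graph overhead of an empty graph (manifests, option files, logs, and pre-allocated write-ahead logs), which may explain why the change showed little effect.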
Vertex/Edge example
No response
Schema [VertexLabel, EdgeLabel, IndexLabel] (metadata structure)
No response