Location (Korea, USA, China, India, etc.)
Put your location for prompt support.
: Korea
Describe the bug
A clear and concise description of what the bug is
: The KVSSD utilization reported via nvme-cli and the KV API (KDD) is wrong.
When I test with a write-only workload that uses many different key lengths and a random (non-sequential) access pattern, the KVSSD returns unexpected utilization info.
I used the workload mentioned above (around 7.2GB in total) and used LevelDB as the baseline.
I tracked the written data size at the NVMe submission queue -> around 7.2GB
I checked the LevelDB storage size (no compression) -> around 7.2GB
However, whenever I check the utilization of the KVSSD, it reports around 28GB.
To Reproduce
Steps to reproduce the behavior:
Generate a workload with varied key lengths and a random access pattern (see the sketch below)
Replay the workload against both the KVSSD and LevelDB
Compare the LevelDB storage size with the reported KVSSD utilization
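A minimal sketch of such a workload generator, purely illustrative: the key-length range, value sizes, and the 7.2GB target are assumptions for the example, not the exact parameters of the workload used in this report.

```python
import os, random

TARGET_BYTES = int(7.2 * 1024**3)   # assumed total write volume (~7.2GB)

def gen_records():
    """Yield (key, value) pairs with varied key lengths in random order."""
    written = 0
    while written < TARGET_BYTES:
        klen = random.randint(4, 255)    # assumed key-length range
        vlen = random.randint(64, 512)   # assumed value-size range
        key, value = os.urandom(klen), os.urandom(vlen)
        written += klen + vlen
        yield key, value
```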
Expected behavior
A clear and concise description of what you expected to happen.
: The KVSSD should report around 7.2GB.
Screenshots
If applicable, add screenshots to help explain your problem.
System environment (please complete the following information)
Firmware version : ETA51KCA
Number of SSDs : 1
OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 Kernel v4.4.0-141-generic
GCC version [e.g., gcc v5.0.0] : v5.4.0
kvbench version if kvbench runs [e.g., v0.6.0]:
KV API version [e.g., v0.6.0]: v1.1.0
User driver version :
Driver [Kernel or user driver or emulator] : KDD
Workload
number of records or data size: 7.2GB
Workload (insert, mixed workload, etc.) [e.g., sequential or random insert, or 50% Read & 50% write]: random insert
key size :
value size :
operation option if available [e.g., sync or async mode] : async write
Additional context
Add any other context about the problem here.
We tried to verify your issue, but unfortunately we could not reproduce it. We wrote 4.05GB and 20.25GB of data to the KV SSD, and the actual data in the KV SSD was 4.39GB and 20.74GB respectively.
Could you double check your test scenario?
Not sure if you have already identified the issue. Bear in mind that the KVSSD adds internal padding for values up to 1KB: if you write a 100B value, it will take 1KB of space in flash. The difference could be because of that.
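As a rough sanity check of that explanation (a sketch; the 256B average record size is an assumption, only the 1KB allocation unit comes from the comment above):

```python
avg_record = 256     # assumed average key+value size in bytes
alloc_unit = 1024    # internal padding/allocation unit mentioned above
logical_gb = 7.2     # data written by the host

inflation = alloc_unit / avg_record                                       # 4.0x
print(f"expected device utilization ~= {logical_gb * inflation:.1f} GB")  # ~28.8 GB
```

With records averaging a few hundred bytes, 7.2GB of logical writes would land close to the ~28GB the device reports.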