
KVSSD utilization information issue #57

Open
joonhyuk96sj opened this issue Jun 25, 2020 · 2 comments
@joonhyuk96sj

Location (Korea, USA, China, India, etc.)
: Korea

Describe the bug
A clear and concise description of what the bug is
: The KVSSD utilization information retrieved via nvme-cli and the API (KDD) is wrong.
When I test with a workload that has many different key lengths and a random access pattern (low sequentiality, write-only), the KVSSD returns unexpected utilization info.

I used the workload mentioned above (size around 7.2GB), with LevelDB as the baseline.

  • I tracked the written data size at the NVMe submission queue -> around 7.2GB
  • I checked the LevelDB size (no compression) -> around 7.2GB

However, whenever I check the utilization of the KVSSD, it returns around 28GB.

To Reproduce
Steps to reproduce the behavior:

  1. Generate a workload with varied key lengths and random access order
  2. Run the workload against the KVSSD and LevelDB
  3. Check the size of LevelDB storage and KVSSD utilization
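The reproduce steps above can be sketched roughly as follows (a minimal illustration, not the actual benchmark; the key/value size ranges are assumptions, since the issue leaves them blank):

```python
import os
import random

def generate_workload(num_records, min_key=8, max_key=64,
                      min_val=100, max_val=4096, seed=42):
    """Yield (key, value) pairs with varied key lengths in random order.

    Sizes are drawn uniformly from the given (assumed) ranges; payloads
    are random bytes, so there is no sequentiality in keys or values.
    """
    rng = random.Random(seed)
    for _ in range(num_records):
        key = os.urandom(rng.randint(min_key, max_key))
        value = os.urandom(rng.randint(min_val, max_val))
        yield key, value

total = sum(len(k) + len(v) for k, v in generate_workload(1000))
print(f"generated ~{total / 1024:.0f} KiB of key+value data")
```

Each pair would then be inserted into both the KVSSD (async write, per the workload section below) and LevelDB, and the resulting on-device utilization compared against the LevelDB directory size.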

Expected behavior
A clear and concise description of what you expected to happen.
: KVSSD should report around 7.2GB of utilization.

Screenshots
If applicable, add screenshots to help explain your problem.

System environment (please complete the following information)

  • Firmware version : ETA51KCA
  • Number of SSDs : 1
  • OS & Kernel version [e.g., Ubuntu 16.04 Kernel v4.9.5]: Ubuntu 16.04 Kernel v4.4.0-141-generic
  • GCC version [e.g., gcc v5.0.0] : v5.4.0
  • kvbench version if kvbench runs [e.g., v0.6.0]:
  • KV API version [e.g., v0.6.0]: v1.1.0
  • User driver version :
  • Driver [Kernel or user driver or emulator] : KDD

Workload

  • number of records or data size: 7.2GB
  • Workload(insert, mixed workload, etc.) [e.g., sequential or random insert, or 50% Read & 50% write]: random insert
  • key size :
  • value size :
  • operation option if available [e.g., sync or async mode] : async write

Additional context
Add any other context about the problem here.

@hao86yan
Collaborator

hao86yan commented Jul 2, 2020

We verified your issue, but unfortunately we could not reproduce it. We wrote 4.05GB and 20.25GB of data to the KV SSD, and the actual data in the KV SSD was 4.39GB and 20.74GB respectively.
Could you double check your test scenario?

@manojps

manojps commented Sep 1, 2020

Not sure if you have already identified the issue. Bear in mind that the KVSSD adds internal padding for values up to 1KB; that is, if you write a 100B value, it takes 1KB of space in flash. The difference could be because of that.
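If that 1KB minimum allocation is the cause, the inflation is easy to estimate. A rough sketch (the 1KB unit is taken from the comment above; the 256B value size is an illustrative guess, since the original value size was not reported):

```python
import math

PAD_UNIT = 1024  # assumed minimum per-value allocation unit on flash

def padded_size(value_size, pad_unit=PAD_UNIT):
    """Flash bytes consumed by one value when rounded up to pad_unit."""
    return math.ceil(value_size / pad_unit) * pad_unit

# Example: 7.2 GB written as 256 B values
value_size = 256
n_values = 7_200_000_000 // value_size
logical = n_values * value_size
physical = n_values * padded_size(value_size)
print(f"logical {logical/1e9:.1f} GB -> physical {physical/1e9:.1f} GB "
      f"({physical/logical:.1f}x)")
# prints: logical 7.2 GB -> physical 28.8 GB (4.0x)
```

Under this assumption, an average value size of roughly 256B would make 7.2GB of logical data occupy about 28.8GB on flash, which is close to the ~28GB the reporter observed.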
