Slice out of bounds #22
Hi! What is your OS version? Is it 32 or 64 bit?
Windows 10 Pro x64. I tried to insert only 10M values, which should not be an issue on any platform.
I have added a sync after every 100,000 Put() operations, and as soon as the file reached 1 GiB I got the same error. Not to mention that it took forever (very slow Put() performance). So I think the issue might be the storage driver/format that pogreb uses. Even on older 32-bit Windows the limit was 2 GiB, so this is not an OS or Go issue but a problem in the library itself.
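The "sync after every N puts" pattern mentioned above can be sketched as follows. This is a minimal illustration, not pogreb's actual API: the Store type here is a hypothetical stub standing in for a real database handle that exposes Put and Sync.

```go
package main

import "fmt"

// Store is a stub standing in for a key-value store handle:
// Put buffers a record, Sync flushes buffered writes to disk.
type Store struct {
	puts  int
	syncs int
}

func (s *Store) Put(key, value []byte) error { s.puts++; return nil }
func (s *Store) Sync() error                 { s.syncs++; return nil }

// loadWithPeriodicSync inserts n records, calling Sync after every
// `interval` Puts so that unflushed data stays bounded.
func loadWithPeriodicSync(s *Store, n, interval int) error {
	for i := 0; i < n; i++ {
		key := []byte(fmt.Sprintf("key-%d", i))
		if err := s.Put(key, []byte("value")); err != nil {
			return err
		}
		if (i+1)%interval == 0 {
			if err := s.Sync(); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	s := &Store{}
	loadWithPeriodicSync(s, 1_000_000, 100_000)
	fmt.Printf("puts=%d syncs=%d\n", s.puts, s.syncs) // puts=1000000 syncs=10
}
```

The trade-off is the usual one: more frequent syncs bound data loss and buffer growth but cost throughput, since each Sync forces disk I/O.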
@nonpcnpc The "slice bounds out of range" error was fixed in version 0.8.3, thanks for the bug report.
Great, will have a look. Though I would still like to know why it is so incredibly slow to write 10M records. Badger takes 54 seconds, Bolt 4 minutes, and BitCask 3 minutes. Pogreb takes so long that I stopped measuring (and it always crashed).
Looks like the bug was fixed, BUT after 40 minutes and a 4.3 GiB file I stopped the script. This library is simply unusable for me.
@nonpcnpc I managed to reproduce the issue: write performance on Windows is much lower compared to Linux or Mac. I'll need to investigate that in #25.
I tried this with WSL and had to stop it after 10 minutes and a 5 GB file. The write performance is simply not there and the file is humongous. Even if the read performance somehow beats Bolt, it is not worth it. I will stick with Badger for writes and Bolt for reads.
High disk space utilization will be addressed in Pogreb 0.9 #24. |
I wanted to test this DB but I got this error:
Code:
I think the DB needs to do an automatic fsync when the file reaches 1 GB?