Hello. I made an application that uses the filesystem intensively, and I noticed a crash when writing a lot of files into a single directory of an ext4 filesystem. I tried several configurations, including other filesystems (BTRFS and TMPFS), and it seems to crash only on ext4 with 1024-byte blocks (it works fine with 4096-byte blocks, but I actually need the smaller block size for my application).
Here is an application I made to reproduce the bug:
# Create files in #{dbdir}/data/ and store the last entry number in #{dbdir}/last-entry.
dbdir = "db"
Dir.mkdir_p "#{dbdir}/data"

# Number of the last entry.
last_entry_filename = "#{dbdir}/last-entry"

# Where to start? Where to end?
current_entry_number = ((File.read last_entry_filename).to_i rescue 0) + 1
limit = ARGV[0].to_i rescue 1_000_000 # Bug happens around 300k files created.

while current_entry_number <= limit
  filename = "#{dbdir}/data/file-#{current_entry_number}"
  File.write filename, "hello #{current_entry_number}"

  # Write the last entry that has successfully been added.
  File.write last_entry_filename, "#{current_entry_number}"

  STDOUT.write "\rfile #{current_entry_number}/#{limit}".to_slice
  current_entry_number += 1
end
puts
puts "done!"
The bug:
/tmp/ext4 $ burning-some-inodes 5000000
file 308956/5000000Unhandled exception: Error opening file with mode 'w': 'db/data/file-308957': No space left on device (File::Error)
from /home/user/tmp/crystal/src/crystal/system/unix/file.cr:12:7 in '??'
from /home/user/tmp/crystal/src/file.cr:741:12 in 'write'
from /home/user/tmp/crystal/src/string.cr:5667:5 in '__crystal_main'
from /home/user/tmp/crystal/src/crystal/main.cr:129:5 in 'main'
from /home/user/tmp/crystal/src/crystal/system/unix/main.cr:10:3 in 'main'
from /lib/x86_64-linux-gnu/libc.so.6 in '??'
from /lib/x86_64-linux-gnu/libc.so.6 in '__libc_start_main'
from /home/user/burning-some-inodes in '_start'
from ???
I still have plenty of free space and free inodes, and I can still create new files manually from my terminal!
Furthermore, if I move the db directory (created by the code above) elsewhere on the same partition and restart my application, it fails again once the directory reaches the same number of files (308957 in this example).
However, if I change something rather trivial, such as what is printed to the terminal or the filenames, I get a different limit.
I'm on Ubuntu 24.10 with the Crystal version provided by snap. I also tried it with master and got the same bug at the same file number.
In case you want to reproduce the bug without creating a partition on a real disk, here is a makefile that creates a fake ext4 partition from a generated empty file in /tmp and mounts it on /tmp/ext4:
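A minimal sketch of such a makefile (the image path, image size, and mount options here are assumptions; the key detail is the 1024-byte block size passed to mkfs.ext4) could look like this:

# Sketch: build a small ext4 image with 1024-byte blocks and mount it on /tmp/ext4.
# (Image path and size are assumptions; recipe lines must be indented with tabs.)
IMAGE      = /tmp/ext4.img
MOUNTPOINT = /tmp/ext4
SIZE_MB    = 2048

.PHONY: all mount umount clean
all: mount

$(IMAGE):
	dd if=/dev/zero of=$(IMAGE) bs=1M count=$(SIZE_MB)
	mkfs.ext4 -F -b 1024 $(IMAGE)

mount: $(IMAGE)
	mkdir -p $(MOUNTPOINT)
	sudo mount -o loop $(IMAGE) $(MOUNTPOINT)
	sudo chown $(USER) $(MOUNTPOINT)

umount:
	sudo umount $(MOUNTPOINT)

clean: umount
	rm -f $(IMAGE)

Thanks and have a fun time with this one! 😆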
You're getting an errno, so that's a system error. Isn't it reproducible in another language?
I can still write files from my terminal, so this shouldn't be related to the filesystem. I could write the same application in C if you want, but since manual writes still work I don't see the point.
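For reference, a rough C equivalent of the reproducer (an untested sketch that just mirrors the Crystal code above) would be:

/* Untested sketch: C equivalent of the Crystal reproducer above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    long limit = (argc > 1) ? atol(argv[1]) : 1000000;
    long current = 1;
    char path[256];
    FILE *f;

    mkdir("db", 0755);
    mkdir("db/data", 0755);

    /* Resume from db/last-entry if it exists. */
    f = fopen("db/last-entry", "r");
    if (f) {
        if (fscanf(f, "%ld", &current) == 1)
            current += 1;
        fclose(f);
    }

    for (; current <= limit; current++) {
        snprintf(path, sizeof path, "db/data/file-%ld", current);
        f = fopen(path, "w");
        if (!f) {
            perror(path); /* should print ENOSPC if the kernel refuses the create */
            return 1;
        }
        fprintf(f, "hello %ld", current);
        fclose(f);

        /* Record the last entry that was successfully written. */
        f = fopen("db/last-entry", "w");
        if (!f) {
            perror("db/last-entry");
            return 1;
        }
        fprintf(f, "%ld", current);
        fclose(f);

        printf("\rfile %ld/%ld", current, limit);
        fflush(stdout);
    }

    printf("\ndone!\n");
    return 0;
}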
The inode count determines the total number of files the filesystem can hold. Maybe the parameters passed to mkfs.ext4 are limiting the number of files per directory?
I can still write files in the same directory. Moreover, as I said in the bug report, when I change what is printed to the terminal, the number of files I can write to the directory changes, so I suspect the bug is memory-related.
Also, you can see the parameters I actually used to create the partition. From what I understand from Wikipedia (https://en.wikipedia.org/wiki/Ext4), ext4 now allows billions of files in a single directory, so we are far from that limit. And again, even on ext4 I managed to write millions of files in a single directory (with 4 KiB blocks).