No space left on device #1560
"ORA-27061: waiting for async I/Os failed Linux-x86_64 Error: 28: No space left on device Additional information: 4294967295 Additional information: 1048576": Kindly check the disk usage of "/mnt/blobfusetmp". The logs indicate the disk might be running out of space. I see you have kept 20 seconds as the disk timeout and ~30GB of disk space. If your application (RMAN in your case) generates more data than this limit in the given time frame, the disk may simply fill up.
Hello @vibhansa-msft ,
The 30GB space and 20-second timeout are something you have configured in the .yaml file. If you have 600+ GB of disk space available, you can increase the limit from 30GB to maybe 100GB and also reduce the timeout from 20 to 0 or 2 seconds. The timeout is useful only when your application reads the same file again and again; if a process is going to read a file only once, keeping the timeout at 0 saves disk usage. Also, Blobfuse deletes a file from the local cache only when all open handles for that file are closed. If your application does not close its handles, the file will remain in the cache until you unmount, and in such cases as well you will observe the disk getting full. If you suspect this, you can enforce a hard limit so that file open calls start to fail once the disk reaches the configured capacity.
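A minimal sketch of the suggested `file_cache` changes. The option names follow the blobfuse2 sample config (`./setup/baseConfig.yaml`); the values are illustrative assumptions based on the advice above, not recommendations from the maintainers, so verify them against your installed version:

```yaml
# Hypothetical file_cache tuning based on the advice above.
file_cache:
  path: /mnt/blobfusetmp
  timeout-sec: 0        # evict cached files as soon as all handles close
  max-size-mb: 102400   # ~100GB, assuming 600+ GB of local disk is free
  hard-limit: true      # fail new opens instead of overfilling the disk
  allow-non-empty-temp: true
  cleanup-on-start: true
```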
Hello @vibhansa-msft ,
How big is the backup you are trying to take?
For some reason the debug log file is not getting generated |
If you have syslog filters installed, the logs will be in the '/var/log/blobfuse2.log' file; otherwise, by default they go to '/var/log/messages'. If you are using AKS, the logs might be directed to the pod directory created on the node.
Hello @vibhansa-msft , Regards
This is a syslog file and has many logs other than blobfuse's. The last few blobfuse logs I can see in here are just about failing to mount due to an invalid path.
Hello @vibhansa-msft , Got some more information around this one:
Attached the latest logs.
If you are dealing with a file as large as 800GB, then file-cache is not advised. Kindly migrate to the block-cache model and then try your workflow again.
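A block-cache configuration along these lines could replace the `file_cache` section. The option names are taken from the blobfuse2 sample config; the values here are illustrative assumptions, not tuning advice from the thread:

```yaml
# Hypothetical block_cache section replacing file_cache.
components:
  - libfuse
  - block_cache
  - attr_cache
  - azstorage

block_cache:
  block-size-mb: 8       # must match the block size of blobs already on the backend
  mem-size-mb: 4096      # RAM budget for cached blocks
  path: /mnt/blobfusetmp # optional on-disk overflow for blocks
  disk-size-mb: 102400
  prefetch: 12
  parallelism: 64
```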
Hello @vibhansa-msft , Here is my config file (values elided):

```yaml
# Refer ./setup/baseConfig.yaml for full set of config parameters
#allow-other: false
logging:
components:
libfuse:
block_cache:
attr_cache:
azstorage:
```
How did you upload the files to your storage account?
I see block-cache is not able to open this file because the block-size in your config file is set to 32MB and this particular file has a smaller block size. As of now, block-cache only works for files that have exactly the same block size on the backend. If the objective of your workflow is just to read the file, then mount blobfuse in read-only mode and it will stop making this strict check. If you wish to overwrite the file, this might not work with block-cache for now unless you create the file with block-cache initially.
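One way to get the read-only behaviour described above is the top-level `read-only` option; this snippet is an assumption based on the blobfuse2 sample config (the same effect should also be achievable via a mount flag), so confirm it against your blobfuse2 version:

```yaml
# Hypothetical: mount read-only so the strict block-size check is relaxed.
read-only: true
```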
Hello @vibhansa-msft , STEPSYSBLOB 8192 And CPU is
So with this, what should my config look like?
As per the below log, there is a block in your file which is 512 bytes in size. If this were the last block, Blobfuse2 would have allowed it and the file open would have succeeded. But either it is an in-between block, or all blocks in the file following it are of smaller size, hence the open fails. You need to validate how this file was created in the first place.
@vibhansa-msft , what if different files are using different block sizes?
We (I am working with @sandip094) have been using file-cache for quite some time now, but I am wondering when you would suggest using the streaming block-cache mode?
The block size creating trouble here is not dependent on the block size that RMAN is using; rather, it is the block size that Blobfuse2 is using. When you use file-cache, the block size is determined dynamically based on the file size, while in the case of block-cache the block size is fixed (default 8MB) and can be configured by the user.
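The fixed, user-configurable block size mentioned above lives under the `block_cache` section; a one-line sketch (the 8MB value is the stated default, not something the thread prescribes for this workload):

```yaml
# Hypothetical: pin the Blobfuse2 block size used by block-cache.
block_cache:
  block-size-mb: 8   # fixed per mount; must match blocks already written to the backend
```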
Which version of blobfuse was used?
Which OS distribution and version are you using?
What was the issue encountered?
Getting the below error after the RMAN backup has been running for a few minutes:
```
released channel: C1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 11/07/2024 13:09:32
ORA-19502: write error on file "/rman-backup/step/1/2024-10-20_0115/STEP_1736_1_m839hd9q_20241107.incr1c", block number 91441152 (block size=8192)
ORA-27061: waiting for async I/Os failed
Linux-x86_64 Error: 28: No space left on device
Additional information: 4294967295
Additional information: 1048576
```
Configuration file is as below - /etc/blobfuse/blobfuseconfig.yaml
```yaml
logging:
  type: base
  level: log_info
  max-file-size-mb: 32
  file-count: 10

track-time: true
max-concurrency: 40

components:
libfuse:
  default-permission: 0644
  attribute-expiration-sec: 120
  entry-expiration-sec: 120
  negative-entry-expiration-sec: 240
  ignore-open-flags: true

file_cache:
  path: /mnt/blobfusetmp
  timeout-sec: 20
  max-size-mb: 30720
  allow-non-empty-temp: true
  cleanup-on-start: true

azstorage:
  type: block
  account-name: xxxxx
  account-key: xxxxx
  mode: key
  container: xxxxx
```
Service file content - /etc/systemd/system/blobfuse2.service
```ini
[Unit]
Description=A virtual file system adapter for Azure Blob storage.
After=network-online.target
Requires=network-online.target

[Service]
User=oracle
Group=dba
Environment=BlobMountingPoint=/rman-backup
Environment=BlobConfigFile=/etc/blobfuse/blobfuseconfig.yaml
Environment=BlobCacheTmpPath=/mnt/blobfusetmp
Environment=BlobLogPath=/var/log/blobfuse
Type=forking
ExecStart=/usr/bin/blobfuse2 mount ${BlobMountingPoint} --config-file=${BlobConfigFile}
ExecStop=/usr/bin/blobfuse2 unmount ${BlobMountingPoint}
ExecStartPre=+/usr/bin/install -d -o oracle -g dba ${BlobCacheTmpPath}
ExecStartPre=+/usr/bin/install -d -o oracle -g dba ${BlobLogPath}
ExecStartPre=+/usr/bin/install -d -o oracle -g dba ${BlobMountingPoint}

[Install]
WantedBy=multi-user.target
```
Backup file sizes are as follows:

```
 28M  control01.ctl
8.1G  stepsysblob_step_1.dbf
743G  stepsysdata_step_1.dbf
4.6G  sysaux_step_1.dbf
801M  system_step_1.dbf
 20G  temp_step_1.dbf
 80G  undo_t1_step_1.dbf
101M  users_step_1.dbf
```