
Issue with auto-mounting on boot #207

Open
ramyala opened this issue Jul 25, 2017 · 39 comments

@ramyala

ramyala commented Jul 25, 2017

I'm using goofys to mount (on-demand) S3 buckets on Ubuntu 16.04 images (on AWS). I'm seeing the following issue at boot, where the filesystem fails to mount.

Jul 24 18:31:47 ip-10-0-21-129 systemd[1]: Started LXD - container startup/shutdown.
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1337]: main.ERROR Unable to access 'lb.test': %!v(PANIC=runtime error: invalid memory address or nil pointer dereference)
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: s3.ERROR code=incorrect region, the bucket is not in 'us-east-1' region msg=301 request=
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: s3.ERROR code=incorrect region, the bucket is not in 'us-east-1' region msg=301 request=
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: main.ERROR Unable to access 'lb.test': BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region#012#011status code: 301, request id: , host id:
Jul 24 18:31:48 ip-10-0-21-129 mount[1282]: 2017/07/24 18:31:48.089395 main.FATAL Unable to mount file system, see syslog for details
Jul 24 18:31:48 ip-10-0-21-129 /opt/goofys/bin/goofys[1338]: main.FATAL Mounting file system: Mount: initialization failed
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: mnt-s3-lb.test.mount: Mount process exited, code=exited status=1
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: Failed to mount /mnt/s3/lb.test.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: Dependency failed for Remote File Systems.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: remote-fs.target: Job remote-fs.target/start failed with result 'dependency'.
Jul 24 18:31:48 ip-10-0-21-129 systemd[1]: mnt-s3-lb.test.mount: Unit entered failed state.

I have the right IAM permissions and the following /etc/fstab entry:

goofys#lb.test /mnt/s3/lb.test fuse    ro,_netdev,allow_other,--file-mode=0666    0       0

I can confirm goofys works if I mount manually after boot with:

sudo mount /mnt/s3/lb.test

Is there a way to robustly get goofys to mount files on startup?

@kahing
Owner

kahing commented Jul 26, 2017

This is strange; can you include the output of --debug_s3?

@ramyala
Author

ramyala commented Jul 26, 2017

Is there a way to pass debug_s3 as a flag via fstab? I won't be able to get you the debug log otherwise, because this happens during boot.

@kahing
Owner

kahing commented Jul 26, 2017

Yup, just add --debug_s3 like you would --file-mode.

@chasebolt

chasebolt commented Jul 28, 2017

I am running into the same issue when using Chef. I can manually run mount /data with no issue.

goofys#mybucket /data fuse _netdev,allow_other,--dir-mode=0777,--file-mode=0666,--debug_s3 0 2

https://gist.github.com/chasebolt/a28bac3785d2df8d1685d60cf8f19421

@chasebolt

Using the /root/.aws/credentials file works fine; it's failing when using IAM roles. I temporarily gave the IAM role full access and it still failed.

@ramyala
Author

ramyala commented Jul 29, 2017

I haven't had the chance to spin up a cluster to try and repro this issue. I'll try and get to it next week.

@kahing
Owner

kahing commented Aug 1, 2017

It seems like retrieving the IAM role is erroring:

Jul 28 19:50:13 i-0eef1f0ad642878c5 /usr/bin/goofys[12999]: s3.DEBUG DEBUG: Validate Response ec2metadata/GetMetadata failed, not retrying, error EC2MetadataError: failed to make EC2Metadata request
                                                                      caused by: <?xml version="1.0" encoding="iso-8859-1"?>
                                                                      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
                                                                               "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
                                                                      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
                                                                       <head>
                                                                        <title>404 - Not Found</title>
                                                                       </head>
                                                                       <body>
                                                                        <h1>404 - Not Found</h1>
                                                                       </body>
</html>

@chasebolt are you sure your IAM is set up correctly? What if you write a wrapper script for goofys and sleep a bit first?
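A minimal sketch of such a wrapper (an assumption, not something confirmed in this thread): point fstab at the script instead of the binary, and wait for the EC2 instance metadata service before handing off, on the theory that IAM role credentials aren't resolvable yet that early in boot. The path /opt/goofys/bin/goofys is taken from the log lines above.

```
#!/bin/sh
# Hypothetical wrapper script: reference this from fstab in place of goofys.
# Wait up to ~60s for the EC2 instance metadata endpoint to respond, so
# IAM role credentials can be retrieved, then exec the real binary.
for i in $(seq 1 30); do
    if curl -s --max-time 2 http://169.254.169.254/latest/meta-data/ >/dev/null; then
        break
    fi
    sleep 1
done
exec /opt/goofys/bin/goofys "$@"
```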

@jeff-kilbride

jeff-kilbride commented Sep 10, 2017

I am having the same issue on an AWS Ubuntu 14.04 server. I have been running goofys since 0.0.9 and it has been great. I recently upgraded to 0.0.17-3d40e98 and decided to finally add an entry to my fstab file. When I reboot, my bucket is not mounted. However, I can successfully mount the bucket from the command line. Here is my fstab entry:

goofys#my-staging   /mnt/staging    fuse    --uid=106,--gid=111,_netdev,allow_other,--file-mode=0644,--debug_s3    0   0

I have this bucket setup for use with vsftpd, and the uid/gid correspond to the ftp user. This is the successful command line I am using:

goofys --uid 106 --gid 111 -o allow_other my-staging /mnt/staging

I added the --debug_s3 option to my last reboot attempt, but there is no output in any of the system logs that I can find. (grep goofys *.log and manually looking through them...) When I successfully mount using the command line, I get the following in /var/log/syslog:

Sep 10 04:38:47 ip-xxx-xxx-xxx-xxx /usr/local/bin/goofys[1450]: main.INFO File system has been successfully mounted.

After upgrading to 0.0.17, I also noticed a zombie process which I had never seen before on this server. Here is the output of ps and pstree immediately after rebooting:

# ps aux | grep Z
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       361  0.0  0.0      0     0 ?        Zs   04:35   0:00 [goofys] <defunct>

# pstree -p -s 361
init(1)───mountall(250)───mount(322)───sh(324)───goofys(325)───goofys(361)

# pstree
init─┬─acpid
     ├─atd
     ├─cron
     ├─dbus-daemon
     ├─dhclient
     ├─7*[getty]
     ├─master─┬─pickup
     │        └─qmgr
     ├─mountall───mount───sh───goofys─┬─goofys
     │                                └─4*[{goofys}]
     ├─rsyslogd───3*[{rsyslogd}]
     ├─sshd───sshd───sshd───bash───sudo───su───bash───pstree
     ├─systemd-logind
     ├─systemd-udevd
     ├─upstart-file-br
     ├─upstart-socket-
     ├─upstart-udev-br
     └─vsftpd

Here is the output of pstree after successfully mounting from the command line:

# pstree -p -s 361
init(1)───mount(322)───sh(324)───goofys(325)───goofys(361)

# pstree
init─┬─acpid
     ├─atd
     ├─cron
     ├─dbus-daemon
     ├─dhclient
     ├─7*[getty]
     ├─goofys───6*[{goofys}]
     ├─master─┬─pickup
     │        └─qmgr
     ├─mount───sh───goofys─┬─goofys
     │                     └─4*[{goofys}]
     ├─rsyslogd───3*[{rsyslogd}]
     ├─sshd───sshd───sshd───bash───sudo───su───bash───pstree
     ├─systemd-logind
     ├─systemd-udevd
     ├─upstart-file-br
     ├─upstart-socket-
     ├─upstart-udev-br
     └─vsftpd

The output of ps aux is the same. It definitely looks like it's hanging on reboot when called by mount, for whatever reason. Not sure what else I can provide, since there is no debug output in syslog. I've copied the output of the entire last reboot, if you'd like to see it. Otherwise, if there is anything else I can provide, let me know.

Besides not mounting on boot, it works perfectly and I've been very happy with the performance uploading files through vsftpd!

@haraa

haraa commented Sep 13, 2017

I have the same issue. The version is v0.0.9, with CentOS 6.9 x64, golang 1.7.6, and fuse 2.8.3.
My fstab entry is below.

/usr/local/bin/goofys#hoge-contents /mnt/s3src fuse _netdev,allow_other,--uid=501,--gid=501 0 0

After rebooting the OS, the fstab entry does not work; the S3 bucket is not mounted.
I get the following error logs:

Sep13 11:17:13 user /usr/local/bin/goofys[1087]: main.ERROR Unable to access 'hoge-contents': BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region

Sep 13 11:17:13 user /usr/local/bin/goofys[1087]: main.FATAL Mounting file system: Mount: initialization failed

But if I manually run mount -a, it succeeds with the logs below.

Sep 13 11:19:29 user /usr/local/bin/goofys[1499]: s3.INFO Switching from region 'us-east-1' to 'ap-northeast-1'

Sep 13 11:19:29 user kernel: fuse init (API version 7.14)

Sep 13 11:19:29 user /usr/local/bin/goofys[1499]: main.INFO File system has been successfully mounted.

Also, the /root/.aws/credentials file contains my key info. The /root/.aws/config file is as below.

[default]
output =
region = ap-northeast-1

So, what is wrong with my settings?
If you have a solution, please let me know. Thank you.

@jeff-kilbride

@haraa You can add the region option to fstab. See #211 for an example.
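For example (a sketch only, reusing the entry above; the region value is taken from the "Switching from region 'us-east-1' to 'ap-northeast-1'" log line, and the --region flag appears in other fstab examples later in this thread):

```
/usr/local/bin/goofys#hoge-contents /mnt/s3src fuse _netdev,allow_other,--uid=501,--gid=501,--region=ap-northeast-1 0 0
```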

@haraa

haraa commented Sep 13, 2017

@jeff-kilbride
Thank you for replying to my question!
I am going to try it.

@cblackuk

I have the same issue on Ubuntu 14: mount -a works after boot, but on reboot itself the bucket is not mounted.

@kahing
Owner

kahing commented Sep 15, 2017

For the people who have reported this: it's not clear to me that these are all the same problem, so could you please attach your syslog with --debug_s3?

@jeff-kilbride

Here's a gist with the full syslog output of my last reboot:

https://gist.github.com/jeff-kilbride/984c72e702988172be24a3d36e4e585a

I couldn't find anything in there related to goofys, even with the --debug_s3 option.

@cblackuk

Literally the same for me: I cannot see anything in the syslog when it fails to mount. It just doesn't mount, and nothing gets logged despite the debug options being set.

@jeff-kilbride

@cblackuk Do you also have a zombie goofys process after reboot?

@cblackuk

@jeff-kilbride I cannot see any zombie processes, no. Also, about 1 out of 10 reboots it will actually mount; the other 9 times it will not, and I am making no changes whatsoever. It is just magically working or magically not working. When it does work, it spams syslog with all the debug logs.

@cblackuk

@jeff-kilbride Actually... you are correct! When it does not mount I can see:
ps axo stat,ppid,pid,comm | grep -w defunct
Zs 980 1020 goofys

@kahing
Owner

kahing commented Sep 19, 2017

Do all of you use IAM, or credentials from ~/.aws/credentials?

@jeff-kilbride

I use ~/.aws/credentials -- as root.

@cblackuk

Same

@kahing
Owner

kahing commented Oct 12, 2017

I still have no clue about this. For people who use .aws/credentials, could you try adding --profile default?

@jeff-kilbride

I tried adding --profile=default to my fstab entry:

goofys#my-staging   /mnt/staging    fuse    --uid=106,--gid=111,_netdev,allow_other,--file-mode=0644,--profile=default,--debug_s3    0   0

I'm still getting a zombie process when I reboot and my mount point is not there:

$ pstree
init─┬─acpid
     ├─atd
     ├─cron
     ├─dbus-daemon
     ├─dhclient
     ├─7*[getty]
     ├─master─┬─pickup
     │        └─qmgr
     ├─mountall───mount───sh───goofys─┬─goofys
     │                                └─4*[{goofys}]
     ├─ondemand───sleep
     ├─rsyslogd───3*[{rsyslogd}]
     ├─sshd───sshd───sshd───bash───pstree
     ├─systemd-logind
     ├─systemd-udevd
     ├─upstart-file-br
     ├─upstart-socket-
     ├─upstart-udev-br
     └─vsftpd

$ top
top - 04:13:13 up 1 min,  1 user,  load average: 0.33, 0.14, 0.05
Tasks: 109 total,   1 running, 107 sleeping,   0 stopped,   1 zombie

@masterchiefaragorn

Hi @kahing,

I've read this thread and others, and have not been able to get fstab to mount my S3 drive on boot either. If I boot my machine and then type (as root):

mount /root/s3

It works fine. My fstab is exactly what's in the README.md:

goofys#bucket /root/s3 fuse _netdev,allow_other,--file-mode=0666 0 0

But nothing shows in /var/log/kern.log. It looks like it doesn't even try. I've added --debug_s3 to my fstab:

goofys#bucket /root/s3 fuse _netdev,allow_other,--file-mode=0666,--debug_s3 0 0

...and rebooted, but still nothing shows. Of course, I do have the /root/.aws/credentials file set up correctly, which is why "mount /root/s3" works. Any breakthrough on this point?

@cr-solutions

cr-solutions commented Jun 7, 2018

Same for me: fstab and autofs are not working.
Any solution?

debug shows:
mounted indirect on /mnt/goofys with timeout 300, freq 75 seconds
ghosting enabled
attempting to mount entry /mnt/goofys/panthermedia-test

2018/06/07 12:47:03.534282 s3.INFO Switching from region 'us-east-1' to 'eu-west-1'
2018/06/07 12:47:03.572287 main.INFO File system has been successfully mounted.

but ls /mnt/goofys/panthermedia-test/ does not return.

@mshakhmaykin

mshakhmaykin commented Aug 17, 2018

Not working for me either.
Ubuntu 12.04, latest goofys.

The fstab line is:
/usr/local/sbin/goofys-latest#<bucket-name> /home/s3user/files fuse _netdev,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022 0 0

The file /root/.aws/credentials has valid creds and I even tried various file and dir permissions on it.
I can manually do "mount -a" as root and get my bucket mounted.

But on reboot it's not working. I see the hung processes in the pstree output:

mountall,450 --daemon
-mount,486 -n -t fuse -o _netdev,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022 /usr/local/sbin/goofys-latest# /home/s3user/files
-sh,487 -c '/usr/local/sbin/goofys-latest' '' '/home/s3user/files' '-o' 'rw,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022,dev,suid'
-goofys-latest,490 /home/s3user/files -o rw,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022,dev,suid

And the syslog has this:
Aug 17 20:34:18 HOSTNAME /usr/local/sbin/goofys-latest[1974]: s3.ERROR code=NoCredentialProviders msg=no valid providers in chain. Deprecated.#012#011For verbose messaging see aws.Config.CredentialsChainVerboseErrors, err=<nil>#012

@kahing
Owner

kahing commented Aug 17, 2018

If you look into the environment, is $HOME set correctly?

@mshakhmaykin

mshakhmaykin commented Aug 17, 2018 via email

@jeff-kilbride

Just an update...

I recently moved my goofys setup from an Ubuntu 14.04 server to one running Amazon Linux 2. Now, auto-mounting on boot works. So, at least in my experience, it seems to be something weird with Ubuntu flavors.

@kahing
Owner

kahing commented Aug 17, 2018

If you see a stuck process after boot, you can look into /proc/<pid>/environ to see its environment variables.
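For example, a quick way to dump those variables ("self" here is a placeholder; substitute the stuck goofys PID from pstree, e.g. /proc/361/environ):

```shell
# /proc/<pid>/environ holds the process environment as NUL-separated
# KEY=value pairs; translate the NULs to newlines to make it readable.
tr '\0' '\n' < /proc/self/environ
```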

@mshakhmaykin

mshakhmaykin commented Aug 20, 2018

Ah, right, that worked.
So, as far as I can see, the HOME variable is set right:

root@:~# ps -ef|grep [6]40
root 640 635 0 15:28 ? 00:00:00 /bin/sh -c '/usr/local/sbin/goofys-latest' '' '/home/s3user/files' '-o' 'rw,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022,dev,suid'
root 642 640 0 15:28 ? 00:00:00 /usr/local/sbin/goofys-latest /home/s3user/files -o rw,allow_other,--debug_s3,--debug_fuse,--uid=6022,--gid=6022,dev,suid

root@:~# strings /proc/640/environ
UPSTART_INSTANCE=
UPSTART_JOB=mountall
TERM=linux
UPSTART_EVENTS=startup
PWD=/
HOME=/root

root@:~# strings /proc/642/environ
UPSTART_INSTANCE=
HOME=/root
UPSTART_JOB=mountall
TERM=linux
UPSTART_EVENTS=startup
PWD=/

root@:~# strace -p 642
Process 642 attached - interrupt to quit
futex(0xcc0e90, FUTEX_WAIT, 0, NULL

root@:~# gdb -p 642
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
http://bugs.launchpad.net/gdb-linaro/.
Attaching to process 642
Reading symbols from /usr/local/sbin/goofys-latest...done.
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fb472979700 (LWP 649)]
[New Thread 0x7fb47317a700 (LWP 648)]
[New Thread 0x7fb47397b700 (LWP 647)]
[New Thread 0x7fb47417c700 (LWP 645)]
Loaded symbols for /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib/x86_64-linux-gnu/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
runtime.futex () at /usr/local/go/src/runtime/sys_linux_amd64.s:439
439 /usr/local/go/src/runtime/sys_linux_amd64.s: No such file or directory.
warning: Missing auto-load scripts referenced in section .debug_gdb_scripts
of file /usr/local/sbin/goofys-latest
Use `info auto-load-scripts [REGEXP]' to list them.
(gdb)
(gdb)
(gdb)
(gdb) info auto-load-scripts
Loaded Script
Missing /usr/local/go/src/runtime/runtime-gdb.py
(gdb)

@mshakhmaykin

@kahing: does this ring a bell? Any other debugging I could try?

@mshakhmaykin

The issue is gone after switching to Ubuntu 16.04

@ryanotella

Working fine on Ubuntu 18.04 too.

@builtbylane

Confirmed as well. The issue is gone after upgrading from Ubuntu 14.04 to 16.04.

@cr-solutions

AutoFS is still not working for us on Ubuntu 16 and 18.

If we call mount -t fuse.goofys -o rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default cookbutler-data /mnt/goofys/cookbutler-data
manually (the same command autofs calls), the mount works.
If autofs calls it, AutoFS hangs.

ll /mnt/goofys/cookbutler-data/ does not return until the automount process is killed.

Debug of AutoFS:

automount -vdf
Starting automounter version 5.1.2, master map /etc/auto.master
using kernel protocol version 5.02
lookup_nss_read_master: reading master file /etc/auto.master
do_init: parse(sun): init gathered global options: (null)
lookup_read_master: lookup(file): read entry +dir:/etc/auto.master.d
lookup_nss_read_master: reading master dir /etc/auto.master.d
lookup(dir): dir map /etc/auto.master.d missing or not readable
lookup(file): failed to read included master map dir:/etc/auto.master.d
lookup_read_master: lookup(file): read entry +auto.master
lookup_nss_read_master: reading master files auto.master
do_init: parse(sun): init gathered global options: (null)
lookup(file): failed to read included master map auto.master
lookup_read_master: lookup(file): read entry /mnt/efs
lookup_read_master: lookup(file): read entry /mnt/goofys
master_do_mount: mounting /mnt/efs
automount_path_to_fifo: fifo name /var/run/autofs.fifo-mnt-efs
lookup_nss_read_map: reading map file /etc/auto.efs
do_init: parse(sun): init gathered global options: (null)
mounted indirect on /mnt/efs with timeout 300, freq 75 seconds
st_ready: st_ready(): state = 0 path /mnt/efs
ghosting enabled
master_do_mount: mounting /mnt/goofys
automount_path_to_fifo: fifo name /var/run/autofs.fifo-mnt-goofys
lookup_nss_read_map: reading map file /etc/auto.goofys
do_init: parse(sun): init gathered global options: (null)
mounted indirect on /mnt/goofys with timeout 300, freq 75 seconds
st_ready: st_ready(): state = 0 path /mnt/goofys
ghosting enabled
handle_packet: type = 3
handle_packet_missing_indirect: token 55, name cookbutler-data, request pid 27275
attempting to mount entry /mnt/goofys/cookbutler-data
lookup_mount: lookup(file): looking up cookbutler-data
lookup_mount: lookup(file): cookbutler-data -> -fstype=fuse.goofys,rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default cookbutler-data
parse_mount: parse(sun): expanded entry: -fstype=fuse.goofys,rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default cookbutler-data
parse_mount: parse(sun): gathered options: fstype=fuse.goofys,rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default
parse_mount: parse(sun): dequote("cookbutler-data") -> cookbutler-data
parse_mount: parse(sun): core of entry: options=fstype=fuse.goofys,rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default, loc=cookbutler-data
sun_mount: parse(sun): mounting root /mnt/goofys, mountpoint cookbutler-data, what cookbutler-data, fstype fuse.goofys, options rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default
do_mount: cookbutler-data /mnt/goofys/cookbutler-data type fuse.goofys options rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default using module generic
mount_mount: mount(generic): calling mkdir_path /mnt/goofys/cookbutler-data
mount_mount: mount(generic): calling mount -t fuse.goofys -o rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default cookbutler-data /mnt/goofys/cookbutler-data
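For reference, autofs map entries matching this debug output would look roughly like the sketch below. This is reconstructed from the log lines (map file /etc/auto.goofys, timeout 300, the expanded entry), not the actual files; depending on your autofs version the map location may need a leading colon (:cookbutler-data).

```
# /etc/auto.master  (sketch reconstructed from the automount -vdf output)
/mnt/goofys /etc/auto.goofys --timeout=300

# /etc/auto.goofys
cookbutler-data -fstype=fuse.goofys,rw,nosuid,nodev,allow_other,--uid=1000,--gid=1000,--dir-mode=0775,--file-mode=0775,--profile=default cookbutler-data
```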

@darkdragon-001

@cr-solutions The fuse issue seems to be tracked in #230.

This issue contains a lot of valuable debugging approaches, though!

@aarcro

aarcro commented May 12, 2021

Not sure if this is related or helpful, but for me, this works from the CLI (on Ubuntu 20.04):

goofys -o allow_other --endpoint https://s3.wasabisys.com --region us-east-1 --uid 1000 --gid 1000 bucket /mnt/bucket

But this does not in the fstab:

goofys#bucket /mnt/bucket fuse _netdev,allow_other,uid=1000,gid=1000,region=us-east-1,endpoint=https://s3.wasabisys.com 0 0
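One possible difference from the working examples earlier in this thread (a guess, not verified): there, goofys-specific flags in fstab keep their -- prefix (e.g. --file-mode=0666), whereas this entry uses bare uid=/gid=/region=/endpoint= options. A sketch of the same entry in the prefixed style would be:

```
goofys#bucket /mnt/bucket fuse _netdev,allow_other,--uid=1000,--gid=1000,--region=us-east-1,--endpoint=https://s3.wasabisys.com 0 0
```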

@bayukp

bayukp commented Feb 7, 2023

For anyone who still has this problem: I followed the tutorial on this site https://creodias.eu/-/a-9-18 and it works using fstab.

More or less what I implemented:

/bin/goofys#s3fs            /mnt/wasabi     fuse       _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--region=ap-southeast-1,--endpoint=https://s3.ap-southeast-1.wasabisys.com/ 0 0
