Actual behavior
See the output below, especially the marked lines: the directory permissions on s3user-1-dir are 770, and the bucket associated with it, bucket-new-1, was chmod'ed to 000, i.e. no permissions at all. Why, then, is there no invalid bucket [ACCESS_DENIED] in the output of the health command /usr/local/noobaa-core/bin/node /usr/local/noobaa-core/src/cmd/health --all_bucket_details | jq | less?
If I do the same to s3user-1-dir itself, I get ACCESS_DENIED, but for the bucket there is no error in the same scenario.
[root@cluster4-41 ~]# ll /mnt/cesSharedRoot/
...
drwxrwx---. 3 10 wheel 4096 Jul 26 19:03 s3user-10-dir
drwxrwx---. 3 bin bin 4096 Jul 26 19:02 s3user-1-dir
drwxrwx---. 3 daemon daemon 4096 Jul 26 19:03 s3user-2-dir
drwxrwx---. 3 adm sys 4096 Jul 26 19:03 s3user-3-dir
...
[root@cluster4-41 ~]# ll /mnt/cesSharedRoot/s3user-1-dir
total 1
d---------. 2 bin bin 4096 Jul 26 19:02 bucket-new-1
[root@cluster4-41 ~]# /usr/local/noobaa-core/bin/node /usr/local/noobaa-core/src/cmd/health --all_bucket_details | jq | less
{
  "service_name": "noobaa",
  "status": "OK",
  "memory": "267.6M",
  "checks": {
    "services": [
      {
        "name": "noobaa",
        "service_status": "active",
        "pid": "71215",
        "error_type": "PERSISTENT"
      }
    ],
    "endpoint": {
      "endpoint_state": {
        "response": {
          "response_code": "RUNNING",
          "response_message": "Endpoint running successfuly."
        },
        "total_fork_count": 2,
        "running_workers": [
          2,
          1
        ]
      },
      "error_type": "TEMPORARY"
    },
    "buckets_status": {
      "invalid_buckets": [],    <-- no invalid bucket
      "valid_buckets": [
        {
          "name": "bucket-1",
          "storage_path": "/mnt/cesSharedRoot/s3user-1-dir/bucket-new-1"
        },
        {
          "name": "bucket-6",
          "storage_path": "/mnt/cesSharedRoot/s3user-6-dir/bucket-new-6"
        },
        {
          "name": "bucket-9",
          "storage_path": "/mnt/cesSharedRoot/s3user-9-dir/bucket-new-9"
        },
...
Expected behavior
A bucket whose storage path is not accessible should be reported under invalid_buckets in the health CLI output.
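For illustration, the buckets_status for this scenario could then look something like the snippet below. The exact shape of an invalid_buckets entry is an assumption here; ACCESS_DENIED is the code the health command already reports for the equivalent account-level failure:

"buckets_status": {
  "invalid_buckets": [
    {
      "name": "bucket-1",
      "storage_path": "/mnt/cesSharedRoot/s3user-1-dir/bucket-new-1",
      "code": "ACCESS_DENIED"
    }
  ],
...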
Steps to reproduce
Manually change the permissions of the bucket folder to 000, then run the health command, as shown below.
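Concretely, with the directories shown above:

[root@cluster4-41 ~]# chmod 000 /mnt/cesSharedRoot/s3user-1-dir/bucket-new-1
[root@cluster4-41 ~]# /usr/local/noobaa-core/bin/node /usr/local/noobaa-core/src/cmd/health --all_bucket_details | jq | less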
More information - Screenshots / Logs / Other output
Thank you @Roushan45.
We can definitely add a check for that. Initially we did only an existence check; I later added a permissions check for the account's new_buckets_path as well, so we now need to add a permissions check for the bucket's storage path too.
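A minimal sketch of what such a check could look like, using plain Node.js fs APIs rather than NooBaa's actual internals; the function name, return shape, and error codes below are illustrative. The sketch inspects stat() mode bits against the bucket owner's uid/gid instead of calling fs.access(), since the health CLI typically runs as root and root would pass an access() check even on a 000 directory:

'use strict';
const fs = require('fs');

// Sketch only: decide whether a bucket's storage path is accessible to its
// owner. Checks stat() permission bits rather than fs.access() because the
// health process usually runs as root, which bypasses permission checks.
// Simplified: ignores supplementary groups and ACLs.
async function check_bucket_storage_path(storage_path, owner_uid, owner_gid) {
    try {
        const stat = await fs.promises.stat(storage_path);
        let rw;
        if (stat.uid === owner_uid) {
            rw = (stat.mode & 0o600) === 0o600;   // owner read+write bits
        } else if (stat.gid === owner_gid) {
            rw = (stat.mode & 0o060) === 0o060;   // group read+write bits
        } else {
            rw = (stat.mode & 0o006) === 0o006;   // others read+write bits
        }
        return rw ? { valid: true } : { valid: false, code: 'ACCESS_DENIED' };
    } catch (err) {
        // ENOENT covers the existing existence check; other errors are surfaced as-is
        return { valid: false, code: err.code === 'ENOENT' ? 'STORAGE_NOT_EXIST' : err.code };
    }
}

With bucket-new-1 at mode 000 and owned by bin:bin, this returns { valid: false, code: 'ACCESS_DENIED' } for the bucket owner, which the health command could then surface under invalid_buckets.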