This repository has been archived by the owner on May 3, 2024. It is now read-only.

list unit benchmarks failed #1578

Closed
kimika88 opened this issue Mar 30, 2022 — with Board Genius Sync · 6 comments
Labels
Status: L1 Triage (Initial triage), Triage: DevAd (Triage owned by DevAd)

Comments

kimika88 (Contributor) commented:

[root@ssc-vm-g4-rhev4-1291 cortx-motr]# scripts/m0 run-ub -l
----- run_ub -l -----
motr[118607]:  e300  FATAL  [lib/assert.c:50:m0_panic]  panic: fatal signal delivered at unknown() (unknown:0)  [git: 2.0.0-585-79-g06bbe268] /var/motr/m0ub/m0trace.118607
Motr panic: fatal signal delivered at unknown() unknown:0 (errno: 0) (last failed: none) [git: 2.0.0-585-79-g06bbe268] pid: 118607  /var/motr/m0ub/m0trace.118607
Motr panic reason: signo: 11
/var/cortx/cortx-motr/motr/.libs/libmotr.so.2(m0_arch_backtrace+0x20)[0x7f80c7f96a20]
/var/cortx/cortx-motr/motr/.libs/libmotr.so.2(m0_arch_panic+0xdf)[0x7f80c7f96bcf]
/var/cortx/cortx-motr/motr/.libs/libmotr.so.2(m0_panic+0x127)[0x7f80c7f84c67]
/var/cortx/cortx-motr/motr/.libs/libmotr.so.2(+0x3a4c18)[0x7f80c7f96c18]
/lib64/libpthread.so.0(+0x12d50)[0x7f80c7662d50]
/var/cortx/cortx-motr/motr/.libs/libmotr.so.2(m0_ub_set_add+0x15)[0x7f80c7f8eb35]
/var/cortx/cortx-motr/ut/.libs/lt-m0ub(main+0x234)[0x401104]
/lib64/libc.so.6(__libc_start_main+0xf3)[0x7f80baf14ca3]
/var/cortx/cortx-motr/ut/.libs/lt-m0ub(_start+0x2e)[0x4011ee]
/var/cortx/cortx-motr/utils/m0run: line 425: 118607 Aborted                 (core dumped) $(srcdir_path_of $binary) "$@"


For the convenience of the Seagate development team, this issue has been mirrored in a private Seagate Jira Server: https://jts.seagate.com/browse/CORTX-29857. Note that community members will not be able to access that Jira server but that is not a problem since all activity in that Jira mirror will be copied into this GitHub issue.

@r-wambui added the Triage: DevAd (Triage owned by DevAd) and Status: L1 Triage (Initial triage) labels on Mar 31, 2022

Papan Kumar Singh commented in Jira Server:

(gdb) bt
#0 0x00007fc3004c538f in raise () from /lib64/libc.so.6
#1 0x00007fc3004afdc5 in abort () from /lib64/libc.so.6
#2 0x00007fc30d549c49 in m0_arch_panic (c=c@entry=0x7fc30d9faf60 <signal_panic>, ap=ap@entry=0x7ffd90b8b678) at lib/user_space/uassert.c:131
#3 0x00007fc30d537cd7 in m0_panic (ctx=ctx@entry=0x7fc30d9faf60 <signal_panic>) at lib/assert.c:52
#4 0x00007fc30d549c88 in sigsegv (sig=11) at lib/user_space/ucookie.c:52
#5 <signal handler called>
#6 0x00007fc30d541ba5 in m0_ub_set_add (set=0x6021e0) at lib/ub.c:73


Papan Kumar Singh commented in Jira Server:

Able to run on CentOS:

 
[root@ssc-vm-g4-rhev4-1295 cortx-motr-1]# ./scripts/m0 run-ub -l
----- run_ub -l -----
Available benchmarks:
ad-ub
adieu-ub
fol-ub
fom-ub
list-ub
memory-ub
parity-math-ub
parity-math-mt-ub
thread-ub
time-ub
timer-ub
tlist-ub
trace-ub
varr-ub

[root@ssc-vm-g4-rhev4-1295 cortx-motr-1]# cat /etc/*-release
CentOS Linux release 7.9.2009 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.9.2009 (Core)
CentOS Linux release 7.9.2009 (Core)

[root@ssc-vm-g4-rhev4-1295 cortx-motr-1]# git log
commit 2d8a769
Author: Abhishek Saha <abhishek.saha@seagate.com>
Date:   Wed Apr 6 09:29:00 2022 +0530

    CORTX-29713: m0_conf_pver_status() now returns CRITICAL if max failures reached at any level (#1571)

    When the allowance was set to less than K at any level and that many failures
    were reached, bytecount had become critical, but the old logic still marked it
    as degraded because the failure count was still less than K. For example, in a
    3-node cluster with SNS 4+2+0 we support at most 1 node failure. So when a node
    failed, data should have become critical, but the old logic marked it as
    degraded, since the number of failures was 1, which is < K. Updated the logic
    to check whether failures at any level have reached that level's maximum; if
    so, the pool version is marked CRITICAL even if failures < K.
    tolerance_failure_cmp() reports whether any level has reached its maximum
    failures, returning an integer marking MAX_FAILURE_NOT_REACHED,
    MAX_FAILURE_REACHED, or MAX_FAILURE_EXCEEDED. This further helps in more
    accurate marking of the DEGRADED, CRITICAL, and DAMAGED states.

    * Log tolerance of pver as well

    Signed-off-by: Abhishek Saha <abhishek.saha@seagate.com>

commit 6ae03fd
Author: Atul Deshmukh <atul.deshmukh@seagate.com>
Date:   Tue Apr 5 23:42:11 2022 +0530

    CORTX-30135 scripts: m0workload changes to run in client pod (#1584)


Papan Kumar Singh commented in Jira Server:

Patch tested with latest main on Rocky Linux.


Papan Kumar Singh commented in Jira Server:

Patch is tested on CentOS and Rocky Linux.


Papan Kumar Singh commented in Jira Server:

[root@ssc-vm-rhev4-2209 cortx-motr-1]# ./scripts/m0 run-ub -l
----- run_ub -l -----
Available benchmarks:
ad-ub
adieu-ub
fol-ub
fom-ub
list-ub
memory-ub
parity-math-ub
parity-math-mt-ub
thread-ub
time-ub
timer-ub
tlist-ub
trace-ub
varr-ub

 

[root@ssc-vm-rhev4-2209 ~]# cat /etc/*-release
NAME="Rocky Linux"
VERSION="8.4 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.4 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8.4:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
Rocky Linux release 8.4 (Green Obsidian)
Rocky Linux release 8.4 (Green Obsidian)
Rocky Linux release 8.4 (Green Obsidian)

3 participants