
ingress-nginx-controller pod fails to start on s390x #6504

Closed
vibhutisawant opened this issue Nov 23, 2020 · 20 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@vibhutisawant

NGINX Ingress controller version: v0.41.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/s390x"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:09:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/s390x"}

Environment:

  • OS (e.g. from /etc/os-release): Ubuntu 20.10
  • Kernel (e.g. uname -a): Linux host 5.8.0-26-generic #27-Ubuntu SMP Wed Oct 21 22:24:40 UTC 2020 s390x s390x s390x GNU/Linux

What happened:

The error messages below were observed in the pod's logs:

root@host:/home/ubuntu# docker logs 6dd74d3f2a22
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v0.41.2
  Build:         d8a93551e6e5798fc4af3eb910cef62ecddc8938
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.4

-------------------------------------------------------------------------------

I1123 09:49:31.976294       6 flags.go:205] "Watching for Ingress" class="nginx"
W1123 09:49:31.976358       6 flags.go:210] Ingresses with an empty class will also be processed by this Ingress controller
W1123 09:49:31.976573       6 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1123 09:49:31.976732       6 main.go:241] "Creating API client" host="https://10.96.0.1:443"
I1123 09:49:31.983909       6 main.go:285] "Running in Kubernetes cluster" major="1" minor="19" git="v1.19.4" state="clean" commit="d360454c9bcd1634cf4cc52d1867af5491dc9c5f" platform="linux/s390x"
I1123 09:49:32.260115       6 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1123 09:49:32.260983       6 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W1123 09:49:32.262482       6 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I1123 09:49:32.271800       6 ssl.go:528] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1123 09:49:32.300660       6 nginx.go:249] "Starting NGINX Ingress controller"
I1123 09:49:32.304931       6 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1a020af8-364f-4680-8420-da8f836890e8", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1123 09:49:33.501193       6 nginx.go:291] "Starting NGINX process"
I1123 09:49:33.501265       6 leaderelection.go:243] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
I1123 09:49:33.501549       6 nginx.go:311] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1123 09:49:33.501817       6 controller.go:144] "Configuration changes detected, backend reload required"
2020/11/23 09:49:33 [error] 31#31: failed to run the Lua code for coroutine_api: 4396981146480: coroutine_api:2: attempt to call global 'require' (a nil value)
nginx: [error] failed to run the Lua code for coroutine_api: 4396981146480: coroutine_api:2: attempt to call global 'require' (a nil value)
2020/11/23 09:49:33 [alert] 31#31: failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: /usr/local/lib/lua/resty/core.lua:3: attempt to index global 'ngx' (a nil value)) in /etc/nginx/nginx.conf:5
nginx: [alert] failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: /usr/local/lib/lua/resty/core.lua:3: attempt to index global 'ngx' (a nil value)) in /etc/nginx/nginx.conf:5
W1123 09:49:33.506095       6 nginx.go:34]
-------------------------------------------------------------------------------
NGINX master process died (1): exit status 1
-------------------------------------------------------------------------------
I1123 09:49:33.507477       6 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I1123 09:49:33.507511       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-5dbd9649d4-jlrn7"
I1123 09:49:33.514461       6 status.go:205] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-5dbd9649d4-jlrn7" node="host"
E1123 09:49:33.547937       6 controller.go:156] Unexpected failure reloading the backend:
exit status 1
2020/11/23 09:49:33 [notice] 39#39: signal process started
2020/11/23 09:49:33 [error] 39#39: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"
E1123 09:49:33.548014       6 queue.go:130] "requeuing" err="exit status 1\n2020/11/23 09:49:33 [notice] 39#39: signal process started\n2020/11/23 09:49:33 [error] 39#39: invalid PID number \"\" in \"/tmp/nginx.pid\"\nnginx: [error] invalid PID number \"\" in \"/tmp/nginx.pid\"\n" key="initial-sync"
I1123 09:49:33.548191       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5dbd9649d4-jlrn7", UID:"148c5a71-4812-4827-95ad-ec21784aad8a", APIVersion:"v1", ResourceVersion:"3062", FieldPath:""}): type: 'Warning' reason: 'RELOAD' Error reloading NGINX: exit status 1
2020/11/23 09:49:33 [notice] 39#39: signal process started
2020/11/23 09:49:33 [error] 39#39: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"
I1123 09:49:36.835301       6 controller.go:144] "Configuration changes detected, backend reload required"
E1123 09:49:36.879828       6 controller.go:156] Unexpected failure reloading the backend:
exit status 1
2020/11/23 09:49:36 [notice] 47#47: signal process started
2020/11/23 09:49:36 [error] 47#47: invalid PID number "" in "/tmp/nginx.pid"

What you expected to happen:

The ingress-nginx-controller pod should be up and running.

How to reproduce it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml

Anything else we need to know:

/kind bug

@vibhutisawant added the kind/bug label Nov 23, 2020
@ellieayla (Contributor) commented Nov 24, 2020

Related to issue #3912

@vibhutisawant (Author)

@alanjcastonguay As mentioned in #3912 (comment)

Support for s390x was added in release 0.33.0; however, the image has produced the above-mentioned error since version 0.41.0. Could you please guide us toward the possible cause?

@aledbf (Member) commented Nov 25, 2020

@vibhutisawant I've lost access to the s390x machine where the tests are executed. I hope it will be possible to get access to a new machine before the end of the week.

@aledbf (Member) commented Nov 29, 2020

@vibhutisawant I can reproduce the issue using 0.41.2. The issue is related to recent changes in the core Lua packages.
Please use k8s.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
while we investigate the origin of the regression.
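
For reference, a minimal way to apply that pin in place (a sketch; it assumes the default ingress-nginx namespace and the Deployment/container names used by the static deploy.yaml, ingress-nginx-controller and controller):

# Pin the controller Deployment to the known-good v0.40.2 image by digest.
kubectl -n ingress-nginx set image deployment/ingress-nginx-controller \
  controller=k8s.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f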

@vibhutisawant (Author)

@aledbf Thanks for the update. I was able to deploy the NGINX Ingress controller successfully using the image k8s.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f

@vibhutisawant (Author)

Hi @aledbf
Any updates on this?

@aledbf (Member) commented Dec 18, 2020

Any updates on this?

@vibhutisawant no

@vibhutisawant (Author)

Hi @aledbf

Any updates on this issue?

Also, as seen from the pod logs, the resty.core module fails to load. To get more insight, could you please confirm whether an s390x-specific source code change is required in lua-resty-core or lua-nginx-module?
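
One way to narrow that down might be to exercise LuaJIT directly, outside of nginx (a sketch; it assumes the controller image ships a luajit binary on PATH, which may not hold):

# If plain LuaJIT already has broken globals on s390x, the fault lies in
# LuaJIT itself rather than in lua-resty-core or lua-nginx-module.
docker run --rm --entrypoint sh k8s.gcr.io/ingress-nginx/controller:v0.41.2 \
  -c 'luajit -e "print(type(require))"'
# A healthy build prints "function"; "nil" matches the coroutine_api error above.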

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 8, 2021
@vibhutisawant (Author)
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jun 14, 2021
@iamNoah1 (Contributor)

Hi @vibhutisawant @alanjcastonguay, can you confirm that the issue still exists with newer versions of ingress-nginx?

@ellieayla (Contributor)

Unknown. I have no s390x machine.

@vibhutisawant (Author)

@iamNoah1 The above-mentioned error still persists in the latest version.

@aledbf Also observed that the controller image referenced in https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml still points to v0.46.0.

@kycfeel commented Aug 24, 2021

It is still happening with the latest v1.0.0-beta.3 image.

2021/08/24 03:42:58 [error] 27#27: failed to run the Lua code for coroutine_api: 4397451434808: coroutine_api:2: attempt to call global 'require' (a nil value)
nginx: [error] failed to run the Lua code for coroutine_api: 4397451434808: coroutine_api:2: attempt to call global 'require' (a nil value)
2021/08/24 03:42:58 [alert] 27#27: failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: /usr/local/lib/lua/resty/core.lua:3: attempt to index global 'ngx' (a nil value)) in /etc/nginx/nginx.conf:5
nginx: [alert] failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: /usr/local/lib/lua/resty/core.lua:3: attempt to index global 'ngx' (a nil value)) in /etc/nginx/nginx.conf:5
W0824 03:42:58.601880 6 nginx.go:34]
-------------------------------------------------------------------------------
NGINX master process died (1): exit status 1
-------------------------------------------------------------------------------
E0824 03:42:58.645545 6 controller.go:162] Unexpected failure reloading the backend:
exit status 1

2021/08/24 03:42:58 [error] 29#29: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"

This is a pretty serious issue, since Kubernetes dropped support for networking.k8s.io/v1beta1 in v1.22. A Kubernetes cluster at v1.21 or lower on an s390x machine can use an older ingress-nginx image (v0.40.2 or older, AFAIK), but that no longer works from v1.22 on due to the Ingress API deprecation and removal.
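
For context, only the networking.k8s.io/v1 Ingress shape is accepted by a v1.22 API server; a minimal sketch (resource names and backend are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
EOF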

This means there is currently NO WAY TO RUN the ingress-nginx controller on Kubernetes clusters running on s390x machines.

We need a solution for this as quickly as possible.

@longwuyuan (Contributor)

The recent release activity has taken a lot of focus, and s390x hardware is hard to come by.
Can we check which image you have?
Can you add the commands and their output:

kubectl version
kubectl get all -n <ingresscontrollernamespace>
kubectl -n <ingresscontrollernamespace> describe po <ingcontrollerpodname>

@kycfeel commented Aug 24, 2021

@longwuyuan Yeah sure. Here is the information.

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.1-3+a0de36128ac7e3", GitCommit:"a0de36128ac7e3f8680c195dda4c1722699bbbad", GitTreeState:"clean", BuildDate:"2021-08-23T16:46:17Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/s390x"}
$ kubectl get all -n default

NAME                                            READY   STATUS             RESTARTS      AGE
pod/ingress-nginx-controller-7fdd486794-tlrfj   0/1     CrashLoopBackOff   25 (1s ago)   78m

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                           ClusterIP   10.152.183.1     <none>        443/TCP                      93m
service/ingress-nginx-controller-admission   ClusterIP   10.152.183.117   <none>        443/TCP                      80m
service/ingress-nginx-controller             NodePort    10.152.183.244   <none>        80:31160/TCP,443:32259/TCP   80m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   0/1     1            0           80m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-7fdd486794   1         1         0       80m
$ kubectl describe pod ingress-nginx-controller-7fdd486794-6qbtn

Name:         ingress-nginx-controller-7fdd486794-6qbtn
Namespace:    default
Priority:     0
Node:         bitholla-cerberus-node3/192.168.10.42
Start Time:   Tue, 24 Aug 2021 14:02:38 +0900
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=7fdd486794
Annotations:  <none>
Status:       Running
IP:           192.168.10.42
IPs:
  IP:           192.168.10.42
Controlled By:  ReplicaSet/ingress-nginx-controller-7fdd486794
Containers:
  controller:
    Container ID:  containerd://ebf25326e0f7be7d36c541a0a67ebf4815cbc4de4c1a65f8a4ad56ad5b8e5103
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3@sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695
    Image ID:      k8s.gcr.io/ingress-nginx/controller@sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Tue, 24 Aug 2021 14:02:39 +0900
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-7fdd486794-6qbtn (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cz7gz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-cz7gz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From                      Message
  ----     ------     ----  ----                      -------
  Normal   Scheduled  10s   default-scheduler         Successfully assigned default/ingress-nginx-controller-7fdd486794-6qbtn to bitholla-cerberus-node3
  Normal   Pulled     10s   kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3@sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695" already present on machine
  Normal   Created    10s   kubelet                   Created container controller
  Normal   Started    10s   kubelet                   Started container controller
  Warning  RELOAD     8s    nginx-ingress-controller  Error reloading NGINX: exit status 1
2021/08/24 05:02:41 [warn] 30#30: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2021/08/24 05:02:41 [warn] 30#30: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2021/08/24 05:02:41 [warn] 30#30: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2021/08/24 05:02:41 [notice] 30#30: signal process started
2021/08/24 05:02:41 [error] 30#30: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"
  Warning  RELOAD  5s  nginx-ingress-controller  Error reloading NGINX: exit status 1
2021/08/24 05:02:44 [warn] 32#32: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2021/08/24 05:02:44 [warn] 32#32: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2021/08/24 05:02:44 [warn] 32#32: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2021/08/24 05:02:44 [notice] 32#32: signal process started
2021/08/24 05:02:44 [error] 32#32: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"
  Warning  RELOAD  2s  nginx-ingress-controller  Error reloading NGINX: exit status 1
2021/08/24 05:02:47 [warn] 34#34: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2021/08/24 05:02:47 [warn] 34#34: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2021/08/24 05:02:47 [warn] 34#34: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2021/08/24 05:02:47 [notice] 34#34: signal process started
2021/08/24 05:02:47 [error] 34#34: invalid PID number "" in "/tmp/nginx.pid"
nginx: [error] invalid PID number "" in "/tmp/nginx.pid"

@shahidhs-ibm (Contributor)

Added my comment on the PR raised to fix the issue. The next steps to work on are:

  1. The community generates a new 'k8s.gcr.io/ingress-nginx/nginx' Docker image (communication has been initiated).
  2. Using that new nginx image, update the ingress-nginx controller image (will need a PR).

@shahidhs-ibm (Contributor)

Tested the latest controller image with tag v1.0.0, which includes the changes from PR #7355. I checked the image and it seems to work fine on the s390x architecture.
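
For anyone re-checking on their own s390x cluster, a minimal verification sketch (it assumes the controller-v1.0.0 tag keeps the same static-manifest layout used earlier in this issue):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
# The controller pod should reach Running/Ready instead of CrashLoopBackOff.
kubectl -n ingress-nginx get pods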

@iamNoah1 (Contributor)

Awesome :) closing this one

/close

@k8s-ci-robot (Contributor)

@iamNoah1: Closing this issue.

In response to this:

Awesome :) closing this one

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

velemas pushed a commit to velemas/luajit2 that referenced this issue Nov 8, 2021
and breakage introduced by commit 5980ef9 seen in
kubernetes/ingress-nginx#6504 on s390x.

Additionally implemented:
- Table traversal changes for s390x introduced in commits c6f5ef6 and bb0f241.
- CI for s390x.

Signed-off-by: Artiom Vaskov <artiom.vaskov@ibm.com>
velemas pushed a commit to velemas/luajit2 that referenced this issue Nov 8, 2021
and breakage introduced by commit 5980ef9 seen in
kubernetes/ingress-nginx#6504 on s390x.

Additionally implemented:
- Table traversal changes for s390x introduced in commits c6f5ef6 and bb0f241.
- CI (valgrind disabled for s390x because of unimplemented instruction).

Signed-off-by: Artiom Vaskov <artiom.vaskov@ibm.com>
zhuizhuhaomeng pushed a commit to openresty/luajit2 that referenced this issue Nov 17, 2021
and breakage introduced by commit 5980ef9 seen in
kubernetes/ingress-nginx#6504 on s390x.

Additionally implemented:
- Table traversal changes for s390x introduced in commits c6f5ef6 and bb0f241.
- CI (valgrind disabled for s390x because of unimplemented instruction).

Signed-off-by: Artiom Vaskov <artiom.vaskov@ibm.com>