
API Strapi Deployment #29

Closed · h0lybyte opened this issue Sep 13, 2022 · 26 comments

@h0lybyte (Member) commented Sep 13, 2022:

Describe the bug

Field | Value
-- | --
State | rejected
State Message | preparing
Error message | invalid mount config for type "bind": bind source path does not exist: /data/compose/74/config


Currently building out the Docker template for the API; going to grab some overpriced caffeine and then get back into the flow.

The stack is currently under strapi-traefik-ve2_api
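
For reference, that bind error usually means the host path in the stack's volume mapping does not exist on the node that received the task. A minimal sketch of the likely fix, assuming the missing host path is swapped for a named volume (service and volume names here are hypothetical):

version: "3.8"
services:
  api:
    image: strapi/strapi:latest          # placeholder image
    volumes:
      # Fails when the node lacks the host directory:
      # - /data/compose/74/config:/srv/app/config
      # Named volume instead: the engine creates it on whichever node runs the task
      - api_config:/srv/app/config
volumes:
  api_config: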


Links / References that matter as of 9/19/22

[8] https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/
[9] https://stackoverflow.com/questions/53962776/whats-the-difference-between-pm2-and-pm2-runtime
[10] https://yarnpkg.com/package/@pm2/io
[11] https://pm2.io/docs/runtime/integration/docker/
[12] https://stackoverflow.com/questions/51191378/what-is-the-point-of-using-pm2-and-docker-together
[13] https://github.com/Unitech/pm2/blob/master/examples/ecosystem-file/process.yml
[14] https://pm2.keymetrics.io/docs/usage/application-declaration/
[15] https://stackoverflow.com/questions/59046837/what-is-the-pm2-for-command-yarn-run-start
[16] https://stackoverflow.com/questions/67010589/getting-errors-when-trying-to-start-app-via-yarn-pm2

h0lybyte added the bug (Something isn't working) label Sep 13, 2022
@h0lybyte (Member Author):

time="2022-09-14T00:00:04Z" level=fatal msg="failed to evacuate root cgroup: mkdir /sys/fs/cgroup/init: read-only file system"
Error from the k3s.
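
Possibly relevant: the upstream k3s repo ships a docker-compose example that runs the server as a privileged container, and Swarm services do not honor privileged, which would line up with the read-only cgroup filesystem in that fatal message. A rough sketch of the upstream shape (image tag and token are assumptions):

version: "3.8"
services:
  k3s-server:
    image: rancher/k3s:v1.24.4-k3s1   # tag is an assumption
    command: server
    privileged: true                  # needed by k3s-in-docker; not honored by swarm services
    tmpfs:
      - /run
      - /var/run
    environment:
      K3S_TOKEN: changeme             # placeholder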

@h0lybyte (Member Author):

[1] https://strapi.io/blog/deploying-and-scaling-the-official-strapi-demo-app-foodadvisor-with-kubernetes - Strapi on Kubernetes.
[2] /#!/7/docker/tasks/rujkw9rwq8gy6jzey7n9kpdq8

@jzanecook (Member):

k3d-io/k3d#155 (comment)

@jzanecook (Member):

I0914 01:52:22.235222       1 controllermanager.go:574] Started "ttl-after-finished"
I0914 01:52:22.235532       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0914 01:52:22.235770       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0914 01:52:22.235789       1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
I0914 01:52:22.245603       1 shared_informer.go:247] Caches are synced for namespace 
I0914 01:52:22.268746       1 shared_informer.go:247] Caches are synced for ephemeral 
I0914 01:52:22.273834       1 shared_informer.go:247] Caches are synced for service account 
I0914 01:52:22.276415       1 shared_informer.go:247] Caches are synced for endpoint 
I0914 01:52:22.284204       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I0914 01:52:22.287009       1 shared_informer.go:247] Caches are synced for stateful set 
I0914 01:52:22.287160       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I0914 01:52:22.287388       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I0914 01:52:22.287551       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I0914 01:52:22.288428       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0914 01:52:22.289506       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I0914 01:52:22.289987       1 shared_informer.go:247] Caches are synced for PVC protection 
E0914 01:52:22.290874       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0914 01:52:22.301016       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I0914 01:52:22.301016       1 shared_informer.go:247] Caches are synced for TTL 
I0914 01:52:22.301032       1 shared_informer.go:247] Caches are synced for persistent volume 
E0914 01:52:22.302822       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0914 01:52:22.306512       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0914 01:52:22.310565       1 shared_informer.go:247] Caches are synced for attach detach 
I0914 01:52:22.317846       1 shared_informer.go:247] Caches are synced for ReplicationController 
I0914 01:52:22.325638       1 shared_informer.go:247] Caches are synced for GC 
I0914 01:52:22.337094       1 shared_informer.go:247] Caches are synced for expand 
I0914 01:52:22.337142       1 shared_informer.go:247] Caches are synced for HPA 
I0914 01:52:22.337160       1 shared_informer.go:247] Caches are synced for deployment 
I0914 01:52:22.339068       1 shared_informer.go:247] Caches are synced for PV protection 
I0914 01:52:22.346532       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0914 01:52:22.346540       1 shared_informer.go:247] Caches are synced for disruption 
I0914 01:52:22.346752       1 disruption.go:371] Sending events to api server.
I0914 01:52:22.349668       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0914 01:52:22.355630       1 shared_informer.go:247] Caches are synced for taint 
I0914 01:52:22.355839       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0914 01:52:22.359609       1 shared_informer.go:247] Caches are synced for daemon sets 
I0914 01:52:22.360631       1 shared_informer.go:247] Caches are synced for crt configmap 
I0914 01:52:22.366901       1 shared_informer.go:247] Caches are synced for node 
I0914 01:52:22.367066       1 range_allocator.go:172] Starting range CIDR allocator
I0914 01:52:22.367117       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0914 01:52:22.367210       1 shared_informer.go:247] Caches are synced for cidrallocator 
I0914 01:52:22.435384       1 shared_informer.go:247] Caches are synced for job 
I0914 01:52:22.435828       1 shared_informer.go:247] Caches are synced for TTL after finished 
I0914 01:52:22.552486       1 shared_informer.go:247] Caches are synced for cronjob 
I0914 01:52:22.612720       1 shared_informer.go:247] Caches are synced for resource quota 
I0914 01:52:22.635888       1 shared_informer.go:247] Caches are synced for resource quota 
I0914 01:52:22.743258       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0914 01:52:22.746826       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-7448499f4d to 1"
I0914 01:52:22.746908       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
I0914 01:52:22.747663       1 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-5ff76fc89d to 1"
I0914 01:52:22.896957       1 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-jz2kt"
I0914 01:52:22.899854       1 event.go:291] "Event occurred" object="kube-system/helm-install-traefik-crd" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-crd-8c6nh"
I0914 01:52:22.902109       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
time="2022-09-14T01:52:22.922520305Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
I0914 01:52:22.996606       1 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-5ff76fc89d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-5ff76fc89d-4nxgb"
I0914 01:52:22.996659       1 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-hpkpm"
I0914 01:52:22.996683       1 event.go:291] "Event occurred" object="kube-system/coredns-7448499f4d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-7448499f4d-tfv7g"
I0914 01:52:23.006812       1 shared_informer.go:247] Caches are synced for garbage collector 
I0914 01:52:23.086028       1 shared_informer.go:247] Caches are synced for garbage collector 
I0914 01:52:23.086057       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
time="2022-09-14T01:52:23.364691165Z" level=info msg="Cluster-Http-Server 2022/09/14 01:52:23 http: TLS handshake error from 127.0.0.1:37140: remote error: tls: bad certificate"
W0914 01:52:23.408030       1 handler_proxy.go:102] no RequestInfo found in the context
E0914 01:52:23.408141       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0914 01:52:23.408158       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
time="2022-09-14T01:52:23.475016478Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:23 +0000 UTC"
time="2022-09-14T01:52:23.588595213Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:23 +0000 UTC"
time="2022-09-14T01:52:23.594730437Z" level=error msg="Failed to configure agent: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"
time="2022-09-14T01:52:23.925892281Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:24.929101461Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:25.932610564Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:26.935538475Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:27.938550478Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:28.596561968Z" level=info msg="Cluster-Http-Server 2022/09/14 01:52:28 http: TLS handshake error from 127.0.0.1:36356: remote error: tls: bad certificate"
time="2022-09-14T01:52:28.718948812Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:28 +0000 UTC"
time="2022-09-14T01:52:28.838822407Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:28 +0000 UTC"
time="2022-09-14T01:52:28.845111792Z" level=error msg="Failed to configure agent: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"
time="2022-09-14T01:52:28.941293792Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:29.944919788Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:30.947920811Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:31.951352482Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:32.955234262Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:33.847041043Z" level=info msg="Cluster-Http-Server 2022/09/14 01:52:33 http: TLS handshake error from 127.0.0.1:36446: remote error: tls: bad certificate"
time="2022-09-14T01:52:33.956218976Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:33 +0000 UTC"
time="2022-09-14T01:52:33.958891641Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:34.059734683Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:34 +0000 UTC"
time="2022-09-14T01:52:34.065835056Z" level=error msg="Failed to configure agent: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"
time="2022-09-14T01:52:34.962296550Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:35.965742276Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:36.968568325Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:37.972189150Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:38.976229808Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:39.067934607Z" level=info msg="Cluster-Http-Server 2022/09/14 01:52:39 http: TLS handshake error from 127.0.0.1:52448: remote error: tls: bad certificate"
time="2022-09-14T01:52:39.176914875Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:39 +0000 UTC"
time="2022-09-14T01:52:39.302238097Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1663120321: notBefore=2022-09-14 01:52:01 +0000 UTC notAfter=2023-09-14 01:52:39 +0000 UTC"
time="2022-09-14T01:52:39.308237781Z" level=error msg="Failed to configure agent: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"
time="2022-09-14T01:52:39.979333597Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:40.982902545Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:41.986200696Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2022-09-14T01:52:42.990562529Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"

@h0lybyte (Member Author):

Going to research this error more:
"Failed to configure agent: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"

@h0lybyte (Member Author):

[6] https://docs.docker.com/storage/storagedriver/vfs-driver/
Let's try adding the VFS storage driver.
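
Per that doc, the storage driver is set on the Docker daemon rather than per stack, so each worker would need it in /etc/docker/daemon.json (JSON, as Docker requires), followed by a daemon restart. Minimal sketch of that file:

{
  "storage-driver": "vfs"
}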

@h0lybyte (Member Author):

      placement:
        constraints:
          - node.role != manager

So we put the workers on VFS and then add a placement constraint so the role is not manager.

@jzanecook (Member):

We should use - node.role == worker instead; see the sketch below.
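
For reference, the full block with that constraint would look like this (service name is hypothetical):

services:
  api:
    deploy:
      placement:
        constraints:
          - node.role == worker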

@jzanecook (Member):

k3d-io/k3d#493

References something about kernel opts for cgroups v2; will check later.

@h0lybyte (Member Author):

k3s-io/k3s#4085

@h0lybyte (Member Author):

time="2022-09-14T19:42:40Z" level=warning msg="Host resolv.conf includes loopback or multicast nameservers - kubelet will use autogenerated resolv.conf with nameserver 8.8.8.8"
time="2022-09-14T19:42:40Z" level=info msg="Waiting to retrieve agent configuration; server is not ready: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted"
I0914 19:42:40.676814      49 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0914 19:42:40.677918      49 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0914 19:42:40.677975      49 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0914 19:42:40.679182      49 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0914 19:42:40.679240      49 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0914 19:42:40.713154      49 genericapiserver.go:656] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0914 19:42:40.714442      49 instance.go:261] Using reconciler: lease
I0914 19:42:40.886063      49 instance.go:574] API group "internal.apiserver.k8s.io" is not enabled, skipping.
W0914 19:42:41.170890      49 genericapiserver.go:656] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.176464      49 genericapiserver.go:656] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.189272      49 genericapiserver.go:656] Skipping API autoscaling/v2beta1 because it has no resources.
W0914 19:42:41.194778      49 genericapiserver.go:656] Skipping API batch/v1beta1 because it has no resources.
W0914 19:42:41.196843      49 genericapiserver.go:656] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.198576      49 genericapiserver.go:656] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.198652      49 genericapiserver.go:656] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.203845      49 genericapiserver.go:656] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.203886      49 genericapiserver.go:656] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.205324      49 genericapiserver.go:656] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.205350      49 genericapiserver.go:656] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.205423      49 genericapiserver.go:656] Skipping API policy/v1beta1 because it has no resources.
W0914 19:42:41.209459      49 genericapiserver.go:656] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.209489      49 genericapiserver.go:656] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.210923      49 genericapiserver.go:656] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.210949      49 genericapiserver.go:656] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.215232      49 genericapiserver.go:656] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.220195      49 genericapiserver.go:656] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0914 19:42:41.225149      49 genericapiserver.go:656] Skipping API apps/v1beta2 because it has no resources.
W0914 19:42:41.225192      49 genericapiserver.go:656] Skipping API apps/v1beta1 because it has no resources.
W0914 19:42:41.227399      49 genericapiserver.go:656] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0914 19:42:41.229037      49 genericapiserver.go:656] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0914 19:42:41.230095      49 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0914 19:42:41.230118      49 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0914 19:42:41.246515      49 genericapiserver.go:656] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.

@h0lybyte (Member Author):

[7] https://www.youtube.com/watch?v=B3QawTvJ6ww

Okay we might need a new approach to this. I am going to take a break again from k3s/k3d and work on Strapi today.

@h0lybyte (Member Author):

Docker

Looks like they are supporting Docker! Well, for now... the update was posted 2 days ago; hopefully this might help.

Sauce: https://feedback.strapi.io/developer-experience/p/docker-support-for-v4

@jzanecook (Member):

Very nice. Hopefully we can get this set up today and move forward.

h0lybyte moved this from Todo to In Progress in KBVE - Old Board Sep 19, 2022
@h0lybyte (Member Author):

yarn run v1.22.19
$ strapi start
Could not find the secret, probably not running in swarm mode: ADMIN_JWT_SECRET_FILE. Err: Error: ENOENT: no such file or directory, open 'ADMIN_JWT_SECRET_FILE'
Could not find the secret, probably not running in swarm mode: DATABASE_HOST_FILE. Err: Error: ENOENT: no such file or directory, open 'DATABASE_HOST_FILE'
Could not find the secret, probably not running in swarm mode: DATABASE_PORT_FILE. Err: Error: ENOENT: no such file or directory, open 'DATABASE_PORT_FILE'
Could not find the secret, probably not running in swarm mode: DATABASE_NAME_FILE. Err: Error: ENOENT: no such file or directory, open 'DATABASE_NAME_FILE'
Could not find the secret, probably not running in swarm mode: DATABASE_USERNAME_FILE. Err: Error: ENOENT: no such file or directory, open 'DATABASE_USERNAME_FILE'
Could not find the secret, probably not running in swarm mode: DATABASE_PASSWORD_FILE. Err: Error: ENOENT: no such file or directory, open 'DATABASE_PASSWORD_FILE'
Could not find the secret, probably not running in swarm mode: HOST_FILE. Err: Error: ENOENT: no such file or directory, open 'HOST_FILE'
Could not find the secret, probably not running in swarm mode: PORT_FILE. Err: Error: ENOENT: no such file or directory, open 'PORT_FILE'
Could not find the secret, probably not running in swarm mode: APP_KEYS_FILE. Err: Error: ENOENT: no such file or directory, open 'APP_KEYS_FILE'
[2022-09-20 00:52:50.317] debug: ⛔️ Server wasn't able to start properly.
[2022-09-20 00:52:50.319] error: connect ECONNREFUSED 127.0.0.1:6032
Error: connect ECONNREFUSED 127.0.0.1:6032

Hmm, now I am wondering why it didn't read them.
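
One detail worth noting in that output: open() received the literal string 'ADMIN_JWT_SECRET_FILE', which suggests the *_FILE environment variables were never set at all, rather than pointing at missing /run/secrets paths. A minimal sketch of the wiring a swarm stack would need, assuming pre-created external secrets (all names here are hypothetical):

version: "3.8"
services:
  api:
    image: kbve/api:latest                       # hypothetical image name
    environment:
      ADMIN_JWT_SECRET_FILE: /run/secrets/admin_jwt_secret
      DATABASE_PASSWORD_FILE: /run/secrets/database_password
    secrets:                                     # mounts each secret under /run/secrets/
      - admin_jwt_secret
      - database_password
secrets:
  admin_jwt_secret:
    external: true                               # created beforehand with `docker secret create`
  database_password:
    external: true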

@h0lybyte (Member Author) commented Sep 20, 2022:

So I am thinking that we update the api.kbve.com Dockerfile, so that instead of CMD ["yarn", "start"] the CMD runs pm2, which then runs the yarn command to start.

This way the image will keep running, and we can get into the shell / bash and see what issues we are having.

Dockerfile
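
A sketch of what that CMD could drive, using a pm2 ecosystem file in the style of [13]/[14] (file and app names are hypothetical; running yarn through bash follows the pattern discussed in [15]):

apps:
  - name: api            # hypothetical app name
    script: yarn         # pm2 launches yarn itself...
    args: start          # ...which runs the strapi start script
    interpreter: bash    # yarn is a shell entrypoint, not a node script

The Dockerfile CMD would then be something like pm2-runtime process.yml, keeping the process in the foreground so the container stays up.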

@h0lybyte (Member Author):

[8] https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/

Hmm but that seems to be for npm.

@h0lybyte (Member Author):

[9] https://stackoverflow.com/questions/53962776/whats-the-difference-between-pm2-and-pm2-runtime

The main difference between pm2 and pm2-runtime is:

pm2-runtime is designed for Docker containers; it keeps the application in the foreground, which keeps the container running.
pm2 is designed for normal usage, where you run the application in the background.

@h0lybyte (Member Author):

[10] https://yarnpkg.com/package/@pm2/io

Thinking: I am not too sure we want to put pm2 inside of the API, but that might also be an interesting FUTURE aspect to look into.

@h0lybyte (Member Author):

Going to test the Strapi upgrade right now. If it works, as I am hoping it will, we should be good to go!

jzanecook moved this from In Progress to In Review in KBVE - Old Board Sep 21, 2022
h0lybyte moved this from In Review to Accepted in KBVE - Old Board Sep 22, 2022
h0lybyte added the 0 label Sep 23, 2022
@h0lybyte (Member Author):

Looks functional after 30 days. Going to close this up.

Repository owner moved this from Accepted to Closed in KBVE - Old Board Oct 24, 2022
h0lybyte removed the bug (Something isn't working) and 0 labels Oct 24, 2022