fix the multi-arch Docker peer build again #3839
Merged
Signed-off-by: Josh Kneubuhl <jkneubuh@us.ibm.com>
Type of change
Description
This PR runs the Fabric release pipeline correctly as a "release" practice run triggered through GitHub.
The FABRIC_VER of a component is inferred from the semver tag applied by the release process. The FABRIC_VER is applied throughout the Makefile, the Docker images, and the metadata encoded into the client binaries.

Additional details
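For illustration, the version inference described above might look roughly like the following. This is a hedged sketch, not the exact Makefile change; the tag value, output path, and the -ldflags target (Fabric's common/metadata.Version) are assumptions.

```sh
# Derive FABRIC_VER from the semver release tag (e.g. "v2.5.0" -> "2.5.0")
# and stamp it into the client binary metadata at build time.
RELEASE_TAG="v2.5.0"            # e.g. the tag that triggered the GitHub release
FABRIC_VER="${RELEASE_TAG#v}"

go build \
  -ldflags "-X github.com/hyperledger/fabric/common/metadata.Version=${FABRIC_VER}" \
  -o build/bin/peer ./cmd/peer
```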
This commit:
- Adds a work-around for concurrency issues encountered when building multiple target architectures with buildx in parallel. When multiple buildx builders run concurrently, an unpredictable but frequent error occurs while communicating with the builder, manifesting as "Error: 403 Forbidden" when pushing some, but not all, of the image layers to ghcr.io. This is most likely a resource (CPU, disk, RAM, etc.) being exhausted on the GitHub Actions runner, NOT an authentication error when connecting to the container registry. Serializing the buildx builder steps seems to have eliminated the sporadic failure; see the first sketch after this list.
- Solves a SIGSEGV error encountered when running the dynamically linked binaries (peer, orderer, etc.) in the Alpine arm64 images. The problem stems from producing a dynamically linked (libc.so) executable on the golang-alpine base image, copying it out, and running it in a vanilla Alpine container. (Alpine does NOT ship glibc, and something is either wrong with the musl libc for arm64, or it worked by accident on amd64; see the second sketch after this list.)
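The serialization work-around might look roughly like the following. This is a hedged sketch rather than the exact CI change; the image name, tag, and build context are illustrative assumptions.

```sh
# Build and push each architecture with its own buildx invocation, one at a
# time, instead of a single parallel --platform linux/amd64,linux/arm64 build.
IMAGE="ghcr.io/example-org/fabric-peer"   # illustrative image name
TAG="2.5.0"                               # illustrative tag

for arch in amd64 arm64; do
  docker buildx build \
    --platform "linux/${arch}" \
    --tag "${IMAGE}:${TAG}-${arch}" \
    --push \
    .
done

# Stitch the per-arch images into a single multi-arch manifest list.
docker buildx imagetools create \
  --tag "${IMAGE}:${TAG}" \
  "${IMAGE}:${TAG}-amd64" \
  "${IMAGE}:${TAG}-arm64"
```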
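One general way to avoid the libc mismatch on arm64 is to build a statically linked binary so the runtime Alpine image needs no dynamic loader at all. This is a hedged sketch of that idea, not necessarily the fix taken in this PR; the output path is an assumption.

```sh
# CGO_ENABLED=0 forces a fully static Go binary with no libc.so / musl
# dependency, so it runs on a vanilla Alpine arm64 image.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 \
  go build -o build/bin/peer ./cmd/peer

# Sanity check before copying into the image: "file" should report
# "statically linked" rather than naming a dynamic interpreter.
file build/bin/peer
```

Related issues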