---
id: getting-started
title: Getting started
---
- Create certificate authorities (CAs)
- Create peers
- Create ordering services
- Create resources without manual provisioning of cryptographic material
- Domain routing with SNI using Istio
- Run chaincode as external chaincode in Kubernetes
- Support Hyperledger Fabric 2.3+ and 3.0
- Managed genesis for Ordering services
- E2E testing including the execution of chaincodes in KIND
- Renewal of certificates
hlf-operator is currently stable. Watch releases of this repository to be notified of future updates.
For discussions and questions, please join the Hyperledger Foundation Discord: https://discord.com/invite/hyperledger
The channel is located under BEVEL and is named bevel-operator-fabric.
You can watch this video to see how to use it to deploy your own network:
Step-by-step video tutorials to set up hlf-operator in Kubernetes
This workshop provides an in-depth, hands-on discussion and demonstration of using Bevel and the new Bevel-Operator-Fabric to deploy Hyperledger Fabric on Kubernetes.
Gala Games is a blockchain gaming platform that empowers players to earn cryptocurrencies and NFTs through gameplay. Founded in 2018 by Eric Schiermeyer, co-founder of Zynga, it aims to create a new type of gaming experience. The platform offers limited edition NFTs and allows players to earn Gala tokens.
If you want to design and deploy a secure Blockchain network based on the latest version of Hyperledger Fabric, feel free to contact dviejo@kungfusoftware.es or visit https://kfs.es/blockchain
Resources:
To start deploying our Fabric network we need a Kubernetes cluster. For this we will use KinD or K3D.
Ensure you have these ports available before creating the cluster:
- 80
- 443
If these ports are not available this tutorial will not work.
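If you are unsure whether the ports are free, a quick check on Linux (a minimal sketch, assuming the ss utility is available) is:
# list any process already listening on ports 80 or 443 (empty output means the ports are free)
sudo ss -ltnp | grep -E ':(80|443)[[:space:]]'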
# if you are using K3D:
k3d cluster create -p "80:30949@agent:0" -p "443:30950@agent:0" --agents 2 k8s-hlf
# if you are using KinD:
cat << EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.30.2
    extraPortMappings:
      - containerPort: 30949
        hostPort: 80
      - containerPort: 30950
        hostPort: 443
EOF
kind create cluster --config=./kind-config.yaml
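Whichever tool you used, it is worth confirming that the cluster is reachable and the nodes are Ready before continuing:
kubectl get nodes
kubectl cluster-info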
In this step we are going to install the Kubernetes operator for Fabric. This will install:
- The CRDs (Custom Resource Definitions) needed to deploy Fabric Peers, Orderers and Certificate Authorities
- The controller that deploys and manages these nodes in Kubernetes
To install Helm: https://helm.sh/docs/intro/install/
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts --force-update
helm install hlf-operator --version=1.11.1 kfs/hlf-operator
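After the chart is installed, you can check that the operator pod starts correctly; the exact pod name depends on the Helm release, so filtering on the release name is just a convenient sketch:
helm list
kubectl get pods | grep hlf-operator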
To install the kubectl plugin, you must first install Krew: https://krew.sigs.k8s.io/docs/user-guide/setup/install/
Afterwards, the plugin can be installed with the following command:
kubectl krew install hlf
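You can confirm the plugin was installed by listing the plugins kubectl can see:
kubectl krew list | grep hlf
kubectl plugin list | grep kubectl-hlf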
Install Istio binaries on the machine:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.23.3 sh -
Install Istio on the Kubernetes cluster:
kubectl create namespace istio-system
export ISTIO_PATH=$(echo $PWD/istio-*/bin)
export PATH="$PATH:$ISTIO_PATH"
istioctl operator init
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  addonComponents:
    grafana:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
    tracing:
      enabled: false
  components:
    ingressGateways:
      - enabled: true
        k8s:
          hpaSpec:
            minReplicas: 1
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 128Mi
          service:
            ports:
              - name: http
                port: 80
                targetPort: 8080
                nodePort: 30949
              - name: https
                port: 443
                targetPort: 8443
                nodePort: 30950
            type: NodePort
        name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 1
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: false
    outboundTrafficPolicy:
      mode: ALLOW_ANY
  profile: default
EOF
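Before moving on, check that the Istio control plane and ingress gateway pods are running and that the gateway service exposes the expected NodePorts (30949 and 30950):
kubectl get pods -n istio-system
kubectl get svc istio-ingressgateway -n istio-system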
export PEER_IMAGE=hyperledger/fabric-peer
export PEER_VERSION=3.0.0
export ORDERER_IMAGE=hyperledger/fabric-orderer
export ORDERER_VERSION=3.0.0
export CA_IMAGE=hyperledger/fabric-ca
export CA_VERSION=1.5.13
kubectl apply -f - <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        rewrite name regex (.*)\.localho\.st istio-ingressgateway.istio-system.svc.cluster.local
        hosts {
            fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
EOF
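This CoreDNS rewrite makes every *.localho.st hostname resolve to the Istio ingress gateway from inside the cluster. The reload plugin in the Corefile should pick up the change automatically; if you want to verify resolution, one option (a sketch, assuming a busybox pod is acceptable in your cluster) is:
# optional: resolve one of the hostnames used later in this tutorial from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup peer0-org1.localho.st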
Set storage class depending on the Kubernetes cluster you are using:
# for Kind
export SC_NAME=standard
# for K3D
export SC_NAME=local-path
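If you are not sure which storage class your cluster provides, list them and pick the default one:
kubectl get storageclass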
kubectl hlf ca create --image=$CA_IMAGE --version=$CA_VERSION --storage-class=$SC_NAME --capacity=1Gi --name=org1-ca \
--enroll-id=enroll --enroll-pw=enrollpw --hosts=org1-ca.localho.st --istio-port=443
kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all
Check that the certification authority is deployed and works:
curl -k https://org1-ca.localho.st:443/cainfo
Register a user in the certification authority of the peer organization (Org1MSP)
# register user in CA for peers
kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw --type=peer \
--enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP
kubectl hlf peer create --statedb=leveldb --image=$PEER_IMAGE --version=$PEER_VERSION --storage-class=$SC_NAME --enroll-id=peer --mspid=Org1MSP \
--enroll-pw=peerpw --capacity=5Gi --name=org1-peer0 --ca-name=org1-ca.default \
--hosts=peer0-org1.localho.st --istio-port=443
kubectl wait --timeout=180s --for=condition=Running fabricpeers.hlf.kungfusoftware.es --all
Check that the peer is deployed and works:
openssl s_client -connect peer0-org1.localho.st:443
To deploy an Orderer organization we have to:
- Create a certification authority
- Register the user orderer with password ordererpw
- Create the orderer nodes
kubectl hlf ca create --image=$CA_IMAGE --version=$CA_VERSION --storage-class=$SC_NAME --capacity=1Gi --name=ord-ca \
--enroll-id=enroll --enroll-pw=enrollpw --hosts=ord-ca.localho.st --istio-port=443
kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all
Check that the certification authority is deployed and works:
curl -vik https://ord-ca.localho.st:443/cainfo
kubectl hlf ca register --name=ord-ca --user=orderer --secret=ordererpw \
--type=orderer --enroll-id enroll --enroll-secret=enrollpw --mspid=OrdererMSP --ca-url="https://ord-ca.localho.st:443"
kubectl hlf ordnode create --image=$ORDERER_IMAGE --version=$ORDERER_VERSION \
--storage-class=$SC_NAME --enroll-id=orderer --mspid=OrdererMSP \
--enroll-pw=ordererpw --capacity=2Gi --name=ord-node1 --ca-name=ord-ca.default \
--hosts=orderer0-ord.localho.st --admin-hosts=admin-orderer0-ord.localho.st --istio-port=443
kubectl hlf ordnode create --image=$ORDERER_IMAGE --version=$ORDERER_VERSION \
--storage-class=$SC_NAME --enroll-id=orderer --mspid=OrdererMSP \
--enroll-pw=ordererpw --capacity=2Gi --name=ord-node2 --ca-name=ord-ca.default \
--hosts=orderer1-ord.localho.st --admin-hosts=admin-orderer1-ord.localho.st --istio-port=443
kubectl hlf ordnode create --image=$ORDERER_IMAGE --version=$ORDERER_VERSION \
--storage-class=$SC_NAME --enroll-id=orderer --mspid=OrdererMSP \
--enroll-pw=ordererpw --capacity=2Gi --name=ord-node3 --ca-name=ord-ca.default \
--hosts=orderer2-ord.localho.st --admin-hosts=admin-orderer2-ord.localho.st --istio-port=443
kubectl hlf ordnode create --image=$ORDERER_IMAGE --version=$ORDERER_VERSION \
--storage-class=$SC_NAME --enroll-id=orderer --mspid=OrdererMSP \
--enroll-pw=ordererpw --capacity=2Gi --name=ord-node4 --ca-name=ord-ca.default \
--hosts=orderer3-ord.localho.st --admin-hosts=admin-orderer3-ord.localho.st --istio-port=443
kubectl wait --timeout=180s --for=condition=Running fabricorderernodes.hlf.kungfusoftware.es --all
Check that the orderer is running:
kubectl get pods
openssl s_client -connect orderer0-ord.localho.st:443
openssl s_client -connect orderer1-ord.localho.st:443
openssl s_client -connect orderer2-ord.localho.st:443
openssl s_client -connect orderer3-ord.localho.st:443
To create the channel we first need to create the wallet secret, which will contain the identities used by the operator to manage the channel.
# register
kubectl hlf ca register --name=ord-ca --user=admin --secret=adminpw \
--type=admin --enroll-id enroll --enroll-secret=enrollpw --mspid=OrdererMSP
# enroll
kubectl hlf ca enroll --name=ord-ca --namespace=default \
--user=admin --secret=adminpw --mspid OrdererMSP \
--ca-name tlsca --output orderermsp.yaml
kubectl hlf ca enroll --name=ord-ca --namespace=default \
--user=admin --secret=adminpw --mspid OrdererMSP \
--ca-name ca --output orderermspsign.yaml
# register
kubectl hlf ca register --name=org1-ca --user=admin --secret=adminpw \
--type=admin --enroll-id enroll --enroll-secret=enrollpw --mspid=Org1MSP
# enroll
kubectl hlf ca enroll --name=org1-ca --namespace=default \
--user=admin --secret=adminpw --mspid Org1MSP \
--ca-name tlsca --output org1msp-tlsca.yaml
# register
kubectl hlf ca register --name=org1-ca --namespace=default --user=admin --secret=adminpw \
--type=admin --enroll-id enroll --enroll-secret=enrollpw --mspid=Org1MSP
# enroll
kubectl hlf ca enroll --name=org1-ca --namespace=default \
--user=admin --secret=adminpw --mspid Org1MSP \
--ca-name ca --output org1msp.yaml
# enroll
kubectl hlf identity create --name org1-admin --namespace default \
--ca-name org1-ca --ca-namespace default \
--ca ca --mspid Org1MSP --enroll-id admin --enroll-secret adminpw
kubectl create secret generic wallet --namespace=default \
--from-file=org1msp.yaml=$PWD/org1msp.yaml \
--from-file=orderermsp.yaml=$PWD/orderermsp.yaml \
--from-file=orderermspsign.yaml=$PWD/orderermspsign.yaml
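You can check that the wallet secret was created and contains the three identity files:
kubectl describe secret wallet --namespace=default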
export PEER_ORG_SIGN_CERT=$(kubectl get fabriccas org1-ca -o=jsonpath='{.status.ca_cert}')
export PEER_ORG_TLS_CERT=$(kubectl get fabriccas org1-ca -o=jsonpath='{.status.tlsca_cert}')
export IDENT_8=$(printf "%8s" "")
export ORDERER_TLS_CERT=$(kubectl get fabriccas ord-ca -o=jsonpath='{.status.tlsca_cert}' | sed -e "s/^/${IDENT_8}/" )
export ORDERER0_TLS_CERT=$(kubectl get fabricorderernodes ord-node1 -o=jsonpath='{.status.tlsCert}' | sed -e "s/^/${IDENT_8}/" )
export ORDERER1_TLS_CERT=$(kubectl get fabricorderernodes ord-node2 -o=jsonpath='{.status.tlsCert}' | sed -e "s/^/${IDENT_8}/" )
export ORDERER2_TLS_CERT=$(kubectl get fabricorderernodes ord-node3 -o=jsonpath='{.status.tlsCert}' | sed -e "s/^/${IDENT_8}/" )
export ORDERER3_TLS_CERT=$(kubectl get fabricorderernodes ord-node4 -o=jsonpath='{.status.tlsCert}' | sed -e "s/^/${IDENT_8}/" )
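Before applying the channel resource, it is worth checking that none of these variables are empty, since an empty certificate would produce an invalid channel definition:
# each variable should print a PEM block beginning with "-----BEGIN CERTIFICATE-----"
# (the orderer certificates are prefixed with eight spaces so they line up with the YAML below)
echo "$ORDERER0_TLS_CERT" | head -n 2
echo "$PEER_ORG_SIGN_CERT" | head -n 2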
kubectl apply -f - <<EOF
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricMainChannel
metadata:
  name: demo
spec:
  name: demo
  adminOrdererOrganizations:
    - mspID: OrdererMSP
  adminPeerOrganizations:
    - mspID: Org1MSP
  channelConfig:
    application:
      acls: null
      capabilities:
        - V2_0
        - V2_5
      policies: null
    capabilities:
      - V2_0
    orderer:
      batchSize:
        absoluteMaxBytes: 1048576
        maxMessageCount: 10
        preferredMaxBytes: 524288
      batchTimeout: 2s
      capabilities:
        - V2_0
      etcdRaft:
        options:
          electionTick: 10
          heartbeatTick: 1
          maxInflightBlocks: 5
          snapshotIntervalSize: 16777216
          tickInterval: 500ms
      ordererType: etcdraft
      policies: null
      state: STATE_NORMAL
    policies: null
  externalOrdererOrganizations: []
  externalPeerOrganizations: []
  peerOrganizations:
    - mspID: Org1MSP
      caName: "org1-ca"
      caNamespace: "default"
  identities:
    OrdererMSP:
      secretKey: orderermsp.yaml
      secretName: wallet
      secretNamespace: default
    OrdererMSP-tls:
      secretKey: orderermsp.yaml
      secretName: wallet
      secretNamespace: default
    OrdererMSP-sign:
      secretKey: orderermspsign.yaml
      secretName: wallet
      secretNamespace: default
    Org1MSP:
      secretKey: org1msp.yaml
      secretName: wallet
      secretNamespace: default
  ordererOrganizations:
    - caName: "ord-ca"
      caNamespace: "default"
      externalOrderersToJoin:
        - host: ord-node1.default
          port: 7053
        - host: ord-node2.default
          port: 7053
        - host: ord-node3.default
          port: 7053
        - host: ord-node4.default
          port: 7053
      mspID: OrdererMSP
      ordererEndpoints:
        - orderer0-ord.localho.st:443
        - orderer1-ord.localho.st:443
        - orderer2-ord.localho.st:443
        - orderer3-ord.localho.st:443
      orderersToJoin: []
  orderers:
    - host: orderer0-ord.localho.st
      port: 443
      tlsCert: |-
${ORDERER0_TLS_CERT}
    - host: orderer1-ord.localho.st
      port: 443
      tlsCert: |-
${ORDERER1_TLS_CERT}
    - host: orderer2-ord.localho.st
      port: 443
      tlsCert: |-
${ORDERER2_TLS_CERT}
    - host: orderer3-ord.localho.st
      port: 443
      tlsCert: |-
${ORDERER3_TLS_CERT}
EOF
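The operator now generates the genesis block and joins the four orderer nodes to the channel. You can follow the progress through the resource itself (the exact status fields may vary between operator versions):
kubectl get fabricmainchannels.hlf.kungfusoftware.es
kubectl get fabricmainchannels.hlf.kungfusoftware.es demo -o yaml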
export IDENT_8=$(printf "%8s" "")
export ORDERER0_TLS_CERT=$(kubectl get fabricorderernodes ord-node1 -o=jsonpath='{.status.tlsCert}' | sed -e "s/^/${IDENT_8}/" )
kubectl apply -f - <<EOF
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricFollowerChannel
metadata:
  name: demo-org1msp
spec:
  anchorPeers:
    - host: peer0-org1.localho.st
      port: 443
  hlfIdentity:
    secretKey: org1msp.yaml
    secretName: wallet
    secretNamespace: default
  mspId: Org1MSP
  name: demo
  externalPeersToJoin: []
  orderers:
    - certificate: |
${ORDERER0_TLS_CERT}
      url: grpcs://ord-node1.default:7050
  peersToJoin:
    - name: org1-peer0
      namespace: default
EOF
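Similarly, you can check that the follower channel reconciled and that the peer joined the channel:
kubectl get fabricfollowerchannels.hlf.kungfusoftware.es
kubectl get fabricfollowerchannels.hlf.kungfusoftware.es demo-org1msp -o yaml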
To prepare the connection string, we have to:
- Get the connection string without users for the organizations Org1MSP and OrdererMSP
- Register a user in the certification authority for signing (register)
- Obtain the certificates using the previously created user (enroll)
- Attach the user to the connection string

- Get the connection string without users for the organizations Org1MSP and OrdererMSP
kubectl hlf inspect --output org1.yaml -o Org1MSP -o OrdererMSP
- Register a user in the certification authority for signing
kubectl hlf ca register --name=org1-ca --user=admin --secret=adminpw --type=admin \
--enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP
- Get the certificates using the user created above
kubectl hlf ca enroll --name=org1-ca --user=admin --secret=adminpw --mspid Org1MSP \
--ca-name ca --output peer-org1.yaml
- Attach the user to the connection string
kubectl hlf utils adduser --userPath=peer-org1.yaml --config=org1.yaml --username=admin --mspid=Org1MSP
# remove code.tar.gz and chaincode.tgz if they exist
rm -f code.tar.gz chaincode.tgz
export CHAINCODE_NAME=asset
export CHAINCODE_LABEL=asset
cat << METADATA-EOF > "metadata.json"
{
  "type": "ccaas",
  "label": "${CHAINCODE_LABEL}"
}
METADATA-EOF
## chaincode as a service
cat > "connection.json" <<CONN_EOF
{
  "address": "${CHAINCODE_NAME}:7052",
  "dial_timeout": "10s",
  "tls_required": false
}
CONN_EOF
tar cfz code.tar.gz connection.json
tar cfz chaincode.tgz metadata.json code.tar.gz
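The resulting package is a tarball containing metadata.json and the inner code.tar.gz; you can double-check its contents before installing it:
tar tzf chaincode.tgz
tar tzf code.tar.gz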
export PACKAGE_ID=$(kubectl hlf chaincode calculatepackageid --path=chaincode.tgz --language=node --label=$CHAINCODE_LABEL)
echo "PACKAGE_ID=$PACKAGE_ID"
kubectl hlf chaincode install --path=./chaincode.tgz \
--config=org1.yaml --language=golang --label=$CHAINCODE_LABEL --user=admin --peer=org1-peer0.default
The following command will create or update the FabricChaincode custom resource based on the package ID, chaincode name, and Docker image.
kubectl hlf externalchaincode sync --image=kfsoftware/chaincode-external:latest \
--name=$CHAINCODE_NAME \
--namespace=default \
--package-id=$PACKAGE_ID \
--tls-required=false \
--replicas=1
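This creates a FabricChaincode resource and a deployment running the chaincode service. Before approving the definition you can confirm both exist; the pod is typically named after the chaincode, though the exact name may differ:
kubectl get fabricchaincode.hlf.kungfusoftware.es -n default
kubectl get pods -n default | grep "$CHAINCODE_NAME"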
kubectl hlf chaincode queryinstalled --config=org1.yaml --user=admin --peer=org1-peer0.default
export SEQUENCE=1
export VERSION="1.0"
kubectl hlf chaincode approveformyorg --config=org1.yaml --user=admin --peer=org1-peer0.default \
--package-id=$PACKAGE_ID \
--version "$VERSION" --sequence "$SEQUENCE" --name=asset \
--policy="OR('Org1MSP.member')" --channel=demo
kubectl hlf chaincode commit --config=org1.yaml --user=admin --mspid=Org1MSP \
--version "$VERSION" --sequence "$SEQUENCE" --name=asset \
--policy="OR('Org1MSP.member')" --channel=demo
kubectl hlf chaincode invoke --config=org1.yaml \
--user=admin --peer=org1-peer0.default \
--chaincode=asset --channel=demo \
--fcn=initLedger -a '[]'
kubectl hlf chaincode query --config=org1.yaml \
--user=admin --peer=org1-peer0.default \
--chaincode=asset --channel=demo \
--fcn=GetAllAssets -a '[]'
At this point, you should have:
- An ordering service with 4 nodes and a CA
- A peer organization with a peer and a CA
- A channel named demo
- A chaincode installed on peer0
- A chaincode approved and committed
If something went wrong or didn't work, please open an issue.
To clean up the environment, delete all the resources created during this tutorial:
kubectl delete fabricorderernodes.hlf.kungfusoftware.es --all-namespaces --all
kubectl delete fabricpeers.hlf.kungfusoftware.es --all-namespaces --all
kubectl delete fabriccas.hlf.kungfusoftware.es --all-namespaces --all
kubectl delete fabricchaincode.hlf.kungfusoftware.es --all-namespaces --all
kubectl delete fabricmainchannels --all-namespaces --all
kubectl delete fabricfollowerchannels --all-namespaces --all
Chaincode installation/build can fail due to an unsupported local Kubernetes distribution such as Minikube.
$ kubectl hlf chaincode install --path=./fixtures/chaincodes/fabcar/go \
--config=org1.yaml --language=golang --label=fabcar --user=admin --peer=org1-peer0.default
Error: Transaction processing for endorser [192.168.49.2:31278]: Chaincode status Code: (500) UNKNOWN.
Description: failed to invoke backing implementation of 'InstallChaincode': could not build chaincode:
external builder failed: external builder failed to build: external builder 'my-golang-builder' failed:
exit status 1
If your purpose is to test hlf-operator, please consider switching to KinD, which is tested and supported.