diff --git a/off_chain_data/README.md b/off_chain_data/README.md
new file mode 100644
index 0000000000..d5b64d4725
--- /dev/null
+++ b/off_chain_data/README.md
@@ -0,0 +1,375 @@
# Off Chain data

This sample demonstrates how you can use [Peer channel-based event services](https://hyperledger-fabric.readthedocs.io/en/release-1.4/peer_event_services.html)
to replicate the data on your blockchain network to an off-chain database.
Using an off-chain database allows you to analyze the data from your network or
build a dashboard without degrading the performance of your application.

This sample uses the [Fabric network event listener](https://fabric-sdk-node.github.io/release-1.4/tutorial-listening-to-events.html) from the Node.js Fabric SDK
to write data to a local instance of CouchDB.

## Getting started

This sample uses Node Fabric SDK application code similar to the `fabcar` sample
to connect to a network created using the `first-network` sample.

### Install dependencies

You need to install Node.js version 8.9.x to use the sample application code.
Execute the following commands to install the required dependencies:

```
cd fabric-samples/off_chain_data
npm install
```

### Configuration

The configuration for the listener is stored in the `config.json` file:

```
{
  "peer_name": "peer0.org1.example.com",
  "channelid": "mychannel",
  "use_couchdb":true,
  "create_history_log":true,
  "couchdb_address": "http://localhost:5990"
}
```

* `peer_name:` the target peer for the listener.
* `channelid:` the channel name for block events.
* `use_couchdb:` if set to true, events will be stored in a local instance of
  CouchDB. If set to false, only a local log of events will be stored.
* `create_history_log:` if true, a local log file will be created with all of
  the block changes.
* `couchdb_address:` the local address of the off-chain CouchDB database.

### Create an instance of CouchDB

If you set the `use_couchdb` option to true in `config.json`, you can run the
following command to start a local instance of CouchDB using docker:

```
docker run --publish 5990:5984 --detach --name offchaindb hyperledger/fabric-couchdb
docker start offchaindb
```

### Starting the Network

Use the following command to start the sample network:

```
./startFabric.sh
```

This command uses the `first-network` sample to deploy a Fabric network with an
ordering service, two peer organizations with two peers each, and a channel
named `mychannel`. The marbles chaincode will be installed on all four peers and
instantiated on the channel.

### Starting the Channel Event Listener

Once the network has started, we can use the Node.js SDK to create the user and
certificates our listener application will use to interact with the network. Run
the following command to enroll the admin user:

```
node enrollAdmin.js
```

You can then run the following command to register and enroll an application
user:

```
node registerUser.js
```

We can then use our application user to start the block event listener:

```
node blockEventListener.js
```

If the command is successful, you should see the output of the listener reading
the configuration blocks of `mychannel` in addition to the blocks that recorded
the approval and commitment of the marbles chaincode definition.
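For reference, the following condensed sketch (adapted from `blockEventListener.js` in this
sample, with the block processing loop and most error handling omitted) shows how the
listener connects with the wallet identity created above and resumes from the block number
stored in `nextblock.txt`:

```javascript
// Condensed sketch of what blockEventListener.js does at startup (not the full program).
const { FileSystemWallet, Gateway } = require('fabric-network');
const fs = require('fs');

async function startListener() {
    const wallet = new FileSystemWallet('./wallet');
    const gateway = new Gateway();
    await gateway.connect('../first-network/connection-org1.json',
        { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });

    const network = await gateway.getNetwork('mychannel');

    // Resume from the last processed block, or start at block 0 on the first run.
    const nextBlock = fs.existsSync('nextblock.txt') ? fs.readFileSync('nextblock.txt', 'utf8') : '0';

    await network.addBlockListener('offchain-listener', (err, block) => {
        if (err) { console.error(err); return; }
        console.log(`Added block ${block.header.number} to ProcessingMap`);
    }, { startBlock: parseInt(nextBlock, 10) });
}

startListener();
```

The expected output of the full listener is shown below.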
```
Listening for block events, nextblock: 0
Added block 0 to ProcessingMap
Added block 1 to ProcessingMap
Added block 2 to ProcessingMap
Added block 3 to ProcessingMap
Added block 4 to ProcessingMap
Added block 5 to ProcessingMap
Added block 6 to ProcessingMap
------------------------------------------------
Block Number: 0
------------------------------------------------
Block Number: 1
------------------------------------------------
Block Number: 2
------------------------------------------------
Block Number: 3
Block Timestamp: 2019-08-08T19:47:56.148Z
ChaincodeID: _lifecycle
[]
------------------------------------------------
Block Number: 4
Block Timestamp: 2019-08-08T19:48:00.234Z
ChaincodeID: _lifecycle
[]
------------------------------------------------
Block Number: 5
Block Timestamp: 2019-08-08T19:48:14.092Z
ChaincodeID: _lifecycle
[ { key: 'namespaces/fields/marbles/Collections',
    is_delete: false,
    value: '\u0012\u0000' },
  { key: 'namespaces/fields/marbles/EndorsementInfo',
    is_delete: false,
    value: '\u0012\r\n\u00031.0\u0010\u0001\u001a\u0004escc' },
  { key: 'namespaces/fields/marbles/Sequence',
    is_delete: false,
    value: '\b\u0001' },
  { key: 'namespaces/fields/marbles/ValidationInfo',
    is_delete: false,
    value: '\u00122\n\u0004vscc\u0012*\n(\u0012\f\u0012\n\b\u0002\u0012\u0002\b\u0000\u0012\u0002\b\u0001\u001a\u000b\u0012\t\n\u0007Org1MSP\u001a\u000b\u0012\t\n\u0007Org2MSP' },
  { key: 'namespaces/metadata/marbles',
    is_delete: false,
    value: '\n\u0013ChaincodeDefinition\u0012\bSequence\u0012\u000fEndorsementInfo\u0012\u000eValidationInfo\u0012\u000bCollections' } ]
```

`blockEventListener.js` creates a listener named "offchain-listener" on the
channel `mychannel`. The listener writes each block added to the channel to a
processing map called `ProcessingMap` for temporary storage and ordering purposes.
`blockEventListener.js` uses `nextblock.txt` to keep track of the latest block
that was retrieved by the listener. The block number in `nextblock.txt` may be
set to a previous block number in order to replay previous blocks. The file may
also be deleted, in which case all blocks will be replayed when the block
listener is started.

`blockProcessing.js` runs as a daemon and pulls each block in order from the
`ProcessingMap`. It then uses the read-write set of that block to extract the
latest key-value data and store it in the database. The configuration blocks of
`mychannel` did not add any data to the database because those blocks do not
contain a read-write set.

The channel event listener also writes metadata from each block to a log file
named `channelid_chaincodeid.log`. In this example, events will be written to a
file named `mychannel_marbles.log`. This allows you to record a history of the
changes made by each block for each key, in addition to storing the latest value
of the world state.

**Note:** Leave `blockEventListener.js` running in a terminal window. Open a new
window to execute the next parts of the demo.

### Generate data on the blockchain

Now that our listener is set up, we can generate data using the marbles
chaincode and use our application to replicate the data to our database. Open a
new terminal and navigate to the `fabric-samples/off_chain_data` directory.

You can use the `addMarbles.js` file to add random sample data to the
blockchain. The file uses the configuration information stored in
`addMarbles.json` to create a series of marbles.
This file will be created during the first execution of `addMarbles.js` if it
does not exist. The program can be run multiple times without changing the
properties; the `nextMarbleNumber` will be incremented and stored in the
`addMarbles.json` file.

```
{
  "nextMarbleNumber": 100,
  "numberMarblesToAdd": 20
}
```

Open a new window and run the following command to add 20 marbles to the
blockchain:

```
node addMarbles.js
```

After the marbles have been added to the ledger, use the following command to
transfer one of the marbles to a new owner:

```
node transferMarble.js marble110 james
```

Now run the following command to delete the marble that was transferred:

```
node deleteMarble.js marble110
```

## Off-chain CouchDB storage

If you followed the instructions above and set `use_couchdb` to true,
`blockEventListener.js` will create two tables in the local instance of
CouchDB. Two tables are created for each combination of channel and chaincode.

The first table is an offline representation of the current world state of the
blockchain ledger. This table is built using the read-write set data from the
blocks. If the listener is running, this table should match the latest values
in the state database running on your peer. The table is named after the
channel and chaincode, `mychannel_marbles` in this example. You can navigate to
this table using your browser:
http://127.0.0.1:5990/mychannel_marbles/_all_docs

A second table records each block as a historical record entry, and is built
using the block data that was recorded in the log file. The table name appends
`_history` to the name of the first table, and is named
`mychannel_marbles_history` in this example. You can also navigate to this
table using your browser:
http://127.0.0.1:5990/mychannel_marbles_history/_all_docs

### Configure a map/reduce view for summarizing counts of marbles by color

Now that we have state and history data replicated to tables in CouchDB, we can
use the following commands to query our off-chain data. We will also add an
index to support a more complex query. Note that if `blockEventListener.js` is
not running, the database commands below may fail since the databases are only
created when events are received.

Open a new terminal window and execute the following:

```
curl -X PUT http://127.0.0.1:5990/mychannel_marbles/_design/colorviewdesign -d '{"views":{"colorview":{"map":"function (doc) { emit(doc.color, 1);}","reduce":"function ( keys , values , combine ) {return sum( values )}"}}}' -H 'Content-Type:application/json'
```

Execute a query to retrieve the total number of marbles (reduce function):

```
curl -X GET http://127.0.0.1:5990/mychannel_marbles/_design/colorviewdesign/_view/colorview?reduce=true
```

If successful, this command will return the number of marbles in the blockchain
world state, without having to query the blockchain ledger:

```
{"rows":[
    {"key":null,"value":19}
  ]}
```

Execute a new query to retrieve the number of marbles by color (map function):

```
curl -X GET http://127.0.0.1:5990/mychannel_marbles/_design/colorviewdesign/_view/colorview?group=true
```

The command will return a list of marbles by color from the CouchDB database.
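The same view can also be queried programmatically. The following sketch is not part of the
sample; it uses the `nano` CouchDB client that `blockEventListener.js` already requires (note
that `nano` is not listed in `package.json`, so you may need to run `npm install nano` first).
The database and view names match the ones created above:

```javascript
// Hypothetical example: query the colorview view from Node.js using nano.
const nano = require('nano')('http://localhost:5990');
const db = nano.use('mychannel_marbles');

// group=true returns one row per marble color, mirroring the curl map query above.
db.view('colorviewdesign', 'colorview', { group: true }, function (err, body) {
    if (err) {
        return console.error(err);
    }
    body.rows.forEach((row) => console.log(`${row.key}: ${row.value}`));
});
```

The output of the curl command above is shown below.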
```
{"rows":[
    {"key":"blue","value":2},
    {"key":"green","value":2},
    {"key":"purple","value":3},
    {"key":"red","value":4},
    {"key":"white","value":6},
    {"key":"yellow","value":2}
  ]}
```

To run a more complex command that reads through the block history database, we
will create an index of the blocknumber, sequence, and key fields. This index
will support a query that traces the history of each marble. Execute the
following command to create the index:

```
curl -X POST http://127.0.0.1:5990/mychannel_marbles_history/_index -d '{"index":{"fields":["blocknumber", "sequence", "key"]},"name":"marble_history"}' -H 'Content-Type:application/json'
```

Now execute a query to retrieve the history of the marble we transferred and
then deleted:

```
curl -X POST http://127.0.0.1:5990/mychannel_marbles_history/_find -d '{"selector":{"key":{"$eq":"marble110"}}, "fields":["blocknumber","is_delete","value"],"sort":[{"blocknumber":"asc"}, {"sequence":"asc"}]}' -H 'Content-Type:application/json'
```

You should see the transaction history of the marble that was created,
transferred, and then removed from the ledger:

```
{"docs":[
{"blocknumber":12,"is_delete":false,"value":"{\"docType\":\"marble\",\"name\":\"marble110\",\"color\":\"blue\",\"size\":60,\"owner\":\"debra\"}"},
{"blocknumber":22,"is_delete":false,"value":"{\"docType\":\"marble\",\"name\":\"marble110\",\"color\":\"blue\",\"size\":60,\"owner\":\"james\"}"},
{"blocknumber":23,"is_delete":true,"value":""}
  ]}
```

## Getting historical data from the network

You can also use the `blockEventListener.js` program to retrieve historical
data from your network. This allows you to create a database that is up to date
with the latest data from the network, or to recover any blocks that the
program may have missed.

If you ran through the example steps above, navigate back to the terminal
window where `blockEventListener.js` is running and stop it. Once the listener
is no longer running, use the following command to add 20 more marbles to the
ledger:

```
node addMarbles.js
```

Because the listener is not running, the new marbles are not added to your
CouchDB database. If you check the current state table using the reduce
command, you will only see the original marbles in your database.

```
curl -X GET http://127.0.0.1:5990/mychannel_marbles/_design/colorviewdesign/_view/colorview?reduce=true
```

To add the new data to your off-chain database, remove the `nextblock.txt` file
that kept track of the latest block read by `blockEventListener.js`:

```
rm nextblock.txt
```

You can now re-run the channel listener to read every block from the channel:

```
node blockEventListener.js
```

This will rebuild the CouchDB tables and include the 20 marbles that were just
added to the ledger. If you run the reduce command against your database one
more time,

```
curl -X GET http://127.0.0.1:5990/mychannel_marbles/_design/colorviewdesign/_view/colorview?reduce=true
```

you will see that all of the marbles have been added to your database:

```
{"rows":[
{"key":null,"value":39}
]}
```

## Clean up

If you are finished using the sample application, you can bring down the
network and any accompanying artifacts.

* Change to the `fabric-samples/first-network` directory.
* To stop the network, run `./byfn.sh down`.
* Change back to the `fabric-samples/off_chain_data` directory.
* Remove the certificates you generated by deleting the `wallet` folder.
+* Delete `nextblock.txt` so you can start with the first block next time you + operate the listener. You can also reset the `nextMarbleNumber` in + `addMarbles.json` to 100. +* To take down the local CouchDB database, first stop and then remove the + docker container: + ``` + docker stop offchaindb + docker rm offchaindb + ``` diff --git a/off_chain_data/addMarbles.js b/off_chain_data/addMarbles.js new file mode 100644 index 0000000000..6219a2df1f --- /dev/null +++ b/off_chain_data/addMarbles.js @@ -0,0 +1,117 @@ +/* + * Copyright IBM Corp. All Rights Reserved. + * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +/* + * + * addMarbles.js will add random sample data to blockchain. + * + * $ node addMarbles.js + * + * addMarbles will add 10 marbles by default with a starting marble name of "marble100". + * Additional marbles will be added by incrementing the number at the end of the marble name. + * + * The properties for adding marbles are stored in addMarbles.json. This file will be created + * during the first execution of the utility if it does not exist. The utility can be run + * multiple times without changing the properties. The nextMarbleNumber will be incremented and + * stored in the JSON file. + * + * { + * "nextMarbleNumber": 100, + * "numberMarblesToAdd": 10 + * } + * + */ + +'use strict'; + +const { FileSystemWallet, Gateway } = require('fabric-network'); +const fs = require('fs'); +const path = require('path'); + +const addMarblesConfigFile = path.resolve(__dirname, 'addMarbles.json'); + +const colors=[ 'blue', 'red', 'yellow', 'green', 'white', 'purple' ]; +const owners=[ 'tom', 'fred', 'julie', 'james', 'janet', 'henry', 'alice', 'marie', 'sam', 'debra', 'nancy']; +const sizes=[ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ]; +const docType='marble' + +const config = require('./config.json'); +const channelid = config.channelid; + +async function main() { + + try { + + let nextMarbleNumber; + let numberMarblesToAdd; + let addMarblesConfig; + + // check to see if there is a config json defined + if (fs.existsSync(addMarblesConfigFile)) { + // read file the next marble and number of marbles to create + let addMarblesConfigJSON = fs.readFileSync(addMarblesConfigFile, 'utf8'); + addMarblesConfig = JSON.parse(addMarblesConfigJSON); + nextMarbleNumber = addMarblesConfig.nextMarbleNumber; + numberMarblesToAdd = addMarblesConfig.numberMarblesToAdd; + } else { + nextMarbleNumber = 100; + numberMarblesToAdd = 20; + // create a default config and save + addMarblesConfig = new Object; + addMarblesConfig.nextMarbleNumber = nextMarbleNumber; + addMarblesConfig.numberMarblesToAdd = numberMarblesToAdd; + fs.writeFileSync(addMarblesConfigFile, JSON.stringify(addMarblesConfig, null, 2)); + } + + // Parse the connection profile. This would be the path to the file downloaded + // from the IBM Blockchain Platform operational console. + const ccpPath = path.resolve(__dirname, '..', 'first-network', 'connection-org1.json'); + const ccp = JSON.parse(fs.readFileSync(ccpPath, 'utf8')); + + // Configure a wallet. This wallet must already be primed with an identity that + // the application can use to interact with the peer node. + const walletPath = path.resolve(__dirname, 'wallet'); + const wallet = new FileSystemWallet(walletPath); + + // Create a new gateway, and connect to the gateway peer node(s). The identity + // specified must already exist in the specified wallet. 
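        // Discovery is enabled so the SDK can locate network peers from the connection profile;
        // asLocalhost is true because the first-network sample exposes all nodes on localhost via docker port mappings.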
+ const gateway = new Gateway(); + await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } }); + + // Get the network channel that the smart contract is deployed to. + const network = await gateway.getNetwork(channelid); + + // Get the smart contract from the network channel. + const contract = network.getContract('marbles'); + + for (var counter = nextMarbleNumber; counter < nextMarbleNumber + numberMarblesToAdd; counter++) { + + var randomColor = Math.floor(Math.random() * (6)); + var randomOwner = Math.floor(Math.random() * (11)); + var randomSize = Math.floor(Math.random() * (10)); + + // Submit the 'initMarble' transaction to the smart contract, and wait for it + // to be committed to the ledger. + await contract.submitTransaction('initMarble', docType+counter, colors[randomColor], ''+sizes[randomSize], owners[randomOwner]); + console.log("Adding marble: " + docType + counter + " owner:" + owners[randomOwner] + " color:" + colors[randomColor] + " size:" + '' + sizes[randomSize] ); + + } + + await gateway.disconnect(); + + addMarblesConfig.nextMarbleNumber = nextMarbleNumber + numberMarblesToAdd; + + fs.writeFileSync(addMarblesConfigFile, JSON.stringify(addMarblesConfig, null, 2)); + + } catch (error) { + console.error(`Failed to submit transaction: ${error}`); + process.exit(1); + } + +} + +main(); diff --git a/off_chain_data/blockEventListener.js b/off_chain_data/blockEventListener.js new file mode 100644 index 0000000000..5b6a5ec8c0 --- /dev/null +++ b/off_chain_data/blockEventListener.js @@ -0,0 +1,186 @@ +/* + * Copyright IBM Corp. All Rights Reserved. + * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +/* + +blockEventListener.js is an nodejs application to listen for block events from +a specified channel. + +Configuration is stored in config.json: + +{ + "peer_name": "peer0.org1.example.com", + "channelid": "mychannel", + "use_couchdb":false, + "couchdb_address": "http://localhost:5990" +} + +peer_name: target peer for the listener +channelid: channel name for block events +use_couchdb: if set to true, events will be stored in a local couchdb +couchdb_address: local address for an off chain couchdb database + +Note: If use_couchdb is set to false, only a local log of events will be stored. + +Usage: + +node bockEventListener.js + +The block event listener will log events received to the console and write event blocks to +a log file based on the channelid and chaincode name. + +The event listener stores the next block to retrieve in a file named nextblock.txt. This file +is automatically created and initialized to zero if it does not exist. 
+ +*/ + +'use strict'; + +const { FileSystemWallet, Gateway } = require('fabric-network'); +const fs = require('fs'); +const path = require('path'); + +const couchdbutil = require('./couchdbutil.js'); +const blockProcessing = require('./blockProcessing.js'); + +const ccpPath = path.resolve(__dirname, '..', 'first-network', 'connection-org1.json'); +const ccpJSON = fs.readFileSync(ccpPath, 'utf8'); +const ccp = JSON.parse(ccpJSON); + +const config = require('./config.json'); +const channelid = config.channelid; +const peer_name = config.peer_name; +const use_couchdb = config.use_couchdb; +const couchdb_address = config.couchdb_address; + +const configPath = path.resolve(__dirname, 'nextblock.txt'); + +const nano = require('nano')(couchdb_address); + +// simple map to hold blocks for processing +class BlockMap { + constructor() { + this.list = [] + } + get(key) { + key = parseInt(key, 10).toString(); + return this.list[`block${key}`]; + } + set(key, value) { + this.list[`block${key}`] = value; + } + remove(key) { + key = parseInt(key, 10).toString(); + delete this.list[`block${key}`]; + } +} + +let ProcessingMap = new BlockMap() + +async function main() { + try { + + // initialize the next block to be 0 + let nextBlock = 0; + + // check to see if there is a next block already defined + if (fs.existsSync(configPath)) { + // read file containing the next block to read + nextBlock = fs.readFileSync(configPath, 'utf8'); + } else { + // store the next block as 0 + fs.writeFileSync(configPath, parseInt(nextBlock, 10)) + } + + // Create a new file system based wallet for managing identities. + const walletPath = path.join(process.cwd(), 'wallet'); + const wallet = new FileSystemWallet(walletPath); + console.log(`Wallet path: ${walletPath}`); + + // Check to see if we've already enrolled the user. + const userExists = await wallet.exists('user1'); + if (!userExists) { + console.log('An identity for the user "user1" does not exist in the wallet'); + console.log('Run the enrollUser.js application before retrying'); + return; + } + + // Create a new gateway for connecting to our peer node. + const gateway = new Gateway(); + await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } }); + + // Get the network (channel) our contract is deployed to. 
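        // Note: the channel name 'mychannel' is hard-coded on the next line; the channelid value
        // loaded from config.json above could be used here instead.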
+ const network = await gateway.getNetwork('mychannel'); + + const listener = await network.addBlockListener('offchain-listener', + async (err, block) => { + if (err) { + console.error(err); + return; + } + // Add the block to the processing map by block number + await ProcessingMap.set(block.header.number, block); + + console.log(`Added block ${block.header.number} to ProcessingMap`) + }, + // set the starting block for the listener + { startBlock: parseInt(nextBlock, 10) } + ); + + console.log(`Listening for block events, nextblock: ${nextBlock}`); + + // start processing, looking for entries in the ProcessingMap + processPendingBlocks(ProcessingMap); + + } catch (error) { + console.error(`Failed to evaluate transaction: ${error}`); + process.exit(1); + } +} + +// listener function to check for blocks in the ProcessingMap +async function processPendingBlocks(ProcessingMap) { + + setTimeout(async () => { + + // get the next block number from nextblock.txt + let nextBlockNumber = fs.readFileSync(configPath, 'utf8'); + let processBlock; + + do { + + // get the next block to process from the ProcessingMap + processBlock = ProcessingMap.get(nextBlockNumber) + + if (processBlock == undefined) { + break; + } + + try { + await blockProcessing.processBlockEvent(channelid, processBlock, use_couchdb, nano) + } catch (error) { + console.error(`Failed to process block: ${error}`); + } + + // if successful, remove the block from the ProcessingMap + ProcessingMap.remove(nextBlockNumber); + + // increment the next block number to the next block + fs.writeFileSync(configPath, parseInt(nextBlockNumber, 10) + 1) + + // retrive the next block number to process + nextBlockNumber = fs.readFileSync(configPath, 'utf8'); + + } while (true); + + processPendingBlocks(ProcessingMap); + + }, 250); + +} + +main(); diff --git a/off_chain_data/blockProcessing.js b/off_chain_data/blockProcessing.js new file mode 100644 index 0000000000..cac895a9b1 --- /dev/null +++ b/off_chain_data/blockProcessing.js @@ -0,0 +1,201 @@ +/* + * Copyright IBM Corp. All Rights Reserved. 
+ * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +'use strict'; + +const fs = require('fs'); +const path = require('path'); + +const couchdbutil = require('./couchdbutil.js'); + +const configPath = path.resolve(__dirname, 'nextblock.txt'); + +exports.processBlockEvent = async function (channelname, block, use_couchdb, nano) { + + return new Promise((async (resolve, reject) => { + + // reject the block if the block number is not defined + if (block.header.number == undefined) { + reject(new Error('Undefined block number')); + } + + const blockNumber = block.header.number + + console.log(`------------------------------------------------`); + console.log(`Block Number: ${blockNumber}`); + + // reject if the data is not set + if (block.data.data == undefined) { + reject(new Error('Data block is not defined')); + } + + const dataArray = block.data.data; + + for (var dataItem in dataArray) { + + // reject if a timestamp is not set + if (dataArray[dataItem].payload.header.channel_header.timestamp == undefined) { + reject(new Error('Block timestamp is not defined')); + } + + const timestamp = dataArray[dataItem].payload.header.channel_header.timestamp; + + // reject if no actions are set + if (dataArray[dataItem].payload.data.actions == undefined) { + break; + } + + const actions = dataArray[dataItem].payload.data.actions; + + // iterate through all actions + for (var actionItem in actions) { + + // reject if a chaincode id is not defined + if (actions[actionItem].payload.chaincode_proposal_payload.input.chaincode_spec.chaincode_id.name == undefined) { + reject(new Error('Chaincode name is not defined')); + } + + const chaincodeID = actions[actionItem].payload.chaincode_proposal_payload.input.chaincode_spec.chaincode_id.name + + // reject if there is no readwrite set + if (actions[actionItem].payload.action.proposal_response_payload.extension.results.ns_rwset == undefined) { + reject(new Error('No readwrite set is defined')); + } + + const rwSet = actions[actionItem].payload.action.proposal_response_payload.extension.results.ns_rwset + + for (var record in rwSet) { + + // ignore lscc events + if (rwSet[record].namespace != 'lscc') { + // create object to store properties + const writeObject = new Object(); + writeObject.blocknumber = blockNumber; + writeObject.chaincodeid = chaincodeID; + writeObject.timestamp = timestamp; + writeObject.values = rwSet[record].rwset.writes; + + console.log(`Block Timestamp: ${writeObject.timestamp}`); + console.log(`ChaincodeID: ${writeObject.chaincodeid}`); + console.log(writeObject.values); + + const logfilePath = path.resolve(__dirname, 'nextblock.txt'); + + // send the object to a log file + fs.appendFileSync(channelname + '_' + chaincodeID + '.log', JSON.stringify(writeObject) + "\n"); + + // if couchdb is configured, then write to couchdb + if (use_couchdb) { + try { + await writeValuesToCouchDBP(nano, channelname, writeObject); + } catch (error) { + + } + } + } + }; + }; + }; + + // update the nextblock.txt file to retrieve the next block + fs.writeFileSync(configPath, parseInt(blockNumber, 10) + 1) + + resolve(true); + + })); +} + +async function writeValuesToCouchDBP(nano, channelname, writeObject) { + + return new Promise((async (resolve, reject) => { + + try { + + // define the database for saving block events by key - this emulates world state + const dbname = channelname + '_' + writeObject.chaincodeid; + // define the database for saving all block events - this emulates history + const historydbname = channelname + '_' + writeObject.chaincodeid 
+ '_history'; + // set values to the array of values received + const values = writeObject.values; + + try { + for (var sequence in values) { + let keyvalue = + values[ + sequence + ]; + + if ( + keyvalue.is_delete == + true + ) { + await couchdbutil.deleteRecord( + nano, + dbname, + keyvalue.key + ); + } else { + if ( + isJSON( + keyvalue.value + ) + ) { + // insert or update value by key - this emulates world state behavior + await couchdbutil.writeToCouchDB( + nano, + dbname, + keyvalue.key, + JSON.parse( + keyvalue.value + ) + ); + } + } + + // add additional fields for history + keyvalue.timestamp = + writeObject.timestamp; + keyvalue.blocknumber = parseInt( + writeObject.blocknumber, + 10 + ); + keyvalue.sequence = parseInt( + sequence, + 10 + ); + + await couchdbutil.writeToCouchDB( + nano, + historydbname, + null, + keyvalue + ); + } + } catch (error) { + console.log(error); + reject(error); + } + + } catch (error) { + console.error(`Failed to write to couchdb: ${error}`); + reject(error); + } + + resolve(true); + + })); + +} + +function isJSON(value) { + try { + JSON.parse(value); + } catch (e) { + return false; + } + return true; +} \ No newline at end of file diff --git a/off_chain_data/config.json b/off_chain_data/config.json new file mode 100644 index 0000000000..4df92335b7 --- /dev/null +++ b/off_chain_data/config.json @@ -0,0 +1,7 @@ +{ + "peer_name": "peer0.org1.example.com", + "channelid": "mychannel", + "use_couchdb":true, + "create_history_log":true, + "couchdb_address": "http://localhost:5990" +} diff --git a/off_chain_data/couchdbutil.js b/off_chain_data/couchdbutil.js new file mode 100644 index 0000000000..5e6a532961 --- /dev/null +++ b/off_chain_data/couchdbutil.js @@ -0,0 +1,111 @@ +/* + * Copyright IBM Corp. All Rights Reserved. 
+ * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +'use strict'; + +exports.createDatabaseIfNotExists = function (nano, dbname) { + + return new Promise((async (resolve, reject) => { + await nano.db.get(dbname, async function (err, body) { + if (err) { + if (err.statusCode == 404) { + await nano.db.create(dbname, function (err, body) { + if (!err) { + resolve(true); + } else { + reject(err); + } + }); + } else { + reject(err); + } + } else { + resolve(true); + } + }); + })); +} + +exports.writeToCouchDB = async function (nano, dbname, key, value) { + + return new Promise((async (resolve, reject) => { + + try { + await this.createDatabaseIfNotExists(nano, dbname); + } catch (error) { + + } + + const db = nano.use(dbname); + + // If a key is not specified, then this is an insert + if (key == null) { + db.insert(value, async function (err, body, header) { + if (err) { + reject(err); + } + } + ); + } else { + + // If a key is specified, then attempt to retrieve the record by key + db.get(key, async function (err, body) { + // parse the value + const updateValue = value; + // if the record was found, then update the revision to allow the update + if (err == null) { + updateValue._rev = body._rev + } + // update or insert the value + db.insert(updateValue, key, async function (err, body, header) { + if (err) { + reject(err); + } + }); + }); + } + + resolve(true); + + })); +} + + +exports.deleteRecord = async function (nano, dbname, key) { + + return new Promise((async (resolve, reject) => { + + try { + await this.createDatabaseIfNotExists(nano, dbname); + } catch (error) { + + } + + const db = nano.use(dbname); + + // If a key is specified, then attempt to retrieve the record by key + db.get(key, async function (err, body) { + + // if the record was found, then update the revision to allow the update + if (err == null) { + + let revision = body._rev + + // update or insert the value + db.destroy(key, revision, async function (err, body, header) { + if (err) { + reject(err); + } + }); + + } + }); + + resolve(true); + + })); +} diff --git a/off_chain_data/deleteMarble.js b/off_chain_data/deleteMarble.js new file mode 100644 index 0000000000..dacf029bb0 --- /dev/null +++ b/off_chain_data/deleteMarble.js @@ -0,0 +1,70 @@ +/* + * Copyright IBM Corp. All Rights Reserved. + * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +/* + * + * deleteMarble.js will delete a specified marble. Example: + * + * $ node deleteMarble.js marble100 + * + * The utility is meant to demonstrate delete block events. + */ + +'use strict'; + +const { FileSystemWallet, Gateway } = require('fabric-network'); +const fs = require('fs'); +const path = require('path'); + +const config = require('./config.json'); +const channelid = config.channelid; + +async function main() { + + if (process.argv[2] == undefined) { + console.log("Usage: node deleteMarble marbleId"); + process.exit(1); + } + + const deletekey = process.argv[2]; + + try { + + // Parse the connection profile. This would be the path to the file downloaded + // from the IBM Blockchain Platform operational console. + const ccpPath = path.resolve(__dirname, '..', 'first-network', 'connection-org1.json'); + const ccp = JSON.parse(fs.readFileSync(ccpPath, 'utf8')); + + // Configure a wallet. This wallet must already be primed with an identity that + // the application can use to interact with the peer node. 
+ const walletPath = path.resolve(__dirname, 'wallet'); + const wallet = new FileSystemWallet(walletPath); + + // Create a new gateway, and connect to the gateway peer node(s). The identity + // specified must already exist in the specified wallet. + const gateway = new Gateway(); + await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } }); + + // Get the network channel that the smart contract is deployed to. + const network = await gateway.getNetwork(channelid); + + // Get the smart contract from the network channel. + const contract = network.getContract('marbles'); + + await contract.submitTransaction('delete', deletekey); + console.log("Deleted marble: " + deletekey); + + await gateway.disconnect(); + + } catch (error) { + console.error(`Failed to submit transaction: ${error}`); + process.exit(1); + } + +} + +main(); diff --git a/off_chain_data/enrollAdmin.js b/off_chain_data/enrollAdmin.js new file mode 100644 index 0000000000..1b2e332bb8 --- /dev/null +++ b/off_chain_data/enrollAdmin.js @@ -0,0 +1,50 @@ +/* + * Copyright IBM Corp. All Rights Reserved. + * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +'use strict'; + +const FabricCAServices = require('fabric-ca-client'); +const { FileSystemWallet, X509WalletMixin } = require('fabric-network'); +const fs = require('fs'); +const path = require('path'); + +const ccpPath = path.resolve(__dirname, '..', 'first-network', 'connection-org1.json'); +const ccpJSON = fs.readFileSync(ccpPath, 'utf8'); +const ccp = JSON.parse(ccpJSON); + +async function main() { + try { + + // Create a new CA client for interacting with the CA. + const caURL = ccp.certificateAuthorities['ca.org1.example.com'].url; + const ca = new FabricCAServices(caURL); + + // Create a new file system based wallet for managing identities. + const walletPath = path.join(process.cwd(), 'wallet'); + const wallet = new FileSystemWallet(walletPath); + console.log(`Wallet path: ${walletPath}`); + + // Check to see if we've already enrolled the admin user. + const adminExists = await wallet.exists('admin'); + if (adminExists) { + console.log('An identity for the admin user "admin" already exists in the wallet'); + return; + } + + // Enroll the admin user, and import the new identity into the wallet. 
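        // The enrollment ID and secret below ('admin' / 'adminpw') are the bootstrap registrar
        // credentials configured for ca.org1.example.com by the first-network sample.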
+ const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'adminpw' }); + const identity = X509WalletMixin.createIdentity('Org1MSP', enrollment.certificate, enrollment.key.toBytes()); + wallet.import('admin', identity); + console.log('Successfully enrolled admin user "admin" and imported it into the wallet'); + + } catch (error) { + console.error(`Failed to enroll admin user "admin": ${error}`); + process.exit(1); + } +} + +main(); diff --git a/off_chain_data/package.json b/off_chain_data/package.json new file mode 100644 index 0000000000..eca1b51832 --- /dev/null +++ b/off_chain_data/package.json @@ -0,0 +1,45 @@ +{ + "name": "offchaindata", + "version": "1.0.0", + "description": "Offchain Data application implemented in JavaScript", + "engines": { + "node": ">=8", + "npm": ">=5" + }, + "scripts": { + "lint": "eslint .", + "pretest": "npm run lint", + "test": "nyc mocha --recursive" + }, + "engineStrict": true, + "author": "Hyperledger", + "license": "Apache-2.0", + "dependencies": { + "fabric-ca-client": "~1.4.0", + "fabric-network": "~1.4.0" + }, + "devDependencies": { + "chai": "^4.2.0", + "eslint": "^5.9.0", + "mocha": "^5.2.0", + "nyc": "^13.1.0", + "sinon": "^7.1.1", + "sinon-chai": "^3.3.0" + }, + "nyc": { + "exclude": [ + "coverage/**", + "test/**" + ], + "reporter": [ + "text-summary", + "html" + ], + "all": true, + "check-coverage": true, + "statements": 100, + "branches": 100, + "functions": 100, + "lines": 100 + } +} diff --git a/off_chain_data/registerUser.js b/off_chain_data/registerUser.js new file mode 100644 index 0000000000..cc73f58af7 --- /dev/null +++ b/off_chain_data/registerUser.js @@ -0,0 +1,62 @@ +/* + * Copyright IBM Corp. All Rights Reserved. + * + * SPDX-License-Identifier: Apache-2.0 + * + */ + +'use strict'; + +const { FileSystemWallet, Gateway, X509WalletMixin } = require('fabric-network'); +const fs = require('fs'); +const path = require('path'); + +const ccpPath = path.resolve(__dirname, '..', 'first-network', 'connection-org1.json'); +const ccpJSON = fs.readFileSync(ccpPath, 'utf8'); +const ccp = JSON.parse(ccpJSON); + +async function main() { + try { + + // Create a new file system based wallet for managing identities. + const walletPath = path.join(process.cwd(), 'wallet'); + const wallet = new FileSystemWallet(walletPath); + console.log(`Wallet path: ${walletPath}`); + + // Check to see if we've already enrolled the user. + const userExists = await wallet.exists('user1'); + if (userExists) { + console.log('An identity for the user "user1" already exists in the wallet'); + return; + } + + // Check to see if we've already enrolled the admin user. + const adminExists = await wallet.exists('admin'); + if (!adminExists) { + console.log('An identity for the admin user "admin" does not exist in the wallet'); + console.log('Run the enrollAdmin.js application before retrying'); + return; + } + + // Create a new gateway for connecting to our peer node. + const gateway = new Gateway(); + await gateway.connect(ccp, { wallet, identity: 'admin', discovery: { enabled: false } }); + + // Get the CA client object from the gateway for interacting with the CA. + const ca = gateway.getClient().getCertificateAuthority(); + const adminIdentity = gateway.getCurrentIdentity(); + + // Register the user, enroll the user, and import the new identity into the wallet. 
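        // register() returns a one-time enrollment secret for the new identity; that secret is
        // then passed to enroll() to obtain the user's certificate and private key.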
+ const secret = await ca.register({ affiliation: 'org1.department1', enrollmentID: 'user1', role: 'client' }, adminIdentity); + const enrollment = await ca.enroll({ enrollmentID: 'user1', enrollmentSecret: secret }); + const userIdentity = X509WalletMixin.createIdentity('Org1MSP', enrollment.certificate, enrollment.key.toBytes()); + wallet.import('user1', userIdentity); + console.log('Successfully registered and enrolled admin user "user1" and imported it into the wallet'); + + } catch (error) { + console.error(`Failed to register user "user1": ${error}`); + process.exit(1); + } +} + +main(); diff --git a/off_chain_data/startFabric.sh b/off_chain_data/startFabric.sh new file mode 100755 index 0000000000..8822b7bf46 --- /dev/null +++ b/off_chain_data/startFabric.sh @@ -0,0 +1,179 @@ +#!/bin/bash +# +# Copyright IBM Corp All Rights Reserved +# +# SPDX-License-Identifier: Apache-2.0 +# +# Exit on first error +set -e pipefail + + +# don't rewrite paths for Windows Git Bash users +export MSYS_NO_PATHCONV=1 +starttime=$(date +%s) +CC_SRC_LANGUAGE=${1:-"golang"} +CC_SRC_LANGUAGE=`echo "$CC_SRC_LANGUAGE" | tr [:upper:] [:lower:]` +CC_RUNTIME_LANGUAGE=golang +CC_SRC_PATH=github.com/hyperledger/fabric-samples/chaincode/marbles02/go + +# clean the keystore +rm -rf ./hfc-key-store + +# launch network; create channel and join peer to channel +pushd ../first-network +echo y | ./byfn.sh down +echo y | ./byfn.sh up -a -n -s couchdb +popd + +CONFIG_ROOT=/opt/gopath/src/github.com/hyperledger/fabric/peer +ORG1_MSPCONFIGPATH=${CONFIG_ROOT}/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp +ORG1_TLS_ROOTCERT_FILE=${CONFIG_ROOT}/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt +ORG2_MSPCONFIGPATH=${CONFIG_ROOT}/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp +ORG2_TLS_ROOTCERT_FILE=${CONFIG_ROOT}/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt +ORDERER_TLS_ROOTCERT_FILE=${CONFIG_ROOT}/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem + +echo "Packaging the marbles smart contract" +docker exec \ + cli \ + peer lifecycle chaincode package marbles.tar.gz \ + --path $CC_SRC_PATH \ + --lang $CC_RUNTIME_LANGUAGE \ + --label marblesv1 + +echo "Installing smart contract on peer0.org1.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_ADDRESS=peer0.org1.example.com:7051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG1_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode install marbles.tar.gz + +echo "Installing smart contract on peer1.org1.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_ADDRESS=peer1.org1.example.com:8051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG1_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode install marbles.tar.gz + +echo "Installing smart contract on peer0.org2.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org2MSP \ + -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG2_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG2_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode install marbles.tar.gz + +echo "Installing smart contract on peer1.org2.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org2MSP \ + -e CORE_PEER_ADDRESS=peer1.org2.example.com:10051 \ + -e 
CORE_PEER_MSPCONFIGPATH=${ORG2_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG2_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode install marbles.tar.gz + +echo "Query the chaincode package id" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_ADDRESS=peer0.org1.example.com:7051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG1_TLS_ROOTCERT_FILE} \ + cli \ + /bin/bash -c "peer lifecycle chaincode queryinstalled > log" + PACKAGE_ID=`docker exec cli sed -nr '/Label: marblesv1/s/Package ID: (.*), Label: marblesv1/\1/p;' log` + +echo "Approving the chaincode definition for org1.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_ADDRESS=peer0.org1.example.com:7051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG1_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode approveformyorg \ + -o orderer.example.com:7050 \ + --channelID mychannel \ + --name marbles \ + --version 1.0 \ + --init-required \ + --signature-policy AND"('Org1MSP.member','Org2MSP.member')" \ + --sequence 1 \ + --package-id $PACKAGE_ID \ + --tls \ + --cafile ${ORDERER_TLS_ROOTCERT_FILE} + +echo "Approving the chaincode definition for org2.example.com" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org2MSP \ + -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 \ + -e CORE_PEER_MSPCONFIGPATH=${ORG2_MSPCONFIGPATH} \ + -e CORE_PEER_TLS_ROOTCERT_FILE=${ORG2_TLS_ROOTCERT_FILE} \ + cli \ + peer lifecycle chaincode approveformyorg \ + -o orderer.example.com:7050 \ + --channelID mychannel \ + --name marbles \ + --version 1.0 \ + --init-required \ + --signature-policy AND"('Org1MSP.member','Org2MSP.member')" \ + --sequence 1 \ + --package-id $PACKAGE_ID \ + --tls \ + --cafile ${ORDERER_TLS_ROOTCERT_FILE} + +echo "Waiting for the approvals to be committed ..." + +sleep 10 + +echo "Commit the chaincode definition to the channel" +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \ + cli \ + peer lifecycle chaincode commit \ + -o orderer.example.com:7050 \ + --channelID mychannel \ + --name marbles \ + --version 1.0 \ + --init-required \ + --signature-policy AND"('Org1MSP.member','Org2MSP.member')" \ + --sequence 1 \ + --tls \ + --cafile ${ORDERER_TLS_ROOTCERT_FILE} \ + --peerAddresses peer0.org1.example.com:7051 \ + --tlsRootCertFiles ${ORG1_TLS_ROOTCERT_FILE} \ + --peerAddresses peer0.org2.example.com:9051 \ + --tlsRootCertFiles ${ORG2_TLS_ROOTCERT_FILE} + +echo "Waiting for the chaincode to be committed ..." + +sleep 10 + +echo "invoke the marbles chaincode init function ... " +docker exec \ + -e CORE_PEER_LOCALMSPID=Org1MSP \ + -e CORE_PEER_ADDRESS=peer0.org1.example.com:7051 \ + cli \ + peer chaincode invoke \ + -o orderer.example.com:7050 \ + -C mychannel \ + -n marbles \ + --isInit \ + -c '{"Args":["Init"]}' \ + --tls \ + --cafile ${ORDERER_TLS_ROOTCERT_FILE} \ + --peerAddresses peer0.org1.example.com:7051 \ + --tlsRootCertFiles ${ORG1_TLS_ROOTCERT_FILE} \ + --peerAddresses peer0.org2.example.com:9051 \ + --tlsRootCertFiles ${ORG2_TLS_ROOTCERT_FILE} + +sleep 10 + +cat <