diff --git a/high-throughput/README.md b/high-throughput/README.md new file mode 100644 index 0000000000..428eab0131 --- /dev/null +++ b/high-throughput/README.md @@ -0,0 +1,173 @@ +# High-Throughput Network + +## Purpose +This network is used to understand how to properly design the chaincode data model when handling thousands of transactions per second which all +update the same asset in the ledger. A naive implementation would use a single key to represent the data for the asset, and the chaincode would +then attempt to update this key every time a transaction involving it comes in. However, when many transactions all come in at once, in the time +between when the transaction is simulated on the peer (i.e. its read-set is created) and it's ready to be committed to the ledger, another transaction +may have already updated the same value. Thus, in the simple implementation, the read-set version will no longer match the version of the committed value, +and a large number of parallel transactions will fail. To solve this issue, the frequently updated value is instead stored as a series of deltas +which are aggregated when the value must be retrieved. In this way, no single row is frequently read and updated, but rather a collection of rows +is considered. + +## Use Case +The primary use case for this chaincode data model design is for applications in which a particular asset has an associated amount that is +frequently added to or removed from. For example, with a bank or credit card account, money is either paid into or paid out of it, and the amount +of money in the account is the result of all of these additions and subtractions aggregated together. A typical person's bank account may not be +used frequently enough to require highly-parallel throughput, but an organizational account used to store the money collected from customers on an +e-commerce platform may very well receive a very high number of transactions from all over the world all at once. 
In fact, this is effectively the +model used by cryptocurrencies like Bitcoin: a user's unspent transaction output (UTXO) is the result of all transactions he or she has been a part of +since joining the blockchain. Other use cases that can employ this technique might be IOT sensors which frequently update their sensed value in the +cloud. + +By adopting this method of storing data, an organization can optimize their chaincode to store and record transactions as quickly as possible and can +aggregate ledger records into one value at the time of their choosing without sacrificing transaction performance. Given the state-machine design of +Hyperledger Fabric, however, careful consideration needs to be given to the data model design for the chaincode. + +Let's look at some concrete use cases and how an organization might implement high-throughput storage. These cases will try to explore some of the +advantages and disadvantages of such a system, and how to overcome them. + +#### Example 1 (IOT): Boxer Construction Analysts + +Boxer Construction Analysts is an IOT company focused on enabling real-time monitoring of large, expensive assets (machinery) on commercial +construction projects. They've partnered with the only construction vehicle company in New York, Condor Machines Inc., to provide a reliable, +auditable, and replayable monitoring system on their machines. This allows Condor to monitor their machines and address problems as soon as +they occur while providing end-users with a transparent report on machine health, which helps keep the customers satisfied. + +The vehicles are outfitted with many sensors, each of which broadcasts updated values at frequencies ranging from several times a second to +several times a minute. Boxer initially sets up their chaincode so that the central machine computer pushes these values out to the blockchain +as soon as they're produced, and each sensor has its own row in the ledger which is updated when a new value comes in. 
While they find that +this works fine for the sensors which only update several times a minute, they run into some issues when updating the faster sensors. Often, +the blockchain skips several sensor readings before adding a new one, defeating the purpose of having a fast, always-on sensor. The issue they're +running into is that they're sending update transactions so fast that the version of the row is changed between the creation of a transaction's +read-set and committing that transaction to the ledger. The result is that while a transaction is in the process of being committed, all future +transactions are rejected until the commitment process is complete and a new, much later reading updates the ledger. + +To address this issue, they adopt a high-throughput design for the chaincode data model instead. Each sensor has a key which identifies it within the +ledger, and the difference between the previous reading and the current reading is published as a transaction. For example, if a sensor is monitoring +engine temperature, rather than sending the following list: 220F, 223F, 233F, 227F, the sensor would send: +220, +3, +10, -6 (the sensor is assumed +to start at 0 on initialization). This solves the throughput problem, as the machine can post delta transactions as fast as it wants and they will all +eventually be committed to the ledger in the order they were received. Additionally, these transactions can be processed as they appear in the ledger +by a dashboard to provide live monitoring data. The only change the engineers need to make is to ensure the sensors can +send deltas from the previous reading, rather than absolute readings. + +#### Example 2 (Balance Transfer): Robinson Credit Co. + +Robinson Credit Co. provides credit and financial services to large businesses. As such, their accounts are large, complex, and accessed by many +people at once at any time of the day. 
They want to switch to blockchain, but are having trouble keeping up with the number of deposits and +withdrawals happening at once on the same account. Additionally, they need to ensure users never withdraw more money than is available +on an account, and that transactions attempting to do so get rejected. The first problem is easy to solve; the second is more nuanced and requires a variety of +strategies to accommodate the high-throughput storage model design. + +To solve throughput, this new storage model is leveraged to allow every user performing transactions against the account to make that transaction in terms +of a delta. For example, global e-commerce company America Inc. must be able to accept thousands of transactions an hour in order to keep up with +their customers' demands. Rather than attempt to update a single row with the total amount of money in America Inc's account, Robinson Credit Co. +accepts each transaction as an additive delta to America Inc's account. At the end of the day, America Inc's accounting department can quickly +retrieve the total value in the account when the sums are aggregated. + +However, what happens when America Inc. now wants to pay its suppliers out of the same account, or a different account also on the blockchain? +Robinson Credit Co. would like to be assured that America Inc.'s accounting department can't simply overdraw their account, which is difficult to +do while at the same time enabling transactions to happen quickly, as deltas are added to the ledger without any sort of bounds checking on the final +aggregate value. There are a variety of solutions which can be used in combination to address this. + +Solution 1 involves polling the aggregate value regularly. 
This happens separately from any delta transaction, and can be performed by a monitoring +service set up by Robinson themselves so that they can at least be guaranteed that if an overdraw does occur, they can detect it within a known +number of seconds and respond to it appropriately (e.g. by temporarily shutting off transactions on that account), all of which can be automated. +Furthermore, thanks to the decentralized nature of Fabric, this operation can be performed on a peer dedicated to this function that would not +slow down or impact the performance of peers processing customer transactions. + +Solution 2 involves breaking up the submission and verification steps of the balance transfer. Balance transfer submissions happen very quickly +and don't bother with checking overdrawing. However, a secondary process reviews each transaction sent to the chain and keeps a running total, +verifying that none of them overdraw the account, or at the very least that aggregated withdrawals vs deposits balance out at the end of the day. +Similar to Solution 1, this system would run separately from any transaction processing hardware and would not incur a performance hit on the +customer-facing chain. + +Solution 3 involves individually tailoring the smart contracts between Robinson and America Inc, leveraging the power of chaincode to customize +spending limits based on solvency proofs. Perhaps a limit is set on withdrawal transactions such that anything below \$1000 is automatically processed +and assumed to be correct and at minimal risk to either company simply due to America Inc. having proved solvency. However, withdrawals above \$1000 +must be verified before approval and admittance to the chain. + +## How +This sample provides the chaincode and scripts required to run a high-throughput application. For ease of use, it runs on the same network which is brought +up by `byfn.sh` in the `first-network` folder within `fabric-samples`, albeit with a few small modifications. 
The instructions to build the network +and run some invocations are provided below. + +### Build your network +1. `cd` into the `first-network` folder within `fabric-samples`, e.g. `cd ~/fabric-samples/first-network` +2. Open `docker-compose-cli.yaml` in your favorite editor, and edit the following lines: + * In the `volumes` section of the `cli` container, edit the second line which refers to the chaincode folder to point to the chaincode folder + within the `high-throughput` folder, e.g. + + `./../chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go` --> + `./../high-throughput/chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go` + * Again in the `volumes` section, edit the fourth line which refers to the scripts folder so it points to the scripts folder within the + `high-throughput` folder, e.g. + + `./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/` --> + `./../high-throughput/scripts/:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/` + + * Finally, comment out the `command` section by placing a `#` before it, e.g. + + `#command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'` + +3. We can now bring our network up by typing in `./byfn.sh -m up -c mychannel` +4. Open a new terminal window and enter the CLI container using `docker exec -it cli bash`; all operations on the network will happen within + this container from now on. + +### Install and instantiate the chaincode +1. Once you're in the CLI container run `cd scripts` to enter the `scripts` folder +2. Set up the environment variables by running `source setclienv.sh` +3. Set up your channels and anchor peers by running `./channel-setup.sh` +4. Install your chaincode by running `./install-chaincode.sh 1.0`. The only argument is a number representing the chaincode version; every time + you want to install and upgrade to a new chaincode version, simply increment this value by 1 when running the command, e.g. 
`./install-chaincode.sh 2.0` +5. Instantiate your chaincode by running `./instantiate-chaincode.sh 1.0`. The version argument serves the same purpose as in `./install-chaincode.sh 1.0` + and should match the version of the chaincode you just installed. In the future, when upgrading the chaincode to a newer version, + `./upgrade-chaincode.sh 2.0` should be used instead of `./instantiate-chaincode.sh 1.0`. +6. Your chaincode is now installed and ready to receive invocations. + +### Invoke the chaincode +All invocations are provided as scripts in the `scripts` folder; these are detailed below. + +#### Update +The format for update is: `./update-invoke.sh name value operation` where `name` is the name of the variable to update, `value` is the value to +add to the variable, and `operation` is either `+` or `-` depending on what type of operation you'd like to apply to the variable. In the future, +multiply/divide operations will be supported (or add them yourself to the chaincode as an exercise!) + +Example: `./update-invoke.sh myvar 100 +` + +#### Get +The format for get is: `./get-invoke.sh name` where `name` is the name of the variable to get. + +Example: `./get-invoke.sh myvar` + +#### Delete +The format for delete is: `./delete-invoke.sh name` where `name` is the name of the variable to delete. + +Example: `./delete-invoke.sh myvar` + +#### Prune +Pruning takes all the deltas generated for a variable and combines them all into a single row, deleting all previous rows. This helps clean up +the ledger when many updates have been performed. There are two types of pruning: `prunefast` and `prunesafe`. Prune fast performs the deletion +and aggregation simultaneously, so if an error happens along the way, data integrity is not guaranteed. Prune safe performs the aggregation first, +backs up the results, then performs the deletion. This way, if an error occurs along the way, data integrity is maintained. 
+ +The format for pruning is: `./[prunesafe|prunefast]-invoke.sh name` where `name` is the name of the variable to prune. + +Example: `./prunefast-invoke.sh myvar` or `./prunesafe-invoke.sh myvar` + +### Test the Network +Two scripts are provided to show the advantage of using this system when running many parallel transactions at once: `many-updates.sh` and +`many-updates-traditional.sh`. The first script accepts the same arguments as `update-invoke.sh` but duplicates the invocation 1000 times, +in parallel. The final value, therefore, should be the given update value * 1000. Run this script to confirm that your network is functioning +properly. You can confirm this by checking your peer and orderer logs and verifying that no invocations are rejected due to improper versions. + +The second script, `many-updates-traditional.sh`, also sends 1000 transactions but using the traditional storage system. It'll update a single +row in the ledger 1000 times, with a value incrementing by one each time (i.e. the first invocation sets it to 0 and the last to 999). The +expectation would be that the final value of the row is 999. However, the final value changes each time this script is run and you'll find +errors in the peer and orderer logs. + +There is one other script, `get-traditional.sh`, which simply gets the value of a row in the traditional way, with no deltas. 
+ +Examples: +`./many-updates.sh testvar 100 +` --> final value from `./get-invoke.sh testvar` should be 100000 +`./many-updates-traditional.sh testvar` --> final value from `./get-traditional.sh testvar` is undefined diff --git a/high-throughput/chaincode/high-throughput.go b/high-throughput/chaincode/high-throughput.go new file mode 100644 index 0000000000..f54d09e765 --- /dev/null +++ b/high-throughput/chaincode/high-throughput.go @@ -0,0 +1,459 @@ +/* + * Demonstrates how to handle data in an application with a high transaction volume where the transactions + * all attempt to change the same key-value pair in the ledger. Such an application will have trouble + * as multiple transactions may read a value at a certain version, which will then be invalid when the first + * transaction updates the value to a new version, thus rejecting all other transactions until they're + * re-executed. + * Rather than relying on serialization of the transactions, which is slow, this application initializes + * a value and then accepts deltas of that value which are added as rows to the ledger. The actual value + * is then an aggregate of the initial value combined with all of the deltas. Additionally, a pruning + * function is provided which aggregates and deletes the deltas to update the initial value. This should + * be done during a maintenance window or when there is a lowered transaction volume, to avoid the proliferation + * of millions of rows of data. 
* + * @author Alexandre Pauwels for IBM + * @created 17 Aug 2017 + */ + +package main + +/* Imports + * 2 utility libraries for formatting and string-to-number conversion + * 2 Hyperledger Fabric specific libraries for Smart Contracts + */ +import ( + "fmt" + "strconv" + + "github.com/hyperledger/fabric/core/chaincode/shim" + sc "github.com/hyperledger/fabric/protos/peer" +) + +// SmartContract is the data structure which represents this contract and on which various contract lifecycle functions are attached +type SmartContract struct { +} + +// Define Status codes for the response +const ( + OK = 200 + ERROR = 500 +) + +// Init is called when the smart contract is instantiated +func (s *SmartContract) Init(APIstub shim.ChaincodeStubInterface) sc.Response { + return shim.Success(nil) +} + +// Invoke routes invocations to the appropriate function in chaincode +// Current supported invocations are: +// - update, adds a delta to an aggregate variable in the ledger, all variables are assumed to start at 0 +// - get, retrieves the aggregate value of a variable in the ledger +// - pruneFast, deletes all rows associated with the variable and replaces them with a single row containing the aggregate value +// - pruneSafe, same as pruneFast except it pre-computes the value and backs it up before performing any destructive operations +// - delete, removes all rows associated with the variable +func (s *SmartContract) Invoke(APIstub shim.ChaincodeStubInterface) sc.Response { + // Retrieve the requested Smart Contract function and arguments + function, args := APIstub.GetFunctionAndParameters() + + // Route to the appropriate handler function to interact with the ledger appropriately + if function == "update" { + return s.update(APIstub, args) + } else if function == "get" { + return s.get(APIstub, args) + } else if function == "prunefast" { + return s.pruneFast(APIstub, args) + } else if function == "prunesafe" { + return 
s.pruneSafe(APIstub, args) + } else if function == "delete" { + return s.delete(APIstub, args) + } else if function == "putstandard" { + return s.putStandard(APIstub, args) + } else if function == "getstandard" { + return s.getStandard(APIstub, args) + } + + return shim.Error("Invalid Smart Contract function name.") +} + +/** + * Updates the ledger to include a new delta for a particular variable. If this is the first time + * this variable is being added to the ledger, then its initial value is assumed to be 0. The arguments + * to give in the args array are as follows: + * - args[0] -> name of the variable + * - args[1] -> new delta (float) + * - args[2] -> operation (currently supported are addition "+" and subtraction "-") + * + * @param APIstub The chaincode shim + * @param args The arguments array for the update invocation + * + * @return A response structure indicating success or failure with a message + */ +func (s *SmartContract) update(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + // Check we have a valid number of args + if len(args) != 3 { + return shim.Error("Incorrect number of arguments, expecting 3") + } + + // Extract the args + name := args[0] + op := args[2] + _, err := strconv.ParseFloat(args[1], 64) + if err != nil { + return shim.Error("Provided value was not a number") + } + + // Make sure a valid operator is provided + if op != "+" && op != "-" { + return shim.Error(fmt.Sprintf("Operator %s is unrecognized", op)) + } + + // Retrieve info needed for the update procedure + txid := APIstub.GetTxID() + compositeIndexName := "varName~op~value~txID" + + // Create the composite key that will allow us to query for all deltas on a particular variable + compositeKey, compositeErr := APIstub.CreateCompositeKey(compositeIndexName, []string{name, op, args[1], txid}) + if compositeErr != nil { + return shim.Error(fmt.Sprintf("Could not create a composite key for %s: %s", name, compositeErr.Error())) + } + + // Save the composite key 
index + compositePutErr := APIstub.PutState(compositeKey, []byte{0x00}) + if compositePutErr != nil { + return shim.Error(fmt.Sprintf("Could not put operation for %s in the ledger: %s", name, compositePutErr.Error())) + } + + return shim.Success([]byte(fmt.Sprintf("Successfully added %s%s to %s", op, args[1], name))) +} + +/** + * Retrieves the aggregate value of a variable in the ledger. Gets all delta rows for the variable + * and computes the final value from all deltas. The args array for the invocation must contain the + * following argument: + * - args[0] -> The name of the variable to get the value of + * + * @param APIstub The chaincode shim + * @param args The arguments array for the get invocation + * + * @return A response structure indicating success or failure with a message + */ +func (s *SmartContract) get(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + // Check we have a valid number of args + if len(args) != 1 { + return shim.Error("Incorrect number of arguments, expecting 1") + } + + name := args[0] + // Get all deltas for the variable + deltaResultsIterator, deltaErr := APIstub.GetStateByPartialCompositeKey("varName~op~value~txID", []string{name}) + if deltaErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve value for %s: %s", name, deltaErr.Error())) + } + defer deltaResultsIterator.Close() + + // Check the variable existed + if !deltaResultsIterator.HasNext() { + return shim.Error(fmt.Sprintf("No variable by the name %s exists", name)) + } + + // Iterate through result set and compute final value + var finalVal float64 + var i int + for i = 0; deltaResultsIterator.HasNext(); i++ { + // Get the next row + responseRange, nextErr := deltaResultsIterator.Next() + if nextErr != nil { + return shim.Error(nextErr.Error()) + } + + // Split the composite key into its component parts + _, keyParts, splitKeyErr := APIstub.SplitCompositeKey(responseRange.Key) + if splitKeyErr != nil { + return 
shim.Error(splitKeyErr.Error()) + } + + // Retrieve the delta value and operation + operation := keyParts[1] + valueStr := keyParts[2] + + // Convert the value string and perform the operation + value, convErr := strconv.ParseFloat(valueStr, 64) + if convErr != nil { + return shim.Error(convErr.Error()) + } + + switch operation { + case "+": + finalVal += value + case "-": + finalVal -= value + default: + return shim.Error(fmt.Sprintf("Unrecognized operation %s", operation)) + } + } + + return shim.Success([]byte(strconv.FormatFloat(finalVal, 'f', -1, 64))) +} + +/** + * Prunes a variable by deleting all of its delta rows while computing the final value. Once all rows + * have been processed and deleted, a single new row is added which defines a delta containing the final + * computed value of the variable. This function is NOT safe as any failures or errors during pruning + * will result in an undefined final value for the variable and loss of data. Use pruneSafe if data + * integrity is important. 
The args array contains the following argument: + * - args[0] -> The name of the variable to prune + * + * @param APIstub The chaincode shim + * @param args The args array for the pruneFast invocation + * + * @return A response structure indicating success or failure with a message + */ +func (s *SmartContract) pruneFast(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + // Check we have a valid number of args + if len(args) != 1 { + return shim.Error("Incorrect number of arguments, expecting 1") + } + + // Retrieve the name of the variable to prune + name := args[0] + + // Get all delta rows for the variable + deltaResultsIterator, deltaErr := APIstub.GetStateByPartialCompositeKey("varName~op~value~txID", []string{name}) + if deltaErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve value for %s: %s", name, deltaErr.Error())) + } + defer deltaResultsIterator.Close() + + // Check the variable existed + if !deltaResultsIterator.HasNext() { + return shim.Error(fmt.Sprintf("No variable by the name %s exists", name)) + } + + // Iterate through result set computing final value while iterating and deleting each key + var finalVal float64 + var i int + for i = 0; deltaResultsIterator.HasNext(); i++ { + // Get the next row + responseRange, nextErr := deltaResultsIterator.Next() + if nextErr != nil { + return shim.Error(nextErr.Error()) + } + + // Split the key into its composite parts + _, keyParts, splitKeyErr := APIstub.SplitCompositeKey(responseRange.Key) + if splitKeyErr != nil { + return shim.Error(splitKeyErr.Error()) + } + + // Retrieve the operation and value + operation := keyParts[1] + valueStr := keyParts[2] + + // Convert the value to a float + value, convErr := strconv.ParseFloat(valueStr, 64) + if convErr != nil { + return shim.Error(convErr.Error()) + } + + // Delete the row from the ledger + deltaRowDelErr := APIstub.DelState(responseRange.Key) + if deltaRowDelErr != nil { + return shim.Error(fmt.Sprintf("Could not delete delta 
row: %s", deltaRowDelErr.Error())) + } + + // Add the value of the deleted row to the final aggregate + switch operation { + case "+": + finalVal += value + case "-": + finalVal -= value + default: + return shim.Error(fmt.Sprintf("Unrecognized operation %s", operation)) + } + } + + // Update the ledger with the final value and return + updateResp := s.update(APIstub, []string{name, strconv.FormatFloat(finalVal, 'f', -1, 64), "+"}) + if updateResp.Status == OK { + return shim.Success([]byte(fmt.Sprintf("Successfully pruned variable %s, final value is %f, %d rows pruned", args[0], finalVal, i))) + } + + return shim.Error(fmt.Sprintf("Failed to prune variable: all rows deleted but could not update value to %f, variable no longer exists in ledger", finalVal)) +} + +/** + * This function performs the same function as pruneFast except it provides data backups in case the + * prune fails. The final aggregate value is computed before any deletion occurs and is backed up + * to a new row. This back-up row is deleted only after the new aggregate delta has been successfully + * written to the ledger. 
The args array contains the following argument: + * args[0] -> The name of the variable to prune + * + * @param APIstub The chaincode shim + * @param args The arguments array for the pruneSafe invocation + * + * @result A response structure indicating success or failure with a message + */ +func (s *SmartContract) pruneSafe(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + // Verify there are a correct number of arguments + if len(args) != 1 { + return shim.Error("Incorrect number of arguments, expecting 1 (the name of the variable to prune)") + } + + // Get the var name + name := args[0] + + // Get the var's value and process it + getResp := s.get(APIstub, args) + if getResp.Status == ERROR { + return shim.Error(fmt.Sprintf("Could not retrieve the value of %s before pruning, pruning aborted: %s", name, getResp.Message)) + } + + valueStr := string(getResp.Payload) + val, convErr := strconv.ParseFloat(valueStr, 64) + if convErr != nil { + return shim.Error(fmt.Sprintf("Could not convert the value of %s to a number before pruning, pruning aborted: %s", name, convErr.Error())) + } + + // Store the var's value temporarily + backupPutErr := APIstub.PutState(fmt.Sprintf("%s_PRUNE_BACKUP", name), []byte(valueStr)) + if backupPutErr != nil { + return shim.Error(fmt.Sprintf("Could not backup the value of %s before pruning, pruning aborted: %s", name, backupPutErr.Error())) + } + + // Get all deltas for the variable + deltaResultsIterator, deltaErr := APIstub.GetStateByPartialCompositeKey("varName~op~value~txID", []string{name}) + if deltaErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve value for %s: %s", name, deltaErr.Error())) + } + defer deltaResultsIterator.Close() + + // Delete each row + var i int + for i = 0; deltaResultsIterator.HasNext(); i++ { + responseRange, nextErr := deltaResultsIterator.Next() + if nextErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve next row for pruning: %s", nextErr.Error())) + } + + 
deltaRowDelErr := APIstub.DelState(responseRange.Key) + if deltaRowDelErr != nil { + return shim.Error(fmt.Sprintf("Could not delete delta row: %s", deltaRowDelErr.Error())) + } + } + + // Insert new row for the final value + updateResp := s.update(APIstub, []string{name, valueStr, "+"}) + if updateResp.Status == ERROR { + return shim.Error(fmt.Sprintf("Could not insert the final value of the variable after pruning, variable backup is stored in %s_PRUNE_BACKUP: %s", name, updateResp.Message)) + } + + // Delete the backup value + delErr := APIstub.DelState(fmt.Sprintf("%s_PRUNE_BACKUP", name)) + if delErr != nil { + return shim.Error(fmt.Sprintf("Could not delete backup value %s_PRUNE_BACKUP, this does not affect the ledger but should be removed manually", name)) + } + + return shim.Success([]byte(fmt.Sprintf("Successfully pruned variable %s, final value is %f, %d rows pruned", name, val, i))) +} + +/** + * Deletes all rows associated with an aggregate variable from the ledger. The args array + * contains the following argument: + * - args[0] -> The name of the variable to delete + * + * @param APIstub The chaincode shim + * @param args The arguments array for the delete invocation + * + * @return A response structure indicating success or failure with a message + */ +func (s *SmartContract) delete(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + // Check there are a correct number of arguments + if len(args) != 1 { + return shim.Error("Incorrect number of arguments, expecting 1") + } + + // Retrieve the variable name + name := args[0] + + // Delete all delta rows + deltaResultsIterator, deltaErr := APIstub.GetStateByPartialCompositeKey("varName~op~value~txID", []string{name}) + if deltaErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve delta rows for %s: %s", name, deltaErr.Error())) + } + defer deltaResultsIterator.Close() + + // Ensure the variable exists + if !deltaResultsIterator.HasNext() { + return shim.Error(fmt.Sprintf("No 
variable by the name %s exists", name)) + } + + // Iterate through result set and delete all indices + var i int + for i = 0; deltaResultsIterator.HasNext(); i++ { + responseRange, nextErr := deltaResultsIterator.Next() + if nextErr != nil { + return shim.Error(fmt.Sprintf("Could not retrieve next delta row: %s", nextErr.Error())) + } + + deltaRowDelErr := APIstub.DelState(responseRange.Key) + if deltaRowDelErr != nil { + return shim.Error(fmt.Sprintf("Could not delete delta row: %s", deltaRowDelErr.Error())) + } + } + + return shim.Success([]byte(fmt.Sprintf("Deleted %s, %d rows removed", name, i))) +} + +/** + * Converts a float64 to a byte array + * + * @param f The float64 to convert + * + * @return The byte array representation + */ +func f2barr(f float64) []byte { + str := strconv.FormatFloat(f, 'f', -1, 64) + + return []byte(str) +} + +// The main function starts up the chaincode in the container during instantiation +func main() { + + // Create a new Smart Contract + err := shim.Start(new(SmartContract)) + if err != nil { + fmt.Printf("Error creating new Smart Contract: %s", err) + } +} + +/** + * All functions below this are for testing traditional editing of a single row + */ +func (s *SmartContract) putStandard(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + name := args[0] + valStr := args[1] + + _, getErr := APIstub.GetState(name) + if getErr != nil { + return shim.Error(fmt.Sprintf("Failed to retrieve the state of %s: %s", name, getErr.Error())) + } + + putErr := APIstub.PutState(name, []byte(valStr)) + if putErr != nil { + return shim.Error(fmt.Sprintf("Failed to put state: %s", putErr.Error())) + } + + return shim.Success(nil) +} + +func (s *SmartContract) getStandard(APIstub shim.ChaincodeStubInterface, args []string) sc.Response { + name := args[0] + + val, getErr := APIstub.GetState(name) + if getErr != nil { + return shim.Error(fmt.Sprintf("Failed to get state: %s", getErr.Error())) + } + + return shim.Success(val) 
+} diff --git a/high-throughput/scripts/channel-setup.sh b/high-throughput/scripts/channel-setup.sh new file mode 100755 index 0000000000..b92326024d --- /dev/null +++ b/high-throughput/scripts/channel-setup.sh @@ -0,0 +1,40 @@ +ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem + +# Channel creation +echo "========== Creating channel: "$CHANNEL_NAME" ==========" +peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ../channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA + +# peer0.org1 channel join +echo "========== Joining peer0.org1.example.com to channel "$CHANNEL_NAME" ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp +export CORE_PEER_ADDRESS=peer0.org1.example.com:7051 +export CORE_PEER_LOCALMSPID="Org1MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt +peer channel join -b ${CHANNEL_NAME}.block +peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ../channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA + +# peer1.org1 channel join +echo "========== Joining peer1.org1.example.com to channel "$CHANNEL_NAME" ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp +export CORE_PEER_ADDRESS=peer1.org1.example.com:7051 +export CORE_PEER_LOCALMSPID="Org1MSP" +export 
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt +peer channel join -b ${CHANNEL_NAME}.block + +# peer0.org2 channel join +echo "========== Joining peer0.org2.example.com to channel "$CHANNEL_NAME" ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp +export CORE_PEER_ADDRESS=peer0.org2.example.com:7051 +export CORE_PEER_LOCALMSPID="Org2MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt +peer channel join -b ${CHANNEL_NAME}.block +peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ../channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA + +# peer1.org2 channel join +echo "========== Joining peer1.org2.example.com to channel "$CHANNEL_NAME" ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp +export CORE_PEER_ADDRESS=peer1.org2.example.com:7051 +export CORE_PEER_LOCALMSPID="Org2MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt +peer channel join -b ${CHANNEL_NAME}.block + diff --git a/high-throughput/scripts/delete-invoke.sh b/high-throughput/scripts/delete-invoke.sh new file mode 100755 index 0000000000..be892a47a0 --- /dev/null +++ b/high-throughput/scripts/delete-invoke.sh @@ -0,0 +1,2 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile 
/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["delete","'$1'"]}' + diff --git a/high-throughput/scripts/get-invoke.sh b/high-throughput/scripts/get-invoke.sh new file mode 100755 index 0000000000..aae4eee7bf --- /dev/null +++ b/high-throughput/scripts/get-invoke.sh @@ -0,0 +1,2 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["get","'$1'"]}' + diff --git a/high-throughput/scripts/get-traditional.sh b/high-throughput/scripts/get-traditional.sh new file mode 100755 index 0000000000..c532790902 --- /dev/null +++ b/high-throughput/scripts/get-traditional.sh @@ -0,0 +1 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["getstandard","'$1'"]}' diff --git a/high-throughput/scripts/install-chaincode.sh b/high-throughput/scripts/install-chaincode.sh new file mode 100755 index 0000000000..014ad0fd76 --- /dev/null +++ b/high-throughput/scripts/install-chaincode.sh @@ -0,0 +1,27 @@ +echo "========== Installing chaincode on peer0.org1 ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp +export CORE_PEER_ADDRESS=peer0.org1.example.com:7051 +export CORE_PEER_LOCALMSPID="Org1MSP" +export 
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt +peer chaincode install -n $CC_NAME -v $1 -p github.com/hyperledger/fabric/examples/chaincode/go + +echo "========== Installing chaincode on peer1.org1 ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp +export CORE_PEER_ADDRESS=peer1.org1.example.com:7051 +export CORE_PEER_LOCALMSPID="Org1MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt +peer chaincode install -n $CC_NAME -v $1 -p github.com/hyperledger/fabric/examples/chaincode/go + +echo "========== Installing chaincode on peer0.org2 ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp +export CORE_PEER_ADDRESS=peer0.org2.example.com:7051 +export CORE_PEER_LOCALMSPID="Org2MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt +peer chaincode install -n $CC_NAME -v $1 -p github.com/hyperledger/fabric/examples/chaincode/go + +echo "========== Installing chaincode on peer1.org2 ==========" +export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp +export CORE_PEER_ADDRESS=peer1.org2.example.com:7051 +export CORE_PEER_LOCALMSPID="Org2MSP" +export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt +peer chaincode install -n $CC_NAME -v $1 -p 
github.com/hyperledger/fabric/examples/chaincode/go diff --git a/high-throughput/scripts/instantiate-chaincode.sh b/high-throughput/scripts/instantiate-chaincode.sh new file mode 100755 index 0000000000..45eacbc3a4 --- /dev/null +++ b/high-throughput/scripts/instantiate-chaincode.sh @@ -0,0 +1,3 @@ +echo "========== Instantiating chaincode v$1 ==========" +peer chaincode instantiate -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args": []}' -v $1 -P "OR ('Org1MSP.member','Org2MSP.member')" + diff --git a/high-throughput/scripts/many-updates-traditional.sh b/high-throughput/scripts/many-updates-traditional.sh new file mode 100755 index 0000000000..3f57e57de7 --- /dev/null +++ b/high-throughput/scripts/many-updates-traditional.sh @@ -0,0 +1,4 @@ +for (( i = 0; i < 1000; ++i )) +do + peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["putstandard","'$1'","'$i'"]}' +done diff --git a/high-throughput/scripts/many-updates.sh b/high-throughput/scripts/many-updates.sh new file mode 100755 index 0000000000..ffedb7c9bd --- /dev/null +++ b/high-throughput/scripts/many-updates.sh @@ -0,0 +1,4 @@ +for (( i = 0; i < 1000; ++i )) +do + peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["update","'$1'","'$2'","'$3'"]}' & +done diff --git a/high-throughput/scripts/prunefast-invoke.sh 
b/high-throughput/scripts/prunefast-invoke.sh new file mode 100755 index 0000000000..0db2d9bcf7 --- /dev/null +++ b/high-throughput/scripts/prunefast-invoke.sh @@ -0,0 +1,2 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["prunefast","'$1'"]}' + diff --git a/high-throughput/scripts/prunesafe-invoke.sh b/high-throughput/scripts/prunesafe-invoke.sh new file mode 100755 index 0000000000..8c39e422c7 --- /dev/null +++ b/high-throughput/scripts/prunesafe-invoke.sh @@ -0,0 +1,2 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["prunesafe","'$1'"]}' + diff --git a/high-throughput/scripts/setclienv.sh b/high-throughput/scripts/setclienv.sh new file mode 100644 index 0000000000..9e7e8bc861 --- /dev/null +++ b/high-throughput/scripts/setclienv.sh @@ -0,0 +1,2 @@ +export CHANNEL_NAME=mychannel +export CC_NAME=bigdatacc diff --git a/high-throughput/scripts/update-invoke.sh b/high-throughput/scripts/update-invoke.sh new file mode 100755 index 0000000000..7f51f9baf8 --- /dev/null +++ b/high-throughput/scripts/update-invoke.sh @@ -0,0 +1,2 @@ +peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["update","'$1'","'$2'","'$3'"]}' + diff --git a/high-throughput/scripts/upgrade-chaincode.sh b/high-throughput/scripts/upgrade-chaincode.sh new file mode 100755 index 
0000000000..78d0b83fa3 --- /dev/null +++ b/high-throughput/scripts/upgrade-chaincode.sh @@ -0,0 +1,3 @@ +echo "========== Upgrade chaincode to version $1 ==========" +peer chaincode upgrade -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n $CC_NAME -c '{"Args": []}' -v $1 -P "OR ('Org1MSP.member','Org2MSP.member')" +
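The scripts above all drive the chaincode's delta model: `update` appends a new row per transaction, `get` aggregates every row on read, and the `prune` variants collapse the rows back into one. Outside Fabric, that idea can be sketched in plain shell (a hypothetical file-per-row layout, not part of the sample — one file per delta stands in for one composite-key row, so no single file is ever read-modified-written concurrently):

```shell
#!/usr/bin/env bash
# Sketch of the delta-row technique: each update writes a fresh row instead
# of rewriting one contended value; reads aggregate; pruning collapses rows.
set -eu

dir=$(mktemp -d)
seq=0

# "update" -- store each delta as its own row, like a composite key
put_delta() {                 # usage: put_delta <name> <+|-> <value>
    seq=$((seq + 1))
    printf '%s %s\n' "$2" "$3" > "$dir/$1.$seq"
}

# "get" -- aggregate every delta row recorded for a name
get_value() {                 # usage: get_value <name>
    cat "$dir/$1."* 2>/dev/null | awk '
        $1 == "+" { sum += $2 }
        $1 == "-" { sum -= $2 }
        END       { print sum + 0 }'
}

put_delta myvar + 100
put_delta myvar + 50
put_delta myvar - 30
get_value myvar               # prints 120

# "prune" -- replace all rows with a single row holding the aggregate
total=$(get_value myvar)
rm -f "$dir/myvar."*
put_delta myvar + "$total"
get_value myvar               # still prints 120
```

The key property mirrors the chaincode: concurrent `put_delta` calls never touch the same row, so there is nothing analogous to an MVCC read-set to invalidate; contention is traded for an aggregation cost on read, which pruning amortizes.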