chore(CL): merge main into concentrated-liquidity-main #3211

Merged on Nov 2, 2022

Changes from all commits (38 commits)

Commits
42d73f1
Add servers to openapi spec file (#2980)
daniel-farina Oct 19, 2022
ec7a7a8
Chain.schema.json: Added ibc data, fix genesis name, formatting (#3077)
JeremyParish69 Oct 20, 2022
fdbdc78
labels (#3082)
czarcas7ic Oct 21, 2022
4a67cdd
[x/gamm][stableswap]: Expand inverse relation tests to multi assets a…
AlpinYukseloglu Oct 21, 2022
d812334
remove all uses of two-asset binary search solver (#3084)
AlpinYukseloglu Oct 21, 2022
edfb19b
[stableswap]: Implement simplified direct multi-asset CFMM solver (#3…
AlpinYukseloglu Oct 21, 2022
c51a248
refactor: remove PokePool from the PoolI interface, define on extensi…
p0mvn Oct 21, 2022
f0f31d0
ci(CL): run tests on CL branch and add backport config (#3095)
p0mvn Oct 21, 2022
7f62b6d
After create pool test (#2783)
hieuvubk Oct 22, 2022
b552bab
fix GetModuleToDistributeCoins (#2957)
catShaark Oct 24, 2022
975aeb9
update comment in gamm module (#3103)
ThanhNhann Oct 24, 2022
9176a51
chore(deps): Bump github.com/tendermint/tendermint (#3106)
dependabot[bot] Oct 24, 2022
18f4afa
chore(deps): Bump github.com/golangci/golangci-lint (#3104)
dependabot[bot] Oct 24, 2022
6548902
chore(deps): Bump github.com/stretchr/testify from 1.8.0 to 1.8.1 (#3…
dependabot[bot] Oct 24, 2022
7119419
chore: update concentrated liquidity backport label (#3115)
p0mvn Oct 24, 2022
8590b80
feat(osmomath): log2 approximation (#2788)
p0mvn Oct 24, 2022
ced84c1
[stableswap]: Cap number of assets and post-scaled asset amounts to e…
AlpinYukseloglu Oct 24, 2022
0aa0263
Remove unused versions in mergify (#3121)
niccoloraspa Oct 24, 2022
df9102e
Query lockup params (#3098)
hieuvubk Oct 24, 2022
6274617
CI: Delete failing step (#3124)
ValarDragon Oct 24, 2022
184a85c
[x/gamm][stableswap]: Add inverse join/exit tests, fix single asset j…
AlpinYukseloglu Oct 24, 2022
805f80c
Stableswap implement JoinPoolNoSwap (#2942)
ValarDragon Oct 24, 2022
2ce796c
Make testing suite to ensure queries never alter state (#3001)
hieuvubk Oct 25, 2022
1e80a2a
Remove streamswap (#3146)
hieuvubk Oct 25, 2022
20c72cc
updated the contract to cosmwasm 1.1 and Uint256 for amounts (#2950)
nicolaslara Oct 29, 2022
46d0053
chore: use environment variable instead of build tags to control e2e …
p0mvn Oct 31, 2022
bb5c1c9
Rate limit - Cleaner tests (#3183)
nicolaslara Oct 31, 2022
d8b6d54
changed lints to stable so they change less often (#3184)
nicolaslara Oct 31, 2022
5abf7a4
update version numbers (#3168)
doggystylez Oct 31, 2022
434b42f
chore(deps): Bump github.com/spf13/cobra from 1.6.0 to 1.6.1 (#3187)
dependabot[bot] Oct 31, 2022
a3d0110
chore(deps): Bump github.com/mattn/go-sqlite3 from 1.14.15 to 1.14.16…
dependabot[bot] Oct 31, 2022
5c9bc1b
chore(e2e): add vscode debug configurations (#3180)
p0mvn Oct 31, 2022
5c408f9
osmomath(log/CL): ln(x), log_1.0001(x), log_custom(x) (#3169)
pysel Nov 1, 2022
8cd07fd
feat(release): Automated post-upgrade tasks by code generating upgrad…
pysel Nov 1, 2022
2620afd
chore: Tx post-handler example snippet #3194
alexanderbez Nov 1, 2022
d874d6d
Progress on IBC rate limit spec (#3190)
ValarDragon Nov 1, 2022
29c8561
fix(e2e): various e2e build issues breaking debugging on linux amd64 …
p0mvn Nov 2, 2022
a7acb4f
Merge branch 'main' into roman/merge-main
p0mvn Nov 2, 2022
1 change: 0 additions & 1 deletion .dockerignore
Original file line number Diff line number Diff line change
@@ -4,7 +4,6 @@ docs/
networks/
proto/
scripts/
tests/
tools/
.github/
.git/
38 changes: 38 additions & 0 deletions .github/workflows/auto-update-upgrade.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,38 @@
# When a new major release is created, this workflow will be triggered and will do 3 things:
# 1) it will create a directory with an empty upgrade handler in the app/upgrades folder
# 2) it will increase the E2E_UPGRADE_VERSION variable in the Makefile
# 3) it will create a pull request with these changes against main

name: On Release Auto Upgrade

on:
release:
types: [published]

jobs:
post_release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2.3.4

- name: Run version script
run: bash ./scripts/check_release.sh ${{ github.event.release.tag_name }}

- name: Run post release script
if: env.MAJOR == 1 # 1 means vX of existing upgrade handler is smaller than A in tag vA.B.C
run: bash ./scripts/empty_upgrade_handler_gen.sh

- name: Create PR
if: env.MAJOR == 1
uses: peter-evans/create-pull-request@v4
with:
base: ${{ github.event.repository.default_branch }}
body: |
Update report
- Created a new empty upgrade handler
- Increased E2E_UPGRADE_VERSION in Makefile by 1
labels: |
T:auto
C:e2e
C:app-wiring
48 changes: 41 additions & 7 deletions .github/workflows/contracts.yml
Original file line number Diff line number Diff line change
@@ -12,11 +12,11 @@ on:

jobs:
test:
name: Test Suite
name: Test Cosmwasm Contracts
runs-on: ubuntu-latest
strategy:
matrix:
contract: [{workdir: ./x/ibc-rate-limit/, output: testdata/rate_limiter.wasm, build: artifacts/rate_limiter-x86_64.wasm, name: rate_limiter}]
contract: [{workdir: ./x/ibc-rate-limit/, output: bytecode/rate_limiter.wasm, build: artifacts/rate_limiter-x86_64.wasm, name: rate_limiter}]

steps:
- name: Checkout sources
@@ -82,8 +82,42 @@ jobs:
path: ${{ matrix.contract.workdir }}${{ matrix.contract.build }}
retention-days: 1

# - name: Check Test Data
# working-directory: ${{ matrix.contract.workdir }}
# if: ${{ matrix.contract.output != null }}
# run: >
# diff ${{ matrix.contract.output }} ${{ matrix.contract.build }}
- name: Check Test Data
working-directory: ${{ matrix.contract.workdir }}
if: ${{ matrix.contract.output != null }}
run: >
diff ${{ matrix.contract.output }} ${{ matrix.contract.build }}

lints:
name: Cosmwasm Lints
runs-on: ubuntu-latest
strategy:
matrix:
workdir: [./x/ibc-rate-limit]

steps:
- name: Checkout sources
uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.rs
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy

- name: Format
working-directory: ${{ matrix.workdir }}
run: >
cargo fmt --all -- --check
- name: run cargo clippy
working-directory: ${{ matrix.workdir }}
run: >
cargo clippy -- -D warnings
7 changes: 1 addition & 6 deletions .github/workflows/lint.yml
Original file line number Diff line number Diff line change
@@ -54,9 +54,4 @@ jobs:
# within `super-linter`.
fetch-depth: 0
- name: Run documentation linter
uses: github/super-linter@v4
env:
VALIDATE_ALL_CODEBASE: false
VALIDATE_MARKDOWN: true
DEFAULT_BRANCH: main
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: make mdlint
4 changes: 4 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -230,3 +230,7 @@ Cargo.lock
.beaker
blocks.db
**/blocks.db*

# Ignore e2e test artifacts (which could leak information if committed)
.ash_history
.bash_history
2 changes: 0 additions & 2 deletions .markdownlint.yml
Original file line number Diff line number Diff line change
@@ -12,8 +12,6 @@ MD042: true
MD048: true
MD051: true
# MD004: false
# # Can't disable MD007 :/
# MD007: false
# MD009: false
# MD010:
# code_blocks: false
30 changes: 30 additions & 0 deletions .vscode/launch.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "E2E: (make test-e2e-short)",
"type": "go",
"request": "launch",
"mode": "test",
"program": "${workspaceFolder}/tests/e2e",
"args": [
"-test.timeout",
"30m",
"-test.run",
"IntegrationTestSuite",
"-test.v"
],
"buildFlags": "-tags e2e",
"env": {
"OSMOSIS_E2E": "True",
"OSMOSIS_E2E_SKIP_IBC": "true",
"OSMOSIS_E2E_SKIP_UPGRADE": "true",
"OSMOSIS_E2E_SKIP_CLEANUP": "true",
"OSMOSIS_E2E_SKIP_STATE_SYNC": "true",
"OSMOSIS_E2E_UPGRADE_VERSION": "v13",
"OSMOSIS_E2E_DEBUG_LOG": "true",
},
"preLaunchTask": "e2e-setup"
}
]
}
12 changes: 12 additions & 0 deletions .vscode/tasks.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "e2e-setup",
"type": "shell",
"command": "make e2e-setup"
}
]
}
14 changes: 7 additions & 7 deletions Makefile
Original file line number Diff line number Diff line change
@@ -224,7 +224,7 @@ run-querygen:
###############################################################################

PACKAGES_UNIT=$(shell go list ./... | grep -E -v 'tests/simulator|e2e')
PACKAGES_E2E=$(shell go list -tags e2e ./... | grep '/e2e')
PACKAGES_E2E=$(shell go list ./... | grep '/e2e')
PACKAGES_SIM=$(shell go list ./... | grep '/tests/simulator')
TEST_PACKAGES=./...

@@ -261,25 +261,25 @@ test-sim-bench:
# In that case, run `make e2e-remove-resources`
# manually.
# Utilizes Go cache.
test-e2e: e2e-setup test-e2e-ci
test-e2e: OSMOSIS_E2E=True e2e-setup test-e2e-ci

# test-e2e-ci runs a full e2e test suite
# does not do any validation about the state of the Docker environment
# As a result, avoid using this locally.
test-e2e-ci:
@VERSION=$(VERSION) OSMOSIS_E2E_DEBUG_LOG=True OSMOSIS_E2E_UPGRADE_VERSION=$(E2E_UPGRADE_VERSION) go test -tags e2e -mod=readonly -timeout=25m -v $(PACKAGES_E2E)
@VERSION=$(VERSION) OSMOSIS_E2E=True OSMOSIS_E2E_DEBUG_LOG=True OSMOSIS_E2E_UPGRADE_VERSION=$(E2E_UPGRADE_VERSION) go test -mod=readonly -timeout=25m -v $(PACKAGES_E2E)

# test-e2e-debug runs a full e2e test suite but does
# not attempt to delete Docker resources at the end.
test-e2e-debug: e2e-setup
@VERSION=$(VERSION) OSMOSIS_E2E_UPGRADE_VERSION=$(E2E_UPGRADE_VERSION) OSMOSIS_E2E_SKIP_CLEANUP=True go test -tags e2e -mod=readonly -timeout=25m -v $(PACKAGES_E2E) -count=1
@VERSION=$(VERSION) OSMOSIS_E2E=True OSMOSIS_E2E_DEBUG_LOG=True OSMOSIS_E2E_UPGRADE_VERSION=$(E2E_UPGRADE_VERSION) OSMOSIS_E2E_SKIP_CLEANUP=True go test -mod=readonly -timeout=25m -v $(PACKAGES_E2E) -count=1

# test-e2e-short runs the e2e test with only short tests.
# Does not delete any of the containers after running.
# Deletes any existing containers before running.
# Does not use Go cache.
test-e2e-short: e2e-setup
@VERSION=$(VERSION) OSMOSIS_E2E_DEBUG_LOG=True OSMOSIS_E2E_SKIP_UPGRADE=True OSMOSIS_E2E_SKIP_IBC=True OSMOSIS_E2E_SKIP_STATE_SYNC=True OSMOSIS_E2E_SKIP_CLEANUP=True go test -tags e2e -mod=readonly -timeout=25m -v $(PACKAGES_E2E) -count=1
@VERSION=$(VERSION) OSMOSIS_E2E=True OSMOSIS_E2E_DEBUG_LOG=True OSMOSIS_E2E_SKIP_UPGRADE=True OSMOSIS_E2E_SKIP_IBC=True OSMOSIS_E2E_SKIP_STATE_SYNC=True OSMOSIS_E2E_SKIP_CLEANUP=True go test -mod=readonly -timeout=25m -v $(PACKAGES_E2E) -count=1

test-mutation:
@bash scripts/mutation-test.sh $(MODULES)
@@ -296,10 +296,10 @@ docker-build-debug:
@DOCKER_BUILDKIT=1 docker tag osmosis:${COMMIT} osmosis:debug

docker-build-e2e-init-chain:
@DOCKER_BUILDKIT=1 docker build -t osmosis-e2e-init-chain:debug --build-arg E2E_SCRIPT_NAME=chain -f tests/e2e/initialization/init.Dockerfile .
@DOCKER_BUILDKIT=1 docker build -t osmosis-e2e-init-chain:debug --build-arg E2E_SCRIPT_NAME=chain --platform=linux/x86_64 -f tests/e2e/initialization/init.Dockerfile .

docker-build-e2e-init-node:
@DOCKER_BUILDKIT=1 docker build -t osmosis-e2e-init-node:debug --build-arg E2E_SCRIPT_NAME=node -f tests/e2e/initialization/init.Dockerfile .
@DOCKER_BUILDKIT=1 docker build -t osmosis-e2e-init-node:debug --build-arg E2E_SCRIPT_NAME=node --platform=linux/x86_64 -f tests/e2e/initialization/init.Dockerfile .

e2e-setup: e2e-check-image-sha e2e-remove-resources
@echo Finished e2e environment setup, ready to start the test
2 changes: 2 additions & 0 deletions app/app.go
Original file line number Diff line number Diff line change
@@ -267,6 +267,8 @@ func NewOsmosisApp(
app.IBCKeeper,
),
)
// Uncomment to enable postHandlers:
// app.SetPostHandler(NewTxPostHandler())
app.SetEndBlocker(app.EndBlocker)

// Register snapshot extensions to enable state-sync for wasm.
6 changes: 6 additions & 0 deletions app/apptesting/events.go
Original file line number Diff line number Diff line change
@@ -21,11 +21,17 @@ func (s *KeeperTestHelper) AssertEventEmitted(ctx sdk.Context, eventTypeExpected

func (s *KeeperTestHelper) FindEvent(events []sdk.Event, name string) sdk.Event {
index := slices.IndexFunc(events, func(e sdk.Event) bool { return e.Type == name })
if index == -1 {
return sdk.Event{}
}
return events[index]
}

func (s *KeeperTestHelper) ExtractAttributes(event sdk.Event) map[string]string {
attrs := make(map[string]string)
if event.Attributes == nil {
return attrs
}
for _, a := range event.Attributes {
attrs[string(a.Key)] = string(a.Value)
}
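For context, a minimal usage sketch of these hardened helpers in a keeper test; the event type, attribute key, and expected value below are illustrative and not taken from this PR:

```go
// Hypothetical usage inside a test suite built on KeeperTestHelper.
events := s.Ctx.EventManager().Events()

event := s.FindEvent(events, "transfer") // returns an empty sdk.Event if not found
attrs := s.ExtractAttributes(event)      // returns an empty map when attributes are nil

s.Require().Equal("100uosmo", attrs["amount"]) // illustrative attribute check
```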
32 changes: 29 additions & 3 deletions app/keepers/keepers.go
Original file line number Diff line number Diff line change
@@ -2,6 +2,7 @@ package keepers

import (
"github.com/CosmWasm/wasmd/x/wasm"
wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper"
"github.com/cosmos/cosmos-sdk/baseapp"
"github.com/cosmos/cosmos-sdk/codec"
sdk "github.com/cosmos/cosmos-sdk/types"
@@ -33,6 +34,9 @@ import (
upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper"
upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"

ibcratelimit "github.com/osmosis-labs/osmosis/v12/x/ibc-rate-limit"
ibcratelimittypes "github.com/osmosis-labs/osmosis/v12/x/ibc-rate-limit/types"

icahost "github.com/cosmos/ibc-go/v3/modules/apps/27-interchain-accounts/host"
icahostkeeper "github.com/cosmos/ibc-go/v3/modules/apps/27-interchain-accounts/host/keeper"
icahosttypes "github.com/cosmos/ibc-go/v3/modules/apps/27-interchain-accounts/host/types"
@@ -115,12 +119,14 @@ type AppKeepers struct {
SuperfluidKeeper *superfluidkeeper.Keeper
GovKeeper *govkeeper.Keeper
WasmKeeper *wasm.Keeper
ContractKeeper *wasmkeeper.PermissionedKeeper
TokenFactoryKeeper *tokenfactorykeeper.Keeper
SwapRouterKeeper *swaprouter.Keeper
ConcentratedLiquidityKeeper *concentratedliquidity.Keeper
// IBC modules
// transfer module
TransferModule transfer.AppModule
TransferModule transfer.AppModule
RateLimitingICS4Wrapper *ibcratelimit.ICS4Wrapper

// keys to access the substores
keys map[string]*sdk.KVStoreKey
@@ -202,12 +208,24 @@ func (appKeepers *AppKeepers) InitNormalKeepers(
appKeepers.ScopedIBCKeeper,
)

// ChannelKeeper wrapper for rate limiting SendPacket(). The wasmKeeper needs to be added after it's created
rateLimitingParams := appKeepers.GetSubspace(ibcratelimittypes.ModuleName)
rateLimitingParams = rateLimitingParams.WithKeyTable(ibcratelimittypes.ParamKeyTable())
rateLimitingICS4Wrapper := ibcratelimit.NewICS4Middleware(
appKeepers.IBCKeeper.ChannelKeeper,
appKeepers.AccountKeeper,
nil,
appKeepers.BankKeeper,
rateLimitingParams,
)
appKeepers.RateLimitingICS4Wrapper = &rateLimitingICS4Wrapper

// Create Transfer Keepers
transferKeeper := ibctransferkeeper.NewKeeper(
appCodec,
appKeepers.keys[ibctransfertypes.StoreKey],
appKeepers.GetSubspace(ibctransfertypes.ModuleName),
appKeepers.IBCKeeper.ChannelKeeper,
appKeepers.RateLimitingICS4Wrapper, // The ICS4Wrapper is replaced by the rateLimitingICS4Wrapper instead of the channel
appKeepers.IBCKeeper.ChannelKeeper,
&appKeepers.IBCKeeper.PortKeeper,
appKeepers.AccountKeeper,
@@ -218,6 +236,9 @@ func (appKeepers *AppKeepers) InitNormalKeepers(
appKeepers.TransferModule = transfer.NewAppModule(*appKeepers.TransferKeeper)
transferIBCModule := transfer.NewIBCModule(*appKeepers.TransferKeeper)

// RateLimiting IBC Middleware
rateLimitingTransferModule := ibcratelimit.NewIBCModule(transferIBCModule, appKeepers.RateLimitingICS4Wrapper)

icaHostKeeper := icahostkeeper.NewKeeper(
appCodec, appKeepers.keys[icahosttypes.StoreKey],
appKeepers.GetSubspace(icahosttypes.SubModuleName),
@@ -233,7 +254,8 @@ func (appKeepers *AppKeepers) InitNormalKeepers(
// Create static IBC router, add transfer route, then set and seal it
ibcRouter := porttypes.NewRouter()
ibcRouter.AddRoute(icahosttypes.SubModuleName, icaHostIBCModule).
AddRoute(ibctransfertypes.ModuleName, transferIBCModule)
// The transferIBC module is replaced by rateLimitingTransferModule
AddRoute(ibctransfertypes.ModuleName, &rateLimitingTransferModule)
// Note: the sealing is done after creating wasmd and wiring that up

// create evidence keeper with router
@@ -360,6 +382,9 @@ func (appKeepers *AppKeepers) InitNormalKeepers(
wasmOpts...,
)
appKeepers.WasmKeeper = &wasmKeeper
// Update the ICS4Wrapper with the proper contractKeeper
appKeepers.ContractKeeper = wasmkeeper.NewDefaultPermissionKeeper(appKeepers.WasmKeeper)
appKeepers.RateLimitingICS4Wrapper.ContractKeeper = appKeepers.ContractKeeper

// wire up x/wasm to IBC
ibcRouter.AddRoute(wasm.ModuleName, wasm.NewIBCHandler(appKeepers.WasmKeeper, appKeepers.IBCKeeper.ChannelKeeper))
@@ -455,6 +480,7 @@ func (appKeepers *AppKeepers) initParamsKeeper(appCodec codec.BinaryCodec, legac
paramsKeeper.Subspace(tokenfactorytypes.ModuleName)
paramsKeeper.Subspace(twaptypes.ModuleName)
paramsKeeper.Subspace(swaproutertypes.ModuleName)
paramsKeeper.Subspace(ibcratelimittypes.ModuleName)

return paramsKeeper
}
9 changes: 9 additions & 0 deletions app/tx_post_handler.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
package app

import (
sdk "github.com/cosmos/cosmos-sdk/types"
)

func NewTxPostHandler() sdk.AnteHandler {
panic("not implemented")
}
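The generated handler above intentionally panics. As a rough, hypothetical sketch only (not part of this PR's code), a no-op post-handler matching the `sdk.AnteHandler` signature referenced from `app.go` could look like this:

```go
package app

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// NewTxPostHandler returns a post-handler that performs no extra work.
// Any post-execution logic (e.g. emitting additional events) would go here.
func NewTxPostHandler() sdk.AnteHandler {
	return func(ctx sdk.Context, tx sdk.Tx, simulate bool) (sdk.Context, error) {
		return ctx, nil
	}
}
```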
8 changes: 4 additions & 4 deletions chain.schema.json
Original file line number Diff line number Diff line change
@@ -2,13 +2,13 @@
"$schema": "http://json-schema.org/draft-07/schema#",
"codebase":{
"git_repo": "https://github.com/osmosis-labs/osmosis",
"recommended_version": "12.2.0",
"recommended_version": "v12.2.0",
"compatible_versions": [
"12.2.0"
"v12.2.0"
],
"binaries": {
"linux/amd64": "https://github.com/osmosis-labs/osmosis/releases/download/v12.1.0/osmosisd-12.1.0-linux-amd64?checksum=sha256:44433f93946338b8cb167d9030ebbcfe924294d95d745026ada5dbe8f50d5010",
"linux/arm64": "https://github.com/osmosis-labs/osmosis/releases/download/v12.1.0/osmosisd-12.1.0-linux-arm64?checksum=sha256:ef2c3d60156be5481534ecb33f9d94d73afa38a1b016e7e1c6d3fe10e3e69b3a"
"linux/amd64": "https://github.com/osmosis-labs/osmosis/releases/download/v12.2.0/osmosisd-12.2.0-linux-amd64?checksum=sha256:b4a0296b142b1a535f3116021d39660868a83fc66b290ab0891b06211f86fd31",
"linux/arm64": "https://github.com/osmosis-labs/osmosis/releases/download/v12.2.0/osmosisd-12.2.0-linux-arm64?checksum=sha256:84717e741b61ef3616fef403aae42067614e58a0208347cd642f6d03240b7778"
},
"cosmos_sdk_version": "0.45",
"tendermint_version": "0.34",
6 changes: 3 additions & 3 deletions go.mod
Original file line number Diff line number Diff line change
@@ -15,13 +15,13 @@ require (
github.com/golangci/golangci-lint v1.50.1
github.com/gorilla/mux v1.8.0
github.com/grpc-ecosystem/grpc-gateway v1.16.0
github.com/mattn/go-sqlite3 v1.14.15
github.com/mattn/go-sqlite3 v1.14.16
github.com/ory/dockertest/v3 v3.9.1
github.com/osmosis-labs/go-mutesting v0.0.0-20220811235203-65a53b4ea8e3
github.com/pkg/errors v0.9.1
github.com/rakyll/statik v0.1.7
github.com/spf13/cast v1.5.0
github.com/spf13/cobra v1.6.0
github.com/spf13/cobra v1.6.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.13.0
github.com/stretchr/testify v1.8.1
@@ -281,7 +281,7 @@ require (
golang.org/x/term v0.1.0 // indirect
golang.org/x/text v0.4.0 // indirect
golang.org/x/tools v0.2.0 // indirect
google.golang.org/protobuf v1.28.1
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1
8 changes: 4 additions & 4 deletions go.sum
Original file line number Diff line number Diff line change
@@ -765,8 +765,8 @@ github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m
github.com/mattn/go-runewidth v0.0.10 h1:CoZ3S2P7pvtP45xOtBw+/mDL2z0RKI576gSkzRRpdGg=
github.com/mattn/go-runewidth v0.0.10/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.14.15 h1:vfoHhTN1af61xCRSWzFIWzx2YskyMTwHLrExkBOjvxI=
github.com/mattn/go-sqlite3 v1.14.15/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/mattn/go-sqlite3 v1.14.16 h1:yOQRA0RpS5PFz/oikGwBEqvAWhWg5ufRz4ETLjwpU1Y=
github.com/mattn/go-sqlite3 v1.14.16/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/mattn/goveralls v0.0.3/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
@@ -1078,8 +1078,8 @@ github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/cobra v1.6.0 h1:42a0n6jwCot1pUmomAp4T7DeMD+20LFv4Q54pxLf2LI=
github.com/spf13/cobra v1.6.0/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
42 changes: 42 additions & 0 deletions osmomath/decimal.go
Original file line number Diff line number Diff line change
@@ -43,6 +43,13 @@ var (
oneInt = big.NewInt(1)
tenInt = big.NewInt(10)

// log_2(e)
// From: https://www.wolframalpha.com/input?i=log_2%28e%29+with+37+digits
logOfEbase2 = MustNewDecFromStr("1.442695040888963407359924681001892137")

// log_2(1.0001)
// From: https://www.wolframalpha.com/input?i=log_2%281.0001%29+to+33+digits
tickLogOf2 = MustNewDecFromStr("0.000144262291094554178391070900057480")
// initialized in init() since requires
// precision to be defined.
twoBigDec BigDec
@@ -917,3 +924,38 @@ func (x BigDec) LogBase2() BigDec {

return y
}

// Natural logarithm of x.
// Formula: ln(x) = log_2(x) / log_2(e)
func (x BigDec) Ln() BigDec {
log2x := x.LogBase2()

y := log2x.Quo(logOfEbase2)

return y
}

// log_1.0001(x) "tick" base logarithm
// Formula: log_1.0001(b) = log_2(b) / log_2(1.0001)
func (x BigDec) TickLog() BigDec {
log2x := x.LogBase2()

y := log2x.Quo(tickLogOf2)

return y
}

// log_a(x) custom base logarithm
// Formula: log_a(b) = log_2(b) / log_2(a)
func (x BigDec) CustomBaseLog(base BigDec) BigDec {
if base.LTE(ZeroDec()) || base.Equal(OneDec()) {
panic(fmt.Sprintf("log is not defined at base <= 0 or base == 1, base given (%s)", base))
}

log2x_argument := x.LogBase2()
log2x_base := base.LogBase2()

y := log2x_argument.Quo(log2x_base)

return y
}
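A small usage sketch of the new logarithm helpers; the expected values are taken from the test cases added below, and the import path is assumed:

```go
package main

import (
	"fmt"

	"github.com/osmosis-labs/osmosis/v12/osmomath" // assumed import path
)

func main() {
	x := osmomath.NewBigDec(512)

	fmt.Println(x.Ln())                                 // ~6.238324625039507784... (natural log)
	fmt.Println(x.TickLog())                            // ~62386.365360724158... (log base 1.0001)
	fmt.Println(x.CustomBaseLog(osmomath.NewBigDec(2))) // 9, since 2^9 = 512
}
```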
279 changes: 279 additions & 0 deletions osmomath/decimal_test.go
Original file line number Diff line number Diff line change
@@ -739,3 +739,282 @@ func (s *decimalTestSuite) TestLog2() {
})
}
}

func (s *decimalTestSuite) TestLn() {
var expectedErrTolerance = MustNewDecFromStr("0.000000000000000000000000000000000100")

tests := map[string]struct {
initialValue BigDec
expected BigDec

expectedPanic bool
}{
"log_e{-1}; invalid; panic": {
initialValue: OneDec().Neg(),
expectedPanic: true,
},
"log_e{0}; invalid; panic": {
initialValue: ZeroDec(),
expectedPanic: true,
},
"log_e{0.001} = -6.90775527898213705205397436405309262": {
initialValue: MustNewDecFromStr("0.001"),
// From: https://www.wolframalpha.com/input?i=log0.001+to+36+digits+with+36+decimals
expected: MustNewDecFromStr("-6.90775527898213705205397436405309262"),
},
"log_e{0.56171821941421412902170941} = -0.576754943768592057376050794884207180": {
initialValue: MustNewDecFromStr("0.56171821941421412902170941"),
// From: https://www.wolframalpha.com/input?i=log0.56171821941421412902170941+to+36+digits
expected: MustNewDecFromStr("-0.576754943768592057376050794884207180"),
},
"log_e{0.999912345} = -0.000087658841924023373535614212850888": {
initialValue: MustNewDecFromStr("0.999912345"),
// From: https://www.wolframalpha.com/input?i=log0.999912345+to+32+digits
expected: MustNewDecFromStr("-0.000087658841924023373535614212850888"),
},
"log_e{1} = 0": {
initialValue: NewBigDec(1),
expected: NewBigDec(0),
},
"log_e{e} = 1": {
initialValue: MustNewDecFromStr("2.718281828459045235360287471352662498"),
// From: https://www.wolframalpha.com/input?i=e+with+36+decimals
expected: NewBigDec(1),
},
"log_e{7} = 1.945910149055313305105352743443179730": {
initialValue: NewBigDec(7),
// From: https://www.wolframalpha.com/input?i=log7+up+to+36+decimals
expected: MustNewDecFromStr("1.945910149055313305105352743443179730"),
},
"log_e{512} = 6.238324625039507784755089093123589113": {
initialValue: NewBigDec(512),
// From: https://www.wolframalpha.com/input?i=log512+up+to+36+decimals
expected: MustNewDecFromStr("6.238324625039507784755089093123589113"),
},
"log_e{580} = 6.36302810354046502061849560850445238": {
initialValue: NewBigDec(580),
// From: https://www.wolframalpha.com/input?i=log580+up+to+36+decimals
expected: MustNewDecFromStr("6.36302810354046502061849560850445238"),
},
"log_e{1024.987654321} = 6.93243584693509415029056534690631614": {
initialValue: NewDecWithPrec(1024987654321, 9),
// From: https://www.wolframalpha.com/input?i=log1024.987654321+to+36+digits
expected: MustNewDecFromStr("6.93243584693509415029056534690631614"),
},
"log_e{912648174127941279170121098210.92821920190204131121} = 68.986147965719214790400745338243805015": {
initialValue: MustNewDecFromStr("912648174127941279170121098210.92821920190204131121"),
// From: https://www.wolframalpha.com/input?i=log912648174127941279170121098210.92821920190204131121+to+38+digits
expected: MustNewDecFromStr("68.986147965719214790400745338243805015"),
},
}

for name, tc := range tests {
s.Run(name, func() {
osmoassert.ConditionalPanic(s.T(), tc.expectedPanic, func() {
// Create a copy to test that the original was not modified.
// That is, that Ln() is non-mutative.
initialCopy := ZeroDec()
initialCopy.i.Set(tc.initialValue.i)

// system under test.
res := tc.initialValue.Ln()
require.True(DecApproxEq(s.T(), tc.expected, res, expectedErrTolerance))
require.Equal(s.T(), initialCopy, tc.initialValue)
})
})
}
}

func (s *decimalTestSuite) TestTickLog() {
tests := map[string]struct {
initialValue BigDec
expected BigDec

expectedErrTolerance BigDec
expectedPanic bool
}{
"log_1.0001{-1}; invalid; panic": {
initialValue: OneDec().Neg(),
expectedPanic: true,
},
"log_1.0001{0}; invalid; panic": {
initialValue: ZeroDec(),
expectedPanic: true,
},
"log_1.0001{0.001} = -69081.006609899112313305835611219486392199": {
initialValue: MustNewDecFromStr("0.001"),
// From: https://www.wolframalpha.com/input?i=log_1.0001%280.001%29+to+41+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000143031879"),
expected: MustNewDecFromStr("-69081.006609899112313305835611219486392199"),
},
"log_1.0001{0.999912345} = -0.876632247930741919880461740717176538": {
initialValue: MustNewDecFromStr("0.999912345"),
// From: https://www.wolframalpha.com/input?i=log_1.0001%280.999912345%29+to+36+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000138702"),
expected: MustNewDecFromStr("-0.876632247930741919880461740717176538"),
},
"log_1.0001{1} = 0": {
initialValue: NewBigDec(1),

expectedErrTolerance: ZeroDec(),
expected: NewBigDec(0),
},
"log_1.0001{1.0001} = 1": {
initialValue: MustNewDecFromStr("1.0001"),

expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000152500"),
expected: OneDec(),
},
"log_1.0001{512} = 62386.365360724158196763710649998441051753": {
initialValue: NewBigDec(512),
// From: https://www.wolframalpha.com/input?i=log_1.0001%28512%29+to+41+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000129292137"),
expected: MustNewDecFromStr("62386.365360724158196763710649998441051753"),
},
"log_1.0001{1024.987654321} = 69327.824629506998657531621822514042777198": {
initialValue: NewDecWithPrec(1024987654321, 9),
// From: https://www.wolframalpha.com/input?i=log_1.0001%281024.987654321%29+to+41+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000143836264"),
expected: MustNewDecFromStr("69327.824629506998657531621822514042777198"),
},
"log_1.0001{912648174127941279170121098210.92821920190204131121} = 689895.972156319183538389792485913311778672": {
initialValue: MustNewDecFromStr("912648174127941279170121098210.92821920190204131121"),
// From: https://www.wolframalpha.com/input?i=log_1.0001%28912648174127941279170121098210.92821920190204131121%29+to+42+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000001429936067"),
expected: MustNewDecFromStr("689895.972156319183538389792485913311778672"),
},
}

for name, tc := range tests {
s.Run(name, func() {
osmoassert.ConditionalPanic(s.T(), tc.expectedPanic, func() {
// Create a copy to test that the original was not modified.
// That is, that Ln() is non-mutative.
initialCopy := ZeroDec()
initialCopy.i.Set(tc.initialValue.i)

// system under test.
res := tc.initialValue.TickLog()
fmt.Println(name, res.Sub(tc.expected).Abs())
require.True(DecApproxEq(s.T(), tc.expected, res, tc.expectedErrTolerance))
require.Equal(s.T(), initialCopy, tc.initialValue)
})
})
}
}

func (s *decimalTestSuite) TestCustomBaseLog() {
tests := map[string]struct {
initialValue BigDec
base BigDec

expected BigDec
expectedErrTolerance BigDec

expectedPanic bool
}{
"log_2{-1}: normal base, invalid argument - panics": {
initialValue: NewBigDec(-1),
base: NewBigDec(2),
expectedPanic: true,
},
"log_2{0}: normal base, invalid argument - panics": {
initialValue: NewBigDec(0),
base: NewBigDec(2),
expectedPanic: true,
},
"log_(-1)(2): invalid base, normal argument - panics": {
initialValue: NewBigDec(2),
base: NewBigDec(-1),
expectedPanic: true,
},
"log_1(2): base cannot equal to 1 - panics": {
initialValue: NewBigDec(2),
base: NewBigDec(1),
expectedPanic: true,
},
"log_30(100) = 1.353984985057691049642502891262784015": {
initialValue: NewBigDec(100),
base: NewBigDec(30),
// From: https://www.wolframalpha.com/input?i=log_30%28100%29+to+37+digits
expectedErrTolerance: ZeroDec(),
expected: MustNewDecFromStr("1.353984985057691049642502891262784015"),
},
"log_0.2(0.99) = 0.006244624769837438271878639001855450": {
initialValue: MustNewDecFromStr("0.99"),
base: MustNewDecFromStr("0.2"),
// From: https://www.wolframalpha.com/input?i=log_0.2%280.99%29+to+34+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000000013"),
expected: MustNewDecFromStr("0.006244624769837438271878639001855450"),
},

"log_0.0001(500000) = -1.424742501084004701196565276318876743": {
initialValue: NewBigDec(500000),
base: NewDecWithPrec(1, 4),
// From: https://www.wolframalpha.com/input?i=log_0.0001%28500000%29+to+37+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000000003"),
expected: MustNewDecFromStr("-1.424742501084004701196565276318876743"),
},

"log_500000(0.0001) = -0.701881216598197542030218906945601429": {
initialValue: NewDecWithPrec(1, 4),
base: NewBigDec(500000),
// From: https://www.wolframalpha.com/input?i=log_500000%280.0001%29+to+36+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000000001"),
expected: MustNewDecFromStr("-0.701881216598197542030218906945601429"),
},

"log_10000(5000000) = 1.674742501084004701196565276318876743": {
initialValue: NewBigDec(5000000),
base: NewBigDec(10000),
// From: https://www.wolframalpha.com/input?i=log_10000%285000000%29+to+37+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000000002"),
expected: MustNewDecFromStr("1.674742501084004701196565276318876743"),
},
"log_0.123456789(1) = 0": {
initialValue: OneDec(),
base: MustNewDecFromStr("0.123456789"),

expectedErrTolerance: ZeroDec(),
expected: ZeroDec(),
},
"log_1111(1111) = 1": {
initialValue: NewBigDec(1111),
base: NewBigDec(1111),

expectedErrTolerance: ZeroDec(),
expected: OneDec(),
},

"log_1.123{1024.987654321} = 59.760484327223888489694630378785099461": {
initialValue: NewDecWithPrec(1024987654321, 9),
base: NewDecWithPrec(1123, 3),
// From: https://www.wolframalpha.com/input?i=log_1.123%281024.987654321%29+to+38+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000007686"),
expected: MustNewDecFromStr("59.760484327223888489694630378785099461"),
},

"log_1.123{912648174127941279170121098210.92821920190204131121} = 594.689327867863079177915648832621538986": {
initialValue: MustNewDecFromStr("912648174127941279170121098210.92821920190204131121"),
base: NewDecWithPrec(1123, 3),
// From: https://www.wolframalpha.com/input?i=log_1.123%28912648174127941279170121098210.92821920190204131121%29+to+39+digits
expectedErrTolerance: MustNewDecFromStr("0.000000000000000000000000000000077705"),
expected: MustNewDecFromStr("594.689327867863079177915648832621538986"),
},
}
for name, tc := range tests {
s.Run(name, func() {
osmoassert.ConditionalPanic(s.T(), tc.expectedPanic, func() {
// Create a copy to test that the original was not modified.
// That is, that Ln() is non-mutative.
initialCopy := ZeroDec()
initialCopy.i.Set(tc.initialValue.i)

// system under test.
res := tc.initialValue.CustomBaseLog(tc.base)
require.True(DecApproxEq(s.T(), tc.expected, res, tc.expectedErrTolerance))
require.Equal(s.T(), initialCopy, tc.initialValue)
})
})
}
}
42 changes: 0 additions & 42 deletions proto/osmosis/streamswap/v1/event.proto

This file was deleted.

27 changes: 0 additions & 27 deletions proto/osmosis/streamswap/v1/genesis.proto

This file was deleted.

34 changes: 0 additions & 34 deletions proto/osmosis/streamswap/v1/params.proto

This file was deleted.

63 changes: 0 additions & 63 deletions proto/osmosis/streamswap/v1/query.proto

This file was deleted.

114 changes: 0 additions & 114 deletions proto/osmosis/streamswap/v1/state.proto

This file was deleted.

141 changes: 0 additions & 141 deletions proto/osmosis/streamswap/v1/tx.proto

This file was deleted.

21 changes: 21 additions & 0 deletions scripts/check_release.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,21 @@
#!/bin/bash

# This script checks whether the existing upgrade handler's version is smaller than the current release.

VERSION=$1
latest_version=0
for f in app/upgrades/*; do
s_f=(${f//// })
version=${s_f[2]}
num_version=${version//[!0-9]/}
if [[ $num_version -gt $latest_version ]]; then
latest_version=$num_version
fi
done

VERSION=${VERSION[@]:1}
VERSION_MAJOR=(${VERSION//./ })
VERSION_MAJOR=${VERSION_MAJOR[0]}
if [[ $VERSION_MAJOR -gt $latest_version ]]; then
echo "MAJOR=1" >> $GITHUB_ENV
fi
92 changes: 92 additions & 0 deletions scripts/empty_upgrade_handler_gen.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,92 @@
#!/bin/bash

# 1) this script creates a new directory named "vX" in the app/upgrades folder, where X is the previous version + 1, containing an empty upgrade handler.
# 2) increases E2E_UPGRADE_VERSION in makefile by 1
# 3) adds new version to app.go

# It also ensures that all the imports use the current module version from go mod:
# (see: module=$(go mod edit -json | jq ".Module.Path") in this script)
# The GitHub workflow which calls this script can be found here: osmosis/.github/workflows/auto-update-upgrade.yml

latest_version=0
for f in app/upgrades/*; do
s_f=(${f//// })
version=${s_f[2]}
num_version=${version//[!0-9]/}
if [[ $num_version -gt $latest_version ]]; then
LATEST_FILE=$f
latest_version=$num_version
fi
done
version_create=$((latest_version+1))
new_file=./app/upgrades/v${version_create}

mkdir $new_file
CONSTANTS_FILE=$new_file/constants.go
UPGRADES_FILE=$new_file/upgrades.go
touch $CONSTANTS_FILE
touch $UPGRADES_FILE

module=$(go mod edit -json | jq ".Module.Path")
module=${module%?}
path=${module%???}

bracks='"'
# set packages
echo -e "package v${version_create}\n" >> $CONSTANTS_FILE
echo -e "package v${version_create}\n" >> $UPGRADES_FILE

# imports
echo "import (" >> $CONSTANTS_FILE
echo "import (" >> $UPGRADES_FILE

# set imports for constants.go
echo -e "\t$module/app/upgrades$bracks\n" >> $CONSTANTS_FILE
echo -e '\tstore "github.com/cosmos/cosmos-sdk/store/types"' >> $CONSTANTS_FILE

# set imports for upgrades.go
echo -e '\tsdk "github.com/cosmos/cosmos-sdk/types"' >> $UPGRADES_FILE
echo -e '\t"github.com/cosmos/cosmos-sdk/types/module"' >> $UPGRADES_FILE
echo -e '\tupgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"\n' >> $UPGRADES_FILE
echo -e "\t$module/app/keepers$bracks" >> $UPGRADES_FILE
echo -e "\t$module/app/upgrades$bracks" >> $UPGRADES_FILE

# close import
echo ")" >> $UPGRADES_FILE
echo -e ")\n" >> $CONSTANTS_FILE

# constants.go logic
echo "// UpgradeName defines the on-chain upgrade name for the Osmosis v$version_create upgrade." >> $CONSTANTS_FILE
echo "const UpgradeName = ${bracks}v$version_create$bracks" >> $CONSTANTS_FILE
echo "
var Upgrade = upgrades.Upgrade{
UpgradeName: UpgradeName,
CreateUpgradeHandler: CreateUpgradeHandler,
StoreUpgrades: store.StoreUpgrades{},
}" >> $CONSTANTS_FILE

# upgrades.go logic
echo "
func CreateUpgradeHandler(
mm *module.Manager,
configurator module.Configurator,
bpm upgrades.BaseAppParamManager,
keepers *keepers.AppKeepers,
) upgradetypes.UpgradeHandler {
return func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
return mm.RunMigrations(ctx, configurator, fromVM)
}
}" >> $UPGRADES_FILE

# change app/app.go file
app_file=./app/app.go
UPGRADES_LINE=$(grep -F upgrades.Upgrade{ $app_file)
UPGRADES_LINE="${UPGRADES_LINE%?}, v${version_create}.Upgrade}"
sed -i "s|.*upgrades.Upgrade{.*|$UPGRADES_LINE|" $app_file

PREV_IMPORT="v$latest_version $module/app/upgrades/v$latest_version$bracks"
NEW_IMPORT="v$version_create $module/app/upgrades/v$version_create$bracks"
sed -i "s|.*$PREV_IMPORT.*|\t$PREV_IMPORT\n\t$NEW_IMPORT|" $app_file

# change e2e version in makefile
sed -i "s/E2E_UPGRADE_VERSION := ${bracks}v$latest_version$bracks/E2E_UPGRADE_VERSION := ${bracks}v$version_create$bracks/" ./Makefile
2 changes: 1 addition & 1 deletion tests/e2e/configurer/chain/commands.go
Original file line number Diff line number Diff line change
@@ -105,7 +105,7 @@ func (n *NodeConfig) FailIBCTransfer(from, recipient, amount string) {

cmd := []string{"osmosisd", "tx", "ibc-transfer", "transfer", "transfer", "channel-0", recipient, amount, fmt.Sprintf("--from=%s", from)}

_, _, err := n.containerManager.ExecTxCmdWithSuccessString(n.t, n.chainId, n.Name, cmd, "rate limit exceeded")
_, _, err := n.containerManager.ExecTxCmdWithSuccessString(n.t, n.chainId, n.Name, cmd, "Rate Limit exceeded")
require.NoError(n.t, err)

n.LogActionF("Failed to send IBC transfer (as expected)")
4 changes: 2 additions & 2 deletions tests/e2e/containers/config.go
Original file line number Diff line number Diff line change
@@ -24,10 +24,10 @@ const (
// It should be uploaded to Docker Hub. OSMOSIS_E2E_SKIP_UPGRADE should be unset
// for this functionality to be used.
previousVersionOsmoRepository = "osmolabs/osmosis"
previousVersionOsmoTag = "12.1"
previousVersionOsmoTag = "12.2"
// Pre-upgrade repo/tag for osmosis initialization (this should be one version below upgradeVersion)
previousVersionInitRepository = "osmolabs/osmosis-e2e-init-chain"
previousVersionInitTag = "v12.1.0-e2e-v1"
previousVersionInitTag = "v12.2.0"
// Hermes repo/version for relayer
relayerRepository = "osmolabs/hermes"
relayerTag = "0.13.0"
9 changes: 6 additions & 3 deletions tests/e2e/e2e_setup_test.go
Original file line number Diff line number Diff line change
@@ -1,6 +1,3 @@
//go:build e2e
// +build e2e

package e2e

import (
@@ -15,6 +12,8 @@ import (
)

const (
// Environment variable signifying whether to run e2e tests.
e2eEnabledEnv = "OSMOSIS_E2E"
// Environment variable name to skip the upgrade tests
skipUpgradeEnv = "OSMOSIS_E2E_SKIP_UPGRADE"
// Environment variable name to skip the IBC tests
@@ -40,6 +39,10 @@ type IntegrationTestSuite struct {
}

func TestIntegrationTestSuite(t *testing.T) {
isEnabled := os.Getenv(e2eEnabledEnv)
if isEnabled != "True" {
t.Skip(fmt.Sprintf("e2e test is disabled. To run, set %s to True", e2eEnabledEnv))
}
suite.Run(t, new(IntegrationTestSuite))
}

131 changes: 126 additions & 5 deletions tests/e2e/e2e_test.go
Original file line number Diff line number Diff line change
@@ -1,15 +1,18 @@
//go:build e2e
// +build e2e

package e2e

import (
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"time"

paramsutils "github.com/cosmos/cosmos-sdk/x/params/client/utils"

ibcratelimittypes "github.com/osmosis-labs/osmosis/v12/x/ibc-rate-limit/types"

sdk "github.com/cosmos/cosmos-sdk/types"
coretypes "github.com/tendermint/tendermint/rpc/core/types"

@@ -110,6 +113,124 @@ func (s *IntegrationTestSuite) TestSuperfluidVoting() {
)
}

// Copy a file from A to B with io.Copy
func copyFile(a, b string) error {
source, err := os.Open(a)
if err != nil {
return err
}
defer source.Close()
destination, err := os.Create(b)
if err != nil {
return err
}
defer destination.Close()
_, err = io.Copy(destination, source)
if err != nil {
return err
}
return nil
}

func (s *IntegrationTestSuite) TestIBCTokenTransferRateLimiting() {
if s.skipIBC {
s.T().Skip("Skipping IBC tests")
}
chainA := s.configurer.GetChainConfig(0)
chainB := s.configurer.GetChainConfig(1)

node, err := chainA.GetDefaultNode()
s.NoError(err)

supply, err := node.QueryTotalSupply()
s.NoError(err)
osmoSupply := supply.AmountOf("uosmo")

// balance, err := node.QueryBalances(chainA.NodeConfigs[1].PublicAddress)
// s.NoError(err)

f, err := osmoSupply.ToDec().Float64()
s.NoError(err)

over := f * 0.02

// Sending >1%
chainA.SendIBC(chainB, chainB.NodeConfigs[0].PublicAddress, sdk.NewInt64Coin(initialization.OsmoDenom, int64(over)))

// copy the contract from x/rate-limit/testdata/
wd, err := os.Getwd()
s.NoError(err)
// go up two levels
projectDir := filepath.Dir(filepath.Dir(wd))
fmt.Println(wd, projectDir)
err = copyFile(projectDir+"/x/ibc-rate-limit/bytecode/rate_limiter.wasm", wd+"/scripts/rate_limiter.wasm")
s.NoError(err)
node.StoreWasmCode("rate_limiter.wasm", initialization.ValidatorWalletName)
chainA.LatestCodeId += 1
node.InstantiateWasmContract(
strconv.Itoa(chainA.LatestCodeId),
fmt.Sprintf(`{"gov_module": "%s", "ibc_module": "%s", "paths": [{"channel_id": "channel-0", "denom": "%s", "quotas": [{"name":"testQuota", "duration": 86400, "send_recv": [1, 1]}] } ] }`, node.PublicAddress, node.PublicAddress, initialization.OsmoToken.Denom),
initialization.ValidatorWalletName)

// Using code_id 1 because this is the only contract right now. This may need to change if more contracts are added
contracts, err := node.QueryContractsFromId(chainA.LatestCodeId)
s.NoError(err)
s.Require().Len(contracts, 1, "Wrong number of contracts for the rate limiter")

proposal := paramsutils.ParamChangeProposalJSON{
Title: "Param Change",
Description: "Changing the rate limit contract param",
Changes: paramsutils.ParamChangesJSON{
paramsutils.ParamChangeJSON{
Subspace: ibcratelimittypes.ModuleName,
Key: "contract",
Value: []byte(fmt.Sprintf(`"%s"`, contracts[0])),
},
},
Deposit: "625000000uosmo",
}
proposalJson, err := json.Marshal(proposal)
s.NoError(err)

node.SubmitParamChangeProposal(string(proposalJson), initialization.ValidatorWalletName)
chainA.LatestProposalNumber += 1

for _, n := range chainA.NodeConfigs {
n.VoteYesProposal(initialization.ValidatorWalletName, chainA.LatestProposalNumber)
}

// The value is returned as a string, so we have to unmarshal twice
type Params struct {
Key string `json:"key"`
Subspace string `json:"subspace"`
Value string `json:"value"`
}

s.Eventually(
func() bool {
var params Params
node.QueryParams(ibcratelimittypes.ModuleName, "contract", &params)
var val string
err := json.Unmarshal([]byte(params.Value), &val)
if err != nil {
return false
}
return val != ""
},
1*time.Minute,
10*time.Millisecond,
"Osmosis node failed to retrieve params",
)

// Sending <1%. Should work
chainA.SendIBC(chainB, chainB.NodeConfigs[0].PublicAddress, sdk.NewInt64Coin(initialization.OsmoDenom, 1))
// Sending >1%. Should fail
node.FailIBCTransfer(initialization.ValidatorWalletName, chainB.NodeConfigs[0].PublicAddress, fmt.Sprintf("%duosmo", int(over)))

// Removing the rate limit so it doesn't affect other tests
node.WasmExecute(contracts[0], `{"remove_path": {"channel_id": "channel-0", "denom": "uosmo"}}`, initialization.ValidatorWalletName)
}

// TestAddToExistingLockPostUpgrade ensures addToExistingLock works for locks created preupgrade.
func (s *IntegrationTestSuite) TestAddToExistingLockPostUpgrade() {
if s.skipUpgrade {
@@ -401,7 +522,7 @@ func (s *IntegrationTestSuite) TestStateSync() {

// start the state syncing node.
err = stateSynchingNode.Run()
s.NoError(err)
s.Require().NoError(err)

// ensure that the state syncing node catches up to the running node.
s.Require().Eventually(func() bool {
@@ -417,7 +538,7 @@ func (s *IntegrationTestSuite) TestStateSync() {

// stop the state syncing node.
err = chainA.RemoveNode(stateSynchingNode.Name)
s.NoError(err)
s.Require().NoError(err)
}

func (s *IntegrationTestSuite) TestExpeditedProposals() {
3 changes: 0 additions & 3 deletions tests/e2e/initialization/init_test.go
Original file line number Diff line number Diff line change
@@ -1,6 +1,3 @@
//go:build e2e
// +build e2e

package initialization_test

import (
205 changes: 169 additions & 36 deletions x/ibc-rate-limit/README.md
Original file line number Diff line number Diff line change
@@ -1,50 +1,134 @@
# # IBC Rate Limit
# IBC Rate Limit

The ``IBC Rate Limit`` middleware implements an [IBC Middleware](https://github.com/cosmos/ibc-go/blob/f57170b1d4dd202a3c6c1c61dcf302b6a9546405/docs/ibc/middleware/develop.md)
that wraps a [transfer](https://ibc.cosmos.network/main/apps/transfer/overview.html) app to regulate how much value can
flow in and out of the chain for a specific denom and channel.
The IBC Rate Limit module is responsible for adding a governance-configurable rate limit to IBC transfers.
This is a safety control, intended to protect assets on Osmosis in the event of:

## Contents
* a bug/hack on Osmosis
* a bug/hack on the counter-party chain
* a bug/hack in IBC itself

1. **[Concepts](#concepts)**
2. **[Parameters](#parameters)**
3. **[Contract](#contract)**
4. **[Integration](#integration)**
This is done in exchange for a potential (one-way) bridge liveness tradeoff, in periods of high deposits or withdrawals.

## Concepts
The architecture of this package is a minimal Go package which implements an [IBC Middleware](https://github.com/cosmos/ibc-go/blob/f57170b1d4dd202a3c6c1c61dcf302b6a9546405/docs/ibc/middleware/develop.md) that wraps the [ICS20 transfer](https://ibc.cosmos.network/main/apps/transfer/overview.html) app and calls into a cosmwasm contract.
The cosmwasm contract then has all of the actual IBC rate limiting logic.
The cosmwasm code can be found in the [`contracts`](./contracts/) package, with bytecode available in the [`bytecode`](./bytecode/) folder. Using the cosmwasm VM allows Osmosis chain governance to change this safety control without hard forks, via a parameter change proposal, which is a great mitigation enabling faster threat adaptivity.

### Overview
The module is currently in a state suitable for some initial governance-settable rate limits on high-value bridged assets.
It's not in its long-term / end state for all channels by any means, but it does act as a strong protection we
can instantiate today for high-value IBC connections.

The `x/ibc-rate-limit` module implements an IBC middleware and a transfer app wrapper. The middleware checks if the
amount of value of a specific denom transferred through a channel has exceeded a quota defined by governance for
that channel/denom. These checks are handled through a CosmWasm contract. The contract to be used for this is
configured via a parameter.
## Motivation

### Middleware
The motivation for IBC-rate-limit comes from empirical observation of blockchain bridge hacks in which a rate limit would have massively reduced the amount of stolen assets:

- [Polynetwork Bridge Hack ($611 million)](https://rekt.news/polynetwork-rekt/)
- [BNB Bridge Hack ($586 million)](https://rekt.news/bnb-bridge-rekt/)
- [Wormhole Bridge Hack ($326 million)](https://rekt.news/wormhole-rekt/)
- [Nomad Bridge Hack ($190 million)](https://rekt.news/nomad-rekt/)
- [Harmony Bridge Hack ($100 million)](https://rekt.news/harmony-rekt/) - (Would require rate limit + monitoring)
- [Dragonberry IBC bug](https://forum.cosmos.network/t/ibc-security-advisory-dragonberry/7702) (can't yet disclose amount at risk, but was saved due to being found first by altruistic Osmosis core developers)

In the presence of a software bug on Osmosis, IBC itself, or on a counterparty chain, we would like to prevent the bridge from being fully depegged.
This stems from the idea that a 30% asset depeg is ~infinitely better than a 100% depeg.
It's _crazy_ that today these complex bridged assets can instantly go to 0 in the event of a bug.
The goal of a rate limit is to raise an alert that something has potentially gone wrong, allowing validators and developers to have time to analyze, react, and protect larger portions of user funds.

The thesis is that it is worthwhile to sacrifice liveness in the case of legitimate demand to send extreme amounts of funds, in order to prevent the terrible long-tail risk of losing all funds.
Rate limits aren't the end-all of safety controls; they're merely the simplest automated one. More should be explored and added onto IBC!

## Rate limit types

We express rate limits in time-based periods.
This means we set rate limits for (say) 6-hour, daily, and weekly intervals.
The rate limit for a given time period stores the relevant amount of assets at the start of the rate limit.
Rate limits are then defined on percentage terms of the asset.
The time windows for rate limits are currently _not_ rolling, they have discrete start/end times.

We allow setting separate rate limits for the inflow and outflow of assets.
We do all of our rate limits based on the _net flow_ of assets on a channel pair. This prevents DoS issues where someone repeatedly sends assets back and forth to trigger rate limits and break liveness.

We currently envision creating two kinds of rate limits:

* Per denomination rate limits
- allows safety statements like "Only 30% of Stars on Osmosis can flow out in one day" or "The amount of Atom on Osmosis can at most double per day".
* Per channel rate limits
- Limit the total inflow and outflow on a given IBC channel, based on "USDC" equivalent, using Osmosis as the price oracle.

We currently only implement per-denomination rate limits for non-native assets. We do not yet implement channel-based rate limits.

Currently these rate limits automatically "expire" at the end of the quota duration. TODO: Think of better designs here. E.g. can we have a constant number of subsequent quotas start filled? Or perhaps harmonically decreasing amounts of next few quotas pre-filled? Halted until DAO override seems not-great.
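A minimal Go sketch of the percentage-based, net-flow check described above; the types and field names are hypothetical, and the real logic lives in the cosmwasm contract:

```go
package ratelimitsketch

import sdk "github.com/cosmos/cosmos-sdk/types"

// Flow tracks the value moved through a (denom, channel) path in the current window.
type Flow struct {
	Inflow, Outflow sdk.Int
}

// Quota is expressed as a percentage of the asset amount at the window's start.
type Quota struct {
	MaxPercentSend sdk.Int // allowed net outflow, as a percent of channel value
	ChannelValue   sdk.Int // asset amount recorded at the start of the window
}

// ExceedsSendQuota reports whether sending `amount` would push the *net* outflow
// (outflow - inflow) past the allowed percentage of the channel value.
func ExceedsSendQuota(f Flow, q Quota, amount sdk.Int) bool {
	netOutflow := f.Outflow.Add(amount).Sub(f.Inflow)
	maxAllowed := q.ChannelValue.Mul(q.MaxPercentSend).Quo(sdk.NewInt(100))
	return netOutflow.GT(maxAllowed)
}
```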

## Instantiating rate limits

Today all rate limit quotas must be set manually by governance.
In the future, we should design towards some conservative rate limit to add as a safety-backstop automatically for channels.
Ideas for how this could look:

* One month after a channel has been created, automatically add in some USDC-based rate limit
* One month after governance incentivizes an asset, add on a per-denomination rate limit.

Definitely needs far more ideation and iteration!
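For reference, the e2e test added in this PR instantiates the contract with a single governance-set quota; the instantiate message looks like the following (the two addresses are placeholders):

```go
// One path for uosmo on channel-0 with a 1% send and 1% receive quota over a
// 86400-second (24h) window, as used in tests/e2e/e2e_test.go.
initMsg := fmt.Sprintf(
	`{"gov_module": "%s", "ibc_module": "%s", "paths": [{"channel_id": "channel-0", "denom": "uosmo", "quotas": [{"name":"testQuota", "duration": 86400, "send_recv": [1, 1]}] } ] }`,
	govModuleAddr, ibcModuleAddr, // placeholder addresses
)
```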

## Parameterizing the rate limit

One consideration is that we don't want any rate limit timespan that's too short, e.g. one that leaves too little time for humans to react. So we wouldn't want a 1-hour rate limit unless we think that, if it's hit, it could be assessed within an hour.

### Handling rate limit boundaries

We want to be safe against the case where say we have a daily rate limit ending at a given time, and an adversary attempts to attack near the boundary window.
We would not like them to be able to "double extract funds" by timing their extraction near a window boundary.

Admittedly, not a lot of thought has been put into how to deal with this well.
Right now we envision simply handling this by saying if you want a quota of duration D, instead include two quotas of duration D, but offset by `D/2` from each other.

Ideally we can change windows to be more 'rolling' in the future, to avoid this overhead and more cleanly handle the problem. (Perhaps rolling ~1 hour at a time)

### Inflow parameterization

The "Inflow" side of a rate limit is essentially protection against unforeseen bug on a counterparty chain.
This can be quite conservative (e.g. bridged amount doubling in one week). This covers a few cases:

* Counter-party chain B having a token theft attack
- TODO: description of how this looks
* Counter-party chain B runaway mint
- TODO: description of how this looks
* IBC theft
- TODO: description of how this looks

It does get more complex when the counterparty chain is itself a DEX, but this is still much more protection than nothing.

### Outflow parameterization

The "Outflow" side of a rate limit is protection against a bug on Osmosis OR IBC.
This has the potential for much more user-frustrating issues if set too low,
e.g. if there's some event that causes many people to suddenly withdraw many STARS or many USDC.

So this parameterization has to contend with a tradeoff between withdrawal liveness in high-volatility periods and being a crucial safety rail in the event of an on-Osmosis bug.

TODO: Better fill out

### Example suggested parameterization

## Code structure

As mentioned at the beginning of the README, the Go code is a relatively minimal ICS 20 wrapper that dispatches relevant calls to a cosmwasm contract implementing the rate limiting functionality.

### Go Middleware

To achieve this, the middleware needs to implement the `porttypes.Middleware` interface and the
`porttypes.ICS4Wrapper` interface. This allows the middleware to send and receive IBC messages by wrapping
any IBC module, and be used as an ICS4 wrapper by a transfer module (for sending packets or writing acknowledgements).

Of those interfaces, just the following methods have custom logic:

* `ICS4Wrapper.SendPacket` adds tracking of value sent via an ibc channel
* `Middleware.OnRecvPacket` adds tracking of value received via an ibc channel
* `Middleware.OnAcknowledgementPacket` undos the tracking of a sent packet if the acknowledgment is not a success
* `OnTimeoutPacket` undos the tracking of a sent packet if the packet times out (is not relayed)
* `ICS4Wrapper.SendPacket` forwards to contract, with intent of tracking of value sent via an ibc channel
* `Middleware.OnRecvPacket` forwards to contract, with intent of tracking of value received via an ibc channel
* `Middleware.OnAcknowledgementPacket` forwards to contract, with intent of undoing the tracking of a sent packet if the acknowledgment is not a success
* `OnTimeoutPacket` forwards to contract, with intent of undoing the tracking of a sent packet if the packet times out (is not relayed)

All other methods from those interfaces are passthroughs to the underlying implementations.
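As a simplified sketch of the dispatch pattern (not this PR's exact implementation; field names and the sudo message shape are assumptions), `SendPacket` roughly forwards to the contract before handing the packet to the wrapped channel:

```go
// Simplified sketch; the authoritative logic lives in x/ibc-rate-limit.
func (w *ICS4Wrapper) SendPacket(ctx sdk.Context, chanCap *capabilitytypes.Capability, packet exported.PacketI) error {
	if w.ContractKeeper != nil {
		// Ask the rate-limiting contract whether this send is allowed.
		msg := []byte(fmt.Sprintf(`{"send_packet": {"channel_id": %q}}`, packet.GetSourceChannel()))
		if _, err := w.ContractKeeper.Sudo(ctx, w.contractAddress, msg); err != nil {
			return err // e.g. a rate limit exceeded error surfaced by the contract
		}
	}
	// Pass through to the wrapped ICS4 implementation (the channel keeper).
	return w.channel.SendPacket(ctx, chanCap, packet)
}
```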

### Contract Concepts

The tracking contract uses the following concepts

1. **RateLimit** - tracks the value flow transferred and the quota for a path.
2. **Path** - is a (denom, channel) pair.
3. **Flow** - tracks the value that has moved through a path during the current time window.
4. **Quota** - is the percentage of the denom's total value that can be transferred through the path in a given period of time (duration)

## Parameters
#### Parameters

The middleware uses the following parameters:

1. **ContractAddress** -
The contract address is the address of an instantiated version of the contract provided under `./contracts/`

### Cosmwasm Contract Concepts

Something to keep in mind with all of the code is that we have to reason separately about every item in the following matrix:

| Native Token | Non-Native Token |
|----------------------|--------------------------|
| Send Native Token | Send Non-Native Token |
| Receive Native Token | Receive Non-Native Token |
| Timeout Native Send | Timeout Non-native Send |

(Error ACK can reuse the same code as timeout)
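
One concrete consequence of the native vs. non-native split is how a packet is mapped to a rate-limited path. The sketch below mirrors the intent of `Packet::path_data` in `src/packet.rs` (see the diff further down); it is illustrative rather than the contract's exact code:

```rust
/// Sketch mirroring Packet::path_data: pick the (channel, denom) key a transfer is
/// rate limited under. Non-native (ibc/...) denoms are limited per channel, while
/// native tokens are tracked globally under the special "any" channel.
fn path_for(denom: &str, local_channel: &str) -> (String, String) {
    let channel = if denom.starts_with("ibc/") {
        local_channel.to_string()
    } else {
        "any".to_string()
    };
    (channel, denom.to_string())
}
```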

TODO: Spend more time on sudo messages in the following description. We need to better describe how we map the quota concepts onto the code.
We need to describe how we get the quota's beginning balance, and that it is different for sends and receives.
Explain the intricacies of tracking that a timeout and/or ErrorAck must come from the same quota, else we ignore its update to the quotas.


The tracking contract uses the following concepts

1. **RateLimit** - tracks the value flow transferred and the quota for a path.
2. **Path** - is a (denom, channel) pair.
3. **Flow** - tracks the value that has moved through a path during the current time window.
4. **Quota** - is the percentage of the denom's total value that can be transferred through the path in a given period of time (duration)
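
As a rough sketch of how these concepts combine (mirroring `Quota::capacity` and `Flow::exceeds` from `src/state.rs` in the diff further down; the function below is illustrative, not the contract's API):

```rust
use cosmwasm_std::Uint256;

/// Illustrative check for the send direction: turn the quota percentage into an absolute
/// capacity against the cached channel value, then compare it with the flow's net outflow.
fn outflow_exceeds_quota(
    channel_value: Uint256,
    max_percentage_send: u32,
    inflow: Uint256,
    outflow: Uint256,
) -> bool {
    let max_out = channel_value * Uint256::from(max_percentage_send) / Uint256::from(100u32);
    let balance_out = outflow.saturating_sub(inflow); // net value that has left via this path
    balance_out > max_out
}
```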

#### Messages

The contract specifies the following messages:

##### Query

* GetQuotas - Returns the quotas for a path

##### Exec

* AddPath - Adds a list of quotas for a path
* RemovePath - Removes a path
* ResetPathQuota - If a rate limit has been reached, the contract's governance address can reset the quota so that transfers are allowed again

##### Sudo

Sudo messages can only be executed by the chain.

* SendPacket - Increments the amount used out of the send quota and checks that the send is allowed. If it isn't, it will return a RateLimitExceeded error
* RecvPacket - Increments the amount used out of the receive quota and checks that the receive is allowed. If it isn't, it will return a RateLimitExceeded error
* UndoSend - If a send has failed, the undo message is used to remove its cost from the send quota
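
A minimal sketch of the send/undo bookkeeping these messages drive (mirroring `Flow::add_flow` and `Flow::undo_flow` in `src/state.rs`; the amounts are placeholders):

```rust
use cosmwasm_std::Uint256;

/// SendPacket adds the amount to the outflow; UndoSend (after a timeout or error ack)
/// removes it again, freeing the quota for other transfers.
fn send_then_undo() {
    let funds = Uint256::from(300u32);
    let mut outflow = Uint256::from(0u32);

    outflow = outflow + funds; // SendPacket: track the outgoing value
    assert_eq!(outflow, Uint256::from(300u32));

    outflow = outflow.saturating_sub(funds); // UndoSend: revert the tracked value
    assert_eq!(outflow, Uint256::from(0u32));
}
```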

### Integration

The rate limit middleware wraps the `transferIBCModule` and is added as the entry route for IBC transfers.

The module is also provided to the underlying `transferIBCModule` as its `ICS4Wrapper`; previously, this would have
pointed to a channel, which also implements the `ICS4Wrapper` interface.

This integration can be seen in [osmosis/app/keepers/keepers.go](https://github.com/osmosis-labs/osmosis/blob/main/app/keepers/keepers.go)

## Testing strategy

TODO: Fill this out

## Known Future work

Items that have been highlighted above:

* Making automated rate limits get added for channels, instead of manual configuration only
* Improving parameterization strategies / data analysis
* Adding the USDC based rate limits
* We need better strategies for how rate limits "expire".

Not yet highlighted:

* Making monitoring tooling to know when rate limits are being approached and when they're hit
* Making tooling to easily give us summaries we can use to reason about "bug or not bug" in the event of a rate limit being hit
* Enabling ways to pre-declare large transfers so as to not hit rate limits.
* Perhaps you can on-chain declare intent to send these assets with a large delay, that raises monitoring but bypasses rate limits?
* Maybe contract-based tooling to split up the transfer suffices?
* Strategies to account for high volatility periods without hitting rate limits
* Can imagine "Hop network" style markets emerging
* Could imagine tying it into looking at AMM volatility, or off-chain oracles
* but these are both things we should be wary of security bugs in.
* Maybe [constraint based programming with tracking of provenance](https://youtu.be/HB5TrK7A4pI?t=2852) as a solution
* Analyze changing denom-based rate limits to just an overall withdrawal amount for Osmosis
Binary file added x/ibc-rate-limit/bytecode/rate_limiter.wasm
17 changes: 3 additions & 14 deletions x/ibc-rate-limit/contracts/rate-limiter/Cargo.toml
@@ -15,17 +15,6 @@ exclude = [
[lib]
crate-type = ["cdylib", "rlib"]

[profile.release]
opt-level = 3
debug = false
rpath = false
lto = true
debug-assertions = false
codegen-units = 1
panic = 'abort'
incremental = false
overflow-checks = true

[features]
# for more explicit tests, cargo test --features=backtraces
backtraces = ["cosmwasm-std/backtraces"]
@@ -43,14 +32,14 @@ optimize = """docker run --rm -v "$(pwd)":/code \
"""

[dependencies]
cosmwasm-std = "1.0.0"
cosmwasm-storage = "1.0.0"
cosmwasm-std = "1.1.0"
cosmwasm-storage = "1.1.0"
cosmwasm-schema = "1.1.0"
cw-storage-plus = "0.13.2"
cw2 = "0.13.2"
schemars = "0.8.8"
serde = { version = "1.0.137", default-features = false, features = ["derive"] }
thiserror = { version = "1.0.31" }

[dev-dependencies]
cosmwasm-schema = "1.0.0"
cw-multi-test = "0.13.2"
13 changes: 13 additions & 0 deletions x/ibc-rate-limit/contracts/rate-limiter/examples/schema.rs
@@ -0,0 +1,13 @@
use cosmwasm_schema::write_api;

use rate_limiter::msg::{ExecuteMsg, InstantiateMsg, MigrateMsg, QueryMsg, SudoMsg};

fn main() {
write_api! {
instantiate: InstantiateMsg,
query: QueryMsg,
execute: ExecuteMsg,
sudo: SudoMsg,
migrate: MigrateMsg,
}
}
71 changes: 37 additions & 34 deletions x/ibc-rate-limit/contracts/rate-limiter/src/contract_tests.rs
@@ -2,7 +2,7 @@

use crate::{contract::*, ContractError};
use cosmwasm_std::testing::{mock_dependencies, mock_env, mock_info};
use cosmwasm_std::{from_binary, Addr, Attribute};
use cosmwasm_std::{from_binary, Addr, Attribute, Uint256};

use crate::helpers::tests::verify_query_response;
use crate::msg::{InstantiateMsg, PathMsg, QueryMsg, QuotaMsg, SudoMsg};
@@ -52,8 +52,8 @@ fn consume_allowance() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let res = sudo(deps.as_mut(), mock_env(), msg).unwrap();

@@ -64,8 +64,8 @@ fn consume_allowance() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let err = sudo(deps.as_mut(), mock_env(), msg).unwrap_err();
assert!(matches!(err, ContractError::RateLimitExceded { .. }));
@@ -91,14 +91,14 @@ fn symetric_flows_dont_consume_allowance() {
let send_msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let recv_msg = SudoMsg::RecvPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_000_u32.into(),
funds: 300_u32.into(),
};

let res = sudo(deps.as_mut(), mock_env(), send_msg.clone()).unwrap();
@@ -154,8 +154,8 @@ fn asymetric_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 60,
channel_value: 3_060_u32.into(),
funds: 60_u32.into(),
};
let res = sudo(deps.as_mut(), mock_env(), msg).unwrap();
let Attribute { key, value } = &res.attributes[4];
@@ -166,8 +166,8 @@ fn asymetric_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 60,
channel_value: 3_060_u32.into(),
funds: 60_u32.into(),
};

let res = sudo(deps.as_mut(), mock_env(), msg).unwrap();
@@ -180,8 +180,8 @@ fn asymetric_quotas() {
let recv_msg = SudoMsg::RecvPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 30,
channel_value: 3_000_u32.into(),
funds: 30_u32.into(),
};
let res = sudo(deps.as_mut(), mock_env(), recv_msg).unwrap();
let Attribute { key, value } = &res.attributes[3];
@@ -195,8 +195,8 @@ fn asymetric_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 60,
channel_value: 3_060_u32.into(),
funds: 60_u32.into(),
};
let err = sudo(deps.as_mut(), mock_env(), msg.clone()).unwrap_err();
assert!(matches!(err, ContractError::RateLimitExceded { .. }));
@@ -205,8 +205,8 @@ fn asymetric_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 30,
channel_value: 3_060_u32.into(),
funds: 30_u32.into(),
};
let res = sudo(deps.as_mut(), mock_env(), msg.clone()).unwrap();
let Attribute { key, value } = &res.attributes[3];
@@ -246,8 +246,8 @@ fn query_state() {
assert_eq!(value[0].quota.max_percentage_send, 10);
assert_eq!(value[0].quota.max_percentage_recv, 10);
assert_eq!(value[0].quota.duration, RESET_TIME_WEEKLY);
assert_eq!(value[0].flow.inflow, 0);
assert_eq!(value[0].flow.outflow, 0);
assert_eq!(value[0].flow.inflow, Uint256::from(0_u32));
assert_eq!(value[0].flow.outflow, Uint256::from(0_u32));
assert_eq!(
value[0].flow.period_end,
env.block.time.plus_seconds(RESET_TIME_WEEKLY)
@@ -256,16 +256,16 @@ fn query_state() {
let send_msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
sudo(deps.as_mut(), mock_env(), send_msg.clone()).unwrap();

let recv_msg = SudoMsg::RecvPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 30,
channel_value: 3_000_u32.into(),
funds: 30_u32.into(),
};
sudo(deps.as_mut(), mock_env(), recv_msg.clone()).unwrap();

@@ -277,8 +277,8 @@ fn query_state() {
"weekly",
(10, 10),
RESET_TIME_WEEKLY,
30,
300,
30_u32.into(),
300_u32.into(),
env.block.time.plus_seconds(RESET_TIME_WEEKLY),
);
}
@@ -317,8 +317,8 @@ fn bad_quotas() {
"bad_quota",
(100, 100),
200,
0,
0,
0_u32.into(),
0_u32.into(),
env.block.time.plus_seconds(200),
);
}
@@ -343,21 +343,24 @@ fn undo_send() {
let send_msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let undo_msg = SudoMsg::UndoSend {
channel_id: format!("channel"),
denom: format!("denom"),
funds: 300,
funds: 300_u32.into(),
};

sudo(deps.as_mut(), mock_env(), send_msg.clone()).unwrap();

let trackers = RATE_LIMIT_TRACKERS
.load(&deps.storage, ("channel".to_string(), "denom".to_string()))
.unwrap();
assert_eq!(trackers.first().unwrap().flow.outflow, 300);
assert_eq!(
trackers.first().unwrap().flow.outflow,
Uint256::from(300_u32)
);
let period_end = trackers.first().unwrap().flow.period_end;
let channel_value = trackers.first().unwrap().quota.channel_value;

@@ -366,7 +369,7 @@ fn undo_send() {
let trackers = RATE_LIMIT_TRACKERS
.load(&deps.storage, ("channel".to_string(), "denom".to_string()))
.unwrap();
assert_eq!(trackers.first().unwrap().flow.outflow, 0);
assert_eq!(trackers.first().unwrap().flow.outflow, Uint256::from(0_u32));
assert_eq!(trackers.first().unwrap().flow.period_end, period_end);
assert_eq!(trackers.first().unwrap().quota.channel_value, channel_value);
}
8 changes: 6 additions & 2 deletions x/ibc-rate-limit/contracts/rate-limiter/src/error.rs
@@ -1,4 +1,4 @@
use cosmwasm_std::{StdError, Timestamp};
use cosmwasm_std::{StdError, Timestamp, Uint256};
use thiserror::Error;

#[derive(Error, Debug)]
@@ -9,10 +9,14 @@ pub enum ContractError {
#[error("Unauthorized")]
Unauthorized {},

#[error("IBC Rate Limit exceded for channel {channel:?} and denom {denom:?}. Try again after {reset:?}")]
#[error("IBC Rate Limit exceeded for {channel}/{denom}. Tried to transfer {amount} which exceeds capacity on the '{quota_name}' quota ({used}/{max}). Try again after {reset:?}")]
RateLimitExceded {
channel: String,
denom: String,
amount: Uint256,
quota_name: String,
used: Uint256,
max: Uint256,
reset: Timestamp,
},

12 changes: 6 additions & 6 deletions x/ibc-rate-limit/contracts/rate-limiter/src/execute.rs
@@ -159,8 +159,8 @@ mod tests {
"daily",
(3, 5),
1600,
0,
0,
0_u32.into(),
0_u32.into(),
env.block.time.plus_seconds(1600),
);

@@ -208,8 +208,8 @@ mod tests {
"daily",
(3, 5),
1600,
0,
0,
0_u32.into(),
0_u32.into(),
env.block.time.plus_seconds(1600),
);

@@ -241,8 +241,8 @@ mod tests {
"different",
(50, 30),
5000,
0,
0,
0_u32.into(),
0_u32.into(),
env.block.time.plus_seconds(5000),
);
}
6 changes: 3 additions & 3 deletions x/ibc-rate-limit/contracts/rate-limiter/src/helpers.rs
@@ -37,7 +37,7 @@ impl RateLimitingContract {
}

pub mod tests {
use cosmwasm_std::Timestamp;
use cosmwasm_std::{Timestamp, Uint256};

use crate::state::RateLimit;

@@ -46,8 +46,8 @@ pub mod tests {
quota_name: &str,
send_recv: (u32, u32),
duration: u64,
inflow: u128,
outflow: u128,
inflow: Uint256,
outflow: Uint256,
period_end: Timestamp,
) {
assert_eq!(value.quota.name, quota_name);
76 changes: 38 additions & 38 deletions x/ibc-rate-limit/contracts/rate-limiter/src/integration_tests.rs
@@ -42,7 +42,7 @@ fn mock_app() -> App {
// Instantiate the contract
fn proper_instantiate(paths: Vec<PathMsg>) -> (App, RateLimitingContract) {
let mut app = mock_app();
let cw_template_id = app.store_code(contract_template());
let cw_code_id = app.store_code(contract_template());

let msg = InstantiateMsg {
gov_module: Addr::unchecked(GOV_ADDR),
@@ -52,7 +52,7 @@ fn proper_instantiate(paths: Vec<PathMsg>) -> (App, RateLimitingContract) {

let cw_rate_limit_contract_addr = app
.instantiate_contract(
cw_template_id,
cw_code_id,
Addr::unchecked(GOV_ADDR),
&msg,
&[],
@@ -82,8 +82,8 @@ fn expiration() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
let res = app.sudo(cosmos_msg).unwrap();
@@ -105,8 +105,8 @@ fn expiration() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
let _err = app.sudo(cosmos_msg).unwrap_err();
@@ -123,8 +123,8 @@ fn expiration() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_300_u32.into(),
funds: 300_u32.into(),
};

let cosmos_msg = cw_rate_limit_contract.sudo(msg);
@@ -162,8 +162,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap();
@@ -172,8 +172,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -188,8 +188,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};

let cosmos_msg = cw_rate_limit_contract.sudo(msg);
@@ -207,8 +207,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap();
@@ -224,8 +224,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -240,8 +240,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -257,8 +257,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -272,8 +272,8 @@ fn multiple_quotas() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 101_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap();
@@ -296,8 +296,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 1,
channel_value: 100_u32.into(),
funds: 1_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap();
@@ -306,8 +306,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100,
funds: 3,
channel_value: 100_u32.into(),
funds: 3_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -316,8 +316,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 100000,
funds: 3,
channel_value: 100000_u32.into(),
funds: 3_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg);
app.sudo(cosmos_msg).unwrap_err();
@@ -336,8 +336,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 10_000,
funds: 100,
channel_value: 10_000_u32.into(),
funds: 100_u32.into(),
};

let cosmos_msg = cw_rate_limit_contract.sudo(msg);
@@ -353,8 +353,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 10_000,
funds: 100,
channel_value: 10_000_u32.into(),
funds: 100_u32.into(),
};

let cosmos_msg = cw_rate_limit_contract.sudo(msg);
@@ -364,8 +364,8 @@ fn channel_value_cached() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 1,
funds: 75,
channel_value: 1_u32.into(),
funds: 75_u32.into(),
};

let cosmos_msg = cw_rate_limit_contract.sudo(msg);
@@ -380,8 +380,8 @@ fn add_paths_later() {
let msg = SudoMsg::SendPacket {
channel_id: format!("channel"),
denom: format!("denom"),
channel_value: 3_000,
funds: 300,
channel_value: 3_000_u32.into(),
funds: 300_u32.into(),
};
let cosmos_msg = cw_rate_limit_contract.sudo(msg.clone());
let res = app.sudo(cosmos_msg).unwrap();
29 changes: 14 additions & 15 deletions x/ibc-rate-limit/contracts/rate-limiter/src/msg.rs
@@ -1,4 +1,5 @@
use cosmwasm_std::Addr;
use cosmwasm_schema::{cw_serde, QueryResponses};
use cosmwasm_std::{Addr, Uint256};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

@@ -44,7 +45,7 @@ impl QuotaMsg {

/// Initialize the contract with the address of the IBC module and any existing channels.
/// Only the ibc module is allowed to execute actions on this contract
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema)]
#[cw_serde]
pub struct InstantiateMsg {
pub gov_module: Addr,
pub ibc_module: Addr,
@@ -53,8 +54,7 @@ pub struct InstantiateMsg {

/// The caller (IBC module) is responsible for correctly calculating the funds
/// being sent through the channel
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[cw_serde]
pub enum ExecuteMsg {
AddPath {
channel_id: String,
@@ -72,34 +72,33 @@ pub enum ExecuteMsg {
},
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
#[returns(Vec<crate::state::RateLimit>)]
GetQuotas { channel_id: String, denom: String },
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[cw_serde]
pub enum SudoMsg {
SendPacket {
channel_id: String,
denom: String,
channel_value: u128,
funds: u128,
channel_value: Uint256,
funds: Uint256,
},
RecvPacket {
channel_id: String,
denom: String,
channel_value: u128,
funds: u128,
channel_value: Uint256,
funds: Uint256,
},
UndoSend {
channel_id: String,
denom: String,
funds: u128,
funds: Uint256,
},
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[cw_serde]
pub enum MigrateMsg {}
64 changes: 64 additions & 0 deletions x/ibc-rate-limit/contracts/rate-limiter/src/packet.rs
@@ -0,0 +1,64 @@
use cosmwasm_std::{Addr, Deps, Timestamp};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub struct Height {
/// Previously known as "epoch"
revision_number: Option<u64>,

/// The height of a block
revision_height: Option<u64>,
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub struct FungibleTokenData {
denom: String,
amount: u128,
sender: Addr,
receiver: Addr,
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub struct Packet {
pub sequence: u64,
pub source_port: String,
pub source_channel: String,
pub destination_port: String,
pub destination_channel: String,
pub data: FungibleTokenData,
pub timeout_height: Height,
pub timeout_timestamp: Option<Timestamp>,
}

impl Packet {
pub fn channel_value(&self, _deps: Deps) -> u128 {
// let balance = deps.querier.query_all_balances("address", self.data.denom);
// deps.querier.sup
return 125000000000011250 * 2;
}

pub fn get_funds(&self) -> u128 {
return self.data.amount;
}

fn local_channel(&self) -> String {
// Pick the appropriate channel depending on whether this is a send or a recv
return self.destination_channel.clone();
}

fn local_demom(&self) -> String {
// This should actually convert the denom from the packet to the osmosis denom, but for now, just returning this
return self.data.denom.clone();
}

pub fn path_data(&self) -> (String, String) {
let denom = self.local_demom();
let channel = if denom.starts_with("ibc/") {
self.local_channel()
} else {
"any".to_string() // native tokens are rate limited globally
};

return (channel, denom);
}
}
114 changes: 85 additions & 29 deletions x/ibc-rate-limit/contracts/rate-limiter/src/state.rs
@@ -1,4 +1,4 @@
use cosmwasm_std::{Addr, Timestamp};
use cosmwasm_std::{Addr, Timestamp, Uint256};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::cmp;
@@ -62,16 +62,15 @@ pub enum FlowType {
/// This is a design decision to avoid the period calculations and thus reduce gas consumption
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema, Copy)]
pub struct Flow {
// Q: Do we have edge case issues with inflow/outflow being u128, e.g. what if a token has super high precision.
pub inflow: u128,
pub outflow: u128,
pub inflow: Uint256,
pub outflow: Uint256,
pub period_end: Timestamp,
}

impl Flow {
pub fn new(
inflow: impl Into<u128>,
outflow: impl Into<u128>,
inflow: impl Into<Uint256>,
outflow: impl Into<Uint256>,
now: Timestamp,
duration: u64,
) -> Self {
@@ -87,22 +86,31 @@ impl Flow {
/// (balance_in, balance_out) where balance_in in is how much has been
/// transferred into the flow, and balance_out is how much value transferred
/// out.
pub fn balance(&self) -> (u128, u128) {
pub fn balance(&self) -> (Uint256, Uint256) {
(
self.inflow.saturating_sub(self.outflow),
self.outflow.saturating_sub(self.inflow),
)
}

/// checks if the flow, in the current state, has exceeded a max allowance
pub fn exceeds(&self, direction: &FlowType, max_inflow: u128, max_outflow: u128) -> bool {
pub fn exceeds(&self, direction: &FlowType, max_inflow: Uint256, max_outflow: Uint256) -> bool {
let (balance_in, balance_out) = self.balance();
match direction {
FlowType::In => balance_in > max_inflow,
FlowType::Out => balance_out > max_outflow,
}
}

/// returns the balance in a direction. This is used for displaying cleaner errors
pub fn balance_on(&self, direction: &FlowType) -> Uint256 {
let (balance_in, balance_out) = self.balance();
match direction {
FlowType::In => balance_in,
FlowType::Out => balance_out,
}
}

/// If now is greater than the period_end, the Flow is considered expired.
pub fn is_expired(&self, now: Timestamp) -> bool {
self.period_end < now
@@ -113,21 +121,21 @@ impl Flow {
/// Expire resets the Flow to start tracking the value transfer from the
/// moment this method is called.
pub fn expire(&mut self, now: Timestamp, duration: u64) {
self.inflow = 0;
self.outflow = 0;
self.inflow = Uint256::from(0_u32);
self.outflow = Uint256::from(0_u32);
self.period_end = now.plus_seconds(duration);
}

/// Updates the current flow incrementing it by a transfer of value.
pub fn add_flow(&mut self, direction: FlowType, value: u128) {
pub fn add_flow(&mut self, direction: FlowType, value: Uint256) {
match direction {
FlowType::In => self.inflow = self.inflow.saturating_add(value),
FlowType::Out => self.outflow = self.outflow.saturating_add(value),
}
}

/// Updates the current flow reducing it by a transfer of value.
pub fn undo_flow(&mut self, direction: FlowType, value: u128) {
pub fn undo_flow(&mut self, direction: FlowType, value: Uint256) {
match direction {
FlowType::In => self.inflow = self.inflow.saturating_sub(value),
FlowType::Out => self.outflow = self.outflow.saturating_sub(value),
@@ -139,7 +147,7 @@ impl Flow {
fn apply_transfer(
&mut self,
direction: &FlowType,
funds: u128,
funds: Uint256,
now: Timestamp,
quota: &Quota,
) -> bool {
@@ -166,21 +174,30 @@ pub struct Quota {
pub max_percentage_send: u32,
pub max_percentage_recv: u32,
pub duration: u64,
pub channel_value: Option<u128>,
pub channel_value: Option<Uint256>,
}

impl Quota {
/// Calculates the max capacity (absolute value in the same unit as
/// total_value) in each direction based on the total value of the denom in
/// the channel. The result tuple represents the max capacity when the
/// transfer is in directions: (FlowType::In, FlowType::Out)
pub fn capacity(&self) -> (u128, u128) {
pub fn capacity(&self) -> (Uint256, Uint256) {
match self.channel_value {
Some(total_value) => (
total_value * (self.max_percentage_recv as u128) / 100_u128,
total_value * (self.max_percentage_send as u128) / 100_u128,
total_value * Uint256::from(self.max_percentage_recv) / Uint256::from(100_u32),
total_value * Uint256::from(self.max_percentage_send) / Uint256::from(100_u32),
),
None => (0, 0), // This should never happen, but ig the channel value is not set, we disallow any transfer
None => (0_u32.into(), 0_u32.into()), // This should never happen, but if the channel value is not set, we disallow any transfer
}
}

/// returns the capacity in a direction. This is used for displaying cleaner errors
pub fn capacity_on(&self, direction: &FlowType) -> Uint256 {
let (max_in, max_out) = self.capacity();
match direction {
FlowType::In => max_in,
FlowType::Out => max_out,
}
}
}
@@ -210,6 +227,29 @@ pub struct RateLimit {
pub flow: Flow,
}

// The channel value on send depends on the amount on escrow. The ibc transfer
// module modifies the escrow amount by "funds" on sends before calling the
// contract. This function takes that into account so that the channel value
// that we track matches the channel value at the moment when the ibc
// transaction started executing
fn calculate_channel_value(
channel_value: Uint256,
denom: &str,
funds: Uint256,
direction: &FlowType,
) -> Uint256 {
match direction {
FlowType::Out => {
if denom.contains("ibc") {
channel_value + funds // Non-Native tokens get removed from the supply on send. Add that amount back
} else {
channel_value - funds // Native tokens increase escrow amount on send. Remove that amount here
}
}
FlowType::In => channel_value,
}
}

impl RateLimit {
/// Checks if a transfer is allowed and updates the data structures
/// accordingly.
@@ -221,14 +261,26 @@ impl RateLimit {
&mut self,
path: &Path,
direction: &FlowType,
funds: u128,
channel_value: u128,
funds: Uint256,
channel_value: Uint256,
now: Timestamp,
) -> Result<Self, ContractError> {
// Flow used before this transaction is applied.
// This is used to make error messages more informative
let initial_flow = self.flow.balance_on(direction);

// Apply the transfer. From here on, we will update the flow with the new transfer
// and check if it exceeds the quota at the current time

let expired = self.flow.apply_transfer(direction, funds, now, &self.quota);
// Cache the channel value if it has never been set or it has expired.
if self.quota.channel_value.is_none() || expired {
self.quota.channel_value = Some(channel_value)
self.quota.channel_value = Some(calculate_channel_value(
channel_value,
&path.denom,
funds,
direction,
))
}

let (max_in, max_out) = self.quota.capacity();
@@ -237,6 +289,10 @@ impl RateLimit {
true => Err(ContractError::RateLimitExceded {
channel: path.channel.to_string(),
denom: path.denom.to_string(),
amount: funds,
quota_name: self.quota.name.to_string(),
used: initial_flow,
max: self.quota.capacity_on(direction),
reset: self.flow.period_end,
}),
false => Ok(RateLimit {
@@ -292,18 +348,18 @@ pub mod tests {
assert!(!flow.is_expired(epoch.plus_seconds(RESET_TIME_WEEKLY)));
assert!(flow.is_expired(epoch.plus_seconds(RESET_TIME_WEEKLY).plus_nanos(1)));

assert_eq!(flow.balance(), (0_u128, 0_u128));
flow.add_flow(FlowType::In, 5);
assert_eq!(flow.balance(), (5_u128, 0_u128));
flow.add_flow(FlowType::Out, 2);
assert_eq!(flow.balance(), (3_u128, 0_u128));
assert_eq!(flow.balance(), (0_u32.into(), 0_u32.into()));
flow.add_flow(FlowType::In, 5_u32.into());
assert_eq!(flow.balance(), (5_u32.into(), 0_u32.into()));
flow.add_flow(FlowType::Out, 2_u32.into());
assert_eq!(flow.balance(), (3_u32.into(), 0_u32.into()));
// Adding flow doesn't affect expiration
assert!(!flow.is_expired(epoch.plus_seconds(RESET_TIME_DAILY)));

flow.expire(epoch.plus_seconds(RESET_TIME_WEEKLY), RESET_TIME_WEEKLY);
assert_eq!(flow.balance(), (0_u128, 0_u128));
assert_eq!(flow.inflow, 0_u128);
assert_eq!(flow.outflow, 0_u128);
assert_eq!(flow.balance(), (0_u32.into(), 0_u32.into()));
assert_eq!(flow.inflow, Uint256::from(0_u32));
assert_eq!(flow.outflow, Uint256::from(0_u32));
assert_eq!(flow.period_end, epoch.plus_seconds(RESET_TIME_WEEKLY * 2));

// Expiration has moved
8 changes: 4 additions & 4 deletions x/ibc-rate-limit/contracts/rate-limiter/src/sudo.rs
@@ -1,4 +1,4 @@
use cosmwasm_std::{DepsMut, Response, Timestamp};
use cosmwasm_std::{DepsMut, Response, Timestamp, Uint256};

use crate::{
state::{FlowType, Path, RateLimit, RATE_LIMIT_TRACKERS},
@@ -14,8 +14,8 @@ use crate::{
pub fn try_transfer(
deps: DepsMut,
path: &Path,
channel_value: u128,
funds: u128,
channel_value: Uint256,
funds: Uint256,
direction: FlowType,
now: Timestamp,
) -> Result<Response, ContractError> {
@@ -96,7 +96,7 @@ fn add_rate_limit_attributes(response: Response, result: &RateLimit) -> Response

// This function manually injects an inflow. This is used when reverting a
// packet that failed ack or timed-out.
pub fn undo_send(deps: DepsMut, path: &Path, funds: u128) -> Result<Response, ContractError> {
pub fn undo_send(deps: DepsMut, path: &Path, funds: Uint256) -> Result<Response, ContractError> {
// Sudo call. Only go modules should be allowed to access this
let trackers = RATE_LIMIT_TRACKERS.may_load(deps.storage, path.into())?;

462 changes: 462 additions & 0 deletions x/ibc-rate-limit/ibc_middleware_test.go


24 changes: 20 additions & 4 deletions x/ibc-rate-limit/ibc_module.go
@@ -103,22 +103,38 @@ func (im *IBCModule) OnChanCloseConfirm(
return im.app.OnChanCloseConfirm(ctx, portID, channelID)
}

func ValidateReceiverAddress(packet channeltypes.Packet) error {
var packetData transfertypes.FungibleTokenPacketData
if err := json.Unmarshal(packet.GetData(), &packetData); err != nil {
return err
}
if len(packetData.Receiver) >= 4096 {
return sdkerrors.Wrapf(sdkerrors.ErrInvalidAddress, "IBC Receiver address too long. Max supported length is %d", 4096)
}
return nil
}

// OnRecvPacket implements the IBCModule interface
func (im *IBCModule) OnRecvPacket(
ctx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
) exported.Acknowledgement {
if err := ValidateReceiverAddress(packet); err != nil {
return channeltypes.NewErrorAcknowledgement(err.Error())
}

contract := im.ics4Middleware.GetParams(ctx)
if contract == "" {
// The contract has not been configured. Continue as usual
return im.app.OnRecvPacket(ctx, packet, relayer)
}
amount, denom, err := GetFundsFromPacket(packet)
if err != nil {
return channeltypes.NewErrorAcknowledgement("bad packet")
return channeltypes.NewErrorAcknowledgement("bad packet in rate limit's OnRecvPacket")
}
channelValue := im.ics4Middleware.CalculateChannelValue(ctx, denom)

channelValue := im.ics4Middleware.CalculateChannelValue(ctx, denom, packet)

err = CheckAndUpdateRateLimits(
ctx,
@@ -127,11 +143,11 @@ func (im *IBCModule) OnRecvPacket(
contract,
channelValue,
packet.GetDestChannel(),
denom,
denom, // We always use the packet's denom here, as we want the limits to be the same on both directions
amount,
)
if err != nil {
return channeltypes.NewErrorAcknowledgement(types.RateLimitExceededMsg)
return channeltypes.NewErrorAcknowledgement(types.ErrRateLimitExceeded.Error())
}

// if this returns an Acknowledgement that isn't successful, all state changes are discarded
15 changes: 9 additions & 6 deletions x/ibc-rate-limit/ics4_wrapper.go
@@ -53,21 +53,23 @@ func (i *ICS4Wrapper) SendPacket(ctx sdk.Context, chanCap *capabilitytypes.Capab

amount, denom, err := GetFundsFromPacket(packet)
if err != nil {
return sdkerrors.Wrap(err, "Rate limited SendPacket")
return sdkerrors.Wrap(err, "Rate limit SendPacket")
}
channelValue := i.CalculateChannelValue(ctx, denom)

channelValue := i.CalculateChannelValue(ctx, denom, packet)

err = CheckAndUpdateRateLimits(
ctx,
i.ContractKeeper,
"send_packet",
contract,
channelValue,
packet.GetSourceChannel(),
denom,
denom, // We always use the packet's denom here, as we want the limits to be the same on both directions
amount,
)
if err != nil {
return sdkerrors.Wrap(err, "Rate limited SendPacket")
return sdkerrors.Wrap(err, "bad packet in rate limit's SendPacket")
}

return i.channel.SendPacket(ctx, chanCap, packet)
@@ -84,6 +86,7 @@ func (i *ICS4Wrapper) GetParams(ctx sdk.Context) (contract string) {

// CalculateChannelValue The value of an IBC channel. This is calculated using the denom supplied by the sender.
// if the denom is not correct, the transfer should fail somewhere else on the call chain
func (i *ICS4Wrapper) CalculateChannelValue(ctx sdk.Context, denom string) sdk.Int {
return i.bankKeeper.GetSupplyWithOffset(ctx, denom).Amount
func (i *ICS4Wrapper) CalculateChannelValue(ctx sdk.Context, denom string, packet exported.PacketI) sdk.Int {
// The logic is extracted into a function here so that it can be used within the tests
return CalculateChannelValue(ctx, denom, packet.GetSourcePort(), packet.GetSourceChannel(), i.bankKeeper)
}
44 changes: 37 additions & 7 deletions x/ibc-rate-limit/rate_limit.go
@@ -2,10 +2,13 @@ package ibc_rate_limit

import (
"encoding/json"
"strings"

wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper"
sdk "github.com/cosmos/cosmos-sdk/types"
sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
bankkeeper "github.com/cosmos/cosmos-sdk/x/bank/keeper"
transfertypes "github.com/cosmos/ibc-go/v3/modules/apps/transfer/types"
"github.com/cosmos/ibc-go/v3/modules/core/exported"
"github.com/osmosis-labs/osmosis/v12/x/ibc-rate-limit/types"
)
@@ -15,11 +18,6 @@ var (
msgRecv = "recv_packet"
)

type PacketData struct {
Denom string `json:"denom"`
Amount string `json:"amount"`
}

func CheckAndUpdateRateLimits(ctx sdk.Context, contractKeeper *wasmkeeper.PermissionedKeeper,
msgType, contract string,
channelValue sdk.Int, sourceChannel, denom string,
@@ -42,6 +40,7 @@ func CheckAndUpdateRateLimits(ctx sdk.Context, contractKeeper *wasmkeeper.Permis
}

_, err = contractKeeper.Sudo(ctx, contractAddr, sendPacketMsg)

if err != nil {
return sdkerrors.Wrap(types.ErrRateLimitExceeded, err.Error())
}
@@ -128,10 +127,41 @@ func BuildWasmExecMsg(msgType, sourceChannel, denom string, channelValue sdk.Int
}

func GetFundsFromPacket(packet exported.PacketI) (string, string, error) {
var packetData PacketData
var packetData transfertypes.FungibleTokenPacketData
err := json.Unmarshal(packet.GetData(), &packetData)
if err != nil {
return "", "", err
}
return packetData.Amount, packetData.Denom, nil
return packetData.Amount, GetLocalDenom(packetData.Denom), nil
}

func GetLocalDenom(denom string) string {
// Expected denoms in the following cases:
//
// send non-native: transfer/channel-0/denom -> ibc/xxx
// send native: denom -> denom
// recv (B)non-native: denom
// recv (B)native: transfer/channel-0/denom
//
if strings.HasPrefix(denom, "transfer/") {
denomTrace := transfertypes.ParseDenomTrace(denom)
return denomTrace.IBCDenom()
} else {
return denom
}
}

func CalculateChannelValue(ctx sdk.Context, denom string, port, channel string, bankKeeper bankkeeper.Keeper) sdk.Int {
if strings.HasPrefix(denom, "ibc/") {
return bankKeeper.GetSupplyWithOffset(ctx, denom).Amount
}

if channel == "any" {
// ToDo: Get all channels and sum the escrow addr value over all the channels
escrowAddress := transfertypes.GetEscrowAddress(port, channel)
return bankKeeper.GetBalance(ctx, escrowAddress, denom).Amount
} else {
escrowAddress := transfertypes.GetEscrowAddress(port, channel)
return bankKeeper.GetBalance(ctx, escrowAddress, denom).Amount
}
}
Binary file removed x/ibc-rate-limit/testdata/rate_limiter.wasm
96 changes: 96 additions & 0 deletions x/ibc-rate-limit/testutil/chain.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,96 @@
package osmosisibctesting

import (
"time"

"github.com/cosmos/cosmos-sdk/baseapp"
"github.com/cosmos/cosmos-sdk/client"
cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
sdk "github.com/cosmos/cosmos-sdk/types"
ibctesting "github.com/cosmos/ibc-go/v3/testing"
"github.com/cosmos/ibc-go/v3/testing/simapp/helpers"
"github.com/osmosis-labs/osmosis/v12/app"
abci "github.com/tendermint/tendermint/abci/types"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)

type TestChain struct {
*ibctesting.TestChain
}

// SendMsgsNoCheck overrides ibctesting.TestChain.SendMsgs so that it doesn't check for errors. That should be handled by the caller
func (chain *TestChain) SendMsgsNoCheck(msgs ...sdk.Msg) (*sdk.Result, error) {
// ensure the chain has the latest time
chain.Coordinator.UpdateTimeForChain(chain.TestChain)

_, r, err := SignAndDeliver(
chain.TxConfig,
chain.App.GetBaseApp(),
chain.GetContext().BlockHeader(),
msgs,
chain.ChainID,
[]uint64{chain.SenderAccount.GetAccountNumber()},
[]uint64{chain.SenderAccount.GetSequence()},
chain.SenderPrivKey,
)
if err != nil {
return nil, err
}

// SignAndDeliver calls app.Commit()
chain.NextBlock()

// increment sequence for successful transaction execution
err = chain.SenderAccount.SetSequence(chain.SenderAccount.GetSequence() + 1)
if err != nil {
return nil, err
}

chain.Coordinator.IncrementTime()

return r, nil
}

// SignAndDeliver signs and delivers a transaction without asserting the results. This overrides the function
// from ibctesting
func SignAndDeliver(
txCfg client.TxConfig, app *baseapp.BaseApp, header tmproto.Header, msgs []sdk.Msg,
chainID string, accNums, accSeqs []uint64, priv ...cryptotypes.PrivKey,
) (sdk.GasInfo, *sdk.Result, error) {
tx, _ := helpers.GenTx(
txCfg,
msgs,
sdk.Coins{sdk.NewInt64Coin(sdk.DefaultBondDenom, 0)},
helpers.DefaultGenTxGas,
chainID,
accNums,
accSeqs,
priv...,
)

// Simulate a sending a transaction and committing a block
app.BeginBlock(abci.RequestBeginBlock{Header: header})
gInfo, res, err := app.Deliver(txCfg.TxEncoder(), tx)

app.EndBlock(abci.RequestEndBlock{})
app.Commit()

return gInfo, res, err
}

// Move epochs to the future to avoid issues with minting
func (chain *TestChain) MoveEpochsToTheFuture() {
epochsKeeper := chain.GetOsmosisApp().EpochsKeeper
ctx := chain.GetContext()
for _, epoch := range epochsKeeper.AllEpochInfos(ctx) {
epoch.StartTime = ctx.BlockTime().Add(time.Hour * 24 * 30)
epochsKeeper.DeleteEpochInfo(chain.GetContext(), epoch.Identifier)
_ = epochsKeeper.AddEpochInfo(ctx, epoch)
}
}

// GetOsmosisApp returns the current chain's app as an OsmosisApp
func (chain *TestChain) GetOsmosisApp() *app.OsmosisApp {
v, _ := chain.App.(*app.OsmosisApp)
return v
}
70 changes: 70 additions & 0 deletions x/ibc-rate-limit/testutil/wasm.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,70 @@
package osmosisibctesting

import (
"fmt"
"io/ioutil"

"github.com/stretchr/testify/require"

wasmkeeper "github.com/CosmWasm/wasmd/x/wasm/keeper"
wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types"
sdk "github.com/cosmos/cosmos-sdk/types"
govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
transfertypes "github.com/cosmos/ibc-go/v3/modules/apps/transfer/types"
"github.com/osmosis-labs/osmosis/v12/x/ibc-rate-limit/types"
"github.com/stretchr/testify/suite"
)

func (chain *TestChain) StoreContractCode(suite *suite.Suite) {
osmosisApp := chain.GetOsmosisApp()

govKeeper := osmosisApp.GovKeeper
wasmCode, err := ioutil.ReadFile("./bytecode/rate_limiter.wasm")
suite.Require().NoError(err)

addr := osmosisApp.AccountKeeper.GetModuleAddress(govtypes.ModuleName)
src := wasmtypes.StoreCodeProposalFixture(func(p *wasmtypes.StoreCodeProposal) {
p.RunAs = addr.String()
p.WASMByteCode = wasmCode
})

// when stored
storedProposal, err := govKeeper.SubmitProposal(chain.GetContext(), src, false)
suite.Require().NoError(err)

// and proposal execute
handler := govKeeper.Router().GetRoute(storedProposal.ProposalRoute())
err = handler(chain.GetContext(), storedProposal.GetContent())
suite.Require().NoError(err)
}

func (chain *TestChain) InstantiateContract(suite *suite.Suite, quotas string) sdk.AccAddress {
osmosisApp := chain.GetOsmosisApp()
transferModule := osmosisApp.AccountKeeper.GetModuleAddress(transfertypes.ModuleName)
govModule := osmosisApp.AccountKeeper.GetModuleAddress(govtypes.ModuleName)

initMsgBz := []byte(fmt.Sprintf(`{
"gov_module": "%s",
"ibc_module":"%s",
"paths": [%s]
}`,
govModule, transferModule, quotas))

contractKeeper := wasmkeeper.NewDefaultPermissionKeeper(osmosisApp.WasmKeeper)
codeID := uint64(1)
creator := osmosisApp.AccountKeeper.GetModuleAddress(govtypes.ModuleName)
addr, _, err := contractKeeper.Instantiate(chain.GetContext(), codeID, creator, creator, initMsgBz, "rate limiting contract", nil)
suite.Require().NoError(err)
return addr
}

func (chain *TestChain) RegisterRateLimitingContract(addr []byte) {
addrStr, err := sdk.Bech32ifyAddressBytes("osmo", addr)
require.NoError(chain.T, err)
params, err := types.NewParams(addrStr)
require.NoError(chain.T, err)
osmosisApp := chain.GetOsmosisApp()
paramSpace, ok := osmosisApp.AppKeepers.ParamsKeeper.GetSubspace(types.ModuleName)
require.True(chain.T, ok)
paramSpace.SetParamSet(chain.GetContext(), &params)
}
3 changes: 1 addition & 2 deletions x/ibc-rate-limit/types/errors.go
@@ -5,8 +5,7 @@ import (
)

var (
RateLimitExceededMsg = "rate limit exceeded"
ErrRateLimitExceeded = sdkerrors.Register(ModuleName, 2, RateLimitExceededMsg)
ErrRateLimitExceeded = sdkerrors.Register(ModuleName, 2, "rate limit exceeded")
ErrBadMessage = sdkerrors.Register(ModuleName, 3, "bad message")
ErrContractError = sdkerrors.Register(ModuleName, 4, "contract error")
)
121 changes: 0 additions & 121 deletions x/streamswap/README.md

This file was deleted.

1,344 changes: 0 additions & 1,344 deletions x/streamswap/types/event.pb.go

This file was deleted.

758 changes: 0 additions & 758 deletions x/streamswap/types/genesis.pb.go

This file was deleted.

513 changes: 0 additions & 513 deletions x/streamswap/types/params.pb.go

This file was deleted.

1,404 changes: 0 additions & 1,404 deletions x/streamswap/types/query.pb.go

This file was deleted.

395 changes: 0 additions & 395 deletions x/streamswap/types/query.pb.gw.go

This file was deleted.

1,400 changes: 0 additions & 1,400 deletions x/streamswap/types/state.pb.go

This file was deleted.

2,359 changes: 0 additions & 2,359 deletions x/streamswap/types/tx.pb.go

This file was deleted.