feat(forge): Better deployment #402

Closed · brockelmore opened this issue Jan 8, 2022 · 38 comments · Fixed by #1208
Labels: C-forge (Command: forge) · D-hard (Difficulty: hard) · T-feature (Type: feature)

@brockelmore (Member) commented Jan 8, 2022

Add the ability for forge to use a Solidity file to deploy a contract to mainnet, and to support arbitrary deploy scripts, by adding a cheatcode. You could do something like:

contract t {
    function deployMyContract() public {
        // read a constructor argument from some pre-existing dependency contract
        uint256 input1 = dep.someFunc();
        // cheatcode: send the next creation as a real mainnet transaction
        vm.mainnetDeploy();
        MyContract c = new MyContract(input1);
    }
}
@gakonst (Member) commented Jan 8, 2022

Would vm.mainnetDeploy() broadcast the next call as a transaction? So it'd require having instantiated a signer and an RPC connection? Maybe it could be configured as a 2-step process: one step which generates the transactions, and another which fires them all? (brainstorming)

gakonst added the T-feature (Type: feature) and C-forge (Command: forge) labels on Jan 8, 2022
@Melvillian

Along with vm.mainnetDeploy(), we could specify FOUNDRY_* env vars in the foundry.toml, and then these scripts would implicitly use the FOUNDRY_RPC_URL and FOUNDRY_PRIVATE_KEY (or MNEMONIC + PATH) env vars to set up the RPC connection and signers.

@gakonst (Member) commented Jan 10, 2022

Another idea from @moodysalem:

name: Deploy

on:
  workflow_dispatch:
    inputs:
      rpc_url:
        description: URL of the endpoint against which to deploy
        required: true
        type: string
      gas_price:
        description: The gas price in GWEI of the native currency of the chain
        required: false
        default: '50'
        type: string

jobs:
  tests:
    name: Deploy

    runs-on: ubuntu-latest

    steps:
      - name: Deploy V3 Factory
        id: deploy-v3-factory
        uses: ethdeploy/deploy-contract@v1
        with:
          private-key: ${{ secrets.V3_FACTORY_DEPLOY_KEY }}
          object-uri: 'npm:@uniswap/v3-core@1.0.0:artifacts/contracts/UniswapV3Factory.sol/UniswapV3Factory.json'

      - name: Verify V3 Factory
        uses: ethdeploy/verify-contract@v1
        with:
          source-uri: 'git@github.com:Uniswap/v3-core.git/contracts/UniswapV3Factory.sol'

      - name: Set V3 Factory Owner
        uses: ethdeploy/call-contract@v1
        with:
          private-key: ${{ secrets.V3_FACTORY_DEPLOY_KEY }}
          address: ${{ steps.deploy-v3-factory.outputs.address }}
          methodSignature: 'setOwner(address newOwner)'
          arguments: '0x........'

And the deployment state would be saved in GHA via upload-artifacts / download-artifacts, so that you can run the workflow multiple times to resume it.

I think I still prefer the Solidity-only way; just adding this for context.

@crisgarner (Contributor) commented Jan 13, 2022

I would rename it to vm.deploy(); and select the network like:

forge deploy --network rinkeby

Env variables for keys, and the .toml for paths, RPCs, and other configuration.

What I like about the hardhat-deploy plugin is that you can deploy and run additional commands, like initial calls for setup. It also generates a .json ABI file with the deployed address for frontend use, and can generate a single file with all contract ABIs and addresses.

@brockelmore (Member, Author)

We should be able to just deduce where you want to deploy based on the RPC you provide.

@crisgarner (Contributor)

> We should be able to just deduce where you want to deploy based on the RPC you provide.

Usually we reuse deploy scripts for different networks

@brockelmore (Member, Author)

> Usually we reuse deploy scripts for different networks

That's my point: when someone does forge create ./deploy/* --rpc-url <xxxxx>, we leverage the RPC URL to deploy. If there is a config file and no RPC provided, we can force them to specify the network.

@mds1 (Collaborator) commented Jan 13, 2022

Might already have this and I'm not seeing it, but we would also want a flag to specify when you want to deploy on a local fork of that RPC vs. actually deploying to the live network, e.g.:

  • forge create ./deploy/* --rpc-url <xxxxx> --fork to deploy to a local fork
  • forge create ./deploy/* --rpc-url <xxxxx> --live to deploy to the live network

with --fork being the default if unspecified, to prevent accidental deploys.

@d-xo commented Jan 13, 2022

Would you be able to reuse the deployment contract as a test fixture if it contains calls to vm.mainnetDeploy?

@crisgarner (Contributor)

> Usually we reuse deploy scripts for different networks

> That's my point: when someone does forge create ./deploy/* --rpc-url <xxxxx>, we leverage the RPC URL to deploy. If there is a config file and no RPC provided, we can force them to specify the network.

Sounds good, but it might be confusing to do forge create ./deploy/* --rpc-url <rinkeby.xxx> while the contract calls vm.mainnetDeploy();

@gakonst (Member) commented Jan 16, 2022

We can just call it vm.deploy() to avoid the confusion, I think?

@brockelmore (Member, Author) commented Jan 16, 2022

What do people think about vm.broadcast as the cheatcode? It makes it agnostic to create- vs. call-style transactions, as well as to what network you're sending to.

@gakonst (Member) commented Jan 16, 2022

Good point^

So maybe

contract Deployer {
    function deploy() external {
        vm.startBroadcast();
        Foo foo = new Foo();
        foo.set(5);
        vm.stopBroadcast();
        uint256 y = foo.bar() * 2;
        vm.broadcast();
        foo.setY(y);
    }
}

(vm.broadcast() would apply only to the next call)

In the above case, forge deploy would produce 3 ethers-rs TypedTransactions (EIP-1559 by default, with a legacy option to switch to the old tx style), which would be broadcast to the network of choice via --rpc-url or ETH_RPC_URL.

And maybe forge deploy --offline would serialize the transactions to some JSON blob usable by other tools?

In an ideal world, we're also able to test these deploy scripts. I now wonder if we should add the assertions inline, e.g. as require statements above, or if they should be executed as part of a forge test-like workflow, with a contract DeployerTest { function testDeploy() } type of function?

@brockelmore (Member, Author)

We also need to simulate it and make sure it doesn't fail. It would also be good to report the gas costs.

@gakonst (Member) commented Jan 16, 2022

Wonder how we'd handle idempotency, e.g. if for whatever reason it fails at the first call, how does it know to resume from the second one? @moodysalem any thoughts here (cf. the v3 deployer from some time ago)?

@brockelmore (Member, Author)

One option is that when we send a tx, we record it in a cache file, with deployed addresses and so on. I think hardhat does something similar. If we have a cache, we pick up where we left off, unless a flag is passed to restart the deployment (we would need to include the chain it was deployed to, though, for multichain deployments, like local node vs. mainnet).

@lucas-manuel

Yeah, I think this is a great idea and would definitely use it.

A couple of ideas (echoing some from above, but I wanted to have a comprehensive list):

  • "Dry-run" deployments on mainnet forks
  • Gas cost summaries of deployments
  • Some sort of artifact that gets generated with all deployed addresses (and tx hashes maybe)
  • Being able to ensure auto-verification of all deployed contracts (single file verification)
  • Being able to run assertions against newly deployed contracts to ensure complete deployment

@mds1 (Collaborator) commented Jan 16, 2022

I like all the ideas on @lucas-manuel's list, and would like to add two more:

  • A way to specify expected deployer account and nonce, or alternatively perhaps a way to specify expected contract address
  • Ensure deploy scripts are easily reusable as setUp() logic, to easily run tests against deployed state

@onbjerg (Member) commented Jan 16, 2022

Putting this here: https://github.com/EthWorks/Mars

Mars has dry runs, gas cost summaries, automatic deployment of only the parts of your code that changed, etc. We could probably borrow some ideas from there.

@fubhy (Contributor) commented Jan 24, 2022

I love this ...

Some random thoughts (copied from chat):

I wonder how well this would work for necessarily modularized deployments of larger projects, and/or for maintaining an idempotent deployment over a longer time span, with different parts of the project having different lifespans & different Solidity versions.

I already ran into a similar problem in my integration tests, where one of the core contracts (a persistent contract in an otherwise upgradable protocol) is on Solidity 0.6.12, so I can't use it directly, and inlining it and bumping it to 0.8.11 just for the sake of the test would be weird. Sure, a fork test would work fine, but that comes with its own downsides (mainly performance).

Anyways ...

I'd imagine that problems of a similar nature might arise for long-lived, upgradable protocols that wish to deploy & configure their contracts in an idempotent, IaC-style fashion.

It feels like a layered approach could work well, where deployment artifacts shared between these layers/steps in sequence could also be used as a starting point for integration tests (by etching the deployed bytecode as recorded in the deployment artifacts of the previous layers), thus not needing a fork. See the sketch at the end of this comment.

This could also be a fine approach for Acceptance Tests prior to a protocol upgrade.

... and it would be a nice way to share deployment artifacts between repositories (just like lib), possibly including state.

@fubhy (Contributor) commented Jan 25, 2022

Imho, this is related because integration testing on top of the actual deployment can be quite useful: #326

I might be overthinking this, but here's a possible solution that could also work for the types of projects described in my previous comment, without sacrificing DX/UX:

We might be able to structure this with "modularized" deployment files that define their I/O interface using shared structs that live in an "interface" contract in a separate .sol file without a version constraint. These interfaces could thus be shared between different deployment modules (and tests) with different Solidity versions. This interface contract would feature serialization & deserialization (SerDe), a.k.a. encode/decode functions, for the I/O structs of the corresponding deployment contract.

A cheatcode could then be used to utilize these interfaces, both to share I/O between deployment modules and to enable integration tests to use a deployment (and its output), e.g. in setUp, without juggling abi encode/decode manually.

When running actual production deployments, these structs could be serialized into artifacts and written to disk. A minimal sketch of such a shared I/O file follows below.

Does this make any sense at all? Is it worth a shot?

@brockelmore (Member, Author) commented Jan 26, 2022

I am just attacking this based on intuition first. I have started development of this, but thought I'd outline the current plan.

Basic plan is to:

  1. Support contracts that have IS_DEPLOY, similar to IS_TEST.
  2. Run functions that have deploy in the name, with support for numbers, i.e. deploy1, deploy2, etc.
  3. Support broadcast, startBroadcast and stopBroadcast, which have similar semantics to prank.
  4. We will execute in the VM and craft the needed TransactionRequests based on the broadcast cheatcode state.
  5. By default, forge deploy dry-runs (i.e. doesn't actually send transactions); you specifically pass a flag to actually broadcast.
  6. We will print out a report of the deployments (and actually broadcast them); more details on what this entails will likely arise later.
  7. We will cache results, and maybe even full txs, via JSON, keyed by deploy function number.
  8. You will be able to pass in a selection of deployment stages, like 1 3 6, which would run deploy1, deploy3 and deploy6.

I already have the cheatcodes written and the tx crafting done. Now it's the hard part of actually building the interaction with the VM. A sketch of what a deploy contract could look like under this plan is below.

Addressing comments:

> I wonder how well this would work for necessarily modularized deployments of larger projects, and/or for maintaining an idempotent deployment over a longer time span, with different parts of the project having different lifespans & different Solidity versions.

Having multiple deploy functions (deploy1, deploy2, etc.), as well as separate deploy contracts, should make it trivial to upgrade and manage larger repos with varying lifetimes and versioning, because you can simply have a contract UpgradeA with a new pragma (0.8.11, for example) that imports an UpgradedA which also uses that version. The compiler pipeline would then take care of the versioning for you.

> Imho, this is related because integration testing on top of the actual deployment can be quite useful: #326

Agreed; stdlib will have a helper function, leveraging the getCode cheatcode, for getting code that isn't compatible with a particular version. A sketch of such a helper is below.

@wilsoncusack (Contributor)

Awesome. Thoughts on renaming IS_DEPLOY to something like IS_LIVE_REQUESTS, to indicate that this could also be used for making live txs and not just deploys?

@brockelmore (Member, Author)

Yeah, good thought. We should make all naming agnostic to deploys vs. normal txs.

@mds1 (Collaborator) commented Jan 26, 2022

> Support contracts that have IS_DEPLOY, similar to IS_TEST.

afaik we don't actually use IS_TEST, and just use the test prefix to identify test functions. Just mentioning since it'd be nice to have consistent behavior between the special prefixes, especially since we'll likely have more prefixes in the future too, like invariant and prove.

Also, how would it work with reusing the deploy() methods so you can run tests against post-deploy state? Would you just call the deploy methods within setUp() and have deploy assign the deployed contracts to your test contract's state variables?

@wighawag

Hi, I just found out about this discussion after @onbjerg mentioned it to me here: #634, where I basically mention the lack of a mechanism to replicate the deployment procedure in tests, and allude to a potential mechanism to solve it.

I see that here you are discussing an approach that starts from the Solidity tests, imagining a mechanism where such test Solidity code would perform the deployments itself, making the deployment setup available in tests as a matter of course.

I have been thinking of a different approach, where the deployment procedure would be free to use whatever programming language suits it.

This has several benefits compared to a Solidity approach:

  • Full flexibility: reading files, managing proxy upgrades and checks like storage layout changes, etc.
  • No bottleneck on the availability of specific cheatcodes (with the approach discussed here, I can easily imagine requests for specific cheatcodes not thought of initially).
  • No need to learn a new DSL (cheatcodes).
  • The ability to reuse existing libraries to deal with all the intricacies of deployments, including L2s like zkSync that require a different transaction mechanism.

The way I see that alternative approach is as follows:

forge would let the user set up the EVM state before running the tests by invoking some scripts.

Basically, such a script would be given an RPC endpoint (FOUNDRY_RPC_URL) and could perform any action against it, including but not limited to contract deployment. The only requirement for these scripts would be to save the contract data in a specific location/format so that the Solidity tests can reuse them.

Thinking of something like:

forge test --before "deploy.py" or forge test --before "node deploy.js" or whatever,

where forge would execute whatever command is passed to it (here "deploy.py" or "node deploy.js", but this could be anything).

That execution would have a specific environment variable set to the RPC endpoint to talk to (FOUNDRY_RPC_URL), and maybe other env variables too. Here, since we are dealing with the test command, the RPC endpoint would simply be the in-memory EVM against which the tests are run.

deploy.py or deploy.js would then do whatever it wants with that RPC endpoint (deploying new contracts, making calls, constructing merkle trees, making proxy upgrades, including checking for storage layout conflicts, etc.). The only thing it needs to do is save the result of the deployment in a specific location/format.

Once the script finishes, forge can then execute the tests and provide cheatcodes for the tests to access the deployment data (like addresses, arguments used, or more). forge should also snapshot the state of the EVM before the tests are executed, so that it resets to it before every test (instead of having to re-execute the deployments) (evm_snapshot).

Then, when it is time to deploy on a network, the script would not even need forge, but forge could have a command to facilitate setting the environment variables described above. Something like:

forge deploy --network rinkeby "deploy.py", or a more generic term might be forge execute --network rinkeby "node deploy.js" (because after all, this does not need to be about deployments of new contracts).

A JS script could look like this:

...
const provider = new JsonRpcProvider(process.env.FOUNDRY_RPC_URL);
...
async function main() {
  ...
  const factory = new ContractFactory(abi, bytecode, signer);
  const contract = await factory.deploy();
  // save the deployment data where the Solidity tests can find it
  fs.writeFileSync(
    `${process.env.FOUNDRY_DATA_FOLDER}/MyDeployedContract.json`,
    JSON.stringify({ address: contract.address, abi, bytecode /* ... */ })
  );
}

As for the Solidity tests, they would look something like the following:

...
  function setUp() public {
    MyDeployedContract myContract = MyDeployedContract(vm.getData("MyDeployedContract01").address);
  }
...

What do you think?

@fubhy (Contributor)

fubhy commented Feb 1, 2022

@wighawag I think that in this discussion we've already reached consensus that deployments should absolutely be available to tests. It's clear that we need to be able to verify the integrity of a deployment, and handling deployment artifacts & I/O of some kind is definitely necessary too.

> Full flexibility: reading files, managing proxy upgrades and checks like storage layout changes, etc.

This can be achieved with cheatcodes (handling deployment artifact / config I/O, etc.) too, and storage layout changes in particular are more easily tested directly in Solidity rather than through the JSON-RPC indirection.

> No bottleneck on the availability of specific cheatcodes (with the approach discussed here, I can easily imagine requests for specific cheatcodes not thought of initially).

Cheatcodes are great, and are simply the equivalent of e.g. hardhat_impersonateAccount or other custom JSON-RPC methods. Missing cheatcodes can be worked around with ffi, or possibly by making cheatcodes pluggable through other means in the future. Not sure if this has been discussed yet.

> No need to learn a new DSL (cheatcodes).

What's the difference between learning cheatcodes vs. custom RPC calls & other stubbing/mocking/artifact-loading utils in a less native environment? Personally, I think using Solidity code for deploying contracts is much more intuitive than going through multiple layers of indirection & abstraction in a foreign language.

> The ability to reuse existing libraries to deal with all the intricacies of deployments, including L2s like zkSync that require a different transaction mechanism.

There are definitely certain (arguably less common) use cases that might not be covered initially. I think that's fine. Personally, I am absolutely positive that Solidity-native deployments are going to offer a far superior developer experience for the majority of cases. Let's not discard this attempt just because it might not cover all cases right off the bat.

All that said, nothing is stopping you from building a deployment framework around JSON-RPC & the build artifacts produced by the compiler. You could load the deployment artifacts into a fork test (e.g. with a custom cheatcode or ffi) and proceed to verify the deployment there.

@wighawag commented Feb 1, 2022

@fubhy thanks for the reply.

Looking forward to seeing such Solidity-based deployments implemented :)

> All that said, nothing is stopping you from building a deployment framework around JSON-RPC & the build artifacts produced by the compiler. You could load the deployment artifacts into a fork test (e.g. with a custom cheatcode or ffi) and proceed to verify the deployment there.

One benefit of the approach I mentioned is that I would actually have to do nothing and could reuse existing frameworks.

From what you said here, it seems I can already do it. Where could I learn more?

Feel free to reply over in #634, to keep this issue focused on Solidity-based deployments.

@gakonst (Member) commented Feb 1, 2022

Current plan is for @brockelmore to handle the cheatcode / tx creator, and I'll write the "driver" code, which executes each migration to completion idempotently, with gas escalation/rebroadcast for reliable inclusion, etc. We'll keep you all updated. Thanks for all the feedback!

@sambacha (Contributor)

Just making a note: this is a feature most people will not really leverage, in the sense that they do not typically have overly complicated deployments.

I have used abi-to-dhall before: https://github.com/icetan/abi-to-dhall

It generates a bash script for deploying, and it can handle doing the entire DSS. I think autogenerated deployment scripts should be a feature, especially for large deployments: who wants to waste time writing those by hand?

As for persistence of the compiled and deployed output, we need it for verification and for downstream distribution. We could solve both at once and provision a service that offers that; I explored this idea a few months ago with https://abi.storage

Cheers

@sambacha (Contributor)

Just to clarify: the benefit of having persistence handled together with deployment is that you could then provide a URL to consumers of your application, etc., like unpkg, esm.sh, and so on.

@wighawag

> Just to clarify: the benefit of having persistence handled together with deployment is that you could then provide a URL to consumers of your application, etc., like unpkg, esm.sh, and so on.

Definitely!

To add to that, another very useful feature (present in hardhat-deploy) is the ability to share the deployment procedure itself. Very useful for integrating whatever protocol into your integration tests / app.

For example, if you use hardhat-deploy and you need to test your protocol against Uniswap, just import hardhat-deploy-uniswap-v2 (ideally provided by the project itself) from npm, and you can do deployments.fixture('UniswapV2Router') and get access to a fully deployed Uniswap, in your tests but also while developing your app.

@sambacha (Contributor)

The ability to use a defined artifact packaging schema, e.g. eth-pm, dpack, pojson, etc.

@sambacha (Contributor)

Also, websockets.

brockelmore linked a pull request on Apr 5, 2022 that will close this issue
@CyrusOfEden commented Apr 12, 2022

I'm new to this, so this may be both off-topic and wrong, but I'm wondering if y'all have considered using a proxy pattern coupled with deploy contracts. It feels like it would simplify a lot of the semantics.

Based on my reading of ds-proxy, couldn't we have forge deploy a proxy for each account, and then turn the forge build output into a cast send?

onbjerg added this to Foundry on Apr 17, 2022
onbjerg moved this to Todo in Foundry on Apr 17, 2022
onbjerg moved this from Todo to In Progress in Foundry on Apr 17, 2022
@brockelmore (Member, Author)

> I'm new to this, so this may be both off-topic and wrong, but I'm wondering if y'all have considered using a proxy pattern coupled with deploy contracts. It feels like it would simplify a lot of the semantics.

> Based on my reading of ds-proxy, couldn't we have forge deploy a proxy for each account, and then turn the forge build output into a cast send?

We don't want to force users into a particular setup. We want this to feel like a better version of writing a script in JavaScript; adding a proxy and whatnot forces users into a particular style. Given the work in #1208, I think we will stick with this.

@CyrusOfEden

Sweet, excited for what's to come! Foundry ftw.

Repository owner moved this from In Progress to Done in Foundry May 29, 2022
@aathan (Contributor) commented Jun 5, 2022

See #1840
