feat(forge): Better deployment #402
Another idea from @moodysalem:

```yaml
name: Deploy
on:
  workflow_dispatch:
    inputs:
      rpc_url:
        description: URL of the endpoint against which to deploy
        required: true
        type: string
      gas_price:
        description: The gas price in GWEI of the native currency of the chain
        required: false
        default: '50'
        type: string
jobs:
  tests:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Deploy V3 Factory
        id: deploy-v3-factory
        uses: ethdeploy/deploy-contract@v1
        with:
          private-key: ${{ secrets.V3_FACTORY_DEPLOY_KEY }}
          object-uri: 'npm:@uniswap/v3-core@1.0.0:artifacts/contracts/UniswapV3Factory.sol/UniswapV3Factory.json'
      - name: Verify V3 Factory
        uses: ethdeploy/verify-contract@v1
        with:
          source-uri: 'git@github.com:Uniswap/v3-core.git/contracts/UniswapV3Factory.sol'
      - name: Set V3 Factory Owner
        uses: ethdeploy/call-contract@v1
        with:
          private-key: ${{ secrets.V3_FACTORY_DEPLOY_KEY }}
          address: ${{ steps.deploy-v3-factory.outputs.address }}
          methodSignature: 'setOwner(address newOwner)'
          arguments: '0x........'
```

And the deployment state would be saved in GHA via upload-artifacts / download-artifacts, so that you can run the workflow multiple times to resume it. I think I still prefer the Solidity-only way; just adding this for context |
I would rename it to forge deploy --network rinkeby, with env variables for keys and a .toml for paths, RPCs, and other configuration. What I like about the hardhat-deploy plugin is that you can deploy and run additional commands, like initial calls for setup or similar. It also generates a .json file with the ABI and deployed address, used for frontend stuff, and can generate a single file with all contract ABIs and addresses. |
we should be able to just deduce where you want to deploy to based on the RPC you provide |
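One way to picture that deduction (a sketch only; the chain-id mapping and function names below are my own illustration, not forge internals): ask the node for its chain id via eth_chainId and map it to a known network name.

```python
# Sketch: deduce the target network from the RPC endpoint alone,
# by asking the node for its chain id. The mapping is illustrative.
KNOWN_CHAINS = {
    1: "mainnet",
    4: "rinkeby",
    10: "optimism",
    137: "polygon",
}

def deduce_network(fetch_chain_id):
    """fetch_chain_id: a callable that performs eth_chainId against the RPC."""
    chain_id = fetch_chain_id()
    return KNOWN_CHAINS.get(chain_id, f"unknown-chain-{chain_id}")

# With a stubbed RPC call in place of a real endpoint:
print(deduce_network(lambda: 4))  # rinkeby
```

An unrecognized chain id would still work, just without a friendly name.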
Usually we reuse deploy scripts for different networks |
That's my point - when someone does |
Might already have this and I'm not seeing it, but we would also want a flag to specify when you want to deploy on a local fork of that RPC vs. actually deploying to the live network, e.g. with |
Would you be able to reuse the deployment contract as a test fixture if it contains calls to |
Sounds good, but it might be confusing doing |
We can just call it vm.deploy() to avoid the confusion I think? |
What do people think about |
Good point^ So maybe:

```solidity
contract Deployer {
    function deploy() external {
        vm.startBroadcast();
        Foo foo = new Foo();
        foo.set(5);
        vm.stopBroadcast();

        uint256 y = foo.bar() * 2;

        vm.broadcast();
        foo.setY(y);
    }
}
```

(In the above case, …) And maybe … In an ideal world, we're also able to test these deploy scripts. I now wonder if we should add the assertions inline, e.g. as |
we also need to simulate it and make sure it doesn't fail - it would also be good to give the gas costs |
Wonder how we'd handle idempotency, e.g. if for whatever reason it fails at the first call, how does it know to resume from the second one? @moodysalem any thoughts here (cf the v3 deployer from some time ago) |
one option is that when we send a tx, we record it in a cache file - with deployed addresses and what not. I think hardhat does something similar. and if we have a cache, we pickup where we left off unless they pass a flag to restart the deployment (we would need to include the chain it was deployed to tho for multichain deployments, like local node vs mainnet) |
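The cache-file idea above could work roughly like this (a sketch; the file layout and names are invented, not what forge actually does — the cache is keyed by chain id so a local-node run and a mainnet run don't collide):

```python
import json
import os

def run_deployment(chain_id, steps, cache_path="deploy-cache.json", fresh=False):
    """Run `steps` (ordered (name, action) pairs), persisting each result in a
    per-chain cache file so a failed run resumes where it left off.
    Pass fresh=True to ignore the cache and restart the deployment."""
    cache = {}
    if os.path.exists(cache_path) and not fresh:
        with open(cache_path) as f:
            cache = json.load(f)
    chain = cache.setdefault(str(chain_id), {})

    for name, action in steps:
        if name in chain:
            continue  # this tx already landed on this chain: skip it and resume
        chain[name] = action()  # e.g. send the tx, record an address / tx hash
        with open(cache_path, "w") as f:
            json.dump(cache, f)  # persist after every tx, not only at the end
    return chain
```

So if the second step fails, re-running skips the already-recorded first step and retries only the second, exactly the pick-up-where-we-left-off behavior described above.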
Yeah I think this is a great idea and would definitely use it. A couple of ideas (echoing some that are above but wanted to have a comprehensive list):
|
I like all the ideas on @lucas-manuel's list, and would like to add two more:
|
Putting this here: https://github.com/EthWorks/Mars Mars has dry runs, gas cost summaries, automatic deployment of only the parts of your code that changed, etc. We could probably borrow some ideas from there |
I love this ... Some random thoughts (copied from chat):

I wonder how well this would work for necessarily modularized deployments of larger projects, and/or maintaining an idempotent deployment over a longer time span with different parts of the project having different lifespans & different Solidity versions. I already ran into a similar problem in my integration tests where one of the core contracts (a persistent contract in an otherwise upgradable protocol) is on Solidity 0.6.12, so I can't use it directly, and inlining and bumping it to 0.8.11 just for the sake of the test would be weird. Sure, a fork test would work fine but comes with its own downsides (mainly performance).

Anyways ... I'd imagine that problems of a similar nature might arise for long-lived, upgradable protocols that wish to deploy & configure their contracts in an idempotent IaC fashion. It feels like a layered approach could work well, where deployment artifacts shared between these layers/steps in sequence could also be used as a starting point for integration tests (by etching the deployed bytecode as recorded in the deployment artifacts of the previous layers), thus not needing a fork. This could also be a fine approach for acceptance tests prior to a protocol upgrade.

... And a nice way to share deployment artifacts between repositories (just like |
Imho, this is related because integration testing on top of the actual deployment can be quite useful: #326

I might be overthinking this, but here's a possible solution that could work also for the types of projects described in my previous comments, without sacrificing DX/UX: we might be able to structure this with "modularized" deployment files that define their I/O interface using shared structs that live in an "interface" contract in a separate … A cheatcode could then be used to utilize these interfaces, both to share I/O between deployment modules and to enable integration tests to use a deployment (and its output), e.g. in … When running actual production deployments, these structs could be serialized into artifacts and written to disk.

Does this make any sense at all? Is it worth a shot? |
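For what it's worth, the module-I/O idea can be modeled in a few lines (Python purely for illustration; the real thing would be Solidity structs plus a cheatcode, and every name and address below is invented):

```python
import json

# Each "module" receives the outputs of all previous modules and returns its
# own outputs, modeling the shared-struct I/O interfaces described above.
def deploy_core(outputs):
    return {"factory": "0xFAC7"}  # illustrative address

def deploy_periphery(outputs):
    factory = outputs["core"]["factory"]  # consume an earlier module's output
    return {"router": "0x4073", "factory_used": factory}

def run_modules(modules):
    """Run deployment modules in order, threading outputs between them."""
    outputs = {}
    for name, module in modules:
        outputs[name] = module(outputs)
    return outputs

outputs = run_modules([("core", deploy_core), ("periphery", deploy_periphery)])
artifact = json.dumps(outputs)  # serialized artifact for tests / other repos
```

The same serialized outputs could then be loaded by integration tests or by downstream repositories, as suggested in the comment above.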
I am just attacking this based on intuition first. I have started development of this but thought id outline current plan. Basic plan is to:
I already have the cheatcodes written and tx crafting done. Now it's the hard part of actually building the interaction with the VM. Addressing comments:
having multiple
Agree, stdlib will have a helper function for getting code that isn't compatible with a particular version, which leverages the |
awesome. Thoughts on renaming IS_DEPLOY to something like IS_LIVE_REQUESTS to indicate that this could also be used for making live txs and not just deploys? |
ye good thought - we should make all naming agnostic to deploy vs normal txs |
afaik we don't actually use … Also, how would it work with reusing the |
Hi, just found out about this discussion after @onbjerg mentioned it to me here: #634, where I basically mention the lack of a mechanism to replicate the deployment procedure in tests and allude to a potential mechanism to solve it. I see that here you are discussing an approach where you start from the Solidity test, and imagine a mechanism where such test Solidity code would perform the deployment itself, making these deployment setups available in tests as a matter of fact. I have been thinking of a different approach, where the deployment procedure would be free to use whatever programming language suits it. This has several benefits compared to a Solidity approach:
The way I see that alternative approach is as follows:
Basically such script would be given a RPC endpoint (FOUNDRY_RPC_URL) and can perform any action on it, including but not limited to contract deployment. The only requirement for these scripts would be to save the contract data in a specific location / format so that the solidity tests can reuse them. Thinking of something like:
where forge would execute whatever command is passed to it, here "deploy.py" or "node deploy.js", but this can be anything. That execution would have a specific environment variable set to the RPC endpoint to talk to (FOUNDRY_RPC_URL), and maybe other env variables too. Here, since we are dealing with the test command, the RPC endpoint would simply be the in-memory EVM against which the tests are run.
Once the script finishes, forge can then execute the tests and provide cheatcodes for the tests to access the deployment data (like addresses, arguments used, or more). forge should also snapshot the state of the EVM before the tests are executed so that it resets to it before every test (instead of having to re-execute the deployments) (evm_snapshot). Then when it is time to deploy on a network, the script would not even need forge, but forge could have a command to facilitate the setting of the environment variables described above, something like:
A JS script could look like this:

```js
const provider = new JsonRPCProvider(process.env.FOUNDRY_RPC_URL);
// ...
async function main() {
  // ...
  const factory = new ContractFactory(abi, bytecode, signer);
  const contract = await factory.deploy();
  fs.writeFileSync(
    `${process.env.FOUNDRY_DATA_FOLDER}/MyDeployedContract.json`,
    JSON.stringify({ address: contract.address, abi, bytecode })
  );
}
```

As for the Solidity tests, these would look something like the following:

```solidity
// ...
function setUp() {
    MyDeployedContract myContract = MyDeployedContract(vm.getData("MyDeployedContract01").address);
}
// ...
```

What do you think? |
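The snapshot-before-tests flow described above can be sketched with an in-memory stand-in (the FakeEvm class below is my own model; evm_snapshot / evm_revert are the dev-node RPC method names mentioned in the comment, not real network calls here):

```python
import copy

class FakeEvm:
    """In-memory stand-in for a dev node supporting evm_snapshot / evm_revert."""
    def __init__(self):
        self.state = {}
        self._snapshots = []

    def evm_snapshot(self):
        # Save a copy of the current state; return its id.
        self._snapshots.append(copy.deepcopy(self.state))
        return len(self._snapshots) - 1

    def evm_revert(self, snap_id):
        # Restore the state exactly as it was at snapshot time.
        self.state = copy.deepcopy(self._snapshots[snap_id])

evm = FakeEvm()
evm.state["MyDeployedContract"] = "0xabc"  # run the deploy script once
snap = evm.evm_snapshot()                  # snapshot after deployment

for test in ["test_a", "test_b"]:
    evm.state["scratch"] = test            # each test mutates state...
    evm.evm_revert(snap)                   # ...and is reset, deployments intact
```

Each test thus starts from the post-deployment state without re-running the deployment.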
@wighawag I think that in this discussion we've already reached consensus that deployments should absolutely be available to tests. It's clear that we need to be able to verify the integrity of a deployment. Handling deployment artifacts & I/O of some kind is definitely necessary too.
This can be achieved with cheatcodes (handling deployment artifact / config I/O, etc.) too, and storage layout changes in particular are more easily tested directly in Solidity rather than through the JSON-RPC indirection.
Cheatcodes are great and are simply the equivalent of e.g.
What's the difference between learning cheatcodes vs. custom RPC calls & other stubbing / mocking / artifact-loading utils in a less native environment? Personally, I think using Solidity code for deploying contracts is much more intuitive than going through multiple layers of indirection & abstraction in a foreign language.
There are definitely certain (arguably less common) use-cases that might not be covered initially. I think that's fine. Personally, I am absolutely positive that solidity-native deployments are going to offer a far superior developer experience for the majority of cases. Let's not discard this attempt just because it might not cover all cases right off the bat. All that said, nothing is stopping you from building a deployment framework around JSON RPC & the build artifacts produced by the compiler. You could load the deployment artifacts into a fork test (e.g. with a custom cheatcode or ffi) and proceed to verify the deployment there. |
@fubhy thanks for the reply. Looking forward to seeing such Solidity-based deployment implemented :)
One benefit of the approach I mentioned is that I would actually have to do nothing and could reuse existing frameworks. From what you said here, it seems I can already do it. Where could I learn more? Feel free to reply here: #634 to keep that discussion on Solidity-based deployments |
Current plan is for @brockelmore to handle the cheatcode / TX creator, and I'll write the "driver" code which executes each migration to completion idempotently, with gas escalation/rebroadcast for reliable inclusion etc. We'll update you all - thx for all the feedback |
Just making a note: this is a feature most people will not really leverage, in the sense that they do not typically have overly complicated deployments. I have used this before. It generates a bash script for deploying; it can handle doing the entire DSS. I think autogenerated deployment scripts should be a feature, especially in large deployments: who wants to waste time doing that? As for persistence of compiled and deployed output, we need it for verification and for downstream distribution. We could solve both at once and provision a service that offers that; I had explored this idea a few months ago with https://abi.storage Cheers |
just to clarify the benefit of having persistence handled together with deployment: now you could provide a URL to consumers of your application/etc, like unpkg, esm.sh, etc |
Definitely! To add to that, another very useful feature (present in hardhat-deploy) is the ability to share the deployment procedure itself. Very useful to integrate whatever protocol in your integration tests / app. For example, if you use |
Ability to use a defined artifact packaging schema, e.g. eth-pm, dpack, pojson, etc. |
also, websockets. |
I'm new to this, so this may be both off-topic and wrong, but I'm wondering if y'all considered using a proxy pattern coupled with deploy contracts. Feels like it would simplify a lot of the semantics. Based on my reading of ds-proxy, couldn't we have forge deploy a proxy for each account, and then |
We don't want to force users into a particular setup. We want this to feel like a better version of writing a script in javascript. Adding a proxy and what not forces users into a particular style. Given the work in #1208, I think we will stick with this |
Sweet, excited for what's to come! Foundry ftw. |
See #1840 |
Add the ability for forge to use a solidity file to create a contract on mainnet and arbitrary deploy scripts by adding a cheatcode. You could do something like: