feat: permit module #1
`.env.example` (Outdated)

```diff
@@ -1 +1,4 @@
MY_SECRET="MY_SECRET"
X25519_PRIVATE_KEY=TNjblUQ1U959TKGddmNlyOJQ7RMGaqS0XXH185Ei1Ik
```
Usually you leave values empty in the example.
I was trying to make it easy for testing/debugging, and it's the Anvil account, so security isn't a concern, but I'll update this.
```ts
// @ts-expect-error globalThis
globalThis.window = undefined;
// @ts-expect-error globalThis
globalThis.importScripts = undefined;
```
Hope this doesn't cause issues.
`src/generate-permits.ts` (Outdated)
This shouldn't be here.
`src/utils/keys.ts` (Outdated)

```ts
import { scalarMult, box } from "tweetnacl";
import tweetnaclUtil from "tweetnacl-util";
// @ts-expect-error no type
import blake2b from "blake2b";
```
Going to need to do research on these new deps
`wrangler.toml` (Outdated)
I suppose we can keep this in here to allow for optional Worker deployment. This should really all be invoked from action.yml in the root though, using tsx as the runtime.
I can make an issue soon so you can get credit for this submission.
```ts
const NFT_MINTER_PRIVATE_KEY = process.env.NFT_MINTER_PRIVATE_KEY ?? "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d";
const NFT_CONTRACT_ADDRESS = "0x38a70c040ca5f5439ad52d0e821063b0ec0b52b6"; // "0x6a87f05a74AB2EC25D1Eea0a3Cd24C3A2eCfF3E0";
const NFT_MINTER_PRIVATE_KEY = process.env.NFT_MINTER_PRIVATE_KEY;
const NFT_CONTRACT_ADDRESS = process.env.NFT_CONTRACT_ADDRESS;
```
You should immediately check afterward that it exists. Generally we use helper functions to 1. import all expected process.env values, and 2. check that they exist.
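A minimal sketch of that helper pattern (the `requireEnv` name and usage are hypothetical, not the actual codebase's helper):

```typescript
// Hypothetical sketch of the pattern described above: import all expected
// process.env values in one place, and fail fast if any are missing.
function requireEnv(...names: string[]): Record<string, string> {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((name) => [name, process.env[name] as string]));
}

// Usage (hypothetical): destructure everything the module needs up front.
// const { NFT_MINTER_PRIVATE_KEY, NFT_CONTRACT_ADDRESS } = requireEnv(
//   "NFT_MINTER_PRIVATE_KEY",
//   "NFT_CONTRACT_ADDRESS"
// );
```

This way a missing variable surfaces as one clear error at startup rather than as an `undefined` deep inside the signing logic.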
Also, I just realized that in a normal Node runtime you must use dotenv in every file, or else this won't work. Did you test your deliverable? If it's not tested, this is not ready for review.
I posted a test video showing it's all functioning, though not within a Node env. I'm setting that up now, first for `/research`, then `/permit`.
My current setup looks like:
kernel -> pluginDispatch() -> plugin workflow -> runs the action -> posts the comment
I thought that plugins would use repository dispatch to return data to the kernel, which would call the next plugin in the chain or stop if it's the last one.

My setup requires that no data be returned to the kernel, as my plugins would handle the entire execution flow after dispatch. I'm not sure if that's what you intend, but it will suffice for testing.
I forked from your example branch and built from there with a few best guesses
Building the plugin types like this:

```ts
// botconfig...
  plugins: Plugins;
};

type Plugins = {
  [key in keyof GithubEventWebHookEvents]: Plugin[];
};

export interface Plugin {
  name: string;
  description: string;
  command?: string;
  example?: string;
  uses: string[];
  with?: string[];
}

const plugins = configuration.plugins?.ISSUE_COMMENT_CREATED;
```
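For illustration, `command` regexes in this config shape could drive a simple dispatch step, roughly like this (a sketch: `matchPlugins` and the example entries are hypothetical, and the `Plugin` interface is restated so the snippet is self-contained):

```typescript
interface Plugin {
  name: string;
  description: string;
  command?: string;
  example?: string;
  uses: string[];
  with?: string[];
}

// Hypothetical sketch: return the plugins whose `command` regex matches a comment body.
function matchPlugins(plugins: Plugin[], commentBody: string): Plugin[] {
  return plugins.filter(
    (plugin) => plugin.command !== undefined && new RegExp(plugin.command).test(commentBody)
  );
}

// Example entries (illustrative only, not real config values).
const examplePlugins: Plugin[] = [
  { name: "research", description: "GPT research", command: "^/research\\s+", uses: ["ubq-testing/research"] },
  { name: "permit", description: "Generate a permit", command: "^/permit\\s+", uses: ["ubq-testing/permit-generation"] },
];
```

The kernel (or the plugin workflow, depending on the architecture debated below) would then fire a dispatch for each matched plugin.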
> My setup requires that no data be returned to the kernel as my plugins would handle the entire execution flow after dispatch, not sure if that's what you intend for it but it will suffice for testing

Sure, for early testing it might be fine. @whilefoo please chime in on what the interface will look like for plugin I/O.

At least I can say that the kernel passes 1. the config for that command (which may include extra properties that specific plugins will need to ignore) 2. an authentication token so that the plugin can authenticate (read private repositories, post comments as the bot, etc.)

The plugins must return results to the kernel because the kernel is what reads the config and connects the chain of plugins together. Each partner instance can configure their own order of plugins. It is only sensible to implement the logic once, inside the kernel, to dynamically handle mounting multiple plugins in whatever order.
Where I'm confused is: invoking an action.yml is different from a workflow. An action.yml cannot be executed on a dispatch unless called from a workflow, which would imply each plugin will require a workflow listening for its custom event; the kernel would kick off the workflow, which would invoke the action.yml. Otherwise, won't the kernel require some kind of robust generic workflow that it can start that invokes these action.ymls?

The way I have it set up now, the config.yml is parsed, all plugins for the new-comment event are initiated in `plugins`, and it maps over the regexes, applying them and then firing the plugin (starting the workflow in the plugin's repo, which handles sequencing of the other plugins it relies on by way of the `uses` keyword) if any of them match.

If the kernel is piping output from one workflow into the next, that's going to take ages running through setup 2/3/4 times, as opposed to one workflow orchestrating the sequence of other plugins/core actions that it relies on. It's still very modular and partner-configurable, and it would be quicker than waiting for multiple runners to be assigned, built, and executed.

Maybe I'm off on the wrong track, idk, but I can't wrap my head around it from a plugin-dev standpoint. Partners just add the plugin's info into their config.yml, obtained from the plugin's README; the dev defines the config and the sequence of called actions. If the kernel is using dispatch to kick off multiple sequential workflows, then it could just call one workflow that encapsulates everything that plugin needs, and then the kernel can be hands-off immediately after dispatch.

As custom mounting etc. is to be avoided without hindering plugin extensibility, I thought it made sense that the kernel scans the config for plugins and applies those regexes on the appropriate webhook events, then allows the plugin's workflow to handle everything.

Are there more reasons to send data back to the kernel other than "piping"?
It seems that your primary concern with this architecture is speed.
As per my original proposal we can easily compile popular recipes into something like a cloudflare worker if speed is a concern.
For example the recipe for conversation rewards by default would be three separate GitHub actions in my proposal. But because we use it so frequently it might make sense to compile into a single Worker and use that endpoint as the "workflow" (in this case it would make sense to paste in the entire URL to indicate to the bot that it's an external API)
However, I foresee future use cases that will require payments to be generated. It is unwise to redundantly copy the permit generation code across multiple plugins, so that's why we would keep it in its own reusable module.
Beyond this, it makes sense to conform to GitHub's existing Action schema to take advantage of all of the actions that already exist. We can make the bot interoperate seamlessly with all of those existing capabilities.
For example, with my proposal it would be trivial to map any action to any arbitrary slash command. The possibilities are infinite though.
It makes our offering more capable without much extra work.
Lastly it technically would allow us to directly connect our plugins into CI if we wanted to.
I don't understand your concern with action.yml. It can be a simple wrapper that just calls an action inside of the workflows folder. It's a non-issue.
Thanks for this, I'm sure I'm on the same page now.
Convo Rewards modules:
- commentScoring (action.yml)
- permitGeneration (action.yml)
- permitComment (action.yml)
This recipe is invoked via dispatch (it starts a workflow in the recipe repo; otherwise it would need to kick off one from the kernel/bot repo, which I don't expect) with `uses` for each module sequenced as per the workflow file (`ubiquibot-config.plugins?.ISSUE_COMMENT_CREATED` would match this sequence, adhering to the plugin config structure you described).

Each module is a lego brick that a workflow imports and runs with or without other modules.

If a recipe takes an exorbitant amount of time, a worker would be built for that recipe.
> I don't understand your concern with action.yml. It can be a simple wrapper that just calls an action inside of the workflows folder. It's a non issue.
Wouldn't it be the other way about: the `/workflows/*.yml` calls the re-usable `action.yml`s, and then, if piping the output from a workflow (which has run the imported actions), you'd access that through the `actions/runs/${runID}/jobs` endpoint?

From what I've read, and in building what I have, I've found that dispatching a workflow does not return a run ID; you'd need to fetch from `/actions/runs/` and filter for `in_progress`, then match a runner to the workflow that was just dispatched (assuming heavy traffic, it might be difficult to find the relevant run). Will know for sure later today.
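The run-matching step described here can be sketched as a pure filter over the `/actions/runs` response (field names follow GitHub's workflow-run objects; the matching heuristic itself is an assumption, not a documented mechanism):

```typescript
interface WorkflowRun {
  id: number;
  path: string;       // e.g. ".github/workflows/compute.yml"
  status: string;     // "queued" | "in_progress" | "completed"
  created_at: string; // ISO timestamp
}

// Hypothetical heuristic: since workflow_dispatch returns no run ID, list the
// recent runs and pick the newest queued/in-progress run of the dispatched
// workflow file that was created after we fired the dispatch.
function findDispatchedRun(
  runs: WorkflowRun[],
  workflowPath: string,
  dispatchedAt: Date
): WorkflowRun | undefined {
  return runs
    .filter((run) => run.path === workflowPath)
    .filter((run) => run.status === "queued" || run.status === "in_progress")
    .filter((run) => new Date(run.created_at) >= dispatchedAt)
    .sort((a, b) => new Date(b.created_at).getTime() - new Date(a.created_at).getTime())[0];
}
```

As noted above, under heavy traffic this heuristic can still pick the wrong run, which is exactly the ambiguity being raised.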
> Wouldn't it be the other way about, the /workflows/*.yml calls the re-usable action.yml
Don't see why that would be the case. Perhaps there is a gap in my knowledge, but from what I understand `action.yml` is the default entry point for any GitHub Action, so we should follow this.

Also, as a side note, I realize that `conversation-rewards` (the first step of the recipe, which includes qualitative and quantitative analysis of comments) would be impossible to include in a Worker. It was actually the reason why we invented the "delegated compute" infrastructure in the first place (being able to run event handlers outside of the bot kernel).

It takes a tremendous amount of time to render comments in a virtual DOM (this is to be able to consolidate both markdown and HTML into a single HTML selector for `ubiquibot-config.yml` purposes), as well as to run the OpenAI queries for qualitative analysis. From memory it took ~400ms per comment for quantitative analysis - even running locally on an M1 MacBook - with average issues having around 10 comments to process, including their pull requests'. And then there are several network requests to OpenAI, which returns its results over a couple of seconds.

The Workers free tier has a max of 10ms of CPU time, for perspective.
Seeing whilefoo's recent push, I have a better understanding of how things are going to be running.

The comment-created webhook fires, which I assume uses `workflow_dispatch` to start the plugin, and the plugin is expected to return the output with a `repo_dispatch` step (I think).

`/research` needs to be able to read from the botConfig for the OpenAI key, and I see that the authToken is being passed as an input (which I think is insecure if that authToken can read from private repos, idk) via `DelegatedComputeInputs`.

I haven't fired any workflows this way yet as I don't want to log anything sensitive (unsure if it would). On my build, and with this one, that's where I'm stuck.

So I was thinking of setting repo secrets (assuming that's how the @ubiquibot/plugin repos will work too), which in the context of the `/research-plugin` repo, where the action will run, would give it that private read access without sending the app install auth token?

Should my research action be expecting `npx tsx "body" issueNumber sender org`, or should it be expecting to handle the original event and destructure everything it needs (token, body, etc.)?

I'm hanging tight until the plugin setup is configured. Running in a Node env has been tested locally using dotenv; I was able to dispatch events from the kernel and have the script run, but it bailed when it needed private repo permissions. Edit: I mean from how I had the plugin setup configured; the setup looks like it's along the lines of what I had figured, but the permissions issue was my blocker.
@Keyrxng if you have any questions on how to integrate the plugin with the kernel let me know
I think the wisest way to proceed is to make a plugin template repo and everybody can just fork it. Some essential capabilities off the top of my head:

This could potentially be five plugins that can be developed concurrently.

**Permit Generation SDK**

I'm starting to wonder if it would make sense to make permit generation an SDK instead of a RESTful API. We already started on an npm package called @ubiquibot/config

```ts
import { Permit } from "@ubiquibot/plugin-sdk";

const EVM_KEY = await decryptEvmPrivateKey(config.keys.evmPrivateKeyEncrypted);
const permit = new Permit({
  user: "keyrxng",
  amount: "50",
  token: "0x1234...",
  from: EVM_KEY
});

await postPermitComment(permit.comment);
```

It could allow tighter coupling with respect to the plugin logic and permit generation interfaces. Obviously fully local development plus type safety will be easier than debugging over GitHub Actions. @FernandVEYRIER @whilefoo rfc

**Security**

@rndquu rfc from here below

I'm still thinking through this, but each plugin might require its own EVM key encryption tool/webapp, which seems like a bad idea. But it allows fine-grained control for partners to grant specific plugin developers access to specific keys. Also, plugin devs can't steal keys from other plugins with this approach.

**SAFEs (Decentralized)**

A totally separate approach is:

I'm unsure if it's compatible with permit generation, but this would allow partners to "opt in" plugins and set fine-grained limits around amounts etc.

**Database (Centralized)**

A far more centralized approach: we could just write the credits to our Supabase payments ledger. Then when the user goes to pay.ubq.fi, we can load all of the money owed to them and then generate all the permits from all the project wallets.

**Closing Thoughts**

Centralization is tempting but kind of against the Ethereum ethos. I think we should continue to lean into the decentralized way.
I already started working on assistive pricing. I can already see that every plugin will need the same setup so in the future we will need to make a plugin SDK.
So every plugin would need its own public/private key to decrypt the partner's private keys, and how would we manage and store all the encrypted keys?

```yml
globals:
  encryptedPrivateKey: "wallet private key encrypted with kernel's private key"
...
- uses:
  - plugin: plugin-A
    with:
      evmEncryptedKey: ${{ encrypt(globals.encryptedPrivateKey) }} # encryptedPrivateKey will be encrypted with plugin's public key
```

Just some quick thoughts, I still need to think more about this.
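The re-encryption step in this snippet could look something like the following sketch using Node's built-in X25519 support. The key-wrapping scheme (ephemeral ECDH plus AES-GCM) is an assumption for illustration, not the kernel's actual implementation:

```typescript
import {
  generateKeyPairSync,
  diffieHellman,
  createCipheriv,
  createDecipheriv,
  createHash,
  randomBytes,
  KeyObject,
} from "node:crypto";

// Assumption: each plugin holds its own X25519 keypair, and the kernel uses an
// ephemeral keypair per message so only the target plugin can unwrap the key.
const plugin = generateKeyPairSync("x25519");
const ephemeral = generateKeyPairSync("x25519");

// Shared secret via ECDH, hashed down to a 256-bit AES key.
function deriveAesKey(privateKey: KeyObject, publicKey: KeyObject): Buffer {
  const shared = diffieHellman({ privateKey, publicKey });
  return createHash("sha256").update(shared).digest();
}

// Kernel side: wrap the (already-encrypted) partner key blob for plugin-A.
const iv = randomBytes(12);
const wrapKey = deriveAesKey(ephemeral.privateKey, plugin.publicKey);
const cipher = createCipheriv("aes-256-gcm", wrapKey, iv);
const wrapped = Buffer.concat([cipher.update("encryptedPrivateKey-blob", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();

// Plugin side: unwrap using its own private key plus the kernel's ephemeral public key.
const unwrapKey = deriveAesKey(plugin.privateKey, ephemeral.publicKey);
const decipher = createDecipheriv("aes-256-gcm", unwrapKey, iv);
decipher.setAuthTag(tag);
const unwrapped = Buffer.concat([decipher.update(wrapped), decipher.final()]).toString("utf8");
```

The inner blob stays encrypted with the kernel's key throughout, so the plugin only ever unwraps the outer layer it was given.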
At a high level I think that it would be best to abstract all the headache of encryption away from partners. The plugin developers should have to worry about it.

**API Key Style (Centralized)**

One, possibly bad, idea is that Ubiquity assigns an encryption key per plugin developer (very similar to how companies provide API keys), although this is a centralized approach. Then we can have a single UI that allows partners to generate their configs with encrypted EVM private keys per plugin. The webapp can parse the existing config and then insert the

So the flow would be:

**SAFE (Decentralized)**

I still think there's potentially a missed opportunity here to fully leverage SAFE for delegating authority to other signers (plugins) etc. It just requires a deep dive on their capabilities in the context of our plugins.
Nice work foo, once you understand what's going on it's pretty intuitive: the permit module creating the permit, then feeding it into a very simple comment. I wasn't sure whether all of the comment scoring stuff was to be bundled with payout if the permit module is a single unit. It makes sense for the comment scoring to take place, then feed that info to the permit module, and then post the comment?

@whilefoo is the intention that once the last

I had to change a couple of bits of the kernel:

```ts
const installation = installations.data.find((inst) => inst.account?.login.toLowerCase() === owner.toLowerCase());
```
The plugin should return data to the kernel because it doesn't know if it's the last one in the chain. Even if there are no outputs, it should still return an empty output object so that the kernel knows it finished execution and can call the next plugin if there is one.
This is my plugin config foo:

I'm doubting that I have the config set up the way you envision it, but at the same time, these two "plugins" which I'm testing with, idk whether they'll be bundled as one or used as individual building blocks, but the fact that things are working says I'm on the right track anyway. Once I'm sure I'm on track I'll be more confident in throwing together plugins (

```yml
plugins:
  issue_comment.created:
    - name: "Permit Generation"
      description: "Generate a payout permit on the fly."
      command: "^\/permit\\s+((0x[a-fA-F0-9]{40})|([a-zA-Z0-9]{4,})|([a-zA-Z0-9]{3,}\\.eth))\\s+([a-zA-Z0-9]+|\\d+)$"
      example: "/wallet <wallet address>"
      uses:
        - plugin: ubq-testing/permit-generation:compute.yml@plugin-testing
          type: "github"
          with:
            evmNetworkId: 100
            evmPrivateEncrypted: ...
          id: "permitgeneration"
        - plugin: ubq-testing/payout-comment:compute.yml@development
          type: "github"
          with:
            type: "${{ permitgeneration.output.type }}"
            permit: "${{ permitgeneration.output.permit }}"
            transferDetails: "${{ permitgeneration.output.transferDetails }}"
            owner: "${{ permitgeneration.output.owner }}"
            signature: "${{ permitgeneration.output.signature }}"
            networkId: "${{ permitgeneration.output.networkId }}"
          id: "permitcomment"
      skipBotEvents: true
```
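A minimal sketch of how a kernel might resolve `${{ permitgeneration.output.* }}` style expressions against prior plugin outputs (the expression syntax comes from this config; the resolver and example values are hypothetical):

```typescript
type PluginOutputs = Record<string, Record<string, string>>; // keyed by plugin `id`

// Hypothetical resolver: replace `${{ someId.output.someKey }}` with the value
// that the plugin with that `id` returned to the kernel earlier in the chain.
function resolveExpressions(template: string, outputs: PluginOutputs): string {
  return template.replace(/\$\{\{\s*(\w+)\.output\.(\w+)\s*\}\}/g, (match, id, key) => {
    return outputs[id]?.[key] ?? match; // leave unresolved expressions untouched
  });
}

// Example outputs (illustrative only).
const outputs: PluginOutputs = {
  permitgeneration: { signature: "0xabc123", networkId: "100" },
};
```

This also shows why each plugin must report its outputs back to the kernel: the `with` block of the next plugin cannot be materialized otherwise.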
The config looks good!

The comment plugin should return data (it can be empty), but it will work even if it doesn't.
@pavlovcik is the intention then that the permit module receives only a GitHub username & amount, or will there be a step in between converting username to wallet address, or should the permit module be capable of receiving either/or? Also, if both NFT rewards and ERC20 payouts are enabled, then how do we distinguish?

```ts
{
  pavlovcik: {
    amount: 50,
    nft: false,
  },
};
```
The default is compute.yml, but you can change it. If by action.yml you refer to the composite action in the root dir, you can't call a composite action directly.

It skips events triggered by bots, for example when the bot posts a comment.
Didn't deeply do research here, but I'm very certain that if necessary we can call whatever file we need in a repo, including

We want to maximize compatibility with GitHub Actions so that we are able to make the bot more useful out of the gates, i.e. by mapping a custom slash command to any GitHub Action in the marketplace.
Makes sense to make that the default behavior. I can't think of any instance where we want the bot to react to bots. There are direct ways to handle this via custom webhooks; otherwise it seems prone to infinite loops. GitHub has a similar apprehension about bot loops (it's quite difficult to set up infinite loops with GitHub tooling for bot-to-bot invocations).
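The `skipBotEvents` behavior reduces to a one-line guard on the webhook sender. A sketch (the field names follow GitHub webhook payloads; `shouldProcessEvent` is a hypothetical name):

```typescript
interface WebhookSender {
  login: string;
  type: "User" | "Bot" | "Organization";
}

// Hypothetical guard: when skipBotEvents is enabled, ignore any event whose
// sender is a bot account, which also prevents bot-to-bot comment loops.
function shouldProcessEvent(sender: WebhookSender, skipBotEvents: boolean): boolean {
  return !(skipBotEvents && sender.type === "Bot");
}
```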
Maybe we can generalize with a token address. The NFT can be an ERC1155, which means that the amount is nothing special. We can technically mint 1 or many and it's still the NFT.

```json
{
  "pavlovcik": {
    "amount": "1",
    "token": "0x4e38D89362f7e5db0096CE44ebD021c3962aA9a0"
  }
}
```
And the permit module is being treated as a plugin, technically speaking, right? @whilefoo only the kernel has access to this if I'm not mistaken, right? The plugins aren't fed the KV id, so if the kernel is only processing consecutive plugins, a username > id fetch plugin is required that'll need to run with repo secrets access to the up-to-date KV key? Or is the permit module going to be a standalone worker with its own KV storage? Maybe I'm missing something...
I envision that the permit module should be isolated as its own plugin, yes. @whilefoo knows the implementation details best, but I imagine that it would be most secure to only authenticate requests coming from the kernel. But maybe that's tedious to build?
That's correct. if

But if multiple plugins need access to wallet addresses, then it depends on whether they want the data to be public or private. Making it private is hard, because you need a way that only plugins can access it, and even then, what if somebody makes a plugin just to get wallet addresses?

You can't really do that because you are calling a workflow and
I think we shouldn't aim to support plugin data sharing yet. Let's build it when we actually need to, given that we are over two weeks behind schedule.
Is permit generation gonna be a Cloudflare Worker or a plugin? @pavlovcik
Depends on whether it's going to be a worker with its own KV or have its own Supabase DB. I've seen pav say each module should have its own KV, which implies either that all modules are independent workers or that the kernel has access to the various module KVs, and I've seen you say that all plugins should have their own Supabase DB. I started working on this again about an hour ago; seeing that you're going with the Supabase DB, I've done the same. I'll have a good chunk pushed tonight.

TODOs

I've got the base setup for either a worker or a plugin; it just needs to be decided which way it should be taken.
We should use KV because I think it's a lot simpler for developers to set up, for maintenance reasons. Although, to be fair, I didn't work on this, so I have limited perspective on whether Supabase would be superior for some reason for this use case.

I consider everything a "plugin", and they should support both GitHub Actions and Cloudflare Worker runtimes. I think that it's convenient to make a template that automatically allows plugin developers to run off of GitHub Actions, but perhaps we should have native support for Cloudflare Workers as well for "production" plugins. I think they are less convenient to debug but obviously way more performant, which is a sensible trade-off for finalized plugins. I particularly enjoy the transparency of GitHub Actions logs and the other related views.

Essential. I wish that the config could define the command key. @whilefoo do you have advice for this? It would be interesting if we could map arbitrary key strings to function signatures, inspired by Solidity (but I don't think this is possible). We may, however, be able to map specific file paths to modules inside of the plugin to the key string in the config, although I don't like that it requires digging through a plugin's contents just to map a command.

There is a concept called base rate, in reference to how important a specific repository is. It should be passed in from the config. However, the old multiplier we used to associate with specific contributors I think we should deprecate. We barely used it.

Runtime environment secrets. GitHub Actions should read from the repository secrets, Cloudflare Workers from Cloudflare secrets. Also, something I just realized: our ts-template natively supports UI hosting upon deployment. It would be very interesting if some plugins had their own UIs for registration etc. So instead of onboard.UBQ.fi to encrypt a wallet private key, it could be hosted directly by this permit plugin so the code is all shared. It would be an even better UX if the plugin generated a dedicated wallet private key for the user as well, instead of them bringing their own wallet address. Ideally fully on the client side, but a RESTful API might be okay in the short term.
I realized that we will need to share

This is because

@whilefoo rfc
I took things in a Supabase direction after the initial Cloudflare Worker build, but I can pivot back if it's going to be a worker.

I've set it up so that if a wallet cannot be found, the permit doesn't get generated, and a comment is posted telling them to register and have a reviewer re-run the workflow, which I intend to throw if no permit gets generated; then it'll get passed to the permit comment plugin once completed. Which in theory should just pause it until they register, if I'm thinking along the right lines. I've set up 3 commands

I'm supporting

I started working on

I suggested account abstraction a while back and implemented a rough af version when things started to pivot to a broader target demographic, which is basically what you are suggesting here, I think. E.g. contributor registers via
chore: testing workflow
I think we should only use Supabase if KV isn't sufficient for a plugin's needs (although I can't see how this may be the case). To clarify: I have a fair amount of experience using Supabase and conceptually understand KV (haven't really used it much, though), but it seems that KV should be faster for plugin developers to set up. For the short term, I say we all try working with KV first for plugins. Then when we eventually make our plugin SDK or plugin template, we can include KV by default.

The permit module should be strictly defined in its capabilities in order to keep the code small, simple, and maintainable, from both a maintenance and a security perspective. The only I/O it must have is with the kernel, which means that it shouldn't be able to post error comments directly. That would be a lot of unnecessary code bloat.

Multiple commands seems interesting but out of scope. We can simply toggle the close-as-completed state to regenerate permits. Again, this adds unnecessary code bloat.

I have no experience working with the NFT code, so I have little input here. However, the information should come in explicitly defined in the config. Implicitly determining if it's an NFT reward is the wrong approach.

No, I'm referring to the contributor's set multiplier that we saved in the database, which would output their task assignment scoring multiplied with this number. We tried it temporarily but stopped using it. If you're starting research on the daily streak thing, that's fine, but it should definitely not be in this code. As per the readme, this codebase should only accept input related to the total amount of rewards and to whom, then output payment permits only. All other complex logic for calculating the total per contributor etc. must be handled outside of this plugin, in separate plugins.

I read that Coinbase is working on some in-browser solution that's in beta now. We can probably learn from their implementation and do something similar.
Plugin devs could set up KV and access it through the API with their Cloudflare creds, then when the plugin gets Ubiquity approval and becomes hosted under

If I'm to revert back to the worker build I had, then the flow should be:

Edit: A better idea is probably just to have a worker for the sole purpose of KV access and keep everything else as-is; that way there's no need for the libsodium workarounds.
Similar to the Chrome Extension store or the iOS App Store, each plugin should be independently hosted with its own backend. This keeps development for new developers, as well as managing our infrastructure, simple.

I'm pretty certain that KV works in GitHub Actions. Besides, worst-case scenario, we just run "miniflare" inside of GitHub Actions. It's a full Ubuntu environment; it should basically not have any limitations on program execution.
chore: add correct Knip workflow configuration
Template init
Related to ubiquity/.github#98
Cloudflare worker based permit2 module