
feat: permit module #1

Merged · 15 commits · Mar 27, 2024

Conversation

@Keyrxng (Contributor) commented Feb 25, 2024

Related to ubiquity/.github#98

Cloudflare worker based permit2 module

.env.example Outdated
@@ -1 +1,4 @@
MY_SECRET="MY_SECRET"
X25519_PRIVATE_KEY=TNjblUQ1U959TKGddmNlyOJQ7RMGaqS0XXH185Ei1Ik
Member

Usually you leave values empty in the example.

Contributor Author

I was trying to make it easy for testing/debugging, and it's the Anvil account so security isn't a concern, but I'll update this.

Comment on lines 12 to 15
// @ts-expect-error globalThis
globalThis.window = undefined;
// @ts-expect-error globalThis
globalThis.importScripts = undefined;
Member

Hope this doesn't cause issues.

Member

This shouldn't be here.

import { scalarMult, box } from "tweetnacl";
import tweetnaclUtil from "tweetnacl-util";
// @ts-expect-error no type
import blake2b from "blake2b";
Member

Going to need to do research on these new deps

wrangler.toml Outdated
Member

I suppose we can keep this in here to allow for optional Worker deployment. This should really all be invoked from action.yml in the root though, using tsx as the runtime.

@0x4007 (Member) commented Feb 25, 2024

I can make an issue soon so you can get credit for this submission.

const NFT_MINTER_PRIVATE_KEY = process.env.NFT_MINTER_PRIVATE_KEY ?? "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d";
const NFT_CONTRACT_ADDRESS = "0x38a70c040ca5f5439ad52d0e821063b0ec0b52b6"; // "0x6a87f05a74AB2EC25D1Eea0a3Cd24C3A2eCfF3E0";
const NFT_MINTER_PRIVATE_KEY = process.env.NFT_MINTER_PRIVATE_KEY;
const NFT_CONTRACT_ADDRESS = process.env.NFT_CONTRACT_ADDRESS;
Member

You should immediately check afterwards that it exists. Generally we use helper functions to 1. import all expected process.env values, and 2. check that they exist.
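A minimal sketch of that helper pattern (names are hypothetical, not the actual kernel code):

```typescript
// Hypothetical helper: load all required process.env values in one place and
// fail fast if any are missing, instead of sprinkling fallbacks per file.
function requireEnvVars<K extends string>(keys: readonly K[]): Record<K, string> {
  const missing: string[] = [];
  const values = {} as Record<K, string>;
  for (const key of keys) {
    const value = process.env[key];
    if (value === undefined || value === "") {
      missing.push(key);
    } else {
      values[key] = value;
    }
  }
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return values;
}

// usage (hypothetical): const env = requireEnvVars(["NFT_MINTER_PRIVATE_KEY", "NFT_CONTRACT_ADDRESS"]);
```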

Member

Also, I just realized that in a normal Node runtime you must use dotenv in every file, or else this wouldn't work. Did you test your deliverable? If it's not tested, this is not ready for review.

@Keyrxng (Contributor Author) Feb 26, 2024

I posted a test video showing it's all functioning, though not within a Node env; I'm setting that up now, first for /research then /permit.

My current setup looks like:

kernel -> pluginDispatch() -> plugin workflow -> runs the action -> posts the comment

I thought that plugins will use repository dispatch to return data to the kernel, which will call the next plugin in the chain or stop if it's the last one.

My setup requires that no data be returned to the kernel, as my plugins handle the entire execution flow after dispatch; not sure if that's what you intend, but it will suffice for testing.

I forked from your example branch and built from there with a few best guesses

https://github.com/Keyrxng/ubiquibot-kernel/blob/plugins/src/github/handlers/issue-comment/created.ts

Building the plugin types like this

botconfig...
  plugins: Plugins;
};

type Plugins = {
  [key in keyof GithubEventWebHookEvents]: Plugin[];
};

export interface Plugin {
  name: string;
  description: string;
  command?: string;
  example?: string;
  uses: string[];
  with?: string[];
}

const plugins = configuration.plugins?.ISSUE_COMMENT_CREATED;
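The match step over those configured plugins could be sketched roughly like this (hypothetical names, not the linked kernel code):

```typescript
// Hypothetical sketch: given the plugins configured for an event, select the
// ones whose command regex matches the comment body, so they can be fired.
interface ConfiguredPlugin {
  name: string;
  command?: string; // regex source, taken from the partner's config.yml
}

function matchPlugins(plugins: ConfiguredPlugin[], commentBody: string): ConfiguredPlugin[] {
  return plugins.filter((plugin) => {
    if (!plugin.command) return false; // not a slash-command plugin
    return new RegExp(plugin.command).test(commentBody.trim());
  });
}
```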

@0x4007 (Member) Feb 26, 2024

My setup requires that no data be returned to the kernel as my plugins would handle the entire execution flow after dispatch, not sure if that's what you intend for it but it will suffice for testing

Sure, for early testing it might be fine. @whilefoo please chime in on what the interface will look like for plugin I/O.

At least I can say that the kernel passes: 1. the config for that command (which may include extra properties that specific plugins will need to ignore), and 2. an authentication token so that the plugin can authenticate (read private repositories, post comments as the bot, etc.)


The plugins must return results to the kernel because the kernel is what reads the config and connects the chain of plugins together.

Each partner instance can configure their own order of plugins.

It is only sensible to implement the logic once, inside the kernel, to dynamically handle mounting multiple plugins in whatever order.
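That kernel-side chaining could be sketched like this (the plugin shape and names are assumptions for illustration, not the actual interface):

```typescript
// Hypothetical sketch: the kernel owns the chain. Each plugin receives the
// partner config, an auth token, and the previous plugin's output, and must
// return a result (possibly empty) so the kernel can invoke the next one.
type PluginOutput = Record<string, unknown>;

interface ChainedPlugin {
  name: string;
  run(input: { config: unknown; authToken: string; previousOutput: PluginOutput }): Promise<PluginOutput>;
}

async function runPluginChain(plugins: ChainedPlugin[], config: unknown, authToken: string): Promise<PluginOutput> {
  let output: PluginOutput = {};
  for (const plugin of plugins) {
    // the kernel, not the plugins, decides the order (per-partner config)
    output = await plugin.run({ config, authToken, previousOutput: output });
  }
  return output;
}
```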

Contributor Author

Where I'm confused is: invoking an action.yml is different from a workflow. An action.yml cannot be executed on a dispatch unless called from a workflow, which would imply each plugin requires a workflow listening for its custom event, and the kernel would kick off that workflow, which would invoke the action.yml. Otherwise, won't the kernel require some kind of robust generic workflow that it can start which invokes these action.ymls?

The way I have it set up now, the config.yml is parsed, all plugins for the new-comment event are initiated in plugins, and it maps over the regexes, applying them and then firing the plugin (starting the workflow in the plugin's repo, which handles sequencing of the other plugins it relies on by way of the uses keyword) if any match.

If the kernel is piping output from one workflow into the next, that's going to take ages running through setup 2/3/4 times, as opposed to one workflow orchestrating the sequence of other plugins/core actions that it relies on. It's still very modular and partner-configurable, and would be quicker than waiting for multiple runners to be assigned, built and executed.

Maybe I'm off on the wrong track, idk, but I can't wrap my head around it from a plugin dev standpoint. Partners just add the plugin's info into their config.yml, obtained from the plugin's README; the dev defines the config and the sequence of called actions. If the kernel is using dispatch to kick off multiple sequential workflows, then it could just call one workflow that encapsulates everything that plugin needs, and the kernel can be hands-off immediately after dispatch.

As custom mounting etc. is to be avoided without hindering plugin extensibility, I thought it made sense that the kernel scans the config for plugins and applies those regexes on the appropriate webhook events, then lets the plugin's workflow handle everything.

Are there more reasons to send data back to the kernel other than 'piping'?

@0x4007 (Member) Feb 26, 2024

It seems that your primary concern with this architecture is speed.

As per my original proposal we can easily compile popular recipes into something like a cloudflare worker if speed is a concern.

For example the recipe for conversation rewards by default would be three separate GitHub actions in my proposal. But because we use it so frequently it might make sense to compile into a single Worker and use that endpoint as the "workflow" (in this case it would make sense to paste in the entire URL to indicate to the bot that it's an external API)


However I see maybe future use cases that will require payments to be generated. It is unwise to redundantly copy the permit generation code across multiple plugins. So that's why we would keep it in its own reusable module.


Beyond this, it makes sense to conform to GitHub's existing Action schema to take advantage of all of the actions that already exist. We can make the bot interoperate seamlessly with all of those existing capabilities.

For example, with my proposal it would be trivial to map any action to any arbitrary slash command. The possibilities are infinite though.

It makes our offering more capable without much extra work.


Lastly it technically would allow us to directly connect our plugins into CI if we wanted to.


I don't understand your concern with action.yml. It can be a simple wrapper that just calls an action inside of the workflows folder. It's a non issue.

Contributor Author

Thanks for this; I'm sure I'm on the same page now.

Convo Rewards modules:

  • commentScoring (action.yml)
  • permitGeneration (action.yml)
  • permitComment (action.yml)

This recipe is invoked via dispatch (it starts a workflow in the recipe repo; otherwise it would need to kick off one from the kernel/bot repo, which I don't expect), with uses for each module sequenced as per the workflow file. ubiquibot-config.plugins?.ISSUE_COMMENT_CREATED would match this sequence, adhering to the plugin config structure you described.

Each module is a lego brick that a workflow imports and runs with or without other modules

If a recipe takes an exorbitant amount of time a worker would be built for that recipe

I don't understand your concern with action.yml. It can be a simple wrapper that just calls an action inside of the workflows folder. It's a non issue.

Wouldn't it be the other way about: the /workflows/*.yml calls the re-usable action.ymls, and then if piping the output from a workflow (which has run the imported actions), you'd access that through the actions/runs/${runID}/jobs endpoint?

From what I've read, and in building what I have, I've found that dispatching a workflow does not return a runID; you'd need to fetch from /actions/runs/ and filter for in_progress, then match a runner to the workflow that was just dispatched (assuming heavy traffic, it might be difficult to find the relevant run). Will know for sure later today.
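For reference, the REST call behind workflow_dispatch responds with 204 and no run id, so one workaround is to list runs and best-effort match. A sketch (field names mirror the list-runs response; the helper itself is hypothetical):

```typescript
// Hypothetical sketch: after dispatching, poll GET /repos/{owner}/{repo}/actions/runs
// and pick the newest run that plausibly corresponds to the dispatch.
interface WorkflowRun {
  id: number;
  name: string; // workflow name
  status: string; // "queued" | "in_progress" | "completed"
  event: string; // e.g. "workflow_dispatch"
  created_at: string; // ISO timestamp
}

function findDispatchedRun(runs: WorkflowRun[], workflowName: string, dispatchedAfter: Date): WorkflowRun | undefined {
  return runs
    .filter(
      (run) =>
        run.name === workflowName &&
        run.event === "workflow_dispatch" &&
        new Date(run.created_at) >= dispatchedAfter &&
        (run.status === "queued" || run.status === "in_progress")
    )
    // newest first; under heavy traffic this is still only a best-effort match
    .sort((a, b) => Date.parse(b.created_at) - Date.parse(a.created_at))[0];
}
```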

@0x4007 (Member) Feb 28, 2024

Wouldn't it be the other way about, the /workflows/*.yml calls the re-usable action.yml

Don't see why that would be the case. Perhaps there is a gap in my knowledge, but from what I understand action.yml is the default entry point for any GitHub Action, so we should follow this.

Also, as a side note, I realize that conversation-rewards (the first step of the recipe, which includes qualitative and quantitative analysis of comments) would be impossible to include in a Worker. It was actually the reason why we invented the "delegated compute" infrastructure in the first place (being able to run event handlers outside of the bot kernel).

It takes a tremendous amount of time to render comments in a virtual DOM (this is to be able to consolidate both markdown and HTML into a single HTML selector for ubiquibot-config.yml purposes) as well as to run the Open AI queries for qualitative analysis. From memory it took ~400ms per comment for quantitative analysis - even running locally on an M1 Macbook - with average issues having around 10 comments to process including their pull requests'. And then several network requests to OpenAI, which returns its results over a couple of seconds.

Workers' free tier has a max of 10 ms of CPU time, for perspective.

Contributor Author

Seeing whilefoo's recent push, I have a better understanding of how things are going to be running.

The comment-created webhook fires, which I assume uses workflow_dispatch to start the plugin, and the plugin is expected to return the output with a repository_dispatch step (I think).

/research needs to be able to read the OpenAI key from the botConfig, and I see that the authToken is being passed as an input via DelegatedComputeInputs (which I think is insecure if that authToken can read from private repos, idk). I haven't fired any workflows this way yet as I don't want to log anything sensitive (unsure if it would). On my build, and with this one, that's where I'm stuck.

So I was thinking of setting repo secrets (assuming that's how the @ubiquibot/plugin repos will work too), which in the context of the /research-plugin repo where the action will run would give it that private read access without sending the app install auth token?

Should my research action be expecting npx tsx "body" issueNumber sender org

or should it be expecting to handle the original event and destructure everything it needs (token, body, etc.)?

@0x4007 marked this pull request as draft February 26, 2024 11:19
@Keyrxng (Contributor Author) commented Mar 2, 2024

I'm hanging tight until the plugin setup is configured.

Running in a Node env has been tested locally using dotenv; I was able to dispatch events from the kernel and have the script run, but it bailed when it needed private repo permissions.

Edit: I mean from how I had the plugin setup configured; the setup looks like it's along the lines of what I had figured, but the permissions issue was my blocker.

@0x4007 (Member) commented Mar 2, 2024

I'm hanging tight until the plugin setup is configured.

Running in a node env has been tested locally using dotenv and I was able to dispatch events from the kernel, have the script run but it bailed when it needed private repo permissions.

@whilefoo

@whilefoo (Member) commented Mar 6, 2024

@Keyrxng if you have any questions on how to integrate the plugin with the kernel let me know

@0x4007 (Member) commented Mar 6, 2024

@Keyrxng if you have any questions on how to integrate the plugin with the kernel let me know

I think the wisest way to proceed is to make a plug-in template repo and everybody can just fork it.

Some essential capabilities off the top of my head:

  1. Assistive pricing
  2. Start command
  3. Help command (maybe the kernel should have this built in, since it just reads from config?)
  4. Conversation rewards
    • quantitative
    • qualitative
    • clarity
  5. Permit generation
  6. Permit comment

This could potentially be five plugins that can be developed concurrently

Permit Generation SDK

I'm starting to wonder if it would make sense to make permit generation in an SDK instead of a restful API.

We already started on an npm package called @ubiquibot/config

import { Permit, decryptEvmPrivateKey, postPermitComment } from "@ubiquibot/plugin-sdk";

const EVM_KEY = await decryptEvmPrivateKey(config.keys.evmPrivateKeyEncrypted);

const permit = new Permit({
  user: "keyrxng",
  amount: "50",
  token: "0x1234...",
  from: EVM_KEY,
});

await postPermitComment(permit.comment);

It could allow tighter coupling between the plugin logic and the permit generation interfaces. Obviously, fully local development plus type safety will be easier than debugging over GitHub Actions.

@FernandVEYRIER @whilefoo rfc

Security

@rndquu rfc from here below

I'm still thinking through this, but each plugin might require its own EVM key encryption tool/webapp, which seems like a bad idea.

But it allows fine-grained control for partners to grant specific plugin developers access to specific keys. Also, plugin devs can't steal keys from other plugins with this approach.

SAFEs (Decentralized)

A totally separate approach is:

  1. funding wallets must be gnosis safes
  2. Plugins must be added as signers to the safes
  3. Special owner policy allows for limits around which plugin can spend how much.

I'm unsure if it's compatible with permit generation, but this would allow partners to "opt in" to plugins and set fine-grained limits around amounts etc.

  • It might be the best decentralized UX if the SAFE just transferred to the user right away. This would force projects to pay the gas fees though, I guess.
  • Alternatively the plugin itself can have gas and charge a fee for this service of invoking the transfers, kind of like cowswap resolvers.
  • Last idea would be that permits are generated and then pay.ubq.fi just consumes all of them and lets the user cash out one at a time.

Database (Centralized)

A far more centralized approach: we could just write the credits to our Supabase payments ledger.

Then when the user goes to pay.ubq.fi, we can load all of the money owed to them and then generate all the permits from all the project wallets.

Closing Thoughts

Centralization is tempting but kind of against the Ethereum ethos. I think we should continue to lean into the decentralized way.

@whilefoo (Member) commented Mar 6, 2024

I already started working on assistive pricing. I can already see that every plugin will need the same setup so in the future we will need to make a plugin SDK.

I'm starting to wonder if it would make sense to make permit generation in an SDK instead of a restful API.

So every plugin would need its own public/private key to decrypt the partner's private keys, and how would we manage and store all the encrypted keys?
Maybe we can encrypt the private key with the public key of the plugin, which should be available to the kernel; this could also be used for all kinds of sensitive things, not just private keys.
The config would look like:

globals:
  encryptedPrivateKey: "wallet private key encrypted with kernel's private key"
...
- uses:
  - plugin: plugin-A
    with:
       evmEncryptedKey: ${{ encrypt(globals.encryptedPrivateKey) }} # encryptedPrivateKey will be encrypted with plugin's public key 

Just some quick thoughts, I still need to think more about this
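The re-encryption idea above could be sketched with Node's built-in X25519 support (the tweetnacl box primitive mentioned earlier works on the same principle); all function names here are hypothetical, not the actual kernel code:

```typescript
import {
  createCipheriv,
  createDecipheriv,
  createHash,
  diffieHellman,
  generateKeyPairSync,
  randomBytes,
} from "node:crypto";
import type { KeyObject } from "node:crypto";

// Hypothetical sketch: the kernel derives a shared secret between its X25519
// key and the plugin's public key, then AES-GCM encrypts the partner's key
// material so only that plugin can decrypt it.
interface Sealed {
  iv: Buffer;
  ciphertext: Buffer;
  tag: Buffer;
}

function sealForPlugin(plaintext: string, kernelPrivateKey: KeyObject, pluginPublicKey: KeyObject): Sealed {
  const shared = diffieHellman({ privateKey: kernelPrivateKey, publicKey: pluginPublicKey });
  const aesKey = createHash("sha256").update(shared).digest(); // 32-byte AES-256 key
  const iv = randomBytes(12); // standard GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", aesKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function openFromKernel(sealed: Sealed, pluginPrivateKey: KeyObject, kernelPublicKey: KeyObject): string {
  // X25519 ECDH is symmetric, so the plugin derives the same shared secret
  const shared = diffieHellman({ privateKey: pluginPrivateKey, publicKey: kernelPublicKey });
  const aesKey = createHash("sha256").update(shared).digest();
  const decipher = createDecipheriv("aes-256-gcm", aesKey, sealed.iv);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]).toString("utf8");
}
```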

@0x4007 (Member) commented Mar 7, 2024

I already started working on assistive pricing. I can already see that every plugin will need the same setup so in the future we will need to make a plugin SDK.

I'm starting to wonder if it would make sense to make permit generation in an SDK instead of a restful API.

So every plugin would need its own public/private key to decrypt the partner's private keys, and how we would manage and store all encrypted keys? Maybe we can encrypt the private key with the public key of the plugin which should be available to the kernel and this can be also used for all kinds of sensitive things not just private keys. The config would look like:

globals:
  encryptedPrivateKey: "wallet private key encrypted with kernel's private key"
...
- uses:
  - plugin: plugin-A
    with:
       evmEncryptedKey: ${{ encrypt(globals.encryptedPrivateKey) }} # encryptedPrivateKey will be encrypted with plugin's public key 

Just some quick thoughts, I still need to think more about this

At a high level I think that it would be best to abstract all the headache of encryption away from partners. The plugin developers should be the ones to worry about it.

API Key Style (Centralized)

One, possibly bad, idea is that Ubiquity assigns an encryption key per plugin developer (very similar to how companies provide API keys), although this is a centralized approach.

Then we can have a single UI that allows partners to generate their configs with encrypted EVM private keys per plugin. The webapp can parse the existing config and then insert the evmPrivateKeyEncrypted per plugin in the ubiquibot-config.yml.

So the flow would be:

  1. the user signs in with GitHub
  2. they select what repo
  3. the webapp parses the config, including the "global" evmPrivateKeyEncrypted
  4. a preview of the new config with the correct encrypted keys per plugin is displayed
  5. the user presses a "save" button, and the UbiquiBot updates their ubiquibot-config.yml

SAFE (Decentralized)

I still think there's potentially a missed opportunity here to fully leverage SAFE for delegating authority to other signers (plugins) etc. It just requires a deep dive on their capabilities in the context of our plugins.

@Keyrxng (Contributor Author) commented Mar 8, 2024

Nice work foo, once you understand what's going on it's pretty intuitive.

permit module creating the permit then feeding it into a very simple comment

I wasn't sure whether all of the comment scoring stuff was to be bundled with payout if the permit module is a single unit. It makes sense for the comment scoring to take place, then feed that info to the permit module, and then post the comment?

@whilefoo is the intention that once the last uses entry finishes its run, it'll send no data back to the kernel? When I was getting things up and running, yarn install bugged out with a code 500 and dropped the run altogether, which the kernel should probably know about; but without a response, as is, that would imply the end of the line of execution, or the kernel would be listening for a response that'll never come.

I had to change a couple of bits of the kernel

  • use pav's getUbiquibotConfig() as it pulls from both the org and the repo
const installation = installations.data.find((inst) => inst.account?.login.toLowerCase() === owner.toLowerCase());

  1. Is the permit module going to be SDK'd, kept as a workflow, or both?
  2. I was thinking it's activated on pull_request.closed > as completed/merged, and as a comment command?
  3. I'm using hardcoded Anvil vars right now, but as a command the invoker would comment the hunter's username and the amount (fetching the address from the DB); on pr.closed > merged, use issue.assignee == pr.owner ? assignee : pr.owner and fetch from the DB. So the module only outputs raw RewardPermit data, and plugins importing it handle the process for comments/whatever?

@whilefoo (Member) commented Mar 9, 2024

@whilefoo is the intention that once the last uses executes it's run it'll send back no data to the kernel? When I was getting things up and running yarn install bugged out with a code 500 and dropped the run all together which the kernel should probs know about but without a response as is, that would imply the end of the line of execution or the kernel would be listening out for a response that'll never come

The plugin should return data to the kernel because it doesn't know if it's the last one in the chain. Even if there are no outputs, it should still return an empty output object so that the kernel knows it finished execution and calls the next plugin if there is one.
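A hedged sketch of that contract (the event name and payload fields below are assumptions for illustration, not the kernel's actual schema):

```typescript
// Hypothetical sketch of the return step: the plugin finishes by sending a
// repository_dispatch back to the kernel's repo, always including an output
// object (possibly empty) so the kernel knows the step completed.
interface KernelReturnPayload {
  event_type: string;
  client_payload: {
    state_id: string; // lets the kernel resume the right plugin chain
    output: Record<string, unknown>; // {} signals "done, nothing to pipe"
  };
}

function buildKernelReturn(stateId: string, output: Record<string, unknown> = {}): KernelReturnPayload {
  return {
    event_type: "return_data_to_ubiquibot_kernel", // hypothetical event name
    client_payload: { state_id: stateId, output },
  };
}

// the plugin would then POST this with e.g.
// octokit.rest.repos.createDispatchEvent({ owner, repo, ...payload })
```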

@Keyrxng (Contributor Author) commented Mar 11, 2024

@whilefoo is the intention that once the last uses executes it's run it'll send back no data to the kernel? When I was getting things up and running yarn install bugged out with a code 500 and dropped the run all together which the kernel should probs know about but without a response as is, that would imply the end of the line of execution or the kernel would be listening out for a response that'll never come

The plugin should return data to the kernel because it doesn't know if it's the last one in the chain. Even if there are no outputs it should still return empty output object so that the kernel knows it finished execution and calls the next plugin if there is one.

This is my plugin config foo:

  • generate permit and pass that to comment
  • generate comment and post
  • Should the comment and post initiate the kernel return data or should there be a 3rd plugin added onto the end which does so?

I doubt I have the config set up the way you envision it; at the same time, for these two "plugins" I'm testing with, idk whether they'll be bundled as one or used as individual building blocks, but the fact that things are working says I'm on the right track anyway. Once I'm sure I'm on track, I'll be more confident in throwing together plugins (/research, /review etc.) and porting features into modules/plugins.

plugins:
  issue_comment.created:
    - name: "Permit Generation"
      description: "Generate a payout permit on the fly."
      command: "^\/permit\\s+((0x[a-fA-F0-9]{40})|([a-zA-Z0-9]{4,})|([a-zA-Z0-9]{3,}\\.eth))\\s+([a-zA-Z0-9]+|\\d+)$"
      example: "/wallet <wallet address>"
      uses:
        - plugin: ubq-testing/permit-generation:compute.yml@plugin-testing
          type: "github"
          with:
            evmNetworkId: 100
            evmPrivateEncrypted: ...
          id: "permitgeneration"
        - plugin: ubq-testing/payout-comment:compute.yml@development
          type: "github"
          with:
            type: "${{ permitgeneration.output.type }}"
            permit: "${{ permitgeneration.output.permit }}"
            transferDetails: "${{ permitgeneration.output.transferDetails }}"
            owner: "${{ permitgeneration.output.owner }}"
            signature: "${{ permitgeneration.output.signature }}"
            networkId: "${{ permitgeneration.output.networkId }}"
          id: "permitcomment"
      skipBotEvents: true

@whilefoo (Member)

The config looks good!

  • Should the comment and post initiate the kernel return data or should there be a 3rd plugin added onto the end which does so?

The comment plugin should return data (it can be empty), but it will work even if it doesn't.

@0x4007 (Member) commented Mar 14, 2024

  • Exclude type: "github". It should be inferred if there is no protocol prefix, i.e. https://
  • Also, I've never seen :compute.yml syntax in GitHub Actions, so let's not do that. It always enters at action.yml.
  • No idea what skipBotEvents: true is.

@Keyrxng (Contributor Author) commented Mar 15, 2024

accommodates final permit generation interface of Record<Username, string> i.e. { "pavlovcik": "50" }

@pavlovcik is the intention then that the permit module receives only a GitHub username & amount, or will there be a step in between converting username to wallet address, or should the permit module be capable of receiving either/or?

Also, if both NFT rewards and ERC20 payouts are enabled, then how do we distinguish?

{
  pavlovcik: {
    amount: 50,
    nft: false,
  },
};

@whilefoo (Member)

  • Also I've never seen :compute.yml syntax in GitHub Actions, so lets not do that. It always enters at action.yml

The default is compute.yml but you can change it. If by action.yml you are referring to the composite action in the root dir, you can't call a composite action directly.

  • No idea what skipBotEvents: true is.

It skips events triggered by bots, for example when the bot posts a comment.

@0x4007 (Member) commented Mar 15, 2024

you can't call a composite action directly.

I didn't do deep research here, but I'm very certain that if necessary we can call whatever file we need in a repo, including action.yml in the root.

We want to maximize compatibility with GitHub Actions so that we are able to make the bot more useful out the gates i.e. by mapping a custom slash command with any GitHub Action in the marketplace.

It skips events triggered by bots, for example bot posts a comment

Makes sense to make that the default behavior. I can't think of any instance where we want the bot to react to bots. There are direct ways to handle this via custom webhooks. Otherwise it seems prone to infinite loops. GitHub has a similar apprehension to bot loops (it's quite difficult to set up infinite loops with GitHub tooling for bot-to-bot invocations).
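The default-skip behavior could be as simple as a guard on the webhook sender (GitHub marks bot senders with type "Bot"); the helper name here is hypothetical:

```typescript
// Hypothetical sketch of the skipBotEvents default: drop any webhook whose
// sender is a bot before dispatching plugins, to avoid bot-to-bot loops.
interface WebhookSender {
  login: string;
  type: "User" | "Bot" | "Organization"; // as reported in webhook payloads
}

function shouldDispatch(sender: WebhookSender, skipBotEvents = true): boolean {
  if (skipBotEvents && sender.type === "Bot") return false;
  return true;
}
```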

@0x4007 (Member) commented Mar 15, 2024

accommodates final permit generation interface of Record<Username, string> i.e. { "pavlovcik": "50" }

@pavlovcik is the intention then that the permit module receive only github username & amount or will there be step between converting username to wallet address or should the permit module be capable of receiving either/or.

  • The permit generation module should, at runtime, look up the user ID based on the username.
  • Then it should look in its Cloudflare Key-Value storage for the wallet address associated with that GitHub user ID.
  • Then it should generate the permit based on the passed-in keys.evmPrivateEncrypted
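Those three steps could be sketched as follows; the KV interface and helper names are assumptions, not the actual module (Cloudflare KV exposes a similar async get() interface):

```typescript
// Hypothetical sketch of the lookup chain, with the KV store injected so the
// logic can run (and be tested) outside a Worker.
interface WalletKV {
  get(key: string): Promise<string | null>;
}

async function resolveWallet(
  username: string,
  lookupUserId: (username: string) => Promise<number>, // e.g. GET /users/{username}
  kv: WalletKV
): Promise<string> {
  const userId = await lookupUserId(username); // 1. username -> user id
  const wallet = await kv.get(String(userId)); // 2. user id -> wallet address
  if (!wallet) throw new Error(`No wallet registered for ${username} (id ${userId})`);
  return wallet; // 3. caller generates the permit with keys.evmPrivateEncrypted
}
```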

Also, if both nft rewards and erc20 payouts are enabled then how do we distinguish?

{
  pavlovcik: {
    amount: 50,
    nft: false,
  },
};

Maybe we can generalize with a token address. The NFT can be an ERC-1155, which means that the amount is nothing special. We can technically mint 1 or many and it's still the NFT.

{ 
   "pavlovcik": {
      "amount": "1",
      "token": "0x4e38D89362f7e5db0096CE44ebD021c3962aA9a0"
   } 
}
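That generalization could be typed roughly like this; the ledger shape follows the JSON above, while the contract set and helper are hypothetical, not an existing interface:

```typescript
// Hypothetical sketch: every reward is just an amount plus a token address.
// NFT (ERC-1155/721) vs ERC-20 is distinguished by which contract the token
// address points at, not by a special flag.
interface Reward {
  amount: string;
  token: string; // ERC-20 or NFT contract address
}

type RewardLedger = Record<string, Reward>; // keyed by GitHub username

function isNftReward(reward: Reward, knownNftContracts: Set<string>): boolean {
  // addresses are compared case-insensitively
  return knownNftContracts.has(reward.token.toLowerCase());
}
```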

@Keyrxng (Contributor Author) commented Mar 15, 2024

Then it should look in its Cloudflare Key Value storage of the GitHub user ID and the associated wallet address.

And the permit module is being treated as a plugin, technically speaking, right?

@whilefoo only the kernel has access to this if I'm not mistaken, right? The plugins aren't fed the KV id, so if the kernel is only processing consecutive plugins, a username > id fetch plugin is required that'll need to run with repo-secrets access to the up-to-date KV key?

Or is the permit module going to be a standalone worker with its own KV storage?

Maybe I'm missing something...

@0x4007 (Member) commented Mar 16, 2024

I envision that the permit module should be isolated as its own plugin, yes.

@whilefoo knows the implementation details best, but I imagine that it would be most secure to only authenticate requests coming from the kernel. But maybe that's tedious to build?

@whilefoo (Member)

@whilefoo only the kernel has access to this if I'm not mistaken, right? The plugins aren't fed the KV id so if the kernel is only processing consecutive plugins, a username > id fetch plugin is required that'll need to run on repo secrets access to the up to date KV key?

That's correct. If permit-generation is the only plugin that needs the wallet address of a user, then it'd make sense that the plugin also handles a command that sets the wallet address and stores it in the plugin's own DB.

But if multiple plugins need access to wallet addresses, then it depends on whether they want the data to be public or private. Making it private is hard because you need a way that only plugins can access it, and even then, what if somebody makes a plugin just to harvest wallet addresses?
If we don't care about addresses being private, then it can simply be stored in a Supabase DB and made accessible to other plugins via RLS.

Didn't deeply do research here but I'm very certain that if necessary we can call whatever file we need in a repo, including action.yml in the root.

You can't really do that, because you are calling a workflow, and action.yml is a composite action, not a workflow; only a workflow can execute an action (https://docs.github.com/en/actions/creating-actions/creating-a-composite-action#testing-out-your-action-in-a-workflow).
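A minimal illustration of that relationship, with hypothetical file and input names: the dispatchable unit is the workflow, and a workflow step is what invokes the root composite action:

```yaml
# .github/workflows/compute.yml -- hypothetical example, not the actual plugin.
# The dispatchable unit is this workflow; the composite action in the repo
# root (action.yml) only runs because a workflow step invokes it.
name: compute
on:
  workflow_dispatch:
    inputs:
      stateId:
        description: "Opaque id the kernel can use to correlate the run"
        type: string
jobs:
  run-plugin:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./ # executes the composite action defined in ./action.yml
        with:
          stateId: ${{ inputs.stateId }}
```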

@0x4007 (Member) commented Mar 17, 2024

I think we shouldn't aim to support plugin data sharing yet. Let's build it when we actually need to, given that we are over two weeks behind schedule.

@whilefoo
Member

Is permit generation gonna be a Cloudflare worker or a plugin? @pavlovcik
@Keyrxng what's left to be done?

@Keyrxng
Contributor Author

Keyrxng commented Mar 21, 2024

Is permit generation gonna be a Cloudflare worker or a plugin? pavlovcik Keyrxng what's left to be done?

Depends on whether it's going to be a worker with its own KV or have its own Supabase DB. I've seen pav say each module should have its own KV, which implies either that all modules are independent workers or that the kernel has access to the various module KVs, and I've seen you say that every plugin should have its own Supabase DB.

I started working on this again about an hour ago; seeing that you're going with the Supabase DB, I've done the same. I'll have a good chunk pushed tonight.

TODOs

  • handle wallet registration via command
  • integrate payout multiplier (I've seen pav comment on it somewhere, I forget where)
  • decide on how the module will differentiate between ERC20 and 721 payouts if both are active in the partner config
  • decide how the module will obtain the X25519_PRIVATE_KEY

I've got the base setup for either a worker or a plugin; it just needs to be decided which way it should be taken.
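
On the ERC20 vs. 721 question, one option is to have the partner config declare the reward kind explicitly as a discriminated union, so the module never has to guess. This is a hypothetical config shape, not the PR's actual schema:

```typescript
// Sketch of an explicitly-declared reward kind; field names are illustrative.
type Erc20Reward = { kind: "erc20"; tokenAddress: string; amount: string };
type Erc721Reward = { kind: "erc721"; nftAddress: string; recipientUsername: string };
type RewardConfig = Erc20Reward | Erc721Reward;

function describeReward(reward: RewardConfig): string {
  switch (reward.kind) {
    case "erc20":
      return `ERC20 permit for ${reward.amount} of ${reward.tokenAddress}`;
    case "erc721":
      return `ERC721 mint permit of ${reward.nftAddress} to ${reward.recipientUsername}`;
  }
}

console.log(describeReward({ kind: "erc20", tokenAddress: "0xToken", amount: "100" }));
```

The `kind` tag means TypeScript exhaustively checks the switch, so adding a third reward type later becomes a compile error until it's handled.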

@0x4007
Member

0x4007 commented Mar 21, 2024

We should use KV because I think it's a lot simpler for developers to set up, for maintenance reasons.

Although to be fair I didn't work on this so I have limited perspective on if Supabase would be superior for some reason for this use case.

Is permit generation gonna be a Cloudflare worker or a plugin

I consider everything a "plugin", and they should support both GitHub Actions and Cloudflare Worker runtimes. I think it's convenient to make a template that automatically allows plugin developers to run off of GitHub Actions, but perhaps we should natively support/accommodate Cloudflare Workers as well for "production" plugins. I think they are less convenient to debug but obviously way more performant, which is a sensible trade-off for finalized plugins.

I particularly enjoy the transparency of GitHub actions logs and the other related views.


TODOs

handle wallet registration via command

Essential. I wish that the config could define the command key. @whilefoo do you have advice for this?

Would be interesting if we could map arbitrary key strings to function signatures, inspired by solidity (but I don't think this is possible)

We may, however, be able to map specific file paths to modules inside of the plugin to the key string in the config, although I don't like that it requires digging through a plugin's contents just to map a command.
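
The config-to-handler mapping floated here can be sketched without digging through a plugin's contents: handlers register under stable keys, and the config aliases arbitrary command strings to those keys. This is a hypothetical shape, not the kernel's actual API:

```typescript
// Handlers live under stable, well-known keys inside the plugin.
type CommandHandler = (args: string[]) => string;

const handlers: Record<string, CommandHandler> = {
  registerWallet: (args) => `registering wallet ${args[0]}`,
};

// A partner config could then alias an arbitrary slash command to a handler key.
const config = { commands: { "/wallet": "registerWallet" } };

function dispatch(comment: string): string | undefined {
  const [command, ...args] = comment.trim().split(/\s+/);
  const handlerKey = (config.commands as Record<string, string>)[command];
  const handler = handlerKey ? handlers[handlerKey] : undefined;
  return handler?.(args);
}

console.log(dispatch("/wallet 0xabc")); // -> "registering wallet 0xabc"
```

This keeps the command string entirely config-defined while the plugin only exports a flat map of named handlers.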

integrate payout multiplier (seen pav comment it somewhere I forget where)

There is a concept called base rate in reference to how important a specific repository is. It should be passed in from the config.

However, I think we should deprecate the old multiplier we used to associate with specific contributors. We barely used it.

decide on how the module will differentiate between ERC20 and 721 payouts if both re active in the partner config

decide how the module will obtain the X25519_PRIVATE_KEY

Runtime environment secrets. GitHub Actions should read from the repository secrets; Cloudflare Workers from Cloudflare secrets.
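
A sketch of resolving X25519_PRIVATE_KEY per the note above: GitHub Actions exposes repository secrets as environment variables, while a Cloudflare Worker receives secrets on its env binding. The lookup-order helper below is an assumption for illustration, not the actual implementation:

```typescript
// Resolve a secret from whichever runtime source provides it first.
type SecretSource = Record<string, string | undefined>;

function resolvePrivateKey(...sources: SecretSource[]): string {
  for (const source of sources) {
    const key = source["X25519_PRIVATE_KEY"];
    if (key) return key;
  }
  throw new Error("X25519_PRIVATE_KEY is not configured in any runtime source");
}

// Usage: resolvePrivateKey(workerEnv) on Workers, resolvePrivateKey(process.env) in Actions.
```

Failing loudly when the secret is absent avoids silently generating permits with an empty key.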

Also something I just realized but our ts-template natively supports UI hosting upon deployment: it would be very interesting if some plugins have their own UIs for registration etc.

So instead of onboard.UBQ.fi encrypting a wallet private key, it could be hosted directly by this permit plugin so the code is all shared.

It would be an even better UX if the plugin generated a dedicated wallet private key for the user as well, instead of them bringing their own wallet address. Ideally fully on the client side, but a RESTful API might be okay in the short term.

@0x4007
Copy link
Member

0x4007 commented Mar 21, 2024

I think we shouldn't aim to support plugin data sharing yet. Let's build it when we actually need to, given that we are over two weeks behind schedule.

I realized that we will need to share /start command data with /wallet registration information.

This is because start should fail if a wallet is not registered. Perhaps the start command can internally use the same KV instance/credentials to simplify the data sharing problem for now.

@whilefoo rfc

@Keyrxng
Contributor Author

Keyrxng commented Mar 21, 2024

I took things in a supabase direction after the initial cloudflare worker build but I can pivot back if it's going to be a worker.

This is because start should fail if a wallet is not registered.

I've set it up so that if a wallet cannot be found, the permit doesn't get generated and a comment is posted telling them to register and have a reviewer re-run the workflow. I intend to throw if no permit gets generated; otherwise it'll get passed to the permit comment plugin once completed. In theory that should just pause things until they register, if I'm thinking along the right lines.
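
The guard described here might look something like the following sketch; the types, the return shape, and the comment text are placeholders rather than the PR's actual implementation:

```typescript
// Hypothetical wallet guard: no registered wallet means no permit,
// only a prompt to register; otherwise a permit payload is produced.
type PermitResult = { permit: string } | { error: string };

function generatePermitFor(wallet: string | undefined, amount: number): PermitResult {
  if (!wallet) {
    return { error: "No wallet registered. Run `/wallet <address>` and ask a reviewer to re-run the workflow." };
  }
  // Stand-in for the real permit2 signing logic.
  return { permit: `permit:${wallet}:${amount}` };
}
```

Returning a tagged result instead of throwing lets the caller decide whether a missing wallet should fail the workflow or just pause it.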

I've set up 3 commands

  • /wallet (register address)
  • /permit <wallet addr> <token amount>
  • /permit <nft address> <username>

I'm supporting workflow_dispatch, pull_request.closed && merged, and issue_comment.created. Via the slash commands I can work out whether it's an NFT or not, but on the first two events it's tougher.

  • I'm pulling price from the labels; it would be handy if NFT-based payouts had an NFT label or something to that effect

However the old multiplier we used to associate with specific contributors I think we should deprecate. We barely used it.

I started working on daily-streak, and it's the best way I can think of to apply a contributor-based payout increase for streaks.

It would be an even better UX if the plugin generated a dedicated wallet private key for the user as well instead of them bringing their own wallet address. Ideally fully on the client side but an restful API might be okay in the short term

I suggested account abstraction a while back and implemented a rough af version when things started to pivot to a broader target demographic, which is basically what you are suggesting here, I think. E.g. a contributor registers via work.ubq with GitHub login, AA kicks in creating a wallet which is saved into the DB based on the hunter's email/GitHub login, then gasFaucet funds that address instantly/post-PR.

@0x4007
Member

0x4007 commented Mar 22, 2024

I took things in a supabase direction after the initial cloudflare worker build but I can pivot back if it's going to be a worker.

I think we should only use Supabase if KV isn't sufficient for a plugin's needs (although I can't see how this may be the case.)

To clarify I have a fair amount of experience using Supabase and conceptually understand KV (haven't really used it much though) but it seems that KV should be faster for plugin developers to set up.

For the short term I say we all try working with KV first for plugins. Then when we eventually make our plugin SDK or plugin template, we can by default include KV.

This is because start should fail if a wallet is not registered.

I've set it up so that if a wallet cannot be found, the permit doesn't get generated and a comment is posted telling them to register and have a reviewer re-run the workflow. I intend to throw if no permit gets generated; otherwise it'll get passed to the permit comment plugin once completed. In theory that should just pause things until they register, if I'm thinking along the right lines.

The permit module should be strictly defined in its capabilities in order to keep the code small, simple, and maintainable, from both a maintenance and a security perspective. The only I/O it must have is with the kernel, which means it shouldn't directly be able to post error comments. That is a lot of unnecessary code bloat.

I've set up 3 commands

  • /wallet (register address)
  • /permit <wallet addr> <token amount>
  • /permit <nft address> <username>

Multiple commands seems interesting but out of scope. We can simply toggle the close as completed state to regenerate permits. Again, this adds unnecessary code bloat.

I'm supporting workflow_dispatch, pull_request.closed && merged, and issue_comment.created. Via the slash commands I can work out whether it's an NFT or not, but on the first two events it's tougher.

  • I'm pulling price from the labels; it would be handy if NFT-based payouts had an NFT label or something to that effect

I have no experience working with the NFT code so I have little input here. However, the information should come in explicitly defined in the config. Implicitly determining whether it's an NFT reward is the wrong approach.

However, I think we should deprecate the old multiplier we used to associate with specific contributors. We barely used it.

I started working on daily-streak, and it's the best way I can think of to apply a contributor-based payout increase for streaks.

No, I'm referring to the contributor's set multiplier that we saved in the database, which would output their task assignment scoring multiplied by this number. We tried it temporarily but stopped using it.

If you're starting research on the daily streak thing, that's fine, but it should definitely not be in this code. As per the readme, this codebase should only accept input related to the total amount of rewards and to whom, then output payment permits only. All other complex logic for calculating the total per contributor etc. must be handled outside of this plugin, in separate plugins.

It would be an even better UX if the plugin generated a dedicated wallet private key for the user as well, instead of them bringing their own wallet address. Ideally fully on the client side, but a RESTful API might be okay in the short term.

I suggested account abstraction a while back and implemented a rough af version when things started to pivot to a broader target demographic, which is basically what you are suggesting here, I think. E.g. a contributor registers via work.ubq with GitHub login, AA kicks in creating a wallet which is saved into the DB based on the hunter's email/GitHub login, then gasFaucet funds that address instantly/post-PR.

I read that Coinbase is working on some in-browser solution that's in beta now. We can probably learn from their implementation and do something similar.

@Keyrxng
Contributor Author

Keyrxng commented Mar 22, 2024

(although I can't see how this may be the case.)

  • Either all plugins must be standalone workers, somehow use the API, or the kernel must be able to register new KV stores and assign them to the plugins

Multiple commands seems interesting but out of scope.

  • The only slash command will be /wallet

The only I/O it must have is with the kernel

  • Other than registering the wallet, all info required to generate a permit will be parsed from the previous plugin's output

However the information should be coming in explicitly defined in the config.

  • The previous plugin will pass the required info to determine which reward type it is; with the /permit ... slash commands removed, there's no need to try to guess.

thats fine but it should definitely not be in this code

  • https://github.com/ubq-testing/ubq-daily-streak, but yes, without /permit there is no need to try to grab prices etc.; every configurable aspect of the permit will be determined by the previous plugin, and this module will generate and output the PermitTransactionData object for the next plugin to play with

For the short term I say we all try working with KV first for plugins.

Plugin devs could set up KV and access it through the API with their Cloudflare creds; then when the plugin gets Ubiquity approval and becomes hosted under Ubiquity, use org creds and create a KV store for each official plugin? The kernel could be updated to have access to all of these or none of them, whatever; if the data isn't sensitive, then include the KV ID in the plugin config, which other plugin devs could hook into?

However, if Account A wants to allow Account B to access its KV store, here are some approaches that could be considered:

API Gateway or Custom Endpoint
Account A can create an API gateway or a set of custom endpoints within their Worker that proxies requests to the KV store. This Worker would authenticate requests from Account B using a pre-shared key, JWTs, or any other agreed-upon authentication method. This approach effectively allows Account B to read from or write to the KV store by making HTTP requests to the Worker, with the Worker ensuring that only authorized requests are processed.
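
The gateway idea above could be sketched as a thin handler in front of KV that only honors requests carrying a pre-shared key. The header name and the KV/request shapes are assumptions for the sketch; a real worker would use the fetch API and likely signed requests or JWTs instead of a static key:

```typescript
// Minimal pre-shared-key gateway in front of a KV-like store.
type Kv = { get(key: string): string | undefined };
type ProxyRequest = { headers: Record<string, string>; key: string };
type ProxyResponse = { status: number; body: string };

function makeKvProxy(kv: Kv, presharedKey: string) {
  return (request: ProxyRequest): ProxyResponse => {
    // Reject anything not bearing the agreed-upon key.
    if (request.headers["x-preshared-key"] !== presharedKey) {
      return { status: 401, body: "unauthorized" };
    }
    const value = kv.get(request.key);
    return value === undefined ? { status: 404, body: "not found" } : { status: 200, body: value };
  };
}
```

Account B never touches Account A's KV credentials; it only holds the pre-shared key, which Account A can rotate or revoke at any time.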

If I'm to revert back to the worker build I had then the flow should be:

  • Plugin A sends permit data to Permit Module
  • Module makes req to worker receiving back the PermitTransactionData object
  • Module returns permit to kernel for Plugin C


  • User calls /wallet ...
  • Module sends context to worker, saving meta to KV
  • Module leaves outcome comment
  • return {} to kernel

Edit: A better idea is probably just to have a worker for the sole purpose of KV access and keep everything else as-is; that way there's no need for the libsodium workarounds.

@0x4007
Member

0x4007 commented Mar 25, 2024

Similar to the Chrome Extension store, or iOS App Store etc each plugin should be independently hosted with their own backends. This makes development for new developers, as well as managing our infrastructure simple.

Edit: A better idea is probably just to have a worker for the sole purpose of KV access and keep everything else as-is; that way there's no need for the libsodium workarounds.

I'm pretty certain that KV works in GitHub Actions. Besides, worst case scenario we just run "miniflare" inside of GitHub Actions. It's a full Ubuntu environment, so it should have essentially no limitations on program execution.

@whilefoo whilefoo marked this pull request as ready for review March 27, 2024 17:11
@whilefoo whilefoo merged commit bcde7a0 into ubiquity-os:development Mar 27, 2024
gitcoindev added a commit that referenced this pull request Apr 8, 2024
…iguration

chore: add correct Knip workflow configuration
Keyrxng pushed a commit to ubq-testing/permit-generation that referenced this pull request Oct 30, 2024