MVP #1
Refs #1.

Lerna provides two key features for this project:

1. Allows packages to be linked together and dependent on each other directly in the repo.
2. Executes scripts in all packages in dependency order, allowing a build script to run over all packages in the right order.

These two features are necessary for organizing the repo and making packages dependent on each other in a usable and testable manner. Release functionality might come in handy too, but that's not why I'm adding it for now.

I generated this commit by running:

```shell
npx lerna init
```

The only additional change was adding `lerna` to the NPM scripts so developers don't need to install it globally or use `npx` each time.
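The NPM-script shim mentioned above might look roughly like this in the root `package.json` (a sketch, not the exact file):

```json
{
  "scripts": {
    "lerna": "lerna"
  }
}
```

With this, developers can run `npm run lerna -- <args>` and get the repo-local Lerna version without a global install.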
Refs #1.

Generated the package with:

```
$ npm run lerna create bxpb-runtime

> root@ lerna /home/dparker/Source/bxpb
> lerna "create" "bxpb-runtime"

lerna notice cli v3.20.2
package name: (bxpb-runtime)
version: (0.0.0)
description: Runtime library for browser extension protocol buffers.
keywords: browser-extension browser-extensions chrome-extension chrome-extensions protocol-buffers library service services
homepage: https://github.com/dgp1130/bxpb/packages/bxpb-runtime/
license: (ISC) MIT
entry point: (lib/bxpb-runtime.js)
git repository: (https://github.com/dgp1130/bxpb)
```

Additional edits were deleting the generated JavaScript code and dropping the `main`, `files`, and `directories` keys in the `package.json` file. I'll re-add those as they become relevant. I also filled in the README with expected usage details.
Refs #1.

Generated the `tsconfig.json` file with:

```shell
(cd packages/bxpb-runtime && npx typescript --init)
```

Updated some settings as well:

* Generated declarations so downstream packages can take advantage of TypeScript.
* Added sourcemaps to improve debuggability.
* Targeted ES2019 to use real Promises and iterators.
* Used ES2015 modules, as there is no need for CommonJS just yet (might need to switch later for protobufs).
* Accepted source files from `src/` and output to `dist/`.

I'll still need to figure out exactly how to package with this format. Added `tsc` to the `build` script and added documentation for how to build the package itself. I included a dummy `hello.ts` with "Hello world" just to validate the build.
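A `tsconfig.json` reflecting the settings listed above might look roughly like this (a sketch, not the exact generated file):

```json
{
  "compilerOptions": {
    "target": "ES2019",
    "module": "ES2015",
    "declaration": true,
    "sourceMap": true,
    "rootDir": "src",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src/**/*.ts"]
}
```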
Refs #1.

Created the Jasmine config with:

```shell
(cd packages/bxpb-runtime/ && npx jasmine init)
```

Since we're using TypeScript and I wanted to leverage the existing build configuration (or else risk divergence between the build and test commands), I use a test-only `tsconfig.spec.json` which extends the base `tsconfig.json`. This writes to a different output directory so tests can be included without polluting the release `dist/` directory. Jasmine then runs on these files to execute the tests.

I don't see an easy way of debugging tests just yet. I tried setting up VSCode's debugger, but I couldn't find a configuration I was happy with, as it needs to pass the debugging port through `npm run`, which executes two subcommands. This could be avoided by running the tests directly and not implicitly building with `npm test`, but I'm not comfortable with that as it is far too easy to forget to build. I could split it into multiple tasks in VSCode's configuration, but then I need an `npm run build-for-test` and `npm run exec-tests` which are both just called by `npm test`, which is confusing and awkward and has non-obvious reasons for existing. At the end of all that, it binds debugging support to VSCode directly, which I'm not a fan of. In the end, I could probably get debugging to work as is, but I'm not able to find a developer experience I'm satisfied with. I'll probably set up Karma soon to get a quick edit/refresh cycle with DevTools.
Quick rant about Lerna: I've been trying really hard to use existing Node tooling for this project, but I'm still not satisfied. I was hoping I could use Lerna to link local dependencies together and drive testing through it. The possible ways I can test a package include:
Debugging is weird too, because I need to set a Node port on a deeply nested command. Add to this that the full invocation becomes:

```shell
npm run lerna run build --scope packages/foo/ --include-dependencies \
  && npm run lerna run test:build --scope packages/foo/ \
  && npm run lerna exec node --scope packages/foo/ -- node --inspect=${DEBUG_PORT} node_modules/.bin/jasmine
```

Then the `package.json` would need to be:

```json
{
  "scripts": {
    "test": "npm run -s test:build && jasmine",
    "test:build": "tsc -p tsconfig.spec.json"
  }
}
```

The final exec command is needed to put the Node port exactly where it's needed and breaks abstraction in a really ugly way. I also need to separate the test build step just so it can be manually run for the package before executing the test. I'd also need to set this up separately for VSCode using multiple tasks.

TL;DR: I don't see an elegant way of managing/running tests with Lerna which I'm happy with. Maybe there's something I'm missing. I suspect Karma would solve a few of these problems, but mainly just by sidestepping them rather than Lerna really doing what I want. I'll see if I can get a decent debug flow with Karma, and if not, I might swap out Lerna for Bazel instead.
Refs #1. Forked from https://github.com/actions/starter-workflows/blob/6adb515f324bc8ef4c0091053438298bbc34a85a/ci/node.js.yml. Modified this slightly to use Lerna for the dependency management piece. I wasn't able to find any explicit documentation for `npm ci`-like behavior for Lerna, but this appears to be implemented. Looking through lerna/lerna#1360, it appears to support `--ci`.
Refs #1.

This includes a `test:debug` script to execute Karma. Karma's config uses the existing `tsconfig.spec.json` so test builds should be consistent between test and debug, though I suspect Karma adds its own options for sourcemapping.

I had a lot of fun trying to get sourcemaps to work. The compiled JavaScript would be wrapped in coverage instrumentation, so the sourcemap comment was not at the end of the file, and thus would not be picked up by the browser. Eventually, I found the option to disable coverage, which removed the instrumentation, and Chrome was able to pick up the sourcemap. This does mean that coverage is very likely broken, but I'm not that concerned about it at the moment.

I also discovered that Lerna's `--scope` flag was not actually working. I would run `npm run lerna run foo --scope packages/bar/`, however `--scope` is actually an NPM argument, so NPM accepts it and does not pass it to Lerna. Since the repo is currently just one package, the lack of a scope would run on all packages, which is just that one, so I had not noticed. Running `npm run -- foo-script` properly escapes arguments after the `--`. At this point I also discovered that the `--scope` flag expects a package name, not a path, so the *correct* way to run a command through Lerna is actually:

```shell
npm run -- lerna run foo --scope bar
```

Updated the documentation to reflect this. If `foo` expects arguments, those should also follow a second `--`, for example:

```shell
npm run -- lerna run foo --scope bar -- --baz hello/world.txt
# Runs `npm run foo --baz hello/world.txt` in package `bar`
```
Refs #1.

This defines descriptor types for services and methods as well as a `serve()` function which is able to infer the implementation methods required and type check them accordingly.

The test required a mock service, however there is no easy way of having a `Message` implementation without compiling a real *.proto file, which would complicate Karma significantly or yield weird rebuilds. I'm avoiding that for now, but it may be necessary to add in the future. This might be possible by having a private `bxpb-testdata` package which builds protobufs in its `npm run build`. Then Lerna would be able to build it as a dependency without requiring changes to the Karma config, however Karma's live reload would not automatically rebuild it.

Also updated the test commands in the README to include `--stream`, which streams test output to the terminal.
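A minimal sketch of how descriptor types can drive the implementation type that `serve()` requires. All names here (`MethodDescriptor`, `ServiceDescriptor`, the `Greet` method) are assumptions for illustration, not the real bxpb API:

```typescript
// Hypothetical method descriptor. The phantom fields let the type
// parameters participate in inference from a descriptor value.
interface MethodDescriptor<Req, Res> {
    readonly name: string;
    readonly _req?: Req;
    readonly _res?: Res;
}

interface ServiceDescriptor {
    readonly methods: Record<string, MethodDescriptor<unknown, unknown>>;
}

// Map each method descriptor to the handler signature it implies.
type ServiceImplementation<S extends ServiceDescriptor> = {
    [K in keyof S['methods']]:
        S['methods'][K] extends MethodDescriptor<infer Req, infer Res>
            ? (req: Req) => Promise<Res>
            : never;
};

// In this sketch, serve() just returns a dispatcher; the real function
// would register a message listener on a transport.
function serve<S extends ServiceDescriptor>(
    desc: S,
    impl: ServiceImplementation<S>,
): (method: keyof S['methods'], req: unknown) => Promise<unknown> {
    return (method, req) =>
        (impl[method] as unknown as (r: unknown) => Promise<unknown>)(req);
}

// The compiler now requires `Greet` to be (req: string) => Promise<string>.
const GREETER = {
    methods: { Greet: { name: 'Greet' } as MethodDescriptor<string, string> },
};
const dispatch = serve(GREETER, {
    async Greet(req: string): Promise<string> {
        return `Hello, ${req}!`;
    },
});
```

The payoff is that a typo or wrong signature in the implementation object fails type checking against the descriptor, with no hand-written interface needed.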
Refs #1. Somehow this got missed previously. I think I had it as a dev dependency, and then removed it but forgot to add it back as a real dependency.
Refs #1. Previously we used Jasmine, however it runs within Node. Using Karma for debug tests and Jasmine for normal tests means that one set runs in a browser and the other runs in Node, which creates some discrepancies. By using Karma for everything, all tests consistently run in a browser. Running in a browser is preferable to Node, because the browser is closer to the real runtime where bxpb will be used.
Refs #1. I realized there's no need to mock `chrome.runtime.onMessage` because it is provided in the input. Instead I made a `FakeEvent` class which maintains the listeners and exposes a new `trigger()` function for use in testing. I also included tests for `FakeEvent`, though I have not found a good way to test that `fail()` is used correctly. I found a Stack Overflow question which does this, but I was not able to get it to work, as `jasmine.Env.prototype.execute()` does not appear to execute tests synchronously. I wasn't able to find a good way to wait for all tests to be completed. Instead I left those cases untested for now; hopefully I can find a better way to cover them in the future.
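The `FakeEvent` idea might look like the following sketch (the exact shape is an assumption; the real class mirrors whatever subset of `chrome.events.Event` the runtime uses):

```typescript
type Listener<Args extends unknown[]> = (...args: Args) => void;

// Test double for a chrome.events.Event-like object: it tracks listeners
// the same way the real event does, plus a test-only trigger() method.
class FakeEvent<Args extends unknown[]> {
    private listeners: Listener<Args>[] = [];

    addListener(listener: Listener<Args>): void {
        this.listeners.push(listener);
    }

    removeListener(listener: Listener<Args>): void {
        this.listeners = this.listeners.filter((l) => l !== listener);
    }

    hasListener(listener: Listener<Args>): boolean {
        return this.listeners.includes(listener);
    }

    /** Test-only: fire the event at every registered listener. */
    trigger(...args: Args): void {
        for (const listener of this.listeners) listener(...args);
    }
}

// Usage: code under test subscribes via addListener(), the test fires trigger().
const onMessage = new FakeEvent<[string]>();
const received: string[] = [];
onMessage.addListener((msg) => received.push(msg));
onMessage.trigger('ping');
```

Because the event is passed into `serve()` rather than reached through a global, the test can hand in a `FakeEvent` with no `chrome.*` mocking at all.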
I just spent a couple hours trying to compile a *.proto file to TypeScript, without much luck. I tried to generate JavaScript protos instead, and then run them through Clutz to generate TypeScript definitions. I believe this is how it works in google3, so I figured that was the closest I could get to a directly supported option. Unfortunately, this doesn't really seem to work either; the closest I got was a command that still produced several conflicts.
The third conflict seems to be about exporting the proto. I added Node externs to handle CommonJS symbols; see angular/clutz#334 for more info. I could keep debugging and just try to get Closure itself to compile the generated protobuf, then worry about Clutz afterwards. Maybe I'll take another crack at this later, but it seems like Clutz and Closure are not working together effectively here. If I can't get anything usable from this, I might just have to fall back to the community alternatives.
I took another quick pass at Clutz, but didn't have much luck. I can get Closure to compile if I use:

```shell
(cd packages/bxpb-runtime/ && npx google-closure-compiler --js test-out/test_data/greeter_pb.js --compilation_level SIMPLE)
```

I also tried this command:

```shell
(cd packages/bxpb-runtime/ && npx google-closure-compiler --js test-out/test_data/greeter_pb.js --js node_modules/google-protobuf/**/*.js --js node_modules/google-protobuf/package.json --compilation_level ADVANCED --process_common_js_modules --module_resolution NODE)
```

It failed with:

```
test-out/test_data/greeter_pb.js:10: ERROR - [JSC_UNDEFINED_VARIABLE] variable module$node_modules$google_protobuf$google_protobuf is undeclared
var jspb = require('google-protobuf');
           ^^^^^^^^^^^^^^^^^

test-out/test_data/greeter_pb.js:27: ERROR - [JSC_UNDEFINED_VARIABLE] variable proto is undeclared
proto.foo.bar.GreetRequest = function(opt_data) {
^^^^^
```

There are also unrelated errors elsewhere in the generated output, and changing the output format didn't help either. This is just becoming a big waste of time. Clutz, Closure, and protobufs are just too unergonomic to be effectively usable in the OSS ecosystem. For now, I think I'll use some community resources to generate TypeScript code and worry about potential google3 compatibility if/when that becomes relevant.
Refs #1. Without this, the compilation will sometimes include files from other generated directories and conflict with itself.
Refs #1.

This introduces `greeter.proto` which includes a simple service. It generates TypeScript declarations and puts them in `generated/`, using TypeScript's `paths` feature to map imports. Karma also needs to map the JavaScript in a similar fashion to resolve itself.

I also refactored the way the `tsconfig.json` files worked to better support this. Now there are three config files which are all a bit different:

* `tsconfig.json` is meant for the editor and contains all TypeScript files (including generated). This allows the IDE to resolve all imports in all files so there are no awkward red squigglies that don't go away.
* `tsconfig.lib.json` is for actually running the compilation. It simply excludes test files, so as not to accidentally use them in a production build.
* `tsconfig.spec.json` is for running tests. It forces a different output directory and format to be compatible with Karma.

One downside of this strategy is that the editor believes all imports are available from all locations, which isn't strictly true. For example, `service.ts` should **not** reference Jasmine's `describe()` function, but the editor will not complain about this. In TypeScript 3.9, solution-style configs allow for more flexibility in the editor and should be able to better support this format. Filed #3 to look into this when TS 3.9 launches.
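The `paths` mapping described above might look roughly like this in `tsconfig.json` (the directory and alias names are assumptions for illustration):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "protos/*": ["generated/protos/*"]
    }
  }
}
```

Note that `paths` entries are resolved relative to `baseUrl`, so the two options travel together.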
Refs #1.

This defines the wire format of an RPC request/response pair. It listens to the transport provided to `serve()` and awaits a message. When a message is received, it is validated against the expected wire format (`ProtoRequest`) and the method implementation is identified and called. Most of this is attempting to handle all the possible errors which can be encountered, as the data provided cannot be trusted to be consistent with the specification.

Proto messages are encoded in base64 because that is the simplest option for now. It probably does not matter for the time being, and proper performance analysis would identify the ideal message format in the future.

I also changed the method implementation to use the exact method name provided in the `*.proto` file. Most of the time, this will be in Pascal case per protobuf conventions, which might be awkward for some JavaScript linters/programmers. For example:

```proto
service Greeter {
  // Name "Greet" is Pascal case, normal for *.proto files.
  rpc Greet(GreetRequest) returns (GreetResponse) { }
}
```

```typescript
serve(chrome.runtime.onMessage, GREETER_SERVICE, {
  // Name "Greet" is also Pascal case, weird for JavaScript/TypeScript files.
  async Greet(req: GreetRequest): Promise<GreetResponse> {
    return new GreetResponse();
  },
});
```

I think this is simpler as it does not include an awkward conversion of the RPC name in the `.proto` file, which can be tricky to do correctly. Messages like `JSObject` would naively be turned into `jSObject`, which doesn't make much sense. There are lots of edge cases which cannot be handled algorithmically. This is also simpler for developers as there is a direct one-to-one correlation. The downside is that this looks less JavaScript-y and some linters may complain about it. The linter warnings may be annoying, so we'll have to see exactly how well they play with this format.
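A sketch of the request wire format and its validation, under assumed names (`ProtoRequest` is mentioned above; the field names and helpers here are illustrative). In the browser, base64 would come from `btoa`/`atob`; Node's `Buffer` stands in here:

```typescript
// Hypothetical wire format: identifies the service and method, and carries
// the serialized proto message as base64.
interface ProtoRequest {
    serviceName: string;
    methodName: string;
    message: string; // base64-encoded serialized proto
}

// Base64 helpers over the serialized proto bytes.
function encodeMessage(bytes: Uint8Array): string {
    return Buffer.from(bytes).toString('base64');
}
function decodeMessage(b64: string): Uint8Array {
    return new Uint8Array(Buffer.from(b64, 'base64'));
}

// Validate an untrusted incoming message, since nothing about the payload
// can be assumed to match the specification.
function validateRequest(data: unknown): ProtoRequest {
    if (typeof data !== 'object' || data === null) {
        throw new Error('Request is not an object.');
    }
    const req = data as Partial<ProtoRequest>;
    if (typeof req.serviceName !== 'string') throw new Error('Missing service name.');
    if (typeof req.methodName !== 'string') throw new Error('Missing method name.');
    if (typeof req.message !== 'string') throw new Error('Missing message payload.');
    return req as ProtoRequest;
}

const roundTripped = decodeMessage(encodeMessage(new Uint8Array([1, 2, 3])));
```

The validation deliberately checks every field: the service cannot assume the sender is generated bxpb code.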
Refs #1.

This introduces a `ProtoClient` class with an `rpc` function. Generated clients will extend `ProtoClient` with their implementation. This gives a consistent constructor interface for usage of the clients and allows protected members to be added in the future if necessary. `rpc()` does the real work of sending a message to the backend service and awaiting the response. Again, most of the work here is validating data, but this turned out to be simpler than the service side.

Both of these APIs are marked via comment as being private. They need to be exported so generated code can call them, but should **not** be called directly by user code. This allows the generated code structure to be entirely an implementation detail which I can change at will. That means the runtime can have a new function added and the compiler can generate code which uses it without any backwards-compatibility concerns. The downside of this decision is that it does not allow client/service code to be independently rolled out. I think this is ok, as browser extensions are released monolithically through a store, so decoupled releases are a pretty specific use case. If this decision becomes a real problem for a significant number of users, then maybe we can revisit it in the future.

`rpc()` could have been a protected method of `ProtoClient`, but I opted for a separate function so it would be harder to misuse, as developers would need to import it before calling it. As long as I expose this code at some kind of `import { rpc } from 'internal/DO_NOT_DEPEND_OR_ELSE/client';`, then I think we should be ok.
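The shape described above might be sketched as follows. The `Transport` type, payload layout, and `GreeterClient` are assumptions for illustration, not the real generated code:

```typescript
// Hypothetical transport: send a payload, await the response.
type Transport = (payload: unknown) => Promise<unknown>;

// Base class gives every generated client a consistent constructor and a
// place to hang protected members later.
class ProtoClient {
    constructor(readonly transport: Transport) {}
}

// PRIVATE in spirit: exported only so generated code can call it.
// User code should never import this directly.
async function rpc(client: ProtoClient, method: string, request: string): Promise<string> {
    const response = await client.transport({ method, message: request });
    // Validate the untrusted response before handing it back.
    if (typeof response !== 'string') throw new Error('Malformed response.');
    return response;
}

// A generated client would extend ProtoClient with typed wrappers:
class GreeterClient extends ProtoClient {
    greet(req: string): Promise<string> {
        return rpc(this, 'Greet', req);
    }
}

// A fake transport stands in for the real message channel here.
const client = new GreeterClient(async (payload) => {
    const { message } = payload as { method: string; message: string };
    return `Hello, ${message}!`;
});
```

Keeping `rpc()` as a free function rather than a protected method means user code has to go out of its way (an explicit import from an internal path) to misuse it.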
Refs #1. This will contain a simple "Hello world" example using BXPB. It will also serve as a test case to use for the rest of the repo.
Refs #1. This creates a minimal Chrome extension (just a manifest that does nothing). A build script copies the file to an output directory with instructions on how to install it.
Refs #1. This adds a simple popup window to the browser extension to act as the client. Currently this just displays simple HTML. Also updated the README with additional details around reloading local changes.
Refs #1.

This adds a background script to the extension manifest so it is loaded on startup. The build strives to be as simple as possible, so it just uses `tsc` and no bundler. This is accomplished by leveraging native ESM imports from the browser: https://medium.com/front-end-weekly/es6-modules-in-chrome-extensions-an-introduction-313b3fce955b

Typically, background scripts are referenced as JavaScript files from the manifest. In this case, we reference an HTML file with a `<script type="module">` in order to use ESM imports. Because there is no bundler, one weird aspect is that imports need to include `.js` at the end. Otherwise module resolution in the browser will fail to find the file. See microsoft/TypeScript#16577 and https://github.com/alshdavid/tsc-website/pull/1

I used a subdirectory for TypeScript/JavaScript code rather than including them directly in `src/` and `dist/` because the build process does not output a single file. This meant I had to pick a name for the subdirectory and the best I came up with was `js/`. I wanted the name under `src/` and `dist/` to be the same, so it is a little strange that TypeScript source files live under the `js/` directory, but I've definitely seen weirder naming conventions.

The `tsconfig.json` file came from `(cd examples/greeter/ && npx typescript --init)` with some settings pulled from `packages/bxpb-runtime/` for consistency. This uses inline sourcemaps for simplicity since they do not need to be exported to downstream projects like the runtime.
Refs #1.

Switched the build pipeline to use Rollup to bundle TypeScript. This is necessary because Protobuf generated code can only use Closure or CommonJS; there is currently no ESM option. As a result I opted for CommonJS, and the compilation must work with that format.

This example was intended to be the simplest, using only NPM scripts with no Webpack/Rollup/Grunt/Gulp/etc. However, `tsc` doesn't provide any means of actually using CommonJS without a bundler. As a result, I added Rollup to this example as I felt it was the "simplest", based on my totally subjective personal experience. I've had trouble with Webpack in the past for multiple output JavaScript files with disjoint dependencies, which is exactly the use case I'm trying to support with this library. I'm still using NPM scripts as the main driver for the build, only using Rollup for the TypeScript compilation and bundling.

After all this, I'm able to generate protobuf code and import it into the build. This uses `grpc-tools`, which ships a version of `protoc` that is installable via NPM. Otherwise devs have to install `protoc` themselves, which I would much rather avoid. This generates the JavaScript files, but no TypeScript definitions unfortunately. I used `protoc-gen-ts` to generate typings, though it is really sad that I need a community tool just to use protobufs in TypeScript.

One side effect of using the gRPC tool is that it generates a `myprotofile_grpc_pb.d.ts` file, which contains definitions for gRPC and links to that runtime. This is useless for the purposes of this project, but I could not find a way to prevent the file from being generated. Instead, I simply delete that file after generation, which is the best I can do.

`protoc` is more awkward to use than I had hoped, as it requires all input files to be explicitly named. I wanted to just say "Compile all protos under the proto/ directory", however there is no way to do this. Instead, I had to use a `find` command to list out all such files, though this does create more of a build dependency on Linux. None of this would work for Windows, though that isn't too much of a concern right now.

I also wanted generated protos to be available under `import * as foo_pb from 'protos/...'`. This requires a `paths` argument in the `tsconfig.json` file, which works fine with `tsc` but not with Rollup. There is probably a way to configure this to be compatible with Rollup, but I couldn't find any obvious documented solution in `@rollup/plugin-typescript`. Instead, attempting to be as simple as possible, I decided to just import from `../`. I think that is a bit of a code smell, but it's the most straightforward and obvious solution for any users who look at this example.

I also converted `background.js` to a simple script as it no longer needs to be a module.
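The `find`-based file listing might look like this sketch. The directory layout, file names, and the commented-out `protoc` invocation are illustrative assumptions:

```shell
# Demo setup: a proto/ tree with nested files (names are illustrative).
mkdir -p proto/foo
touch proto/greeter.proto proto/foo/other.proto

# protoc requires every input file named explicitly, so collect them with find:
PROTO_FILES=$(find proto -name '*.proto' | sort)
echo "${PROTO_FILES}"

# These would then be passed to the NPM-installed protoc, e.g. something like:
# npx grpc_tools_node_protoc --js_out=import_style=commonjs:generated ${PROTO_FILES}
```

As noted above, this leans on a POSIX `find`, so the build picks up a Linux dependency that a Windows environment would not satisfy.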
Refs #1. We're only testing on one version of Node anyways and it was already hard-coded in the config. No need for the matrix as well, it didn't do anything anyways.
Refs #1. This copies the background Rollup config to apply to the popup as well with a separate top-level file for it. Code could pretty easily be shared for the Rollup config between the two binaries, but I decided simplicity of the configuration is more important than maintainability in this particular case.
…vice. Refs #1. Expands test coverage to include multiple methods in a single service and ensure they are generated and aligned correctly.
Refs #1. This more closely matches how service and client code is generated and provides inline documentation to users of the generated code. Ideally this would `{@link ...}` to the proto source, but I have no symbols to link to, as this code itself is the service and method descriptors. Instead I just used inline code blocks with the service and method names.
Refs #1. This generates the JavaScript and TypeScript definition code for BXPB clients. Relatively straightforward implementation after going through services and descriptors. There is a bit more code to generate which adds some complexity, in particular because this deals with both services **and** methods, unlike generated service code.
Refs #1. This calls the newly implemented client generation code from the plugin. Tests simply assert that the function is called correctly, as its implementation is already tested pretty thoroughly.
Refs #1. Greeter was previously using hand-written client code as a stand-in. Now that the compiler implements code generation, the existing hard-coded client is no longer necessary. The generated client is a drop-in replacement; no additional changes are needed to use it effectively. Simply altering the import is sufficient.
At this point all the major requirements are supported. The library is technically in a releasable state, though there are a few remaining items before I actually release. Edit: Crossed-out items have been ignored/skipped due to #1 (comment).
Refs #1. Sometimes the generated output JavaScript binary is not executable. This makes it executable at the end of the build process.
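The fix amounts to a `chmod` at the end of the build. A sketch, with an illustrative path standing in for the real compiled binary:

```shell
# Stand-in for the compiled plugin binary (real path differs).
mkdir -p dist/bin
touch dist/bin/bxpb-protoc-plugin.js

# Final build step: ensure the emitted binary is executable,
# since tsc does not preserve/set the execute bit.
chmod +x dist/bin/bxpb-protoc-plugin.js
```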
Refs #1. This updates the package name to use the new NPM scope. This improves protection against typo-squatting and clearly indicates that it is an "officially" supported package. I opted to have the directory structure as `packages/runtime/` rather than `packages/@bxpb/runtime/` as all packages should be under `@bxpb`, so it would just be extra typing and an extra directory which serves no benefit.
Refs #1. This moves the package under the `@bxpb/` scope to better protect it from typosquatting and clearly designate it as an "officially" supported package. The binary exported by `@bxpb/protoc-plugin` can't be renamed similarly however, as binaries don't fit that model. I left that name as `bxpb-protoc-plugin` as I didn't want to just use `protoc-plugin` and be too generic. Hopefully this one exception of using `bxpb-*` over `@bxpb/*` won't cause too much confusion.
Refs #1. This is generally good for documentation, but the main purpose here is to identify `bxdescriptors{.js,.d.ts}` as private, internal-only code not suitable for direct use by consumers. This explicitly declares that the file is subject to change and is not covered by semantic versioning. This should relieve the package from having to support a particular file format and allow it to change as needed to support APIs which are actually exported.
Refs #1. This took a few hours of debugging and was quite complicated in its root cause. I was trying to do some random refactorings and I kept coming across very strange errors. Mainly I kept getting an error along the lines of "Cannot assign (req: GreetRequest) => Promise<GreetResponse> to MethodImplementation<any>" in `background.ts` when calling `serveGreeter()`. The `MethodImplementation` type should never have an `any` in it, as it is always generated with the most specific type parameter, so this seemed quite strange.

After a **lot** of trial and error, I found that I could reproduce the problem by simply adding `export` to `type ServiceImplementation = ...` in `packages/runtime/src/service.ts`. See https://github.com/dgp1130/bxpb/commits/tsc-dts-export-error. Eventually I discovered that even without this change, the generated `greeter_bxservices.d.ts` file imports were not working correctly (though it would build successfully). In that file, we import `Transport` and `ServiceImplementation`. However, these both seemed to get resolved to `any` (VSCode tooltips unhelpfully label these as `import Transport`, rather than resolving their types as `any`, so it took a while to figure this out). I didn't see any typos and there was no compiler error on the import, which made this especially strange.

Eventually I discovered that `Transport` and `ServiceImplementation` simply were not exported by `packages/runtime/src/service.ts`. It turns out that `.d.ts` files do not display import errors and instead resolve them as `any`! See: microsoft/TypeScript#36971. Adding the `export` to `Transport` and `ServiceImplementation` didn't quite fix the problem though. After more debugging I found that the generated `GreeterService` type was *also* being resolved to `any` in the generated `greeter_bxservices.d.ts`. I came to realize that I was using `ServiceImplementation<descriptors.GreeterService>` when `descriptors.GreeterService` was a **value**, not a **type**.
I should have used either `typeof descriptors.GreeterService` or `descriptors.IGreeterService`, which is the actual interface type it implements. Changing the generator to use the interface as the type **finally** fixed the problem and passed the build. `serveGreeter()` now correctly resolves the types and will give meaningful and accurate error messages if used incorrectly.

Another interesting aspect is that we had a test for specifically this kind of error. When I first implemented `serveGreeter()`, I knew that having strong type inference of generated code was an important requirement which should be tested, so I specifically included a test to verify this exact use case. I did it by calling `serve()` (the backend of generated code like `serveGreeter()`) with an invalid input using `@ts-ignore` (with the intent of upgrading it to `@ts-expect-error` when TS 3.9 came out; I just haven't gotten around to it yet): https://github.com/dgp1130/bxpb/blob/c9dea344d5f8bcbfc7760fa1e2cc1b66b3f74da0/packages/runtime/src/service_spec.ts#L147

However, even `@ts-expect-error` would **not** have caught this bug. This is because the test imports from `./src/service.ts`, which defines `ServiceImplementation` in the same file. Production usage however imports through the built file, `./dist/service.d.ts`, thus requiring `ServiceImplementation` to be exported in a way that the test did not. Basically, the test imports the source `*.ts` file, while production usage imports the generated `*.d.ts` file. The difference in the import graph means that the test failed to catch this particular error. That also means I had to write very unintuitive regression tests to assert that `ServiceImplementation` is actually exported. This fundamental import difference also means that a unit test like this simply cannot fully test the behavior that `serve()` is properly typed for its callers.
I would need a full integration test with bad user code that checks the error message of the build to be truly certain of this. My takeaways from the last few days of debugging are:

1. `serveGreeter()` has had broken types for some time now. It was being resolved to `any` and I didn't notice until now.
2. I didn't notice the problem because I expected `.d.ts` files to give errors on cases like this. TypeScript developers simply cannot rely on the compiler to sanity check `.d.ts` files. Just because it compiles does **not** mean it is correct. I'm also not aware of any strictness flag which will address this problem (we already use `--strict`, for instance).
3. Tests may not catch typing problems. We already had a test to cover this exact use case, but the differences in the build and test pipeline make it effectively impossible to test this at a unit level.
4. Debugging types is still pretty difficult. All I could really do was use the VSCode hover tooltip to find the type of specific symbols, but the way it displays imported types makes this harder than it should be. Maybe there are better ways I am not aware of.
I was looking into refactoring this, and the biggest lesson from the relevant TypeScript issue (microsoft/TypeScript#36971) is that you should not hand-write `.d.ts` files. Thinking back, if we had emitted real TypeScript source, the compiler would have caught these broken imports at build time. In our compiler, we could generate `.ts` files and compile them to `.js` and `.d.ts` outputs as part of the build, rather than emitting `.d.ts` files directly. I think that is the proper solution here. I filed #4 to make this infrastructure change. I'm not convinced that this is necessary for MVP, but I would really like to avoid complicated and in-depth debugging like this again.
Refs #1. This allows users to import directly from `@bxpb/runtime` and clearly defines what the supported API is (everything exported at that location). Currently all the exported symbols are only used by generated code and are considered "private". I put these under `internalOnlyDoNotDependOrElse` to be clear that these APIs are not explicitly supported and are not fit for direct use by end users. I also chose to clear out the README for `@bxpb/runtime` because I don't want to advertise the existence of these APIs. I couldn't find a good way of re-exporting types under a namespace. It seems like the `export import Foo = ...` syntax is supposed to be used here, but I simply could not get that to work. Instead, the generics need to be re-declared a bit awkwardly, but it shouldn't be a real problem.
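The "re-declare the generics" workaround can be sketched roughly like this. The type names are stand-ins, not the actual runtime's types; the point is only that an alias which repeats the type parameters behaves identically to the internal type it wraps:

```typescript
// Internal generic type that we want to surface on the public API.
type InternalMethod<Req, Res> = (req: Req) => Promise<Res>;

// The public surface re-declares the generic with the same parameters,
// since re-exporting it under a namespace didn't pan out.
export type MethodImplementation<Req, Res> = InternalMethod<Req, Res>;

// The re-declared alias is interchangeable with the internal type.
const greet: MethodImplementation<string, string> = async (req) => `Hello, ${req}!`;

greet('Dave').then((res) => console.log(res)); // Hello, Dave!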
Refs #1. This type has a generic `Implementation` parameter which itself is a subtype of `ServiceDescriptor`, making this awkwardly recursive in a very unnecessary fashion. I believe this was originally done while following similar types in gRPC generated code, however this simply isn't necessary. An object literal is already a subtype of `Record<K, V>`, so the `methods` property of `ServiceDescriptor` was ok without needing the generic. This makes the types overall simpler and easier to work with.
Refs #1. These tests only exist to cause compile errors if their respective types are not exported. Since they are compile-time checks, there is nothing to do at runtime and they were emitting warnings that no expectations were present. Adding `expect().nothing()` suppresses the warning. I also took the liberty of using `any` as a generic type argument to make the test less brittle to changes, since all it cares about is that the type is exported.
I've been thinking a lot about this project over the past couple days and have slowly convinced myself that it cannot be successful in the current ecosystem. I'm going to try to put my thoughts down coherently and explain my change of heart.

### The problem

I was always a bit frustrated and disappointed with the end developer experience I am able to provide. I've talked before about how protocol buffers are a pain to use with current web tooling, but something more has been bothering me that I was never quite able to identify until now. The core problem with protocol buffers in the context of BXPB is that they require buy-in of the entire data model system of a given application.

### Conveniently calling a function with BXPB

What I mean by this is that in order to conveniently call a given function using BXPB, an application needs to already be able to express its core data model in protocol buffers. In this scenario my logic roughly follows:
QED.

If there is a flaw in this proof, I think it is that using BXPB does not inherently require an application to define its entire set of data models in protocol buffers. Whatever operation is being performed via RPC may be simple and limited enough that only a relatively small subset of the data model needs to be expressed. Such an application could still benefit from BXPB, but would not be likely to accept the onboarding cost of setting up protocol buffers, learning and using them correctly, and getting the required organizational buy-in.

### How users should use BXPB

The ideal BXPB user already uses protocol buffers for their entire existing data model and wants to leverage them to easily call functions across multiple contexts in a browser extension.

### How users will actually use BXPB

Practically speaking, few if any browser extensions actually have protocol buffer definitions for their entire data set and use them uniformly throughout the application. The difference between the starting point of "no protobufs" and the end state of "all protobufs" forces an application to choose among some very undesirable options:
The first option requires total buy-in of the application into protocol buffers. Unless you are already using protobufs heavily in your application and calling gRPC backends, this is almost certainly not worth the benefit. If you already have non-protobuf data models, you'd need to migrate them, which would be a significant amount of effort. Don't forget that the primary selling point of protocol buffers is that they provide strong forwards and backwards compatibility guarantees between client and service. However, a browser extension is typically released monolithically, so the entire codebase is built at a single point in time, with no version skew between components. As a result, protobuf compatibility is effectively worthless to browser extensions. The only real use case I see is calling gRPC backends, where version skew does become a meaningful problem. Hypothetically, an extension could become large and complex enough that a team might choose to more strongly version different components owned by different teams, but at that point the system is so large and complex that the team would likely develop and maintain their own RPC system rather than rely on one supported by only a single open-source developer.

The second option requires duplicating a significant amount of code with manual gluing between the two. It's a hack at best and unmaintainable at worst.

The third option would effectively define all your methods as:

```proto
message GreetRequest {
    string json = 1;
}

message GreetResponse {
    string json = 1;
}

service Greeter {
    rpc Greet(GreetRequest) returns (GreetResponse) { }
}
```

The actual meaningful data would be stored as a simple JSON string, only using protocol buffers as a wire format. This is effectively JSON-over-RPC. Both the client and service would be responsible for serializing/deserializing the real data models to/from JSON. At this point, why even bother using protocol buffers?

### The bottom line

None of these are great options and they are definitely against the general spirit of protocol buffers. The only way I can see a team using and actually benefiting from BXPB is if:
I have a hard time seeing any team choose this state of affairs; they would only find themselves in it due to other circumstances. The only viable use case I can think of is Google itself, where:
### So now what?

So having now recognized this problem, what next? I could try another IDL which is more compatible with existing web tooling, but I'm not aware of any particularly good candidates. Even then, any other IDL will encounter the same problem of requiring a team to define its entire data model with it, which is just not how modern web development is done. REST APIs are typically done with JSON, not any special IDL format. Accepting that any IDL format leads to the above problems (in some form or another), to be effective we need to simplify the problem to avoid a dependency on the data format entirely. The most immediate answer I see: let user code serialize the data. If the API contracts simply treat request and response data as strings or simple (serializable) JavaScript objects, then defining a function looks something like:

```typescript
// common.ts
export interface GreeterService {
    greet(request: string): Promise<string>;
}

export class GreetRequest {
    constructor(readonly name: string) { }

    serialize(): string {
        return JSON.stringify({ name: this.name });
    }

    static deserialize(serialized: string): GreetRequest {
        const json = JSON.parse(serialized);
        return new GreetRequest(json.name);
    }
}

export class GreetResponse {
    constructor(readonly message: string) { }

    serialize(): string {
        return JSON.stringify({ message: this.message });
    }

    static deserialize(serialized: string): GreetResponse {
        const json = JSON.parse(serialized);
        return new GreetResponse(json.message);
    }
}
```

```typescript
// background.ts
serve<GreeterService>(chrome.runtime.onMessage, {
    async greet(request: string): Promise<string> {
        const req = GreetRequest.deserialize(request);
        const res = new GreetResponse(`Hello, ${req.name}!`);
        return res.serialize();
    },
});
```

```typescript
// popup.ts
const client = new Client<GreeterService>(chrome.runtime.sendMessage);
const req = new GreetRequest('Dave');
const serializedRes = await client.greet(req.serialize());
const res = GreetResponse.deserialize(serializedRes);
console.log(res.message); // Hello, Dave!
```

I'm taking some creative liberties with the implementations of `serve()` and `Client` here. However, the main takeaway is that a simple TypeScript interface is being used as the IDL format. It has a few critical advantages:
There are also a few cons however:
This definitely requires some more thought and exploration to evaluate further and see if it has any merit.

### So what happens to BXPB?

To be totally honest, as much fun as I had making this project, I don't see a valid path where it can become successful. I can see a couple of teams at Google potentially using it (after Bazel and third-party integration, which is a non-trivial amount of setup work before it becomes viable to individual projects), possibly one or two teams at other companies with similar tech stacks, and possibly a couple more teams using BXPB where they really shouldn't have and regretting it later. As a result, I can't justify additional work and effort going into the project at this point.

Things may change in the future. Maybe protobufs take off. Maybe they embrace the web ecosystem and become more usable and ergonomic. Maybe I'm overstating the perceived requirement of using protobufs throughout an application's entire data model. As it stands, we're so close to a `1.0.0` release that I intend to finish the remaining cleanup tasks and publish it.

After that, I think I'll explore the viability of not using an IDL by letting user code serialize the data. I think this has a lower potential than BXPB (as it is inherently less ergonomic by design, after all tooling is hypothetically perfected), but is much more achievable given the current state and culture of modern web development. I can only hope there is real potential in that direction.
Refs #1. Both `@bxpb/runtime` and `@bxpb/protoc-plugin` will be published to NPM under a scope. According to NPM and Lerna, this value must be set for them to be publicly visible. https://docs.npmjs.com/misc/config#access
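Concretely, this is the `publishConfig` field in each package's `package.json`; scoped packages publish as restricted by default, so access must be opened explicitly. A minimal sketch of the relevant fragment (other fields omitted):

```json
{
    "name": "@bxpb/runtime",
    "publishConfig": {
        "access": "public"
    }
}
```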
Refs #1. Test suites were already excluded, but test helpers and data should also be excluded from production builds.
Refs #1. This adds an allowlist of files to include in the published package. This should only contain the built files, no source or other config files. https://docs.npmjs.com/files/package.json#files
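A sketch of what such an allowlist looks like in `package.json`. The `dist/` directory name is assumed here for illustration; the actual entries would be whatever the package's build outputs are:

```json
{
    "files": [
        "dist/"
    ]
}
```

With a `files` allowlist, `npm pack`/`npm publish` include only the listed paths (plus a few always-included files like `package.json` and the README), so source and config files stay out of the published tarball.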
Refs #1. Per [Lerna's lifecycle docs](https://github.com/lerna/lerna/tree/master/commands/publish#lifecycle-scripts), many scripts are executed during the publish process, including `prepack`. This will clean and build each package from scratch to ensure no dirty data gets pushed to NPM. I'm honestly not sure which of the 4 pre-publish scripts should be used here. I went with `prepack` as the others seem special cased for local `npm install` cases, which I don't really need because Lerna already handles all that for local development.
Refs #1. This reinstalls `node_modules/` and bootstraps the entire codebase to reset `node_modules/` to a consistent state, no matter what state it may have been in to begin with. It then cleans all packages to remove any previously generated files. Each package's `prepack` script can now assume they are starting from a clean state. They each `build` and then `test` to verify that tests are passing before publish. That also means that greeter needs to build and test itself to make sure everything passes, even though it won't be published.
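The described flow amounts to a `prepack` script along these lines. This is a hedged sketch, not the repo's actual script; the `clean`, `build`, and `test` script names are assumed:

```json
{
    "scripts": {
        "prepack": "npm run clean && npm run build && npm test"
    }
}
```

Because `prepack` runs in every package during `lerna publish`, each package gets rebuilt and retested from a clean state before its tarball is created.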
Refs #1. I just realized the `prepack` hook is run from Lerna, so the only way it is executed is if the user has already run `npm install` (or `npm ci`). As a result, deleting and resetting `node_modules/` mid-execution of Lerna is likely going to cause more problems than it will solve.
Refs #1. This discusses how to do a canary release and a follow up production release. I am somewhat guessing at the production release, as I haven't done it yet. I'll update this doc when I discover additional steps necessary.
Refs #1. Updates the `package.json` `bin` reference to point to the JavaScript file in the `dist/` directory. It seems that NPM publish does not include symlinks, so `bin/protoc-plugin` was not being published and was not available. The easiest solution is to just point to the actual executable file.
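The resulting `bin` entry looks roughly like the fragment below. The `bxpb-protoc-plugin` binary name comes from the earlier naming discussion, but the exact `dist/` file name is assumed for illustration:

```json
{
    "name": "@bxpb/protoc-plugin",
    "bin": {
        "bxpb-protoc-plugin": "dist/bxpb-protoc-plugin.js"
    }
}
```

Pointing `bin` at the real file sidesteps the symlink problem entirely, since NPM creates its own links to the target at install time.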
I published the packages to NPM. Lastly, I've updated the README and Wiki to indicate that the project is shut down, with a brief explanation of the decision and a link to the above comment which provides more thoughts on the matter. I'm closing all the other open issues as obsolete (technically this one should be considered "Done", as I've accomplished all the requirements initially specified). I'll also archive the repository as it only exists for historical purposes now.

RIP BXPB.
This issue tracks the mainline work to be done to launch minimum viable product (MVP). It gives me somewhere to write down general thoughts and notes that don't necessarily relate to commit messages or markdown docs.
In my view, MVP requires:

* Service implementations defined as simple functions: `(req: RequestMessage) => Promise<ResponseMessage>`
* Clients able to call a method as easily as: `const res = await myService.myMethod(req);`

Edit: All the major requirements are currently complete, but there are a few cleanup tasks still to do before the `1.0.0` release.

Requirements explicitly not covered by MVP: