
Better support for local dev and CI workflows, so I can more easily test with a live gateway using the output of Managed Federation #695

Closed
prasek opened this issue Jul 27, 2021 · 11 comments
Labels
feature 🎉 new commands, flags, functionality, and improved error messages needs design ✏️

Comments

prasek commented Jul 27, 2021

Problem

Doing smoke tests on a real live gateway during local dev and in CI workflows is helpful to ensure things are working end-to-end beyond just schema checks.

This is easy to do on your local laptop with rover supergraph compose, or in a CI workflow by passing the supergraphSdl directly to the Gateway, e.g. via a Docker volume mount.

However, doing e2e tests in local dev or in CI with managed federation requires publishing all subgraphs to the Apollo Registry and then spinning up a gateway in a CI workflow that pulls the resulting supergraph from Apollo Uplink. There's an issue with this approach: managed federation build delays could result in the gateway accidentally picking up an older supergraph schema (and passing tests incorrectly), unless you're starting with a fresh dev/CI variant.

Alternate Solutions / Workarounds

Try to get a fresh dev/CI variant using something like:

The supergraph-demo has a set of scripts for CI:

  • subgraphs.sh - common set of subgraph definitions for import
  • config.sh - generates supergraph.yaml config from subgraphs.sh variables
  • publish.sh - publish all subgraphs
  • unpublish.sh - delete all subgraphs in a CI variant
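For context, a minimal supergraph.yaml of the kind config.sh generates might look like the following. The subgraph names, ports, and file paths are illustrative, not taken from the demo repo:

```yaml
# Hypothetical supergraph.yaml, generated from subgraphs.sh variables.
subgraphs:
  products:
    routing_url: http://localhost:4001/graphql
    schema:
      file: ./subgraphs/products.graphql
  reviews:
    routing_url: http://localhost:4002/graphql
    schema:
      file: ./subgraphs/reviews.graphql
```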

@lennyburdette has created a helper https://github.com/apollosolutions/roverx that makes it easier to:

roverx supergraph publish --config supergraph.yaml - publish all subgraphs
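In spirit, a "publish all subgraphs" helper is just a loop over the supergraph config calling rover subgraph publish once per subgraph. A minimal sketch of that loop follows; the graph ref and subgraph list are made-up examples, and DRY_RUN keeps it runnable without rover installed:

```shell
#!/bin/sh
# Sketch of a "publish all subgraphs" loop, along the lines of the roverx
# helper above. GRAPH_REF and SUBGRAPHS are illustrative assumptions.
GRAPH_REF="mygraph@ci"
SUBGRAPHS="products:products.graphql reviews:reviews.graphql"
DRY_RUN=1

for entry in $SUBGRAPHS; do
  name="${entry%%:*}"       # subgraph name (before the colon)
  schema="${entry#*:}"      # schema file (after the colon)
  cmd="rover subgraph publish $GRAPH_REF --name $name --schema $schema"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"             # show what would run
  else
    $cmd
  fi
done
```

Note that looping like this publishes subgraphs one at a time, which is exactly why the batch/atomic discussion below matters: each individual publish triggers its own managed-federation build.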

Try to skip variants entirely for dev workflows

Along these lines: https://github.com/lennyburdette/development-gateway

What can we do to help?

Support a clean way to do e2e tests with the results of managed federation in:

  • local dev - support something like https://github.com/lennyburdette/development-gateway
  • CI workflows that trigger on.pull_request and on.push, to run e2e smoke tests on a live gateway using managed federation build results

Some ideas:

  • subgraph check returns a URL to download a supergraph built using managed federation
  • something to ensure a clean CI variant for e2e testing, like:
    • ephemeral variants with hard delete
    • helper to delete all subgraphs in a long-lived CI variant, so it can be cleanly reset for e2e testing
    • something in between
@prasek prasek added feature 🎉 new commands, flags, functionality, and improved error messages triage issues and PRs that need to be triaged labels Jul 27, 2021
@prasek prasek changed the title Add --config supergraph.yaml support for new supergraph check, supergraph publish and supergraph delete commands Better support for CI variants, so I can test with a live gateway in CI jobs using the output of Managed Federation Jul 29, 2021
@EverlastingBugstopper (Contributor) commented:

To ensure managed federation build delays don't result in the live gateway in CI accidentally picking up an older supergraph schema (and passing tests incorrectly), ideally you'd start with a fresh/empty CI variant.

I'm a bit confused by the ask here. It seems there are maybe two asks?

  1. I want an easy way to create temporary variants in Apollo Studio from Rover, and tear them down easily
  • This seems like something we'd need API support for as, IIRC, deleting variants isn't actually possible w/o bogging down a graph's performance over time
  2. We want to be able to publish/check all subgraphs at once
  • This also seems like something we'd need API support for so that if you made changes in more than one subgraph, the checks would actually behave like you'd expect them to. IMO these would absolutely need to be atomic operations in order to prevent some serious confusion.

@lennyburdette (Contributor) commented:

(Btw I renamed the repo to multi-check-publish to avoid any branding confusion.)

I agree with @EverlastingBugstopper's breakdown above. And I'm not as convinced that point 1 is all that necessary—I'm exploring the ability to skip variants entirely for development workflows: https://github.com/lennyburdette/development-gateway

@prasek prasek changed the title Better support for CI variants, so I can test with a live gateway in CI jobs using the output of Managed Federation Better support for local dev and CI workflows, so I can more easily test with a live gateway using the output of Managed Federation Jul 29, 2021

prasek commented Jul 29, 2021

I'm a bit confused by the ask here. It seems there are maybe two asks?
I agree with @EverlastingBugstopper's breakdown above.

Cool, sounds good. 👍

For (1) - reworded this issue to address better support for dev and CI workflows in general with some ideas. If there are underlying API changes needed to provide a better UX, let's sync with the Studio folks on what's needed.

For (2) - split out #698 to support use of supergraph.yaml across multiple rover commands like docker-compose.

This also seems like something we'd need API support for so that if you made changes in more than one subgraph, the checks would actually behave like you'd expect them to. IMO these would absolutely need to be atomic operations in order to prevent some serious confusion.

@EverlastingBugstopper Agree, a batch/atomic check makes sense for subgraph check. For publish/delete, I think we could just do them individually to start, but I also agree that a batch/atomic publish could result in much cleaner output vs. #632


prasek commented Jul 30, 2021

@lennyburdette https://github.com/lennyburdette/development-gateway seems to have similar goals to #479 that @ndintenfass suggested a while back.

@abernix suggested: rover watch but with a config. perhaps rover supergraph compose --watch or similar?

It would be really nice for a local dev gateway to simply watch a local supergraph.graphql, so the Gateway could pick up supergraph changes regardless of how they were generated or delivered:

  • rover supergraph compose > supergraph.graphql - run manually in your dev flow
  • rover supergraph compose --watch -o supergraph.graphql - watches all sources (local fs, Studio subgraphs)
  • rover supergraph fetch mygraph@prod --watch -o supergraph.graphql - as a sidecar to the Gateway
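Since no --watch flag existed at the time, the sidecar idea in the last bullet can be approximated with a small polling loop: fetch the supergraph, and rewrite supergraph.graphql only when it changes, so a file-watching gateway hot-reloads. A minimal sketch, with fetch_supergraph as a stub standing in for `rover supergraph fetch mygraph@prod`, and the loop bounded so the sketch terminates (a real sidecar would sleep and loop forever):

```shell
#!/bin/sh
# Sketch of a hypothetical supergraph-fetch sidecar loop.
# fetch_supergraph is a stub; swap in the real rover command to use it.
OUT="${TMPDIR:-/tmp}/supergraph.graphql"

fetch_supergraph() {
  echo "schema { query: Query }"   # stand-in for the real supergraph SDL
}

prev=""
i=0
while [ "$i" -lt 3 ]; do           # bounded here; a real sidecar loops forever
  cur="$(fetch_supergraph)"
  if [ "$cur" != "$prev" ]; then   # only rewrite the file on change
    printf '%s\n' "$cur" > "$OUT"
    prev="$cur"
  fi
  i=$((i + 1))
done
echo "wrote $OUT"
```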

In https://github.com/lennyburdette/development-gateway it's doing a curl http://localhost:4000/compose which passes back some gateway headers. Is that needed for a shared dev gateway, or would it be sufficient to have a local dev gateway that just hot-loads the locally composed supergraph.graphql?

@lennyburdette (Contributor) commented:

I was focused on making shared gateway development in a remote dev environment easier—especially the ability to share links or header values with QA or mobile teams so they can test against in-progress schema changes.

I'm not super interested in solving the local gateway development story. I don't think it's common for teams to be able to proxy traffic to subgraph servers running in a remote cluster (or to run all the subgraphs locally).

But if you did want a decent local story before rover --watch is a thing, adding this to your package.json gets you pretty far:

"scripts": {
  "watch:rover": "nodemon --exec 'rover supergraph compose --config supergraph.yaml > supergraph.graphql' --watch ./schemas -e .graphql",
  "watch:server": "nodemon src/index.js -e .graphql --ignore ./schemas",
  "watch": "concurrently 'yarn watch:server' 'yarn watch:rover'"
}

and running npm run watch.

abernix (Member) commented Jul 30, 2021

would it be sufficient to just have a local dev gateway that just hot loads the locally composed supergraph.graphql

This is along the lines of thinking we've had for Apollo Server. And similarly, the Router.

@lennyburdette
Copy link
Contributor

Oh, and you can get hot-reloads of a supergraph.yaml already today. I have it enabled by default in my container experiment:
https://github.com/apollosolutions/gateway-container/blob/main/src/gateway.js#L55-L81

abernix (Member) commented Jul 30, 2021

Ah, that's pretty good. At some point, I could see us taking a file path as the supergraphSdl input value and merely watch it in the same way we watch the cloud. (Seems like we should have no problem telling the difference between a file path and SDL. 🦆)
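The "duck typing" mentioned above could be a simple heuristic: multi-line input or braces mean inline SDL, a .graphql-suffixed single line means a file path. This is a sketch of one such heuristic, an assumption for illustration, not Apollo Server's actual behavior:

```shell
#!/bin/sh
# Hypothetical path-vs-SDL classifier for a supergraphSdl input value.
NL="$(printf '\nx')"; NL="${NL%x}"    # a literal newline for pattern matching

looks_like_path() {
  case "$1" in
    *"$NL"*) return 1 ;;              # multi-line input must be SDL
    *"{"*)   return 1 ;;              # braces only appear in SDL
    *.graphql|*.graphqls) return 0 ;; # common schema-file extensions
    *)       return 1 ;;              # when in doubt, treat it as SDL
  esac
}

looks_like_path "supergraph.graphql" && echo "path" || echo "sdl"       # path
looks_like_path "schema { query: Query }" && echo "path" || echo "sdl"  # sdl
```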


prasek commented Jul 31, 2021

@lennyburdette for shared gateway dev, I wonder if kubectl apply -k k8s/router/dev with a Kustomize ConfigMap generator for the local supergraph might be a more k8s-native approach for updating a remote cluster, like this: https://github.com/apollographql/supergraph-demo/tree/main/k8s/router/dev

Granted, not everyone is using k8s, but for those who are: you mentioned Skaffold the other day, and that might work well with https://skaffold.dev/docs/pipeline-stages/deployers/kustomize/.

Then for the dev headers you could just set up your ingress to route to the appropriate dev gateway. Not as dynamic/ephemeral as what you have going in devgateway, but it might work for many use cases.
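The ConfigMap-generator approach above would look roughly like this kustomization.yaml. The resource names are hypothetical; Kustomize's configMapGenerator regenerates the ConfigMap (with a content hash in its name) whenever the local supergraph.graphql changes, so kubectl apply -k rolls the new schema out:

```yaml
# Hypothetical k8s/router/dev/kustomization.yaml
configMapGenerator:
  - name: supergraph
    files:
      - supergraph.graphql
resources:
  - router.yaml   # Deployment that mounts the "supergraph" ConfigMap
```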

@EverlastingBugstopper (Contributor) commented:

local dev - support something like this lennyburdette/development-gateway

^ This feels like something that should maybe be solved w/the new router implementation


CI workflows that trigger on.pull_request and on.push, to run e2e smoke tests on a live gateway using managed federation build results
Some ideas:

subgraph check returns a url to download a supergraph built using managed federation

I think I'd want to be able to run another native Rover command rather than have to curl a random URL.

That makes me think we'd want to be able to run rover supergraph fetch on the result of a check (which reminds me an awful lot of #499! 😄 )

Today, rover subgraph check checks a proposed schema against a given graph ref. This command will either error due to composition errors, or create a check that lives in our database. This check could have passed or failed, but it is in a state that can be retrieved. I believe they also have hashes associated with them.

As part of rover subgraph check, we could output the hash of that check, and allow it to be referred to as part of a graph ref.

rover supergraph fetch my-graph@current+a10ce8d would act just like it does today, except instead of fetching the latest successfully composed supergraph, it would go and fetch the supergraph SDL that was composed as part of a subgraph check (and probably output a warning if there were operation check failures).

One problem here: I think we would want supergraph fetch to error if there were composition errors as part of that check, but I'm not entirely sure whether we store check objects in our database when there were composition errors. We would need to consult Gravity on this; if checks are not persisted without successful composition, then we'll want to start doing so if we want this to work.

To fit the graph ref extension elsewhere:

  • rover subgraph check my-graph@current+a10ce8d would fetch the check we already ran and display the same results.
  • rover graph fetch my-graph@current+a10ce8d would fetch the API schema produced by a check.

Publishes wouldn't really work, so we'd probably have to come up with an idea of "read-only" graph refs and have proper errors and documentation around usage.
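For concreteness, the extended graph ref floated above splits cleanly with standard shell parameter expansion. This is a sketch of parsing the proposed "name@variant+checkhash" form; the +hash syntax is an idea in this thread, not something rover actually supports:

```shell
#!/bin/sh
# Parse the hypothetical extended graph ref "my-graph@current+a10ce8d".
ref="my-graph@current+a10ce8d"

name="${ref%%@*}"              # my-graph
rest="${ref#*@}"               # current+a10ce8d
variant="${rest%%+*}"          # current
case "$rest" in
  *+*) hash="${rest#*+}" ;;    # a10ce8d
  *)   hash="" ;;              # plain graph ref, no check hash
esac

echo "name=$name variant=$variant hash=$hash"
# → name=my-graph variant=current hash=a10ce8d
```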


something to ensure a clean CI variant for e2e testing, like:
ephemeral variants with hard delete
helper to delete all subgraphs in a long-lived CI variant, so it can be cleanly reset for e2e testing

I think we need Gravity's expertise on this one, and it might honestly just be #698 in a trench coat?

@EverlastingBugstopper (Contributor) commented:

Closing this, as there is much more recent internal discussion on this topic, and #1190 is WIP.

@EverlastingBugstopper EverlastingBugstopper removed the triage issues and PRs that need to be triaged label Jul 27, 2022