Introduce experiments/feature toggles #14636
@wardpeet I think this looks wonderful. I have a few comments/questions for clarity.
// note: I am not 100% sure this is joi's exact API but you get the idea
Joi.object({
  experiments: Joi.object({
    useLokiDatabase: Joi.boolean(),
  }),
})
No, so good that you brought this up 😛. We could add an experiments section to telemetry, like we do with the gatsby version and such, so we only have to register it once.
I wouldn't expose it as an API but I would like to expose it as a
Great idea!
The idea for existing process.env experiments like loki is to set the default value to
Oh yeah happy to bikeshed, I just thought the ___ was 😎 (cool). Yeah they are indeed maintained by us which means they shouldn't break gatsby, they might break your build though. Unsure if I have answered all your questions correctly
Are we talking here about plugins manipulating experiment settings, or just plugins having access to those settings? Because in the example of the webpack 5 experiment, plugins that modify webpack configuration would potentially need to know that webpack 5 is being used, so passing
I like this idea but it risks introducing a new problem. One advantage of the way we use env vars and tagged releases is that they're quite awkward to manage from the development side. This is a bit annoying, but also good because it encourages experimental features to be short-lived (although I can think of exceptions). Experiments should be short-lived and have a clear path to being non-experimental. Maybe we can enforce this by convention and culture, but I worry that eventually people will end up running their Gatsby sites with:
which will be increasingly unstable and confusing as experimental features are created and updated. Admittedly this is a very exaggerated scenario but I feel that making experiments easier makes this nightmare future more likely :) Edit: you did mention this in the original issue, so I guess I'm expanding on your point:
Maybe there are other ways around this? Could we force experiments to have an expiration date? It would be straightforward to extend the expiration date, but would require active development of a feature for it to remain available as an experiment. Or stick with the approach we're already using? Or something else?
Not setting them, just consuming them as we want them to be owned by the Gatsby package.
We only accept
Feels a bit weird to me to set it with dates, but I do get your fear of not cleaning things up. These cleanup tickets would be good-first-issue tickets though 😄
@m-allanson that's a really good point! That being said, I think in any scenario these are going to be "sticky," so I think asserting a little more control over them makes a fair amount of sense. Contrived example (and I'm not even sure we want to do this): if we have officially merged in an experiment (e.g. Loki is the default in 2.100.0), we could mark the key as deprecated and define a custom message (or even outright remove it with some upgrade command or something). I generally agree that there is some risk here of these sticking around, but I don't think this is a new risk introduced with this proposal. I'd imagine there are already package.json or .env files with GATSBY_USE_LOKI=true (or whatever it is) and GATSBY_DANGEROUSLY_DISABLE_OOM=true. If we can gain some insight into how people are using these (integrating telemetry cleanly seems like a crucial part of the value of this PR) and provide actionable ways to clean up in future versions, that seems worthwhile and an OK trade-off.
I'm actually strongly against this. We don't want long-running experiments. We don't want people relying on experiments in any way. We either finish a feature and get it in, or we drop it.

ENV variables & canary builds are nice because they're hard to discover and hard to use, which is excellent. We only want highly motivated people to test and use experimental features. They'll work closely with us to develop the feature and decide whether we want to use it or not. QUICKLY.

Loki is the perfect example of what happens when we dawdle on merging something: it's been very difficult to keep it up to date and we've broken it quite a few times. But luckily no one was using it because it was hidden :-) If we had an official "experiments" section, we'd have gotten tons of bug reports and we'd have wasted a lot of time dealing with them. There's nothing more wasteful in software development than long-running feature branches that we never quite decide whether to keep.

This document about the trouble browsers have had with enabling experiments through vendor prefixes (remember those!?!) is a great illustration of the problems that result from allowing anyone to use experimental features: https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/explainer.md
Oh, I see, I missed your comment. It seems like the visibility of these things is key here. That's a good point. From reading all the comments, and going by my own opinion, we're all against long-living experiments, and making them more visible might not be the best idea. It probably won't harm small features. If Loki had been a visible experiment, it probably would have been used more, and we would have broken some workflows. I can live with that. Let me close the issue.
One thing I've thought about with experiments is that they are useful for allowing us to do discovery on a feature without committing to a version lock. Meaning if we ship a feature in gatsby 2, but it has bugs or shortcomings that require an API change, there is no simple mechanism to fix this. It would require shipping a new major, which is its own discussion. An example of this is @sidharthachatterjee's recent feature to auto import any

I personally don't see the issue with long-living experiments. Experiments represent features we want to ship; they just give us a safeguard of not having to lock ourselves to version cycles. I believe some of the above-mentioned concerns could be handled pretty easily, e.g. the growing experiments list could easily be pruned with a deprecation warning + code-mod for our users.
Re-opening this. With the recent work we've been doing shipping speedups to the dev server behind flags, plus upcoming plans to ship breaking changes for v3 behind flags, I'm reconsidering the tradeoffs of having direct support in gatsby-config.js for experiments.

We want more people trying out experiments in anger. This is both a discovery problem (how can people find out there are experiments to try?) and a UX problem: how do people enable them for everyone working on the site? Currently, to find out about an experiment you have to read PRs or release notes, which only a subset of Gatsby users do. We've started directly prompting people to try out an experiment when we think it'll help them a lot, e.g. if it takes more than 20 seconds to compile a site's JS, we invite them to try out lazy js bundling. This is reasonably effective at getting people to try it out and report back, but we need people to use it in anger and consistently in order for us to really validate that it works well. To do that they have to make it a permanent part of their site, which env variables make difficult: they don't work consistently cross-platform, and the only natural place to put them is in an npm script, which someone on the team might not use, so they wouldn't be using the experiment.

So, long story short, I'm strongly in favor of doing this now. My main worry before (do we have lingering experiments that we aren't actively evaluating and improving towards either making real or removing, as exemplified by Loki) is a product decision-making problem, not a question of whether this proposal makes sense.
Summary
Gatsby's ecosystem is growing every day. This makes it harder and harder to ship new features without breaking things. Experiments give us the opportunity to ship features in gatsby behind a flag and gather feedback from the community before turning them on for everyone. Experiments are nothing new; we're already using them via env variables (e.g. GATSBY_DB_NODES). In my opinion, they don't really scale well, and sharing these flags inside your project can be problematic.
The other approach we've been using is publishing alpha/beta versions with a custom tag on npm (e.g. per-page-manifest). npm tags won't go away; we'll still want them for big features where it is hard to feature-switch between the different code paths.
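For context, the env-variable approach described above is typically consumed roughly like this (a sketch: GATSBY_DB_NODES is the real flag mentioned above, but the helper name and the "redux" fallback are assumptions for illustration):

```javascript
// Hypothetical sketch of how an env-var flag like GATSBY_DB_NODES is read.
// Each contributor has to remember to set the variable themselves
// (e.g. in an npm script), which is exactly the sharing problem above.
function nodesDb() {
  // Assumed fallback: anything other than "loki" uses the default store.
  return process.env.GATSBY_DB_NODES === "loki" ? "loki" : "redux"
}

module.exports = { nodesDb }
```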
The idea is to enable experiments inside `gatsby-config.js` under the `__experiments` property. The property will be a `Map<string, boolean>` to keep things simple. We probably want to expose a helper function from gatsby, called `inExperiment('<key>')`, to check whether an experiment is running.

We also need to be careful, as feature flags introduce more technical debt: you'll add multiple code paths/abstractions to the application.
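A minimal sketch of what the proposed helper could look like, assuming the experiments map has already been read from `gatsby-config.js` (the store shape and the default-to-false behavior are assumptions, not a settled API):

```javascript
// Hypothetical: experiments loaded from gatsby-config.js at bootstrap.
const experiments = new Map([["useLokiDatabase", true]])

// inExperiment('<key>') looks the flag up, defaulting to false so that
// unknown or disabled experiments never change behavior.
function inExperiment(key) {
  return experiments.get(key) === true
}

if (inExperiment("useLokiDatabase")) {
  // experimental code path goes here
}
```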
Seems like we're not the only ones going down this road:
webpack/webpack#9226
vercel/next.js#6218
Extra info:
https://martinfowler.com/articles/feature-toggles.html#ReleaseToggles
Basic example
I'm happy to bikeshed on names 😄
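A minimal sketch of what this might look like in `gatsby-config.js` (the `__experiments` property and the key are placeholders from this discussion, not a final API):

```javascript
// gatsby-config.js: hypothetical shape; names are up for bikeshedding
module.exports = {
  __experiments: {
    useLokiDatabase: true,
  },
  plugins: [],
}
```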
Motivation
Experiments give us a consistent way of shipping new features that people can try out. We could even introduce breaking changes in v2 to improve gatsby for people who aren't using specific features or who want to migrate to the bleeding edge 🤘