
Development Experience #34557

Closed

eliperelman opened this issue Apr 4, 2019 · 37 comments
Labels: discuss, Team:Operations

Comments

@eliperelman
Contributor

I am opening this issue to discuss potential improvements to the development experience of Kibana. The topics I am bringing up here are mostly specific to the development toolchain as I have a bit of experience with the current field of JS project tools.

Potential issues

Configuration is spread across many files in various locations.

From Babel to webpack, TypeScript, Jest, files in scripts, grunt files, dotfiles, etc., the configuration of tools is spread across many files in various locations across the repo. It takes a fair amount of source diving to connect all the pieces and get a clear understanding of how a build works.

Some custom configuration deviates from community norms, slightly increasing the barrier of familiarity for external contributors.

With our very custom build setup and monorepo management structure, we deviate from patterns and setups that the community is used to, which raises the barrier to entry for external contributors. We could possibly re-evaluate tools like Lerna to make yarn workspaces more consistent and organized.
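
For example, yarn workspaces alone can express the package layout with a root package.json along these lines (a generic sketch, not Kibana's actual manifest):

{
  "private": true,
  "workspaces": ["packages/*", "x-pack"]
}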

Logging is verbose and can be obscure.

Processes can be slow.

Things like build, test, and lint times could always be improved. 😃

Tooling is hard-coded to the repository.

There are many pieces of our toolchain that could be used outside the repository to make development consistent and easier. If non-application-specific configuration were extracted, we could re-use the tooling across more repositories and expose it as tooling for other projects and plugin authors to consume. By co-mingling the application's concerns with those of the tools, we force them to be versioned and shipped together when this isn't necessarily a requirement.


The invitation is open for anyone to add their ideas around improvements or further challenges to the development experience. Adding @tylersmalley and @tsouza for their insight and direction on this.

@tylersmalley added the Team:Operations label Apr 4, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-operations

@eliperelman
Contributor Author

Under the "community deviation" section, you could also put linting rules. Adopting more of the defaults from the Airbnb and Prettier linting presets could eliminate a lot of our deviation and customization of rules.
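
A minimal sketch of what that could look like, assuming the standard eslint-config-airbnb and eslint-config-prettier packages (not Kibana's actual config):

// .eslintrc.js (hypothetical): extend the Airbnb preset, then let Prettier
// disable any stylistic rules that conflict with its formatting.
module.exports = {
  extends: ['airbnb', 'prettier'],
  rules: {
    // keep project-specific overrides to a minimum
  },
};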

@jinmu03
Contributor

jinmu03 commented Apr 8, 2019

@skaapgif @restrry If you notice any other potential improvements to the developer experience, please comment here.

@mshustov
Contributor

mshustov commented Apr 8, 2019

I will add more points later; right now the main problem for me is understanding how to run any kind of test (unit, *_integration, functional, etc.) and in what environment it is executed (Jest, Mocha, Karma, etc.).

@rudolf
Contributor

rudolf commented Apr 8, 2019

My biggest pain point is exactly the same as @restrry's.

On a bad day my workflow looks like this:

  1. Make some changes
  2. Try to run only the relevant test files.
  3. Give up and push to CI
  4. Find something else to do while I wait for test results (~40mins)
  5. Repeat

To improve this, I'd like to be able to:

  1. Easily find out how to run a specific test file so that I can quickly run the tests most likely to be affected by my change (or, even better, have a single test command that takes the file I'd like to test as an argument). Edit: with the reference table in docs: Add test file location -> test runner table #34986, I can now reliably find the instructions for running a particular test. I do have to go back and reread it fairly often, though, so it would be great if there were a single test command that took a location and grep pattern as arguments and figured out how to invoke the right test runner, so that I wouldn't have to care about it (a rough sketch follows after this list).
  2. If CI fails, I want to know how to run the failing test cases locally so that I can have a short feedback loop when addressing them. (Solved by implementing GitHub checks: #34673.) (1) would mostly solve this, but it would be even easier if the test suite output a command I could copy and paste that would target all the failed test cases.
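
A rough sketch of what such a dispatching command could look like (scripts/test_file.js is hypothetical, and the location-to-runner mapping is illustrative rather than Kibana's real layout):

// scripts/test_file.js (hypothetical): dispatch a test file to the right runner.
// Usage: node scripts/test_file <path-to-test-file> --grep <pattern>
const { spawnSync } = require('child_process');

const [file, ...extraArgs] = process.argv.slice(2);

// Illustrative location -> runner rules; the real mapping would come from
// the table in #34986.
const args = /\.test\.(js|ts|tsx)$/.test(file)
  ? ['scripts/jest', file, ...extraArgs]
  : ['scripts/functional_test_runner', '--include', file, ...extraArgs];

const { status } = spawnSync('node', args, { stdio: 'inherit' });
process.exit(status);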

@rudolf
Contributor

rudolf commented Apr 8, 2019

The build system is hard to understand:

  1. It uses several "task runners" (I use the term broadly here): grunt, package.json scripts, and files in the scripts/* folder. This means it's not always obvious how to perform a specific step in the build process or what the dependencies between steps are.
  2. Using our own abstractions or custom code for common build tasks in scripts/* makes them harder to understand than the equivalent written as a gulp/grunt task, with which many JS developers are more familiar.
  3. We copy configuration files (tsconfig files specifically), then modify them in place before running build tasks. This makes it hard to get an overview of which options are enabled where.
  4. We've got a large number of tsconfig files inheriting from kibana/tsconfig.json, and managing include/exclude file lists is rather complex. It makes sense to standardise on some TypeScript compiler options, but perhaps these should be put into a kibana/tsconfig.defaults.json file from which other files can inherit. That way each tsconfig file has full control of its included/excluded files (a sketch follows after this list).
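
A sketch of that layout (the option values are placeholders, not Kibana's actual settings; tsconfig files accept comments):

// kibana/tsconfig.defaults.json: shared compiler options only, no file lists.
{
  "compilerOptions": {
    "target": "es2018",
    "strict": true
  }
}

// A plugin's tsconfig.json (hypothetical): inherits the defaults while
// keeping full control of its own include/exclude lists.
{
  "extends": "../../tsconfig.defaults.json",
  "include": ["**/*.ts", "**/*.tsx"],
  "exclude": ["target"]
}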

@jinmu03
Contributor

jinmu03 commented Apr 8, 2019

@crob611 @peterschretlen @lizozom @emmacunningham @flash1293 @wylieconlon @aaronjcaldwell
If you noticed any other potential improvements to the developer experience, pls comment here

@peterschretlen
Contributor

Talking to some of the downstream teams (ML, APM, CodeSearch, etc.), two things were mentioned several times: 1) the cycle time on a PR, and 2) difficulty figuring out how to run tests.

I think, particularly for solutions, the monorepo format feels like a burden because many of their changes feel specific/isolated, and they'd like to "just run the APM tests", for example.

@mattkime
Contributor

mattkime commented Apr 9, 2019

It uses several "task runners" (I use the term broadly here): grunt, package.json scripts, and files in the scripts/* folder. This means it's not always obvious how to perform a specific step in the build process or what the dependencies between steps are.

I've attempted to address this through my GitHub checks PR: #34673

Each check corresponds to a CI task and the check states the command and its arguments.

(Yes, it would be better to have more consistency but perhaps this helps.)


but it would be even easier if the test suite output a command I could copy and paste that would target all the failed test cases.

@skaapgif - The command you might run locally is different from the command run in CI. That said, I do think communicating such a thing is fairly simple. Might be worthy of more conversation.

@rudolf
Contributor

rudolf commented Apr 9, 2019

@mattkime #34673 definitely solves my biggest developer pain at the moment 🎉 If CI gave me one command I could copy and paste that would target only the failed tests, instead of showing me the failed test suite's command, that would reduce the cycle time even further, but I would consider that a luxury, not a necessity.

It uses several "task runners" (I use the term broadly here): grunt, package.json scripts, and files in the scripts/* folder. This means it's not always obvious how to perform a specific step in the build process or what the dependencies between steps are.

This comment was specifically about making changes to the build system, as opposed to understanding the results of the CI system. I had to read through the source code of several build scripts to understand which files we copy to the build directory (and how), how we modify the TypeScript config files, what flags the TypeScript compiler is run with, etc. Not impossible, but much harder to wrap my head around than a high-level abstraction like:

// Imports the snippet assumes (gulp and the gulp-typescript plugin):
const gulp = require('gulp');
const typescript = require('gulp-typescript');
const tscConfig = require('./tsconfig.json');

gulp.task('ts', function () {
    return gulp
        .src('scripts/**/*.ts')
        .pipe(typescript(tscConfig.compilerOptions))
        .pipe(gulp.dest('wwwroot'));
});

(Of course our build system wouldn't be that simple even if it were all gulp tasks, but I think the high-level stream + plugin abstractions would make it easier to follow.)

@eliperelman
Contributor Author

If we are talking about solutions, I've got quite a bit of experience wrangling separate tooling into organized chunks via Neutrino. It may be a helpful tool to employ for some of these issues, though granted not all.

@eliperelman
Contributor Author

eliperelman commented Apr 11, 2019

I think some other important parts of the development process are code reviews and source-diving. There have been numerous case studies and guidelines recommending generous whitespace to improve readability and retention, in design as well as in reading. I think opting for more generous and automated whitespace changes could make for a more positive and efficient process. I also understand that some of these rules could be quite subjective.

I used to codify changes like these in ESLint (example here enforces newlines between conditionals, declarations, and modules):
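
// The excerpt below belongs under the "rules" key of an ESLint config (e.g. .eslintrc.js):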

'padding-line-between-statements': [
  'error',
  {
    blankLine: 'always',
    prev: ['const', 'let', 'var'],
    next: '*',
  },
  {
    blankLine: 'never',
    prev: ['const', 'let', 'var'],
    next: ['const', 'let', 'var'],
  },
  {
    blankLine: 'always',
    prev: ['cjs-import'],
    next: '*',
  },
  {
    blankLine: 'always',
    prev: ['import'],
    next: '*',
  },
  {
    blankLine: 'always',
    prev: '*',
    next: ['cjs-export'],
  },
  {
    blankLine: 'always',
    prev: '*',
    next: ['export'],
  },
  {
    blankLine: 'never',
    prev: ['import'],
    next: ['import'],
  },
  {
    blankLine: 'never',
    prev: ['cjs-import'],
    next: ['cjs-import'],
  },
  {
    blankLine: 'any',
    prev: ['export'],
    next: ['export'],
  },
  {
    blankLine: 'any',
    prev: ['cjs-export'],
    next: ['cjs-export'],
  },
  { blankLine: 'always', prev: 'multiline-block-like', next: '*' },
  {
    blankLine: 'always',
    prev: '*',
    next: ['if', 'do', 'for', 'switch', 'try', 'while'],
  },
  { blankLine: 'always', prev: '*', next: 'return' },
],

I like to think of whitespace as a controller for context switching into a decision tree. Whenever code needs to branch or change context, whitespace can be a good indicator for your brain to also switch context while reading.

import a from 'a';
import b from 'b';
import c from 'c';
const alpha = 'alpha';
const beta = 'beta';
const gamma = 'gamma';
if (alpha === beta) {
  return doSomething();
} else if (beta === gamma) {
  const value = await somethingElse();
  return `delta ${value}`;
} else {
  throw new Error();
}
return 'anything';

And the same code with whitespace delineating each context:
// module context
import a from 'a';
import b from 'b';
import c from 'c';

// declaration context
const alpha = 'alpha';
const beta = 'beta';
const gamma = 'gamma';

// conditional context
if (alpha === beta) {
  return doSomething();
} else if (beta === gamma) {
  const value = await somethingElse();

  // return context
  return `delta ${value}`;
} else {
  // error context
  throw new Error();
}

// return context
return 'anything';

There are many places in Kibana where this seems to be followed via conventions, but it is certainly not consistent.

Thoughts on standardizing on some sort of whitespace automation to improve readability and retention?

@cjcenizal
Contributor

@eliperelman I absolutely agree and the way you described whitespace as a tool for isolating contexts is how I think of it too. The analogy I like to use is that grouping logic together using whitespace is like breaking up long-form text into paragraphs. The same way each paragraph communicates an intention or idea, so does a group of logic communicate an intention within a context.

When I'm writing code, this helps me structure the contexts in a way that makes sense. And as a reader, it helps me scan the code's contexts and focus into the ones in which I'm interested.


@zfy0701
Contributor

zfy0701 commented Apr 15, 2019

Another piece of feedback: it is very hard to write Mocha tests (under __test__) for a plugin with TypeScript, because the entire project is configured with the global Jest types, and that forces us to use a lot of ts-ignore comments in Mocha tests.

I have a PR here:
#34754

to address some of our problems, but it might not be a universal solution.
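
One common way to scope ambient test types (a sketch of the general technique, not necessarily what #34754 does) is a nested tsconfig for the Mocha test folder that opts out of the global Jest types:

// __test__/tsconfig.json (hypothetical): restrict ambient types to Mocha
// for the files under this folder.
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "types": ["mocha", "node"]
  },
  "include": ["**/*.ts"]
}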


@jinmu03
Contributor

jinmu03 commented Jul 15, 2019

  1. We need to document the Functional Test Runner, so developers don't have to dig into the code to figure out how to use it.

  2. Improve the Functional Test Runner. For example, the Functional Test Runner requires a hard-coded "configuration" for some of the permutation tests, so trying to test the various combinations becomes really challenging from a coding perspective, and it can drastically increase the time it takes for CI to run.

  3. We need to set tests up as much as possible using the Kibana API, the SavedObjectClient, the ES client, and esArchiver instead of using the UI (a sketch follows after this list).

  4. Developers should know clearly what QA covers and does not cover.

  5. Developers should write automated functional tests for their features.

  6. Avoid writing very fine-grained functional tests.
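
For item 3, a sketch of what that can look like in a Functional Test Runner suite, using the esArchiver service to load fixtures instead of driving the UI (the archive name is made up):

// An FTR test file sketch: set up state through esArchiver rather than the UI.
export default function ({ getService }) {
  const esArchiver = getService('esArchiver');

  describe('my feature', () => {
    // 'my_feature_fixtures' is a hypothetical archive name.
    before(async () => await esArchiver.load('my_feature_fixtures'));
    after(async () => await esArchiver.unload('my_feature_fixtures'));

    it('finds the preloaded data', async () => {
      // assertions run against data that already exists,
      // with no UI-based setup steps
    });
  });
}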

@zfy0701
Contributor

zfy0701 commented Jul 16, 2019

To add to my previous comment: using --grep definitely works, but it's still very error-prone if we want to use it more frequently. What we did is, like Canvas, add some scripts, e.g.

https://github.com/elastic/kibana/blob/master/x-pack/legacy/plugins/code/scripts/functional_test.js#L7

but it would be nice if we could have some universal test entry point like:
node scripts/$testType --plugin $pluginName
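
A sketch of such an entry point (the per-plugin config path is an invented convention; scripts/functional_tests is the existing runner it delegates to):

// scripts/functional.js (hypothetical): universal per-plugin test entry point.
// Usage: node scripts/functional --plugin code
const { spawnSync } = require('child_process');

const pluginFlag = process.argv.indexOf('--plugin');
const plugin = process.argv[pluginFlag + 1];

// Assumes each plugin keeps its FTR config at a conventional path.
const config = `x-pack/test/${plugin}/config.ts`;

spawnSync('node', ['scripts/functional_tests', '--config', config], {
  stdio: 'inherit',
});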

@zfy0701
Contributor

zfy0701 commented Jul 17, 2019

One more suggestion: I'd really like the backport process to be automatic. I filed another issue, #41321, describing the details.

@mshustov
Contributor

additional pain points:

@tylersmalley
Contributor

@mattkime can address the points about the GitHub reporter. But as of right now, re-running a specific test won't take less time. We're working on possibly utilizing pipelines to address this.

Is there a way to invalidate cache less frequently

I opened an issue about this when creating kbn-es (#17419), but so far there hasn't been much support for it. I added a comment there mentioning that it was raised here.

These snapshots are built on a nightly basis which expires after a couple of weeks

The snapshots expire, meaning they are no longer available, since we utilize TTLs on the S3 bucket. That comment is really only relevant when attempting to run kbn-es from an old branch, say 6.7. Additionally, having something like #17419 would alleviate that if you already had that branch cached.

OSS and x-pack tests are run separately. It's a bit counterintuitive when some scripts work from the root for both projects (type_check, e.g.). Probably addressed by #41129, #41130.

I agree - prior to 6.3, X-Pack was a separate repository, and we still need to work on making a more cohesive, unified experience now that this is no longer the case. However, we still have to make sure that people can run/develop only OSS if necessary.

@jinmu03
Contributor

jinmu03 commented Dec 13, 2019

More requests for functional testing:

  • Reduce overall number of FTR configs by eliminating most common needs for new configuration permutations (licensing, TLS)
  • Reach out to ES team about how to better handle license needs (expired, Gold) for testing

@kobelb
Contributor

kobelb commented Dec 18, 2019

@tylersmalley, @brianseeders, @spalger and I have been discussing how the functional tests can be improved to address:

Improve the Functional Test Runner. For example, the Functional Test Runner requires a hard-coded "configuration" for some of the permutation tests, so trying to test the various combinations becomes really challenging from a coding perspective, and it can drastically increase the time it takes for CI to run.

and

Reduce overall number of FTR configs by eliminating most common needs for new configuration permutations (licensing, TLS)

Spencer will be working on a proof-of-concept for a solution that he has in mind.

@Dosant
Contributor

Dosant commented Dec 19, 2019

Not sure if this is the right place to ask, but the thing I am missing (or maybe I just don't know that it exists) is a publicly exposed Kibana endpoint built from a pull-request branch.

If we could provide a URL to the Kibana that we built for running CI tests on a branch, that would:

  1. Reduce the time needed for a reviewer to see the change in action: no need to check out the branch and build it locally. This would likely increase the number of reviews where not just the code is checked but the UI is also clicked around.
  2. We had edge cases where an app built on CI works differently compared to a local yarn start (build differences). These led to tedious troubleshooting, as building the code locally in the same way it is built on CI is not the first thing we tried. Having an endpoint exposed from CI would help us find these kinds of issues faster.

@timroes
Contributor

timroes commented Dec 19, 2019

The point @Dosant mentioned has also come up a couple of times during our team syncs. There is high demand for having preview builds on PRs automatically, for the listed reasons. Do you think this will be feasible to achieve in the mid-term?

@mshustov
Contributor

mshustov commented Feb 21, 2020

  • X-Pack Jest tests are extremely slow: it takes ~15 min for them to pass on the latest MacBook Pro.
  • The integration tests script is disabled in x-pack; as a result, plugins mix their integration tests with their unit tests.

@jinmu03
Contributor

jinmu03 commented Mar 3, 2020

@tylersmalley Anything we can do to address Mikhail's above feedback?

@mshustov
Contributor

All packages within the /packages folder have to duplicate their development setup. Could we provide a utility package consolidating the dev dependencies list and providing a set of dev scripts (build, linting, etc.)?
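
A sketch of what such a shared package could provide (all names and task commands here are hypothetical):

// packages/kbn-dev-scripts/cli.js (hypothetical): a shared CLI so each package
// can declare e.g. "build": "kbn-dev build" instead of duplicating its setup.
const { spawnSync } = require('child_process');

const task = process.argv[2];
const tasks = {
  build: ['babel', 'src', '--out-dir', 'target'],
  lint: ['eslint', 'src'],
};

if (!tasks[task]) {
  console.error(`Unknown task "${task}". Available: ${Object.keys(tasks).join(', ')}`);
  process.exit(1);
}

const [cmd, ...args] = tasks[task];
const { status } = spawnSync(cmd, args, { stdio: 'inherit' });
process.exit(status);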

@mshustov
Contributor

mshustov commented May 21, 2020

Regarding the @kbn/optimizer perf complaints: I added a plugin analyzing the time spent in different loaders. It looks like sass-loader is the bottleneck at the moment o_O
Results for a random plugin:

sass-loader took 5 mins, 20.91 secs
  module count = 4
modules with no loaders took 3 mins, 3.15 secs <<< plain js from node_modules
  module count = 151
babel-loader took 2 mins, 41.88 secs
  module count = 145
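
(For reference: the comment doesn't say which plugin produced these timings; the publicly available speed-measure-webpack-plugin is one way to get per-loader numbers like this.)

// webpack.config.js: wrap an existing config with speed-measure-webpack-plugin
// to report the time spent in each loader and plugin.
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin');

const smp = new SpeedMeasurePlugin();

module.exports = smp.wrap({
  // ...the existing webpack configuration
});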

Yeah, I think it makes sense to focus on improving the caching strategy, but we might squeeze out a couple of minutes by tuning the build pipeline (at least for development).

Is there an issue in the Kibana repo to tune optimizer perf? @elastic/kibana-operations

@mshustov
Contributor

While debugging x-pack tests, I noticed React component testing is extremely slow.
For example, it takes ~5 mins to complete https://github.com/elastic/kibana/blob/master/x-pack/plugins/cross_cluster_replication/public/__jest__/client_integration/auto_follow_pattern_list.test.js

@cjcenizal
Contributor

Thanks for mentioning this, @restrry. Would it help if we excluded our client integration tests from the test:jest script and created a separate script for them?

@mshustov
Contributor

mshustov commented Jun 11, 2020

@cjcenizal there is a dedicated Jest runner for integration tests: x-pack/scripts/jest_integration. It had been disabled for some time, but it's available again.

@lizozom
Contributor

lizozom commented Jun 15, 2020

@rudolf do you think it would be possible to apply the logic from #34986 to write a single script that runs any type of test 🤔?

@tylersmalley
Contributor

Closing in favor of #70733
