
Use new elasticsearch-js client in Elasticsearch service #35508

Closed
11 of 14 tasks
epixa opened this issue Apr 23, 2019 · 27 comments
@epixa
Contributor

epixa commented Apr 23, 2019

The elasticsearch-js client library we've been using for years is now deprecated, and the new Node.js library has been published: https://github.com/elastic/elasticsearch-js

We should move our server-side Elasticsearch service over to this new client. We may still need to keep the legacy client around for certain pieces of functionality that we can't easily switch over yet, but there should be no problem using both clients side by side.

Since most code is consuming ES through callWithRequest or callWithInternalUser, swapping out the underlying client should be doable without significant changes in plugins.

Once this initial issue is wrapped up, we can then expose the new client directly and deprecate the callWithRequest and callWithInternalUser abstractions.
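A rough, non-authoritative sketch of the difference for plugin code, where server, request, and 'my-index' stand in for plugin context:

import { Client } from '@elastic/elasticsearch';

// Today: plugin code goes through the string-based callWith* abstraction
// (legacy elasticsearch-js under the hood).
async function legacyStyleSearch(server: any, request: any) {
  const { callWithRequest } = server.plugins.elasticsearch.getCluster('data');
  return callWithRequest(request, 'search', {
    index: 'my-index',
    body: { query: { match_all: {} } },
  });
}

// After the swap: the same call made directly against the new client.
// Note the new client resolves to an ApiResponse, so the hits live under `body`.
async function newClientSearch() {
  const client = new Client({ node: 'http://localhost:9200' });
  const { body } = await client.search({
    index: 'my-index',
    body: { query: { match_all: {} } },
  });
  return body;
}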

Subtasks

@epixa epixa added chore Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc Feature:New Platform labels Apr 23, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-platform

@epixa
Contributor Author

epixa commented Apr 23, 2019

cc @delvedor

@delvedor
Member

Hello! Thank you for opening this :)

You can find the entire list of breaking changes here; it contains some code examples as well.
I tried integrating the new client myself at the beginning of the year, but I wasn't able to keep up with the pace of the changes landing in Kibana. You can find all the work I did in #27051.

Please feel free to ask any questions if you have doubts!

@joshdover
Contributor

Ideas that have been discussed in various meetings, posting here so we don't lose them:

  • We could introduce the new interface alongside the legacy one. This would allow migrations to the NP to proceed without having to change anything immediately. Plugins could then migrate to the new client when they're ready, and we could deprecate the old one.
  • We could power the legacy interface with the new client under the hood. @delvedor is currently working on a compatibility layer that we could leverage for this.

@epixa
Contributor Author

epixa commented Jul 11, 2019

I think we should do both of those ideas. We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all. Then we should expose the new client interface the way we want to expose it without functions in front of it. This way new code can use the client directly, existing code can migrate away from callWith* independently of their new platform migration, and really old code that is barely touched can just keep using the old stuff.
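A hypothetical sketch of what backing callWithInternalUser with the new client could look like; the helper name and endpoint-resolution logic here are only illustrative, not the actual implementation:

import { Client } from '@elastic/elasticsearch';

// Resolve a legacy-style endpoint string ('search', 'indices.create', 'cat.count', ...)
// to the matching method on the new client, call it, and unwrap the body so existing
// callers keep receiving the raw response body as they do today.
function createCallWithInternalUser(client: Client) {
  return async function callWithInternalUser(endpoint: string, clientParams: Record<string, unknown> = {}) {
    const parts = endpoint.split('.');
    let api: any = client;
    for (const part of parts.slice(0, -1)) {
      api = api[part];
    }
    const method = api[parts[parts.length - 1]].bind(api);
    const response = await method(clientParams);
    return response.body;
  };
}

// Usage sketch (node URL and index name are placeholders):
// const callWithInternalUser = createCallWithInternalUser(new Client({ node: 'http://localhost:9200' }));
// const body = await callWithInternalUser('cat.count', { index: 'my-index' });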

@delvedor
Member

We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all.

It depends™. You can easily adapt the surface API, but the errors will be different; see the breaking changes document.

I'm working on a module that makes the surface API of the new client compatible with the legacy one. I'm wondering whether remapping the errors as well would be useful.
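For illustration, a minimal sketch of the kind of difference in play here, assuming the 7.x @elastic/elasticsearch error classes and a hypothetical index name (this is not the compatibility module itself):

import { Client, errors } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function getDocument(id: string) {
  try {
    const { body } = await client.get({ index: 'my-index', id });
    return body;
  } catch (err) {
    // The new client throws its own error classes (ResponseError, TimeoutError, ...),
    // so legacy code matching on the old client's error types or shapes
    // (e.g. elasticsearch.errors.NotFound) would need to be remapped.
    if (err instanceof errors.ResponseError && err.statusCode === 404) {
      return undefined;
    }
    throw err;
  }
}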

@mshustov
Contributor

mshustov commented Sep 5, 2019

This way new code can use the client directly, existing code can migrate away from callWith* independently of their new platform migration,

I think having our abstraction on top of a client is still useful - we can follow our own update cycle, introduce custom logic in front of the client library, prevent monkey patching of the internals etc.

We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all.

On the one hand, we can test the client library in real-world conditions. On the other hand, we have to implement a compatibility layer, which also takes time to implement and cover with tests. @delvedor how hard would it be to finish work on https://github.com/elastic/elasticsearch-js-compatibility-mode/blob/master/index.js?
Some small bits of inconsistency can also pop up later, for example #39430 (comment).

It probably makes sense for the New Platform elasticsearch service to expose two types of clients:

  • the current version (almost legacy-compatible, except for the callAsInternalUser/callAsCurrentUser naming, though we could even fall back to the legacy callWithInternalUser/callWithRequest names). It still uses the legacy elasticsearch library under the hood. This is the simplest option for plugins migrating under time pressure. We need to discuss how long we maintain this compatibility layer; having plugins migrated to the New Platform simplifies further refactoring anyway.
  • a version with the new elasticsearch-js client under the hood. Plugin authors may choose to migrate to it if they have the time, although it will require more commitment from them to adapt their code to the breaking changes. A rough sketch of this dual exposure follows after this list.
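A hypothetical shape only; the interface and member names are illustrative, not a committed API:

import { Client } from '@elastic/elasticsearch';

type LegacyAPICaller = (endpoint: string, clientParams?: Record<string, unknown>) => Promise<unknown>;

interface ElasticsearchServiceSetup {
  // option 1: almost legacy-compatible surface, still backed by the old library
  legacy: {
    callAsInternalUser: LegacyAPICaller;
    asScoped(request: unknown): { callAsCurrentUser: LegacyAPICaller };
  };
  // option 2: the new @elastic/elasticsearch client, opt-in for plugins that can
  // absorb the breaking changes now
  client: Client;
}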

@delvedor
Member

I think having our abstraction on top of a client is still useful - we can follow our own update cycle, introduce custom logic in front of the client library, prevent monkey patching of the internals etc.

The new client has an API that allows you to extend it and add new custom APIs to it (see the docs).

client.extend('customSearch', ({ makeRequest }) => {
  return function customSearch (params, options) {
    // your custom logic
    makeRequest({ ... }) // send the request to Elasticsearch
  }
})

client.customSearch({ ... })

Regarding wrapping the client with your already existing code, I tried to tackle that in #27051, but I wasn't happy with the solution since I ended up with mixed old and new code, which was quite confusing.

My suggestion would be to keep two separate clients, the legacy with its own wrapper, and the new one without any wrapper (or a very thin wrapper around it).
This approach has some benefits:

  • it's easy to understand whether a piece of code has been migrated to the new client or not
  • it avoids sneaky bugs, since every wrapper, even a very good one, may introduce a bug that is hard to uncover.
  • the new client is designed to be extensible out of the box, and thanks to the child client feature you can easily create multiple clients that share a common connection pool (see the sketch below).

On the other hand, having two separate clients this way will probably take longer to finish the migration, but I still think it's better, since the final result will be cleaner and less hacky.
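As a minimal sketch of the child client feature (node URL, header value, and index name are placeholders):

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// child() returns a client that reuses the parent's connection pool, so creating
// per-plugin or per-request clients stays cheap.
const scopedClient = client.child({
  headers: { authorization: 'Basic <credentials from the incoming request>' },
});

scopedClient
  .search({ index: 'my-index', body: { query: { match_all: {} } } })
  .then(({ body }) => console.log(body.hits.total));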

@delvedor how hard would it be to finish work on elastic/elasticsearch-js-compatibility-mode:index.js@master?

It should be ready, but I cannot be sure it works in all cases until we try it in a codebase that is using the old client extensively.
But as said above, I would recommend moving to the new client in one go instead of doing the migration twice.

@joshdover
Contributor

We have decided to punt on this a bit.

  • We will not be removing the legacy client in 8.0. With all plugin teams focused on migrating to the Kibana Platform plugin system, requiring this change would be a further wrinkle in that migration without much direct benefit.
  • We will introduce the new client alongside the legacy client once Platform Migration work is complete. All new development should be against this new client.
  • We will remove the legacy client in a future 8.x version, as a breaking change to plugin developers.

@ruflin
Member

ruflin commented Nov 20, 2019

@joshdover I'm curious about your second point, "once Platform Migration work is complete". When will that be the case? We would already like to use the new client today in the package manager, for example to load ILM policies. The new client has nice APIs for this (https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#_ilm_putlifecycle) but I couldn't find an equivalent in the legacy one. We could work around this by writing the request ourselves, but it would be nicer to use the new client directly. Is this possible?

@joshdover
Contributor

@ruflin At the very earliest, I think this could get prioritized in 3-5 months, when the Platform team is no longer 100% focused on supporting plugin migrations to the Platform. The Platform migration is the top priority for the entire Kibana project and a blocker for 8.0, so we just can't justify spending time on much else right now. We also don't want to rush adding it now and end up not doing it the right way.

In the meantime, I suggest using the transport.request method which allows for custom requests while still using all the authentication machinery. There are many examples of this in the repository where we use some of the under-the-hood xpack endpoints.
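In case it helps, a hedged sketch of that workaround, assuming the legacy caller obtained from the elasticsearch service; the policy name and body are only examples:

type LegacyCaller = (
  request: unknown,
  endpoint: string,
  clientParams?: Record<string, unknown>
) => Promise<unknown>;

// Wire an ILM policy through the legacy caller's 'transport.request' endpoint,
// which reuses the normal authentication machinery.
async function putIlmPolicy(callWithRequest: LegacyCaller, request: unknown) {
  return callWithRequest(request, 'transport.request', {
    method: 'PUT',
    path: '/_ilm/policy/my-policy',
    body: {
      policy: {
        phases: {
          hot: { actions: { rollover: { max_size: '50gb' } } },
        },
      },
    },
  });
}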

@joshdover
Contributor

So, if all of the above makes sense, then I think this is justification for offering an extend API.

Sounds like a good reason to me, thanks for weighing in! 👍

@mshustov
Contributor

the internal logger is no more
Main issue is more with the ElasticsearchClientConfig.log option, as if we want to preserve a way for consumers of esService.createClient to use a custom logger, we will have to expose a new ClientLogger interface.

This log option has been the only way to customize the logging format & destination. The KP logging system already supports context-specific configuration. The only use case not covered might be logging plugin-specific ES requests, and I think such an enhancement could be done at the platform level (by attaching the current caller context, for example). Btw, I didn't manage to find any usages of a custom log in the repo. I'd rather remove this log option and deprecate logQueries.

The plugin API changed
The plugins option has been removed. If you want to extend the client now, you should use the client.extend API.

So, if all of the above makes sense, then I think this is justification for offering an extend API.

Correct me if I'm wrong, but plugins leverage custom plugins functionality to extend clients with custom methods (client actions). If a user needs to request a new experimental endpoint, they could use transport.request without new client instantiation. @cjcenizal

Do we want/need a new version of the APICaller
client.asCurrentUser().cat.count(params);

My +1 for the newer version: autocomplete, type safety, and easier discoverability of supported methods.

wrap401Errors

I cannot find any usages in the existing code-base. Is it still a relevant API? @elastic/kibana-security

How should we expose errors returned from the client

I agree with Josh. It might even require an RFC to get input from a broad audience.

Do we need to wrap / convert the response from the client

Let's not repeat the mistake made in the fetch API. Headers, warnings, etc. are important pieces of the response, and a user must have access to them.

Bonus question: should we remove ClusterClient altogether?

interface ElasticSearchServiceStart {
  getInternalClient: () => Client,
  getScopedClient: (request) => Client,
}

Client and ScopedClient aren't interchangeable, and it's easy to make the mistake of calling the wrong one. We would have to add an explicit ScopedClient type to prevent such problems. To me, it sounds like we'd be moving complexity from the core into plugin code, so I'd rather keep the existing API.
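To illustrate the concern, a hypothetical TypeScript sketch of what such an explicit type could look like (the brand and names are made up for illustration):

import { Client } from '@elastic/elasticsearch';

// The two clients are structurally identical, so TypeScript can't catch a call made
// on the wrong one unless the scoped variant is branded with a distinct marker.
type ScopedClient = Client & { readonly __scoped: 'scoped' };

interface ElasticsearchServiceStart {
  getInternalClient: () => Client;
  getScopedClient: (request: unknown) => ScopedClient;
}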

Legacy client renaming

+1

@cjcenizal
Contributor

Correct me if I'm wrong, but plugins leverage custom plugins functionality to extend clients with custom methods (client actions). If a user needs to request a new experimental endpoint, they could use transport.request without new client instantiation.

Based on experience and some discussion I've seen on Slack, my understanding is that transport.request and custom plugins are different ways of essentially doing the same thing. I believe custom plugins use transport.request under the hood. Do you know if transport.request is supported by the new JS client?

I'm not entirely sure about how they compare in terms of tradeoffs. In terms of downsides, @kobelb mentioned this possible downside to using transport.request on Slack:

the original issue was that it leaves it up to the consumer to call encodeURIComponent when building URLs which contain user input

it also just makes it harder to track specific usage of ES APIs

I also found this PR (https://github.com/elastic/kibana/pull/68331/files) which highlights a mistake that's easy to make and difficult to discover when using transport.request: accidentally leaving off the leading slash in the path.
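A small sketch of both pitfalls with the new client's transport.request (node URL and the index-stats path are just examples):

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function getIndexStats(userProvidedIndexName: string) {
  // Risky: unescaped user input, and the leading slash is easy to forget
  // (see the PR linked above for how subtle that mistake is).
  await client.transport.request({ method: 'GET', path: `${userProvidedIndexName}/_stats` });

  // Safer: encode user input and keep the leading slash explicit.
  return client.transport.request({
    method: 'GET',
    path: `/${encodeURIComponent(userProvidedIndexName)}/_stats`,
  });
}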

@pgayvallet
Contributor

Do you know if transport.request is supported by the new JS client?

It is.

const client = new Client();
// request(params: TransportRequestParams, options?: TransportRequestOptions): Promise<ApiResponse>;
client.transport.request(/* */);

Also note that the new ES client is supposed to have an exhaustive list of ES endpoints/commands, so I'm not even sure we would really need to use this, but I did not audit current usages of custom plugins in our codebase. Does anyone have a better view of that than me? @kobelb maybe?

@mshustov
Contributor

In terms of downsides, @kobelb mentioned this possible downside to using transport.request on Slack:

Maybe it's okay, since it's meant to be used for development purposes only, when you need to play around with a new experimental endpoint. When an endpoint is officially supported, we bump the library version and use the type-safe library API instead.

@pgayvallet
Contributor

pgayvallet commented Jun 24, 2020

That SGTM. I feel like we can wait and see if we really need to implement plugin support for the new client, as transport.request is an acceptable workaround (or even alternative).

@mshustov
Contributor

mshustov commented Jun 24, 2020

@delvedor Can the new https://github.com/elastic/elasticsearch-js client work in a browser environment? I see this in the official repo: https://github.com/elastic/bower-elasticsearch-js#submit-issues-pull-requests-etc-to-elasticsearch-js

@elastic/kibana-app-arch Is there a specific reason why we need to use elasticsearch-browser?

import { default as es } from 'elasticsearch-browser/elasticsearch';

Can we get rid of it on the client side? The data plugin proxies requests to the server side, where the new ES client will be available anyway.

@delvedor
Member

delvedor commented Jun 24, 2020

@restrry the legacy client was compatible with the browser environment; the new one is not.
It could work, but it's neither tested nor supported.
The reason is that we want to discourage people from connecting to Elasticsearch from the browser, as it could easily expose them to security issues.


The new client does support the entire Elasticsearch API surface, unless an API is not defined in the JSON spec (here and here).
If you need to add a custom API, I would recommend using client.extend instead of transport.request, so other parts of the code can benefit from the addition. Also, inside an extend function you can run validation as well! The only drawback of the extend functionality is that you will then need to do a type augmentation.
Please feel free to ping me if you feel that the client is missing functionality or you think it should solve a problem differently :)
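For reference, a hypothetical sketch of what that type augmentation could look like for the customSearch example earlier in the thread; the parameter shapes are guesses and the exact declaration depends on the client's type definitions:

import { ApiResponse } from '@elastic/elasticsearch';

// Merge a customSearch signature into the client's type so calls added at runtime
// via client.extend('customSearch', ...) type-check.
declare module '@elastic/elasticsearch' {
  interface Client {
    customSearch(
      params?: Record<string, unknown>,
      options?: Record<string, unknown>
    ): Promise<ApiResponse>;
  }
}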

@lukasolson
Member

We want to remove any usages of the legacy browser APIs. That effort is tracked here: #55139. Once that is completed we can get rid of the dependency altogether.

@kobelb
Contributor

kobelb commented Jun 30, 2020

Maybe it's okay, since it's meant to be used for development purposes only, when you need to play around with a new experimental endpoint. When an endpoint is officially supported, we bump the library version and use the type-safe library API instead.

This seems reasonable to me. I do think we should be minimizing the use of transport.request, as it's potentially dangerous, and not type-safe. However, having some option for when an Elasticsearch endpoint hasn't been added to the client seems reasonable.

@mshustov
Contributor

mshustov commented Jul 6, 2020

One area we may still need this is for interfacing with the Kibana system index, which will have its own HTTP routes. However, I believe that should all be Core-only usages.

Monitoring uses it to configure a connection to the custom index:

this._cluster = elasticsearch.legacy.createClient('monitoring-direct', config.elasticsearch);

wrap401Errors

This option was introduced in #9583 to inform a user that a request to ES failed due to expired credentials and that they need to log in again.
I looked at the current implementation, and it seems this has been broken for KP HTTP endpoints since the very beginning of the HTTP service in KP. The Kibana Platform, by design, doesn't support throwing errors to control execution flow at runtime: any exception raised in a route handler results in a 500 generated by the Kibana HTTP service.
However, some HTTP routes manually reproduce the Hapi logic via the compatibility wrapper or response.customError:

return response.customError(wrapError(error));

I'm surprised that we haven't had any issues reported so far.
We need to decide whether we want to support this in KP at all. As far as I can see, we aren't going to support use cases with Security settings in Kibana and ES, so the next question is whether we want to support auth via an intermediate proxy between Kibana and ES.
That would require handling 401 errors at the platform level:

try {
  // call the route handler
} catch (e) {
  this.log.error(e);
  if (is401Error(e)) {
    const wwwAuthHeader =
      get(e, 'body.error.header[WWW-Authenticate]') || 'Basic realm="Authorization Required"';
    return response.unauthorized({
      headers: { 'WWW-Authenticate': wwwAuthHeader },
    });
  }
  return hapiResponseAdapter.toInternalError();
}
