
fix: Reduce default number of max concurrent requests #83

Merged
merged 1 commit into master on Mar 12, 2018

Conversation

orangejulius
Member

The BatchManager (roughly) manages the maximum number of bulk index requests that can be in flight to Elasticsearch simultaneously.

The default of 50 is good for very large clusters, but not small ones.

In order to make Pelias work better out of the box on smaller setups, the defaults should be changed. Worst case, this will make imports on larger Elasticsearch clusters slightly slower, but I doubt we'll even
notice. It might even make them faster.

Connects pelias/openaddresses#328
Connects #76
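The limiting behavior described above can be sketched as a simple concurrency cap: at most N bulk requests are outstanding at once, and further batches queue until a slot frees up. The class and method names below are hypothetical illustrations, not the actual BatchManager API:

```javascript
// Hypothetical sketch of an in-flight request limiter, assuming the
// BatchManager works roughly like this. Not the real pelias API.
class InFlightLimiter {
  constructor(maxConcurrent) {
    this.maxConcurrent = maxConcurrent; // e.g. the default this PR lowers
    this.inFlight = 0;
    this.queue = [];
  }

  // Run fn (a function returning a Promise, e.g. one bulk index request)
  // as soon as a concurrency slot is available.
  submit(fn) {
    return new Promise((resolve, reject) => {
      const task = () => {
        this.inFlight++;
        fn().then(resolve, reject).finally(() => {
          this.inFlight--;
          const next = this.queue.shift();
          if (next) next(); // start the next queued batch, if any
        });
      };
      if (this.inFlight < this.maxConcurrent) {
        task();
      } else {
        this.queue.push(task); // wait for a slot
      }
    });
  }
}
```

With a smaller `maxConcurrent`, a small Elasticsearch cluster sees fewer simultaneous bulk requests and queued batches simply wait on the importer side instead of timing out server-side.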

@orangejulius
Member Author

Testing this change in the Dockerfiles setup on the OA and OSM importers has completely resolved any issues with Elasticsearch, so it's time to merge this!

@orangejulius orangejulius merged commit 5257eee into master Mar 12, 2018
@orangejulius orangejulius deleted the reduce-max-outstanding-requests branch March 20, 2018 20:02
orangejulius added a commit that referenced this pull request May 12, 2019
This package has historically been very aggressive regarding how many
requests it will allow to be in flight to Elasticsearch.

We lowered the maximum number of in-flight requests to 10 recently
(see #76), but I think this is still too high. Recently we have seen
some Elasticsearch timeouts when running highly parallel imports.

My suspicion is that it's very unlikely a high number of in-flight bulk
index requests is the best way to ensure high performance. For
geocode.earth, we run planet builds on a 36 core machine, with a total
of 6 importer processes running at once at the start (2 OA, OSM,
polylines, geonames, WOF).

Since the bulk import endpoint already allows importing many records in
parallel (500 by default in this package), 6 importers could lead to up
to 60 bulk requests in flight at once. My guess is even 2-3 bulk
requests is enough to keep Elasticsearch busy.

Eventually I'd like to allow us to configure this option easily across
all importers, but for now let's test this value.

Connects #76
Connects #83
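The arithmetic in the commit message above works out as follows (the numbers come straight from the message; the variable names are just for illustration):

```javascript
// Worked numbers from the commit message above; not actual config keys.
const importers = 6;         // 2 OA, OSM, polylines, geonames, WOF
const maxInFlight = 10;      // per-importer limit after #76
const recordsPerBulk = 500;  // default records per bulk request in this package

// Up to 60 bulk requests can be in flight across all importers at once,
// which is up to 30,000 records being indexed simultaneously.
const maxBulkRequests = importers * maxInFlight;
const recordsInFlight = maxBulkRequests * recordsPerBulk;
console.log(maxBulkRequests, recordsInFlight); // 60 30000
```

This is why even a modest per-importer limit multiplies into substantial load on Elasticsearch during highly parallel imports.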