[Fleet] Fix some vars from preconfiguration not being added to package policies #113204

Merged: 1 commit, Sep 28, 2021

Conversation

jen-huang (Contributor)

Summary

Resolves elastic/fleet-server#742.

This PR fixes an issue where the values of some variables defined in xpack.fleet.agentPolicies were not applied to the final package policies. This is seen on 7.15 Cloud deployments, where the Fleet Server policy is missing values for max_connections and custom even though they are specified in the preconfiguration setup.
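
For context, such policies can be preconfigured in kibana.yml. Below is a minimal sketch (abbreviated and illustrative, mirroring the request body shown under Testing below rather than the full Cloud configuration):

# kibana.yml (abbreviated sketch; values are illustrative)
xpack.fleet.agentPolicies:
  - id: some-test-1
    name: Some test 1
    is_managed: true
    namespace: default
    package_policies:
      - name: Fleet Server
        package:
          name: fleet_server
        inputs:
          - type: fleet-server
            vars:
              - name: max_connections
                value: 200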

The cause is a mistake in the order in which objects are merged in the helper function deepMergeVars: the override value should be merged last since, well, it is meant to override ;)
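
To illustrate this class of ordering bug, here is a simplified TypeScript sketch (not the actual Kibana source; the Var shape and the use of lodash's merge here are assumptions):

import { merge } from 'lodash';

interface Var {
  name: string;
  value?: unknown;
  frozen?: boolean;
}

// With a last-wins deep merge, argument order decides which value survives.
function deepMergeVars(original: Var, override: Var): Var {
  // Buggy ordering: merge({}, override, original) would let the package
  // default (e.g. an empty string) clobber the preconfigured value.
  // Fixed ordering: the override is merged last, so it wins.
  return merge({}, original, override);
}

const packageDefault: Var = { name: 'custom', value: '' };
const preconfigured: Var = { name: 'custom', value: 'server.limits: ...' };
deepMergeVars(packageDefault, preconfigured);
// => { name: 'custom', value: 'server.limits: ...' }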

Testing

With Postman or cURL, send the request below to set up a preconfigured policy (here, the exact Cloud setup) prior to this PR. Observe the policy in the Fleet UI and notice the empty fields:

[screenshot: Fleet Server policy in the Fleet UI with empty max_connections and custom fields]

After applying this PR, run the request again with an incremented id. The new policy should have the fields correctly filled out:

[screenshot: the new policy with max_connections and custom correctly filled out]

PUT /api/fleet/setup/preconfiguration
{
  "agentPolicies": [
    {
      "id": "some-test-1",
      "name": "Some test 1",
      "description": "Default agent policy for agents hosted on Elastic Cloud",
      "is_default": false,
      "is_managed": true,
      "is_default_fleet_server": false,
      "namespace": "default",
      "monitoring_enabled": [],
      "unenroll_timeout": 600,
      "package_policies": [
        {
          "name": "Fleet Server",
          "package": {
            "name": "fleet_server"
          },
          "inputs": [
            {
              "type": "fleet-server",
              "keep_enabled": true,
              "vars": [
                {
                  "name": "host",
                  "value": "0.0.0.0",
                  "frozen": true
                },
                {
                  "name": "port",
                  "value": 8220,
                  "frozen": true
                },
                {
                  "name": "max_connections",
                  "value": 200
                },
                {
                  "name": "custom",
                  "value": "cache:\n  num_counters: 2000        # Limit the size of the hash table to rougly 10x expected number of elements\n  max_cost: 2097152         # Limit the total size of data allowed in the cache, 2 MiB in bytes.\nserver.limits:\n   policy_throttle: 200ms  # Roll out a new policy every 200ms; roughly 5 per second.\n   checkin_limit:\n     interval: 50ms        # Check in no faster than 20 per second.\n     burst: 25             # Allow burst up to 25, then fall back to interval rate.\n     max: 100              # No more than 100 long polls allowed. THIS EFFECTIVELY LIMITS MAX ENDPOINTS.\n   artifact_limit:\n     interval: 100ms       # Roll out 10 artifacts per second\n     burst: 10             # Small burst prevents outbound buffer explosion.\n     max: 10               # Only 10 transactions at a time max.  This should generally not be a relavent limitation as the transactions are cached.\n   ack_limit:\n     interval: 10ms        # Allow ACK only 100 per second.  ACK payload is unbounded in RAM so need to limit.\n     burst: 20             # Allow burst up to 20, then fall back to interrval rate.\n     max: 20               # Cannot have too many processing at once due to unbounded payload size.\n   enroll_limit:\n     interval: 100ms       # Enroll is both CPU and RAM intensive.  Limit to 10 per second.\n     burst: 5              # Allow intial burst, but limit to max.\n     max: 10               # Max limit.\nserver.runtime:\n  gc_percent: 20          # Force the GC to execute more frequently: see https://golang.org/pkg/runtime/debug/#SetGCPercent\n"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
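
For reference, a hypothetical cURL invocation against a local Kibana (host, credentials, and the preconfig.json file name are placeholders; Kibana write APIs require a kbn-xsrf header):

# Placeholders: adjust host and credentials for your deployment.
curl -X PUT "http://localhost:5601/api/fleet/setup/preconfiguration" \
  -u elastic:changeme \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d @preconfig.json   # the JSON body shown above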

@jen-huang added the release_note:fix, v8.0.0, Team:Fleet, v7.16.0, and v7.15.1 labels on Sep 28, 2021
@jen-huang self-assigned this on Sep 28, 2021
@jen-huang requested a review from a team as a code owner on September 28, 2021 00:12
@elasticmachine (Contributor)

Pinging @elastic/fleet (Team:Fleet)

@jen-huang added the auto-backport label on Sep 28, 2021
@kibanamachine (Contributor)

💚 Build Succeeded

Metrics [docs]: ✅ unchanged

cc @jen-huang

@juliaElastic (Contributor)

LGTM

@joshdover (Contributor) left a comment

Change LGTM. Can we open an issue for adding test coverage for this?

@kpollich (Member)

> Change LGTM. Can we open an issue for adding test coverage for this?

Created #113248 to capture test coverage. I will take this on.

@kibanamachine (Contributor)

💚 Backport successful

Branches: 7.x, 7.15

The backport PRs will be merged automatically after passing CI.

kibanamachine added a commit that referenced this pull request Sep 28, 2021
…es (#113204) (#113251)

Co-authored-by: Jen Huang <its.jenetic@gmail.com>
kibanamachine added a commit that referenced this pull request Sep 28, 2021
…es (#113204) (#113250)

Co-authored-by: Jen Huang <its.jenetic@gmail.com>
@jen-huang deleted the fix/cloud-preconfiguration branch on September 28, 2021 18:00
@dikshachauhan-qasource

Hi @kpollich

We have validated this PR and found the fix working in the 8.0, 7.15.1, and 7.16 snapshot builds.

  • Observation: default settings are now available under the Fleet Server integration on Cloud deployments.

[screenshots: Fleet Server integration settings populated on Cloud deployments]

Hence, it is working fine now.

Thanks
QAS

Labels
auto-backport, release_note:fix, Team:Fleet, v7.15.1, v7.16.0, v8.0.0
Development

Successfully merging this pull request may close these issues.

Fleet server no longer throttled in 7.15
7 participants