
Introduce a new flag to explicitly permit legacy monitoring #16586

Conversation

andsel
Contributor

@andsel andsel commented Oct 18, 2024

Release notes

Introduces a new setting, xpack.monitoring.allow_legacy_collection, which must be set to explicitly enable the legacy monitoring collector.

What does this PR do?

Updates the method that tests whether monitoring is enabled so that it also considers xpack.monitoring.allow_legacy_collection when determining whether the monitoring.* settings are valid.
By default the flag is false; the user has to enable it intentionally to keep using the legacy monitoring settings.
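For illustration, here is a minimal sketch of that check, assuming a plain Logstash settings object; the method and variable names are hypothetical, only the setting names come from this PR:

# Hypothetical sketch: the legacy monitoring settings only take effect
# when the new allow-flag is also set. The real method in x-pack may differ.
def legacy_monitoring_enabled?(settings)
  allowed   = settings.get("xpack.monitoring.allow_legacy_collection") # defaults to false
  requested = settings.get("xpack.monitoring.enabled") || settings.get("monitoring.enabled")
  allowed && requested
end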

Why is it important/What is the impact to the user?

Permits the user to force the use of the legacy X-Pack internal collector.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • [ ] I have made corresponding changes to the default configuration files (and/or docker env variables)
  • I have added tests that prove my fix is effective or that my feature works

Author's Checklist

  • [ ]

How to test this PR locally

  • Run ES and KB locally with:
curl -fsSL https://elastic.co/start-local | sh
  • from the console output, record the password for the elastic user.
  • configure Logstash internal monitoring (in config/logstash.yml), like:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: s3cr3t
xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
  • verify no monitoring data is present in Kibana
  • stop Logstash
  • explicitly allow the legacy collection in config/logstash.yml with:
xpack.monitoring.allow_legacy_collection: true
  • verify that Kibana can show monitoring data

Related issues

Use cases

As a developer, I want to drop the internal legacy collector for good, in favor of Elastic Agent.

Logs

Logs without collection override

% bin/logstash
Using system java: /Users/andrea/.sdkman/candidates/java/current/bin/java
Sending Logstash logs to /Users/andrea/workspace/logstash_andsel/logs which is now configured via log4j2.properties
[2024-10-21T10:12:39,642][INFO ][logstash.runner          ] Log4j configuration path used is: /Users/andrea/workspace/logstash_andsel/config/log4j2.properties
[2024-10-21T10:12:39,645][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2024-10-21T10:12:39,645][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"9.0.0", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [arm64-darwin]"}
[2024-10-21T10:12:39,646][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-10-21T10:12:39,647][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-10-21T10:12:39,647][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-10-21T10:12:39,655][INFO ][logstash.configmanagement.bootstrapcheck] Using Elasticsearch as config store {:pipeline_id=>["main", "apache_logs"], :poll_interval=>"TimeValue{duration=5, timeUnit=SECONDS}ns"}
[2024-10-21T10:12:39,811][INFO ][logstash.configmanagement.elasticsearchsource] Configuration Management License OK
[2024-10-21T10:12:40,003][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-10-21T10:12:40,006][INFO ][logstash.configmanagement.elasticsearchsource] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2024-10-21T10:12:40,017][WARN ][logstash.configmanagement.elasticsearchsource] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2024-10-21T10:12:40,017][INFO ][logstash.configmanagement.elasticsearchsource] Elasticsearch version determined (8.15.3) {:es_version=>8}
[2024-10-21T10:12:40,017][WARN ][logstash.configmanagement.elasticsearchsource] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-10-21T10:12:40,111][INFO ][org.reflections.Reflections] Reflections took 53 ms to scan 1 urls, producing 149 keys and 522 values
[2024-10-21T10:12:40,217][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-10-21T10:12:40,222][WARN ][logstash.javapipeline    ][main] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-10-21T10:12:40,227][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["central pipeline management"], :thread=>"#<Thread:0x3de7334d /Users/andrea/workspace/logstash_andsel/logstash-core/lib/logstash/java_pipeline.rb:139 run>"}
[2024-10-21T10:12:40,406][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.18}
[2024-10-21T10:12:40,415][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2024-10-21T10:12:40,423][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
^C[2024-10-21T10:12:54,390][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2024-10-21T10:12:54,477][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-10-21T10:12:55,411][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2024-10-21T10:12:55,419][INFO ][logstash.runner          ] Logstash shut down.

Logs with collection override

% bin/logstash
Using system java: /Users/andrea/.sdkman/candidates/java/current/bin/java
Sending Logstash logs to /Users/andrea/workspace/logstash_andsel/logs which is now configured via log4j2.properties
[2024-10-21T10:13:19,015][INFO ][logstash.runner          ] Log4j configuration path used is: /Users/andrea/workspace/logstash_andsel/config/log4j2.properties
[2024-10-21T10:13:19,017][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2024-10-21T10:13:19,017][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"9.0.0", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [arm64-darwin]"}
[2024-10-21T10:13:19,018][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-10-21T10:13:19,019][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-10-21T10:13:19,019][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-10-21T10:13:19,026][INFO ][logstash.configmanagement.bootstrapcheck] Using Elasticsearch as config store {:pipeline_id=>["main", "apache_logs"], :poll_interval=>"TimeValue{duration=5, timeUnit=SECONDS}ns"}
[2024-10-21T10:13:19,187][INFO ][logstash.configmanagement.elasticsearchsource] Configuration Management License OK
[2024-10-21T10:13:19,427][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2024-10-21T10:13:19,427][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2024-10-21T10:13:19,445][INFO ][logstash.configmanagement.elasticsearchsource] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2024-10-21T10:13:19,464][WARN ][logstash.configmanagement.elasticsearchsource] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2024-10-21T10:13:19,464][INFO ][logstash.configmanagement.elasticsearchsource] Elasticsearch version determined (8.15.3) {:es_version=>8}
[2024-10-21T10:13:19,464][WARN ][logstash.configmanagement.elasticsearchsource] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-10-21T10:13:19,466][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-10-21T10:13:19,588][INFO ][org.reflections.Reflections] Reflections took 69 ms to scan 1 urls, producing 149 keys and 522 values
[2024-10-21T10:13:19,640][INFO ][logstash.javapipeline    ] Pipeline `.monitoring-logstash` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-10-21T10:13:19,649][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://localhost:9200"]}
[2024-10-21T10:13:19,652][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2024-10-21T10:13:19,664][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2024-10-21T10:13:19,664][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (8.15.3) {:es_version=>8}
[2024-10-21T10:13:19,664][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-10-21T10:13:19,671][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-10-21T10:13:19,675][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x6075fcff /Users/andrea/workspace/logstash_andsel/logstash-core/lib/logstash/java_pipeline.rb:139 run>"}
[2024-10-21T10:13:19,676][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-10-21T10:13:19,678][WARN ][logstash.javapipeline    ][main] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-10-21T10:13:19,679][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["central pipeline management"], :thread=>"#<Thread:0x5f62301d /Users/andrea/workspace/logstash_andsel/logstash-core/lib/logstash/java_pipeline.rb:139 run>"}
[2024-10-21T10:13:19,866][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.19}
[2024-10-21T10:13:19,866][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.19}
[2024-10-21T10:13:19,875][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2024-10-21T10:13:19,896][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2024-10-21T10:13:19,907][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
^C[2024-10-21T10:14:36,525][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2024-10-21T10:14:36,616][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-10-21T10:14:37,544][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2024-10-21T10:14:37,968][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2024-10-21T10:14:38,553][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:".monitoring-logstash"}
[2024-10-21T10:14:38,562][INFO ][logstash.runner          ] Logstash shut down.

@andsel andsel self-assigned this Oct 18, 2024
@andsel andsel marked this pull request as draft October 18, 2024 14:46
@andsel andsel changed the title Introduce a new flag setting 'legacy.monitoring.enabled' which eventu… Introduce a new flag to explicitly permit legacy monitoring Oct 18, 2024
@andsel andsel linked an issue Oct 18, 2024 that may be closed by this pull request
@andsel andsel marked this pull request as ready for review October 21, 2024 08:42
Contributor

@kaisecheng kaisecheng left a comment

Besides the suggestion of renaming the flag, I would suggest adding allow.legacy.monitoring to env2yaml.go. The main reason is to ease the transition for Docker users, as today they can set up monitoring just by passing the env variable.

docs/static/settings-file.asciidoc
config/logstash.yml
config/logstash.yml
Contributor

@kaisecheng kaisecheng left a comment

Please add the flag to the benchmark Buildkite pipeline 🙏

When I use the xpack.* settings to enable monitoring the old way, Logstash does not give any hint or log that the flag is missing. I think we need to log something to guide the user.

Let @karenzone have a look at the wording; otherwise, LGTM

docs/static/monitoring/monitoring-internal-legacy.asciidoc
config/logstash.yml
x-pack/spec/monitoring/pipeline_register_hook_spec.rb
@kaisecheng
Contributor

One more is missing in default_metadata.rb, for curl localhost:9600/

@andsel
Contributor Author

andsel commented Oct 22, 2024

Thanks @kaisecheng for your review. I added the allow.legacy.monitoring flag to the Buildkite pipeline, and Logstash now logs a warning when legacy monitoring is enabled but not allowed.

Example of the logged line:

[2024-10-22T14:27:17,977][WARN ][logstash.monitoringextension.pipelineregisterhook] Legacy internal monitoring is enabled (xpack.monitoring.enabled or monitoring.enabled is true) but not allowed. Please check your configuration and eventually set allow.legacy.monitoring
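For reference, a sketch of the guard that could emit such a warning in the pipeline register hook; this is an assumption about the shape of the change, not the exact code, and the flag is named allow.legacy.monitoring as it was at this point in the thread (it was later renamed to xpack.monitoring.allow_legacy_collection):

# Hypothetical sketch; the real code in x-pack/lib/monitoring may differ.
# Assumes `logger` is available on the hook.
requested = LogStash::SETTINGS.get("xpack.monitoring.enabled") || LogStash::SETTINGS.get("monitoring.enabled")
allowed   = LogStash::SETTINGS.get("allow.legacy.monitoring")
if requested && !allowed
  logger.warn("Legacy internal monitoring is enabled (xpack.monitoring.enabled or monitoring.enabled is true) " \
              "but not allowed. Please check your configuration and eventually set allow.legacy.monitoring")
end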

Contributor

@kaisecheng kaisecheng left a comment

One more missing in default_metadata.rb for curl localhost:9600/

☝️ Almost there

x-pack/lib/monitoring/monitoring.rb
@andsel
Contributor Author

andsel commented Oct 22, 2024

@kaisecheng
Contributor

@andsel https://github.com/elastic/logstash/blob/main/logstash-core/lib/logstash/api/commands/default_metadata.rb#L73-L76
It is about the Logstash API. When enabled_xpack_monitoring? returns true, the monitoring-related info is returned. I think it needs && LogStash::SETTINGS.get("allow.legacy.monitoring") added.
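A sketch of what that suggested guard could look like in default_metadata.rb; this is an illustration of the comment above, not the actual implementation, and the flag name follows the proposal at this point in the thread:

# Hypothetical sketch of the suggested change; the surrounding method body may differ.
def enabled_xpack_monitoring?
  LogStash::SETTINGS.get("xpack.monitoring.enabled") &&
    LogStash::SETTINGS.get("allow.legacy.monitoring")
end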

@kaisecheng
Contributor

@andsel btw, I have triggered the exhaustive tests for this PR, but they came back red

Contributor

@kaisecheng kaisecheng left a comment

LGTM :)

Contributor

@karenzone karenzone left a comment

Does this apply only to internal collection? If so, we should call out internal collection specifically. Technically, any collection method we've used in the past would fall under the "legacy" umbrella, and we might need more precision and granularity than would be provided by labeling this one implementation as legacy.
Please LMKWYT.

@andsel
Contributor Author

andsel commented Nov 4, 2024

@karenzone yes, it's related just to the internal collector. Do you think we should rename the setting from allow.legacy.monitoring to allow.internal.legacy.monitoring?

Member

@robbavey robbavey left a comment

I think we should consider renaming the setting to live under xpack.monitoring. WDYT?

.buildkite/scripts/benchmark/config/logstash.yml
andsel and others added 17 commits November 14, 2024 10:49
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
…ollection' and moved the registering of such setting from core to xpack
@andsel andsel force-pushed the fix/add_feature_flag_to_forcibly_enable_legacy_monitoring_collector branch from 20b2db2 to c0e0aba Compare November 14, 2024 09:49
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

Quality Gate passed

Issues
0 New issues
0 Fixed issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarQube

@elasticmachine
Collaborator

💚 Build Succeeded

History

cc @andsel

Contributor

@karenzone karenzone left a comment

LGTM

@andsel andsel merged commit d4fb06e into elastic:main Nov 19, 2024
7 checks passed
donoghuc added a commit to donoghuc/logstash that referenced this pull request Nov 20, 2024
In elastic#16586 the module include was
removed. This causes failures in the script when the module is referenced. This
commit re-enables the include for the yaml module.
donoghuc added a commit that referenced this pull request Nov 20, 2024
In #16586 the module include was
removed. This causes failures in the script when the module is referenced. This
commit re-enables the include for the yaml module.
Development

Successfully merging this pull request may close these issues.

Move Internal Collection stack monitoring behind a feature flag
6 participants