
[ERROR][logstash.pipeline] A plugin had an unrecoverable error. Will restart this plugin #6279

Closed
get2arun opened this issue Nov 18, 2016 · 12 comments


@get2arun

I have logstash-5.0.1-1.noarch installed on a Red Hat server. The ELK stack runs on one server and Filebeat is installed on another. The Filebeat config looks good and shows no errors, and it is pushing logs to the Logstash server. However, the data is not being pushed from Logstash to Elasticsearch. The error appears when the Logstash pipeline starts.
Here is the configuration, please help me figure out the issue.

Here is /etc/logstash/logstash.yml:

path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
log.level: debug
path.logs: /var/log/logstash

And here is /etc/logstash/conf.d/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticserver.domain.name:9200"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    user => elasticuser
    password => elasticpassword
  }
}

Error output from /var/log/logstash/logstash-plain.log

[2016-11-18T14:11:05,138][DEBUG][logstash.agent ] starting agent
[2016-11-18T14:11:05,139][DEBUG][logstash.agent ] starting pipeline {:id=>"main"}
[2016-11-18T14:11:05,142][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2016-11-18T14:11:05,155][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 16
[2016-11-18T14:11:05,177][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2016-11-18T14:11:05,177][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2016-11-18T14:11:05,183][DEBUG][logstash.outputs.elasticsearch] Normalizing http path {:path=>nil, :normalized=>nil}
[2016-11-18T14:11:05,192][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2016-11-18T14:11:05,204][DEBUG][io.netty.util.internal.JavassistTypeParameterMatcherGenerator] Generated: io.netty.util.internal.matchers.org.logstash.beats.BatchMatcher
[2016-11-18T14:11:05,227][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 11821 (auto-detected)
[2016-11-18T14:11:05,230][DEBUG][io.netty.util.NetUtil ] Loopback interface: lo (lo, 127.0.0.1)
[2016-11-18T14:11:05,231][DEBUG][io.netty.util.NetUtil ] /proc/sys/net/core/somaxconn: 128
[2016-11-18T14:11:05,232][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:b2:06:ef (auto-detected)
[2016-11-18T14:11:05,399][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://hidden:hidden@myelkserver.domain.name:9200"]}}
[2016-11-18T14:11:05,400][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["myelkserver.domain.name:9200"]}
[2016-11-18T14:11:05,403][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-11-18T14:11:05,407][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-18T14:11:05,415][DEBUG][logstash.agent ] Starting puma
[2016-11-18T14:11:05,416][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2016-11-18T14:11:05,418][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,434][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9601}
[2016-11-18T14:11:05,435][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,438][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
[2016-11-18T14:11:05,536][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:05 +0100}
[2016-11-18T14:11:06,545][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:06 +0100}
[2016-11-18T14:11:07,547][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:07 +0100}
[2016-11-18T14:11:08,551][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:08 +0100}
[2016-11-18T14:11:09,553][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:09 +0100}
[2016-11-18T14:11:10,408][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
[2016-11-18T14:11:10,554][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:10 +0100}
[2016-11-18T14:11:11,557][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:11 +0100}
[2016-11-18T14:11:11,695][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"3e41b992054e8ff9dabbc5e4da48cdf1b344fcd6-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_483a8f13-8c7f-4280-938b-40369032b893", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: Address already in use

Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
sun.nio.ch.ServerSocketAdaptor.bind(sun/nio/ch/ServerSocketAdaptor.java:74)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:125)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:554)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1258)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:511)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:496)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:980)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:250)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:363)
io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(io/netty/util/concurrent/SingleThreadEventExecutor.java:451)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:418)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:401)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:877)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(io/netty/util/concurrent/DefaultThreadFactory.java:144)
java.lang.Thread.run(java/lang/Thread.java:745)
[2016-11-18T14:11:12,560][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:12 +0100}
[2016-11-18T14:11:12,695][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2016-11-18T14:11:12,705][WARN ][io.netty.channel.AbstractChannel] Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x6ab3d2e5]
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:840) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:342) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:335) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:765) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:475) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:85) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:330) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:281) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:277) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:259) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at org.logstash.beats.Server.listen(Server.java:70) [logstash-input-beats-3.1.8.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_73]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_73]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_73]
at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_73]
at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:436) [?:?]
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:300) [?:?]
at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:35) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:134) [?:?]
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:60) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) [?:?]
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182) [?:?]
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:168) [?:?]
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221) [?:?]
at org.jruby.ast.RescueNode.interpret(RescueNode.java:116) [?:?]
at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96) [?:?]
at org.jruby.ast.BeginNode.interpret(BeginNode.java:83) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) [?:?]
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182) [?:?]
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170) [?:?]
at org.jruby.ast.FCallOneArgNode.interpret(FCallOneArgNode.java:36) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112) [?:?]
at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206) [?:?]
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:194) [?:?]
at org.jruby.runtime.Interpreted19Block.call(Interpreted19Block.java:125) [?:?]
at org.jruby.runtime.Block.call(Block.java:101) [?:?]
at org.jruby.RubyProc.call(RubyProc.java:300) [?:?]
at org.jruby.RubyProc.call(RubyProc.java:230) [?:?]
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:99) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
[2016-11-18T14:11:12,708][ERROR][io.netty.util.concurrent.DefaultPromise.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:840) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:342) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:335) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:765) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:767) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:435) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:111) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1062) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:486) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:85) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:330) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:281) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:277) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:259) [netty-all-4.1.3.Final.jar:4.1.3.Final]

@jsvd
Member

jsvd commented Nov 18, 2016

Looking at Error: Address already in use, are you sure there isn't more than one Logstash instance running? Check with something like ps axf | grep logstash.
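
For reference, a minimal shell sketch of that check and cleanup; the service name logstash and the use of systemctl are assumptions about this particular setup:

# List running Logstash processes; a service-managed install should show only one
ps axf | grep '[l]ogstash'

# Confirm what is actually bound to the Beats port (5044 in this config)
ss -ltnp | grep 5044

# If a stray instance is holding the port, stop it, then restart the managed service
kill <PID_OF_STRAY_INSTANCE>
sudo systemctl restart logstash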

@get2arun
Author

Thanks. Two instances were running. It's fixed now. I will update the ticket.

Thank you.


@suyograo
Contributor

We should shut down gracefully or provide a better error message when the port is in use. The stack trace is not useful to our users. I've created an issue here: logstash-plugins/logstash-input-beats#161

@get2arun
Author

get2arun commented Nov 25, 2016 via email

@julienvey

Something else that can cause this issue is having two identical input configurations in your Logstash config. If this block appears

input {
	beats {
		port => 5044
	}
}

twice, it will cause the same error
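
A quick way to spot that, sketched below assuming the default /etc/logstash/conf.d path from the original report: Logstash concatenates every file under path.config, so a duplicate input in a second file collides just like a duplicate in the same file.

# Find every input bound to port 5044 across all loaded config files
grep -rn 'port => 5044' /etc/logstash/conf.d/

# List the whole directory, including leftover backup files that will also be loaded
ls -la /etc/logstash/conf.d/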

@get2arun
Author

get2arun commented Jan 6, 2017 via email

@RamyaGowda397

João Duarte

Two instances were running; I used the kill command and it is working fine now.

Thank you

@suraj1538

Hi, maybe a newbie question.
We are setting up an ELK pipeline and facing an error on Logstash similar to the one posted on this page, except that "Error: event executor terminated" is the actual error. What does this mean? Any help is appreciated.

][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>9997, id=>"f273c84dfa2daf4dc99df25f4478fe5863d366c1-2", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0808c58c-f936-4ad3-a9ee-45a64253ed07", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: event executor terminated

@RamyaGowda397

@suraj1538
If you're using a Linux OS, run the following command and check whether two instances of Logstash are running:
ps aux | grep logstash

If two instances are running, kill the stray process and restart the Logstash server.

@samratbe

samratbe commented Aug 3, 2018

Getting the same issue. Any suggestions?

@codeachange

Logstash will also read *.conf.bak files if you use path.config.

@saalihou

saalihou commented Jan 13, 2020

If anyone stumbles upon this: I had a logstash.conf.save file generated by the nano editor, and that was causing Logstash to read the same config twice, which caused the "address already in use" error.

Edit: it does not seem to be related to the filename; just having an extra file in the pipelines folder was enough for Logstash to read it, probably because I'm using https://github.com/deviantony/docker-elk
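
For a docker-elk style setup, a quick sanity check is to list what actually ended up in the container's pipeline directory; the container name logstash and the path /usr/share/logstash/pipeline below follow docker-elk's defaults and are assumptions for other setups:

# Every file in this directory is loaded as pipeline config, regardless of extension
docker exec logstash ls -la /usr/share/logstash/pipeline/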
