Splunk returner failure with different error #50815

Closed
mruepp opened this issue Dec 11, 2018 · 9 comments
Labels
Bug (broken, incorrect, or confusing behavior), severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround)
Milestone

Comments

@mruepp

mruepp commented Dec 11, 2018

Description of Issue/Question

We configure a splunk returner on the master with the master job cache.
This is the config file in master.d; there is no config on the minion, as described here:

splunk_http_forwarder:
  token: 'SOME-TOKEN'
  indexer: 'https://splunkindexer'
  sourcetype: 'salt'
  index: 'salt_index'

master_job_cache: splunk

When running the following locally on the salt master, we get this in the master log:
sudo salt-call test.ping --return splunk

2018-12-11 17:29:47,299 [salt.utils.job   :45  ][ERROR   ][10985] Returner 'splunk' does not support function prep_jid
2018-12-11 17:29:47,300 [salt.master      :1795][ERROR   ][10985] Error in function _return:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 1788, in run_func
    ret = getattr(self, func)(load)
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 1587, in _return
    self.opts, load, event=self.event, mminion=self.mminion)
  File "/usr/lib/python2.7/site-packages/salt/utils/job.py", line 46, in store_job
    raise KeyError(emsg)
KeyError: u"Returner 'splunk' does not support function prep_jid"

When running this on the salt master:
sudo salt 'salt.dev*' test.ping --return splunk
This is the output on the salt CLI:

Salt request timed out. The master is not responding. You may need to run your command with `--async` in order to bypass the congested event bus. With `--async`, the CLI tool will print the job id (jid) and exit immediately without listening for responses. You can then use `salt-run jobs.lookup_jid` to look up the results of the job in the job cache later.

And this is the master log output:

2018-12-11 17:32:24,442 [salt.master      :2141][ERROR   ][10986] Failed to allocate a jid. The requested returner 'splunk' could not be loaded.
2018-12-11 17:32:24,444 [salt.transport.zeromq:691 ][ERROR   ][10986] Some exception handling a payload from minion
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/transport/zeromq.py", line 687, in handle_message
    ret, req_opts = yield self.payload_handler(payload)
  File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 870, in run
    value = future.result()
  File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 214, in result
    raise_exc_info(self._exc_info)
  File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 215, in wrapper
    result = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 1022, in _handle_payload
    'clear': self._handle_clear}[key](load)
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 1053, in _handle_clear
    ret = getattr(self.clear_funcs, cmd)(load), {'fun': 'send_clear'}
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 2087, in publish
    payload = self._prep_pub(minions, jid, clear_load, extra, missing)
  File "/usr/lib/python2.7/site-packages/salt/master.py", line 2179, in _prep_pub
    self.event.fire_event({'minions': minions}, clear_load['jid'])
  File "/usr/lib/python2.7/site-packages/salt/utils/event.py", line 741, in fire_event
    salt.utils.stringutils.to_bytes(tag),
  File "/usr/lib/python2.7/site-packages/salt/utils/stringutils.py", line 63, in to_bytes
    return to_str(s, encoding, errors)
  File "/usr/lib/python2.7/site-packages/salt/utils/stringutils.py", line 118, in to_str
    raise TypeError('expected str, bytearray, or unicode')
TypeError: expected str, bytearray, or unicode
2018-12-11 17:32:24,445 [tornado.general  :452 ][ERROR   ][10986] Uncaught exception, closing connection.
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 444, in _handle_events
    self._handle_send()
  File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 487, in _handle_send
    status = self.socket.send_multipart(msg, **kwargs)
  File "/usr/lib64/python2.7/site-packages/zmq/sugar/socket.py", line 363, in send_multipart
    i, rmsg,
TypeError: Frame 0 (u'Some exception handling minion...) does not support the buffer interface.
2018-12-11 17:32:24,445 [tornado.application:611 ][ERROR   ][10986] Exception in callback None
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/tornado/ioloop.py", line 865, in start
    handler_func(fd_obj, events)
  File "/usr/lib64/python2.7/site-packages/tornado/stack_context.py", line 274, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 444, in _handle_events
    self._handle_send()
  File "/usr/lib64/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 487, in _handle_send
    status = self.socket.send_multipart(msg, **kwargs)
  File "/usr/lib64/python2.7/site-packages/zmq/sugar/socket.py", line 363, in send_multipart
    i, rmsg,
TypeError: Frame 0 (u'Some exception handling minion...) does not support the buffer interface.     

Output of curl:

[user@saltmaster ~]$ curl -k -v splunk:8088                                                                        
* About to connect() to splunk port 8088 (#0)                                                                        
*   Trying 10.0.0.1...                                                                                             
* Connected to splunk (10.0.0.1) port 8088 (#0)                                                                  
> GET / HTTP/1.1                                                                                                       
> User-Agent: curl/7.29.0                                                                                              
> Host: splunk:8088                                                                                                  
> Accept: */*                                                                                                          
>                                                                                                                      
< HTTP/1.1 404 Not Found                                                                                               
< Date: Tue, 11 Dec 2018 16:36:03 GMT                                                                                  
< Content-Type: text/html; charset=UTF-8                                                                               
< X-Content-Type-Options: nosniff                                                                                      
< Content-Length: 223                                                                                                  
< Connection: Keep-Alive                                                                                               
< X-Frame-Options: SAMEORIGIN                                                                                          
< Server: Splunkd                                                                                                      
<                                                                                                                      
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested URL was not found on this server.</p></body></html>
* Connection #0 to host splunk left intact      

Setup

CentOS Linux release 7.6.1810 (Core)
Linux devs0241 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Steps to Reproduce Issue


Versions Report

salt 2018.3.3 (Oxygen) Master/Minion

@dwoz added the Bug (broken, incorrect, or confusing behavior), severity-medium, and P1 (Priority 1) labels on Dec 11, 2018
@dwoz added this to the Approved milestone on Dec 11, 2018
@dwoz
Contributor

dwoz commented Dec 11, 2018

@mruepp Thank you for reporting this issue.

@mruepp
Author

mruepp commented Feb 11, 2019

Is there a target release for this bug?

@tariqhy

tariqhy commented Aug 15, 2019

Hi. I'm running into the same bug with salt 2019.2.0 (Fluorine). Any idea when it will be addressed?

@kiamatthews

kiamatthews commented Aug 21, 2019

Another 2019.2.0 user here with the exact same issue. Any workarounds?

@whytewolf
Collaborator

So, for the record, here are my two cents on this one: I don't think this is possible. Splunk would not make a great master_job_cache. Splunk isn't a database that can be queried for small amounts of data or hit with lots of requests, both of which a master_job_cache needs to do.

I can see merit in adding event_returner functionality, but not a full master_job_cache or ext_job_cache.

See https://docs.saltstack.com/en/latest/ref/returners/ for a list of the functions needed for each type of returner.
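
For illustration, a minimal sketch of the job-cache functions that docs page lists and that the splunk returner does not provide (function names taken from the docs; the signatures here are approximate, not copied from any existing returner):

def prep_jid(nocache=False, passed_jid=None):
    '''Generate or validate a job id before the job is published.'''

def save_load(jid, load, minions=None):
    '''Store the publish payload for a jid so it can be looked up later.'''

def get_load(jid):
    '''Fetch the payload stored by save_load() for this jid.'''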

@kiamatthews

Ah yes. I just tried to list recent jobs and couldn't. I removed the ext_job_cache: splunk declaration and I was able to list my jobs. So your assessment is spot on.

@kiamatthews

I can see merit in adding event_returner functionality, but not a full master_job_cache or ext_job_cache.

@whytewolf - Would this need to be configured to run on the master, or would it need to be in a pillar for each minion? My hope is the former. It would be much easier for me to send data to Splunk from a single server (the salt master) rather than having to whitelist traffic from individual minions.

@whytewolf
Collaborator

An event_returner would be set up on the master.
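
For example, if the splunk returner gained an event_return function (it did not have one at the time of this comment), enabling it would look roughly like this in the master config, reusing the splunk_http_forwarder block from the issue description; event_return is the standard master option that routes events to one or more returners:

splunk_http_forwarder:
  token: 'SOME-TOKEN'
  indexer: 'https://splunkindexer'
  sourcetype: 'salt'
  index: 'salt_index'

event_return: splunk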

@sagetherage added the Magnesium (Mg release, after Na, prior to Al) label on May 27, 2020
@sagetherage removed the P1 (Priority 1) label on Jun 3, 2020
@sagetherage changed the milestone from Approved to Magnesium on Jul 29, 2020
@sagetherage removed the Magnesium label on Sep 29, 2020
@sagetherage changed the milestone from Magnesium back to Approved on Sep 29, 2020
@Ch3LL
Contributor

Ch3LL commented Sep 11, 2023

As per @whytewolf's comments, Splunk is not a valid use case for master_job_cache, but event return capability was added to the splunk returner in #61150, so I will go ahead and close this as resolved.

@Ch3LL closed this as completed on Sep 11, 2023