SignalFx was acquired by Splunk in October 2019. See https://www.splunk.com/en_us/investor-relations/acquisitions/signalfx.html for more information.
This repository contains legacy libraries for reporting metrics to Splunk Observability Cloud (formerly SignalFx). The only commits that will be made to this repo are organizational or security related patches. No additional features will be added, and the repository will be archived and the final versions published on or prior to March 1, 2025.
Splunk has adopted OpenTelemetry. Please visit the official documentation page: Instrument Python applications for Splunk Observability Cloud. Use the Splunk Distribution of OpenTelemetry Python Instrumentation or the OpenTelemetry Python SDK to send telemetry data to Splunk Observability Cloud. Do not integrate signalfx-python into new services.
This is a programmatic interface in Python for SignalFx's metadata and ingest APIs. It is meant to provide a base for communicating with SignalFx APIs that can be easily leveraged by scripts and applications to interact with SignalFx or report metric and event data to SignalFx. It is also the base for metric reporters that integrate with common Python-based metric collections tools or libraries.
To install with pip:
$ pip install signalfx
To install from source:
$ git clone https://github.com/signalfx/signalfx-python.git
$ cd signalfx-python/
$ pip install -e .
This client library provides programmatic access to SignalFx's APIs:
- the data ingest API;
- the metadata REST API;
- the SignalFlow API.
You start by instantiating a signalfx.SignalFx() object, which then gives you access to the API client that you want:
import signalfx
sfx = signalfx.SignalFx(
    api_endpoint='https://api.{REALM}.signalfx.com',
    ingest_endpoint='https://ingest.{REALM}.signalfx.com',
    stream_endpoint='https://stream.{REALM}.signalfx.com')
# For the ingest API
ingest = sfx.ingest('ORG_TOKEN')
# For the REST API
rest = sfx.rest('API_TOKEN')
# For the SignalFlow API
flow = sfx.signalflow('ACCESS_TOKEN')
If no endpoints are set manually, this library uses the us0 realm by default. If you are not in this realm, you will need to explicitly set the endpoint URLs above. To determine whether you are in a different realm and need to explicitly set the endpoints, check your profile page in the SignalFx web application.

You will also need to specify an access token when requesting one of these clients. For the ingest client, you need to specify your organization access token (which can be obtained from the SignalFx organization you want to report data into). For the REST API, you must use your user access token. For the SignalFlow client, either an organization access token or a user access token may be used. For more information on access tokens, see the API's Authentication documentation.
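For example, a minimal sketch of pointing the client at a hypothetical us1 realm (substitute the realm shown on your profile page):

import signalfx

# 'us1' is a hypothetical realm used for illustration; use the realm
# shown on your profile page in the SignalFx web application.
realm = 'us1'
sfx = signalfx.SignalFx(
    api_endpoint='https://api.{0}.signalfx.com'.format(realm),
    ingest_endpoint='https://ingest.{0}.signalfx.com'.format(realm),
    stream_endpoint='https://stream.{0}.signalfx.com'.format(realm))

# Each client takes the token type appropriate to its API.
ingest = sfx.ingest('ORG_TOKEN')       # organization access token
rest = sfx.rest('API_TOKEN')           # user access token
flow = sfx.signalflow('ACCESS_TOKEN')  # either token type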
Basic usage of the library for reporting data goes as follows:
import signalfx
with signalfx.SignalFx().ingest('ORG_TOKEN') as sfx:
    sfx.send(
        gauges=[
            {'metric': 'myfunc.time',
             'value': 532,
             'timestamp': 1442960607000},
            ...
        ],
        counters=[
            {'metric': 'myfunc.calls',
             'value': 42,
             'timestamp': 1442960607000},
            ...
        ],
        cumulative_counters=[
            {'metric': 'myfunc.calls_cumulative',
             'value': 10,
             'timestamp': 1442960607000},
            ...
        ])
If you're sending data from multiple places in your code, you should create your ingest client once and use it throughout your application. Each ingest client instance has an internal queue of datapoints and events that need to be sent to SignalFx, as well as an internal thread draining that queue. When you no longer need the client instance, make sure you call .stop() on it to ensure the queue is fully drained.
import signalfx
sfx = signalfx.SignalFx().ingest('ORG_TOKEN')
try:
    sfx.send(...)
    sfx.send(...)
finally:
    # Make sure that everything gets sent.
    sfx.stop()
The timestamp must be a millisecond-precision timestamp: the number of milliseconds elapsed since the Epoch. The timestamp field is optional, but strongly recommended. If not specified, it will be set by SignalFx's ingest servers automatically; in this situation, the timestamp of your datapoints will not accurately represent the time of their measurement (network latency, batching, etc. will all impact when those datapoints actually make it to SignalFx).
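For instance, a current millisecond-precision timestamp can be obtained from the standard library (a minimal sketch):

import time

# Milliseconds elapsed since the Epoch, suitable for the 'timestamp' field.
timestamp_ms = int(time.time() * 1000)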
When sending datapoints with multiple calls to send(), it is recommended to re-use the same SignalFx client object for each send() call. If you must use multiple client objects for the same token, which is not recommended, it is important to call stop() after making all send() calls. Each SignalFx client object uses a background thread to send datapoints without blocking the caller. Calling stop() will gracefully flush the thread's send queue and close its TCP connections.
Reporting dimensions for the data is also optional, and can be accomplished by specifying a dimensions parameter on each datapoint containing a dictionary of string-to-string key/value pairs representing the dimensions:
import signalfx
with signalfx.SignalFx().ingest('ORG_TOKEN') as sfx:
    sfx.send(
        gauges=[
            {
                'metric': 'myfunc.time',
                'value': 532,
                'timestamp': 1442960607000,
                'dimensions': {'host': 'server1', 'host_ip': '1.2.3.4'}
            },
            ...
        ], ...)
See examples/generic_usecase.py for a complete code sample showing how to send data to SignalFx.
Events can be sent to SignalFx via the send_event() function. The event type must be specified, and dimensions and extra event properties can be supplied as well.
import signalfx
with signalfx.SignalFx().ingest('ORG_TOKEN') as sfx:
    sfx.send_event(
        event_type='deployments',
        dimensions={
            'host': 'myhost',
            'service': 'myservice',
            'instance': 'myinstance'},
        properties={
            'version': '2015.04.29-01'})
The library includes functions to search, retrieve, and update metric metadata and tags. Deleting tags is also supported.
import signalfx
with signalfx.SignalFx().rest('API_TOKEN') as sfx:
    sfx.update_tag('tag_name',
                   description='An example tag',
                   custom_properties={'version': 'some_number'})
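For retrieving or deleting a tag, a minimal sketch could look like the following; note that the get_tag() and delete_tag() method names are assumptions mirroring the update_tag() call above, so verify them against the REST client source before relying on them:

import signalfx

with signalfx.SignalFx().rest('API_TOKEN') as sfx:
    # get_tag() and delete_tag() are assumed method names; check the
    # REST client source if your version exposes different accessors.
    tag = sfx.get_tag('tag_name')
    print(tag)
    sfx.delete_tag('tag_name')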
Optionally, the client may be configured to append additional dimensions to all metrics and events sent to SignalFx. One use case for this is to append the AWS unique ID of the current host as an extra dimension. For example:
import signalfx
from signalfx.aws import AWS_ID_DIMENSION, get_aws_unique_id
sfx = signalfx.SignalFx().ingest('ORG_TOKEN')
# This dimension will be added to all datapoints sent.
sfx.add_dimensions({AWS_ID_DIMENSION: get_aws_unique_id()})
try:
    sfx.send(
        gauges=[
            {
                'metric': 'myfunc.time',
                'value': 532,
                'timestamp': 1442960607000,
                'dimensions': {'host': 'server1', 'host_ip': '1.2.3.4'}
            },
        ])
finally:
    sfx.stop()
pyformance is a Python library that provides CodaHale-style metrics in a very Pythonic way. We offer a reporter that can report the pyformance metric registry data directly to SignalFx.
from signalfx.pyformance import (count_calls, count_calls_with_dims,
                                 gauge, SignalFxReporter)

@count_calls
def callme():
    # whatever
    pass

@count_calls_with_dims(dimension_key="dimension_value")
def callme_with_dims():
    # whatever
    pass

sfx = SignalFxReporter(token='ORG_TOKEN')
sfx.start()

callme()
callme()
callme_with_dims()
callme_with_dims()
gauge('test').set_value(42)
See examples/pyformance_usecase.py for a complete code example using Pyformance.
WARNING: The SignalFlow functionality in this repository has been migrated to github.com/signalfx/signalflow-client-python and is now available via the signalflow-client-python package. Please note that SignalFlow functionality in this repository is deprecated and will be removed in future releases.
SignalFlow is SignalFx's real-time analytics computation language. The SignalFlow API allows SignalFx users to execute real-time streaming analytics computations on the SignalFx platform. For more information, head over to our Developers documentation.
The SignalFlow client accepts either an Organization Access Token or a User API Token. Executing a SignalFlow program is very simple with this client library:
import signalfx
program = "data('cpu.utilization').mean().publish()"
with signalfx.SignalFx().signalflow('ACCESS_TOKEN') as flow:
    print('Executing {0} ...'.format(program))
    computation = flow.execute(program)

    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            print('{0}: {1}'.format(msg.logical_timestamp_ms, msg.data))
        if isinstance(msg, signalfx.signalflow.messages.EventMessage):
            print('{0}: {1}'.format(msg.timestamp_ms, msg.properties))
Metadata about the streamed timeseries is received from .stream(), but it is automatically intercepted by the client library and made available through the Computation object returned by execute():
if isinstance(msg, signalfx.signalflow.messages.DataMessage):
    for tsid, value in msg.data.items():
        metadata = computation.get_metadata(tsid)
        # Display metadata and datapoint value as desired
For more examples of how to execute SignalFlow computations with this library, and how to interpret and use the returned stream messages, you can look at the simple example in examples/signalflow/basic.py or at the SignalFlow CLI and its implementation, which uses this library.
With the SignalFlow output being programmatically accessible, it's easy to convert this data into any form that you need for further use or analysis. One such use case is to build a Pandas DataFrame with the computation's output. For a complete example of how to do this, see examples/signalflow/dataframe.py.
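As an illustration only (not the examples/signalflow/dataframe.py implementation), a minimal sketch that collects a few DataMessage batches into a pandas DataFrame could look like this:

import pandas as pd
import signalfx

program = "data('cpu.utilization').mean().publish()"
rows = {}

with signalfx.SignalFx().signalflow('ACCESS_TOKEN') as flow:
    computation = flow.execute(program)
    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            # One row per batch timestamp, one column per output timeseries id.
            rows[msg.logical_timestamp_ms] = msg.data
        if len(rows) >= 10:
            # Stop after a handful of batches for this sketch.
            break

df = pd.DataFrame.from_dict(rows, orient='index')
df.index = pd.to_datetime(df.index, unit='ms')
print(df.head())

The column names are timeseries IDs; computation.get_metadata(tsid) can be used to map them to more readable labels.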
The reason you are not seeing the metrics in the chart is that the script calling the Python client module exits right after calling the send method. The Python client library is mainly targeted towards sending a continuous stream of metrics and was implemented to be asynchronous.
To work around this problem (most common in short-lived scripts, for example), register an atexit function to cleanly stop the datapoint-sending thread when your program exits:
import atexit
import signalfx
sfx = signalfx.SignalFx().ingest('ORG_TOKEN')
atexit.register(sfx.stop)
ERROR:root:Posting to SignalFx failed.
SSLError: hostname 'api.signalfx.com' doesn't match either of '*.signalfuse.com', 'signalfuse.com'.
Root Cause: SignalFx's API endpoints (api.signalfx.com, ingest.signalfx.com and stream.signalfx.com) have SSL SNI enabled, and the urllib3 module in Python versions prior to 2.7.8 had a bug that causes the above issue. This was fixed in later versions of Python; we recommend using Python 2.7.9 or newer when using this library.
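A quick diagnostic sketch to confirm that your interpreter's ssl module supports SNI:

from __future__ import print_function

import ssl
import sys

# ssl.HAS_SNI is available on Python 2.7.9+ and Python 3; interpreters
# without SNI support will hit the certificate mismatch shown above.
print(sys.version)
print('SNI supported: {0}'.format(getattr(ssl, 'HAS_SNI', False)))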
The protocol buffer used to transmit data through the ingest API restricts integers and longs to the range -(2**63) to (2**63)-1. long values in Python 2.x and int values in 3.x can exceed these bounds. Any value or property value less than -(2**63) or greater than (2**63)-1 will raise a ValueError exception.
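A minimal sketch of guarding values against these bounds before sending (the helper below is illustrative and not part of the library):

# Bounds imposed by the signed 64-bit integer fields in the ingest protocol buffer.
INT64_MIN = -(2 ** 63)
INT64_MAX = (2 ** 63) - 1

def check_int64(value):
    # Raise early if an integer will not fit in a signed 64-bit field.
    if not (INT64_MIN <= value <= INT64_MAX):
        raise ValueError('value {0} is outside the signed 64-bit range'.format(value))
    return value

check_int64(42)            # fine
try:
    check_int64(2 ** 63)   # out of range
except ValueError as e:
    print(e)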
Apache Software License v2. Copyright © 2014-2019 SignalFx