Add Batching Feature / Configuration to the New Relic Telemetry SDK #278
base: main
Conversation
@yamnihcg Let's remove this since it is not being used.
/**
 * An Event is ad hoc set of key-value pairs with an associated timestamp, recorded in the New Relic
 * Metric API.
 */
public final class Event implements Telemetry {
  private static final Logger logger = LoggerFactory.getLogger(Event.class);
Let's remove this since it is not being used.
response = sendBatch(batch);

// If there is an issue with a batch, stop sending batches
if ((response.getStatusCode() != 200)) {
The response-code check won't be reached if there is an exception. The underlying exception from sendPayload will bubble up to the sendBatch(one batch) method here, and then this whole method of sending batches will exit.
I think what we might need to do is create a pool of threads and have each batch be sent async on its own thread. Or, we can try/catch the exception and log that something went wrong; the rest of the batches in this list will still get sent (at least attempted). I would prefer the async approach.
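For the catch-and-log option, a minimal sketch (BatchSender and Response here are stand-ins for the SDK's sender and response types, not its real API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSendSketch {
  // Stand-ins for the SDK's batch/response types, for illustration only.
  public record Response(int statusCode) {}

  public interface BatchSender {
    Response sendBatch(String batch) throws Exception;
  }

  // Attempt every batch; log and continue when one fails instead of
  // letting the exception abort the whole loop.
  public static List<Response> sendAll(List<String> batches, BatchSender sender) {
    List<Response> responses = new ArrayList<>();
    for (String batch : batches) {
      try {
        responses.add(sender.sendBatch(batch));
      } catch (Exception e) {
        // A failed batch is logged; the remaining batches are still attempted.
        System.err.println("Failed to send batch: " + e.getMessage());
      }
    }
    return responses;
  }
}
```

The async variant would submit each `sendBatch` call to an executor instead of looping inline; the per-batch error isolation is the same idea either way.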
batches.add(singleEventBatch);
return batches;
} else {
  batches = createBatches();
return createBatches(); 😄
logger.debug("Creating Event batch.");

int currentUncompressedBatchSize = 0;
nit: line can be removed
for (MetricBatch batch : metricBatches) {
  response = sendBatch(batch);

  // If there is an issue with a batch, stop sending batches
Same as above: this line won't be reached when a response exception is thrown.
logger.debug("Creating metric batch.");
ArrayList<MetricBatch> metricBatches = new ArrayList<>();
Collection<Metric> metricsInBatch = new ArrayList<>();
nit: extra line
int currentUncompressedBatchSize = 0;

// Construct JSON for common attributes and add to uncompressed batch size
You can place the comment directly above the line; no blank line needed.
// JSON generation + calculating payload size

Metric curMetric;
while ((curMetric = this.metrics.poll()) != null) {
Ok, it looks like this flows this way and I think there is a problem:
get curMetric (assume that if this metric gets added to the batch, it will exceed the size limit)
-- uncompressed size is not over the max, skip to the next if block
-- curMetric is a Count; calculate its size and add it to the current uncompressed size. The current uncompressed size now exceeds the max... but it will be too late by the time this gets checked again?
-- curMetric is added to the batch (the batch now exceeds the size limit)
-- uncompressed size is greater than the max... create a batch (which is too big) and add it to the list.
Did I follow this incorrectly?
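If that reading is right, one fix is to check whether the next item would push the batch over the limit before adding it. A runnable sketch of that ordering, with string length standing in for the item's JSON payload size and a deliberately tiny limit:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitSketch {
  static final int MAX_UNCOMPRESSED_BATCH_SIZE = 10; // tiny limit for illustration

  // Split items into batches so no batch exceeds the size limit.
  // Key ordering: check for overflow BEFORE adding the item, not after.
  public static List<List<String>> createBatches(List<String> items) {
    List<List<String>> batches = new ArrayList<>();
    List<String> current = new ArrayList<>();
    int currentSize = 0;
    for (String item : items) {
      int itemSize = item.length(); // stand-in for the item's serialized size
      if (currentSize + itemSize > MAX_UNCOMPRESSED_BATCH_SIZE && !current.isEmpty()) {
        batches.add(current); // close the batch while it is still under the limit
        current = new ArrayList<>();
        currentSize = 0;
      }
      // Note: a single item larger than the limit still forms its own
      // oversized batch here; the real SDK would need to handle that case.
      current.add(item);
      currentSize += itemSize;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
```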
@@ -132,7 +132,8 @@ public Response send(String json, TelemetryBatch<? extends Telemetry> batch)
   private byte[] compressJson(String result) throws IOException {
     ByteArrayOutputStream compressedOutput = new ByteArrayOutputStream();
     GZIPOutputStream gzipOutputStream = new GZIPOutputStream(compressedOutput);
-    gzipOutputStream.write(result.getBytes(StandardCharsets.UTF_8));
+    gzipOutputStream.write(
+        result.getBytes(StandardCharsets.UTF_8)); // Used to get number of bytes in piece of data
remove this comment?
This is looking good...please take a look!
This PR adds a configuration to enable batching in the Telemetry SDK.
What is batching?
The Java Telemetry SDK sends Metric and Event data to New Relic using the New Relic Ingest API. The Ingest API limits how much data can be sent to New Relic in a single request: the request payload must be <= 1 MB. We implemented batching as a way to send reasonably large amounts of data (> 1 MB) without any user intervention. Batching takes the request data and splits it into smaller payloads, each <= 1 MB, so that each payload can be sent to New Relic.
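As an illustration of the constraint (a sketch only; whether the SDK checks the limit against compressed or uncompressed bytes is not confirmed here), gzip-compressing the JSON and comparing it to the 1 MB limit might look like:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class PayloadSizeSketch {
  static final int MAX_PAYLOAD_BYTES = 1_000_000; // the Ingest API's 1 MB request limit

  // Gzip the JSON payload, similar to the compressJson method shown in the diff above.
  public static byte[] compressJson(String json) {
    try {
      ByteArrayOutputStream compressedOutput = new ByteArrayOutputStream();
      try (GZIPOutputStream gzip = new GZIPOutputStream(compressedOutput)) {
        gzip.write(json.getBytes(StandardCharsets.UTF_8));
      }
      return compressedOutput.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // True if the compressed payload fits in a single Ingest API request.
  public static boolean fitsInOneRequest(String json) {
    return compressJson(json).length <= MAX_PAYLOAD_BYTES;
  }
}
```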
Enable batching in the Telemetry SDK
Batching can be turned on through constructors in the EventBuffer and the MetricBuffer. By default, batching is not enabled in the Telemetry SDK. Below are two snippets of code where batching is turned off / on. These snippets were taken from EventExample.java.
Batching: Off (by default)
Batching: On
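As a hedged sketch of the two modes (the boolean-overload constructor shape below is an assumption for illustration, not the SDK's confirmed signature):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the SDK's EventBuffer, showing only the flag.
public class EventBufferSketch {
  private final boolean splitBatch;
  private final List<String> events = new ArrayList<>();

  // Batching off: the default, matching the SDK's existing behavior.
  public EventBufferSketch() {
    this(false);
  }

  // Batching on: an extra constructor argument enables batch splitting.
  public EventBufferSketch(boolean splitBatch) {
    this.splitBatch = splitBatch;
  }

  public void addEvent(String event) {
    events.add(event);
  }

  public boolean isSplitBatchEnabled() {
    return splitBatch;
  }
}
```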
Where does the batching happen in the EventBuffer / MetricBuffer?
First, the createBatch() function is called on the buffer (refer to EventExample and CountExample). This function checks the splitBatch variable to see whether the user has turned batching on. If batching is off, a single batch is created by calling createSingleBatch(), and an ArrayList of size 1 contains this single batch. If batching is on, multiple batches are created by calling createBatches(). An interactive explanation of the batching algorithm is here.
The pseudocode for createBatches() is below.
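A hedged reconstruction of that pseudocode, based on the description above (helper names and the exact size bookkeeping are assumptions):

```
batches = new list
currentBatch = new list
currentSize = sizeOf(commonAttributesJson)   // every batch carries the common attributes
while ((item = queue.poll()) != null):
    itemSize = sizeOf(jsonFor(item))
    // check BEFORE adding, so a closed batch is always under the limit
    if currentSize + itemSize > maxUncompressedBatchSize and currentBatch is not empty:
        batches.add(newBatch(currentBatch, commonAttributes))
        currentBatch = new list
        currentSize = sizeOf(commonAttributesJson)
    currentBatch.add(item)
    currentSize += itemSize
if currentBatch is not empty:
    batches.add(newBatch(currentBatch, commonAttributes))
return batches
```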
How is data sent to New Relic?
Data is sent to New Relic through the .sendBatch() method in the EventBatchSender / MetricBatchSender.