Fix some timers. (#141)
Fix some timers and deprecate RequestLog::switchTiming

- We forgot to stop the `DataApiRequest` timer.

- When timing the posting of the Druid query, we failed to account for
thread hopping during asynchronous programming. As a result, not only
did we fail to stop the timer in the correct instance of the RequestLog
(the one used by the response-handling thread), but we also attempted
to stop a timer in a RequestLog that had since been cleared. So we
ended up with two warnings: one for attempting to stop a timer that
doesn't exist, and one for logging a timer that was never stopped.

Since there isn't a good way to time just the posting of the Druid
query, and it probably isn't necessary anyway, the timing has been
removed.
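The thread-hopping failure described above can be sketched in a few lines. This is a hypothetical, simplified model (Fili's actual RequestLog is more involved): assuming timers live in per-thread state, a stop issued from an async callback thread misses the timer started on the request thread.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Sketch (hypothetical, simplified): if timers live in a ThreadLocal, a timer
// started on the request thread is invisible to the async callback thread,
// so stopping it there hits a different, empty timer registry.
public class ThreadHopSketch {
    private static final ThreadLocal<Set<String>> TIMERS = ThreadLocal.withInitial(HashSet::new);

    static void startTiming(String name) {
        TIMERS.get().add(name);
    }

    static boolean stopTiming(String name) {
        boolean wasRunning = TIMERS.get().remove(name);
        if (!wasRunning) {
            // First warning: stopping a timer that doesn't exist on this thread.
            System.out.println("WARNING: no running timer named " + name
                    + " on thread " + Thread.currentThread().getName());
        }
        return wasRunning;
    }

    public static void main(String[] args) throws Exception {
        startTiming("PostingDruidQuery");
        // The response callback runs on a pool thread with its own ThreadLocal state.
        boolean stopped = CompletableFuture
                .supplyAsync(() -> stopTiming("PostingDruidQuery"))
                .get();
        System.out.println("stopped on callback thread: " + stopped);  // false
        // The request thread's timer is still running, producing the second
        // warning ("timer never stopped") when the RequestLog is logged.
    }
}
```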

- `RequestLog::switchTiming` works by stopping the most recent timer
and then starting the specified timer. As a result, we can fail to stop
timer A if we start and stop timer B inside a method invoked while
timer A is running. To ensure that all of your timers work correctly,
you therefore need to be cognizant of the internal workings of every
method you invoke within the timed block.
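The brittleness described above can be reproduced with a toy model. The names below are hypothetical, simplified stand-ins for RequestLog: like RequestLog, "most recent" is a single pointer rather than a stack, so a correctly paired start/stop inside a nested call still redirects it away from the outer timer.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why a switchTiming-style API is context-dependent: it stops
// whichever timer was started most recently, which a nested call can change.
public class SwitchTimingSketch {
    private static final Map<String, Boolean> RUNNING = new HashMap<>();
    private static String mostRecent;

    static void startTiming(String name) {
        RUNNING.put(name, true);
        mostRecent = name;
    }

    static void stopTiming(String name) {
        if (!Boolean.TRUE.equals(RUNNING.get(name))) {
            System.out.println("WARNING: attempted to stop a timer that isn't running: " + name);
        }
        RUNNING.put(name, false);
    }

    static void switchTiming(String nextPhase) {
        stopTiming(mostRecent);   // stops whatever was started last -- not necessarily yours
        startTiming(nextPhase);
    }

    static boolean isRunning(String name) {
        return Boolean.TRUE.equals(RUNNING.get(name));
    }

    static void nestedHelper() {
        startTiming("B");
        stopTiming("B");          // correctly paired, but mostRecent still points at "B"
    }

    public static void main(String[] args) {
        startTiming("A");
        nestedHelper();
        switchTiming("C");        // intends to stop "A", actually re-stops "B" and warns
        stopTiming("C");
        System.out.println("A still running: " + isRunning("A"));  // true: "A" is never stopped
    }
}
```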

- Therefore, in an effort to make timed code more modular, we've deprecated
the `RequestLog::switchTiming` method and replaced all uses of it
with try-finally blocks that explicitly start and stop each timer. It's
ugly, but it's also much less brittle.
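The try-finally pattern is robust precisely because each stop is lexically paired with its own start. A self-contained sketch (hypothetical stand-ins for RequestLog) shows that nested timers cannot interfere with the outer timer, and that the timer stops even when the timed block throws.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the try-finally pairing: each timer is stopped in a finally block
// tied to its own start, so nested timers and exceptions can't leak it.
public class TryFinallyTimingSketch {
    private static final Map<String, Boolean> RUNNING = new HashMap<>();

    static void startTiming(String name) {
        RUNNING.put(name, true);
    }

    static void stopTiming(String name) {
        RUNNING.put(name, false);
    }

    static boolean isRunning(String name) {
        return Boolean.TRUE.equals(RUNNING.get(name));
    }

    static void nestedWork() {
        startTiming("inner");
        try {
            // ... nested timed work ...
        } finally {
            stopTiming("inner");
        }
    }

    static void timedBlock() {
        startTiming("outer");
        try {
            nestedWork();                       // free to run its own timers
            throw new RuntimeException("boom"); // even an exception can't leak the timer
        } finally {
            stopTiming("outer");                // always paired with its own start
        }
    }

    public static void main(String[] args) {
        try {
            timedBlock();
        } catch (RuntimeException ignored) {
            // expected
        }
        System.out.println("outer running: " + isRunning("outer"));  // false
    }
}
```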
archolewa authored and cdeszaq committed Jan 23, 2017
1 parent 2ae0605 commit 351c49d
Showing 4 changed files with 105 additions and 74 deletions.
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -49,6 +49,10 @@ Current

### Changed:

- [The druid query posting timer has been removed](https://github.com/yahoo/fili/pull/141)
    * There wasn't a good way to time only the posting itself. Since the timer is probably not that
      useful, it has been removed.

- [Dimension Field Tagging and Dynamic Dimension Field Serialization](https://github.com/yahoo/fili/pull/137)
* Changed `fili-core` dimension endpoint `DimensionField` serialization strategy from hard coded static attributes to dynamic serialization based on `jackson` serializer

@@ -182,6 +186,12 @@ Current

### Deprecated:

- [`RequestLog::switchTiming` has been deprecated](https://github.com/yahoo/fili/pull/141)
    * `RequestLog::switchTiming` is very context-dependent, and therefore brittle. In particular, adding any
      additional timers inside code called by a timed block may result in the original timer not being stopped
      properly. All usages of `switchTiming` should be replaced with explicit calls to `RequestLog::startTiming`
      and `RequestLog::stopTiming`.

- [Dimension Field Tagging and Dynamic Dimension Field Serialization](https://github.com/yahoo/fili/pull/137)
* Deprecated `DimensionsServlet::getDimensionFieldListSummaryView` and `DimensionsServlet::getDimensionFieldSummaryView`
since there is no need for it anymore due to the change in serialization of `DimensionField`
@@ -297,52 +297,47 @@ public void postDruidQuery(
             DruidQuery<?> druidQuery
     ) {
         long seqNum = druidQuery.getContext().getSequenceNumber();
-        RequestLog.startTiming("PostingDruidQuery" + seqNum);
+        String entityBody;
+        RequestLog.startTiming("DruidQuerySerializationSeq" + seqNum);
         try {
-            String entityBody;
-            RequestLog.startTiming("DruidQuerySerializationSeq" + seqNum);
-            try {
-                entityBody = writer.writeValueAsString(druidQuery);
-            } catch (JsonProcessingException e) {
-                throw new IllegalStateException(e);
-            } finally {
-                RequestLog.stopTiming("DruidQuerySerializationSeq" + seqNum);
-            }
+            entityBody = writer.writeValueAsString(druidQuery);
+        } catch (JsonProcessingException e) {
+            throw new IllegalStateException(e);
+        } finally {
+            RequestLog.stopTiming("DruidQuerySerializationSeq" + seqNum);
+        }
 
-            long totalQueries = druidQuery.getContext().getNumberOfQueries();
-            String format = String.format("%%0%dd", String.valueOf(totalQueries).length());
-            String timerName;
-            AtomicLong outstanding;
+        long totalQueries = druidQuery.getContext().getNumberOfQueries();
+        String format = String.format("%%0%dd", String.valueOf(totalQueries).length());
+        String timerName;
+        AtomicLong outstanding;
 
-            if (!(druidQuery instanceof WeightEvaluationQuery)) {
-                if (context.getNumberOfOutgoing().decrementAndGet() == 0) {
-                    RequestLog.stopTiming(REQUEST_WORKFLOW_TIMER);
-                }
-                outstanding = context.getNumberOfIncoming();
-                timerName = DRUID_QUERY_TIMER + String.format(format, seqNum);
-            } else {
-                outstanding = new AtomicLong(0);
-                timerName = DRUID_WEIGHTED_QUERY_TIMER + String.format(format, seqNum);
-            }
+        if (!(druidQuery instanceof WeightEvaluationQuery)) {
+            if (context.getNumberOfOutgoing().decrementAndGet() == 0) {
+                RequestLog.stopTiming(REQUEST_WORKFLOW_TIMER);
+            }
+            outstanding = context.getNumberOfIncoming();
+            timerName = DRUID_QUERY_TIMER + String.format(format, seqNum);
+        } else {
+            outstanding = new AtomicLong(0);
+            timerName = DRUID_WEIGHTED_QUERY_TIMER + String.format(format, seqNum);
+        }
 
-            BoundRequestBuilder requestBuilder = webClient.preparePost(serviceConfig.getUrl())
-                    .setBody(entityBody)
-                    .addHeader("Content-Type", "application/json");
+        BoundRequestBuilder requestBuilder = webClient.preparePost(serviceConfig.getUrl())
+                .setBody(entityBody)
+                .addHeader("Content-Type", "application/json");
 
-            headersToAppend.get().forEach(requestBuilder::addHeader);
+        headersToAppend.get().forEach(requestBuilder::addHeader);
 
-            LOG.debug("druid json request: {}", entityBody);
-            sendRequest(
-                    success,
-                    error,
-                    failure,
-                    requestBuilder,
-                    timerName,
-                    outstanding
-            );
-        } finally {
-            RequestLog.stopTiming("PostingDruidQuery" + seqNum);
-        }
+        LOG.debug("druid json request: {}", entityBody);
+        sendRequest(
+                success,
+                error,
+                failure,
+                requestBuilder,
+                timerName,
+                outstanding
+        );
     }

@Override
@@ -282,7 +282,12 @@ public static void startTiming(String timePhaseName) {
      * Time is accumulated if the stopwatch is already registered.
      *
      * @param nextPhase the name of the stopwatch to be started
+     *
+     * @deprecated This method is too dependent on context that can too easily be changed by internal method calls.
+     * Each timer should instead be explicitly started by {@link RequestLog#startTiming(String)} and stopped by
+     * {@link RequestLog#stopTiming(String)}.
      */
+    @Deprecated
     public static void switchTiming(String nextPhase) {
         stopMostRecentTimer();
         startTiming(nextPhase);
@@ -346,41 +346,61 @@ public void getData(
     ) {
         try {
             RequestLog.startTiming("DataApiRequest");
-            DataApiRequest apiRequest = new DataApiRequest(
-                    tableName,
-                    timeGrain,
-                    dimensions,
-                    metrics,
-                    intervals,
-                    filters,
-                    havings,
-                    sorts,
-                    count,
-                    topN,
-                    format,
-                    timeZone,
-                    asyncAfter,
-                    perPage,
-                    page,
-                    uriInfo,
-                    this
-            );
+            DataApiRequest apiRequest;
+            try {
+                apiRequest = new DataApiRequest(
+                        tableName,
+                        timeGrain,
+                        dimensions,
+                        metrics,
+                        intervals,
+                        filters,
+                        havings,
+                        sorts,
+                        count,
+                        topN,
+                        format,
+                        timeZone,
+                        asyncAfter,
+                        perPage,
+                        page,
+                        uriInfo,
+                        this
+                );
+            } finally {
+                RequestLog.stopTiming("DataApiRequest");
+            }
 
             if (requestMapper != null) {
                 apiRequest = (DataApiRequest) requestMapper.apply(apiRequest, containerRequestContext);
             }
 
-            RequestLog.switchTiming("DruidQueryMerge");
             // Build the query template
-            TemplateDruidQuery templateQuery = templateDruidQueryMerger.merge(apiRequest);
+            RequestLog.startTiming("DruidQueryMerge");
+            TemplateDruidQuery templateQuery;
+            try {
+                templateQuery = templateDruidQueryMerger.merge(apiRequest);
+            } finally {
+                RequestLog.stopTiming("DruidQueryMerge");
+            }
 
-            RequestLog.switchTiming("DruidQueryBuilder");
             // Select the performance slice and build the final query
-            DruidAggregationQuery<?> druidQuery = druidQueryBuilder.buildQuery(apiRequest, templateQuery);
+            DruidAggregationQuery<?> druidQuery;
+            RequestLog.startTiming("DruidQueryBuilder");
+            try {
+                druidQuery = druidQueryBuilder.buildQuery(apiRequest, templateQuery);
+            } finally {
+                RequestLog.stopTiming("DruidQueryBuilder");
+            }
 
-            RequestLog.switchTiming("BuildRequestContext");
             // Accumulate data needed for request processing workflow
-            RequestContext context = new RequestContext(containerRequestContext, readCache);
+            RequestLog.startTiming("BuildRequestContext");
+            RequestContext context;
+            try {
+                context = new RequestContext(containerRequestContext, readCache);
+            } finally {
+                RequestLog.stopTiming("BuildRequestContext");
+            }
 
             //An instance to prepare the Response with different set of arguments
             HttpResponseMaker httpResponseMaker = new HttpResponseMaker(
@@ -408,27 +428,28 @@ public void getData(
                     httpResponseMaker
             );
 
-            RequestLog.switchTiming("logRequestMetrics");
-            logRequestMetrics(apiRequest, readCache, druidQuery);
-            RequestLog.record(new DruidFilterInfo(apiRequest.getFilter()));
-            RequestLog.switchTiming(REQUEST_WORKFLOW_TIMER);
+            RequestLog.startTiming("logRequestMetrics");
+            try {
+                logRequestMetrics(apiRequest, readCache, druidQuery);
+                RequestLog.record(new DruidFilterInfo(apiRequest.getFilter()));
+            } finally {
+                RequestLog.stopTiming("logRequestMetrics");
+            }
 
             // Process the request
+            RequestLog.startTiming(REQUEST_WORKFLOW_TIMER);
             boolean complete = dataRequestHandler.handleRequest(context, apiRequest, druidQuery, response);
             if (! complete) {
                 throw new IllegalStateException("No request handler accepted request.");
             }
         } catch (RequestValidationException e) {
             LOG.debug(e.getMessage(), e);
-            RequestLog.stopMostRecentTimer();
             asyncResponse.resume(RequestHandlerUtils.makeErrorResponse(e.getStatus(), e, writer));
         } catch (NoMatchFoundException e) {
             LOG.info("Exception processing request", e);
-            RequestLog.stopMostRecentTimer();
             asyncResponse.resume(RequestHandlerUtils.makeErrorResponse(INTERNAL_SERVER_ERROR, e, writer));
         } catch (Error | Exception e) {
             LOG.info("Exception processing request", e);
-            RequestLog.stopMostRecentTimer();
             asyncResponse.resume(RequestHandlerUtils.makeErrorResponse(BAD_REQUEST, e, writer));
         }
     }
