Aggregations: deprecate pre_zone and post_zone in date_histogram #9722

Closed · wants to merge 5 commits
4 changes: 4 additions & 0 deletions docs/reference/migration/migrate_1_5.asciidoc
@@ -10,3 +10,7 @@ your application from Elasticsearch 1.x to Elasticsearch 1.5.
The `date_histogram` aggregation now supports a simplified `offset` option that replaces the previous `pre_offset` and
`post_offset` options, which are deprecated in 1.5. Instead of having to specify two separate offset shifts of the underlying buckets, the `offset` option
moves the bucket boundaries in a positive or negative direction depending on its argument.

Also for `date_histogram`, the `pre_zone` and `post_zone` options and the `pre_zone_adjust_large_interval` parameter
are deprecated in 1.5 and replaced by the already existing `time_zone` option. The behavior of `time_zone` is equivalent
to that of the former `pre_zone` option.
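
As an illustration, the `time_zone` option is set directly on the `date_histogram` aggregation where `pre_zone` used to go (the aggregation and field names below are placeholders, not part of this change):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "articles_over_time": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "time_zone": "+01:00"
      }
    }
  }
}
--------------------------------------------------

A request that previously specified `"pre_zone": "+01:00"` behaves identically with `"time_zone": "+01:00"`.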
@@ -48,9 +48,10 @@ See <<time-units>> for accepted abbreviations.
==== Time Zone

By default, times are stored as UTC milliseconds since the epoch. Thus, all computation and "bucketing" / "rounding" is
done on UTC. It is possible to provide a time zone (both pre rounding, and post rounding) value, which will cause all
computations to take the relevant zone into account. The time returned for each bucket/entry is milliseconds since the
epoch of the provided time zone.
done on UTC. It is possible to provide a time zone value, which will cause all computations to take the relevant zone
into account. The time returned for each bucket/entry is milliseconds since the epoch of the provided time zone.

deprecated[1.5.0, `pre_zone`, `post_zone` are replaced by `time_zone`]

The parameters are `pre_zone` (pre rounding based on interval) and `post_zone` (post rounding based on interval). The
`time_zone` parameter simply sets the `pre_zone` parameter. By default, those are set to `UTC`.
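
Per the parser changes in this pull request, `time_zone` is accepted either as a string offset or as a numeric hour offset; a minimal sketch (the aggregation and field names are placeholders):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "time_zone": "-02:00"
      }
    }
  }
}
--------------------------------------------------

An equivalent numeric form is `"time_zone": -2`, which the parser resolves via `DateTimeZone.forOffsetHours`.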
@@ -67,6 +68,8 @@ UTC: `2012-04-01T04:00:00Z`. Note, we are consistent in the results, returning t

`post_zone` simply takes the result, and adds the relevant offset.

deprecated[1.5.0, `pre_zone_adjust_large_interval` will be removed]

Sometimes, we want to apply the same conversion to UTC we did above for hour also for day (and up) intervals. We can
set `pre_zone_adjust_large_interval` to `true`, which will apply the same conversion done for hour interval in the
example, to day and above intervals (it can be set regardless of the interval, but only kick in when using day and
@@ -81,6 +84,8 @@ or that monthly buckets go from the 10th of the month to the 10th of the next mo
The `offset` option accepts positive or negative time durations like "1h" for an hour or "1M" for a Month. See <<time-units>> for more
possible time duration options.

deprecated[1.5.0, `pre_offset` and `post_offset` are deprecated and replaced by `offset`]
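
For example, a single `offset` replaces the deprecated `pre_offset`/`post_offset` pair; a sketch of a request that shifts daily bucket boundaries by one hour (the aggregation and field names are placeholders):

[source,js]
--------------------------------------------------
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "offset": "1h"
      }
    }
  }
}
--------------------------------------------------

Internally, the parser sets `post_offset` to the given value and `pre_offset` to its negation, so the single option moves the bucket boundaries as described above.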

==== Keys

Since internally, dates are represented as 64bit numbers, these numbers are returned as the bucket keys (each key
Expand Down
@@ -87,24 +87,39 @@ public DateHistogramBuilder minDocCount(long minDocCount) {
}

/**
* Set the timezone in which to translate dates before computing buckets.
* Set the time zone in which to translate dates before computing buckets.
* @deprecated use timeZone() instead
*/
@Deprecated
public DateHistogramBuilder preZone(String preZone) {
this.preZone = preZone;
return this;
}

/**
* Set the timezone in which to translate dates after having computed buckets.
* Set the time zone in which to translate dates after having computed buckets.
* @deprecated this option is going to be removed in 2.0 releases
*/
@Deprecated
public DateHistogramBuilder postZone(String postZone) {
this.postZone = postZone;
return this;
}

/**
* Set the time zone in which to translate dates before computing buckets.
*/
public DateHistogramBuilder timeZone(String timeZone) {
// currently this is still equivalent to using pre_zone; will change in a future version
this.preZone = timeZone;
return this;
}

/**
* Set whether to adjust large intervals, when using days or larger intervals.
* @deprecated this option is going to be removed in 2.0 releases
*/
@Deprecated
public DateHistogramBuilder preZoneAdjustLargeInterval(boolean preZoneAdjustLargeInterval) {
this.preZoneAdjustLargeInterval = preZoneAdjustLargeInterval;
return this;
@@ -122,7 +137,7 @@ public DateHistogramBuilder preOffset(String preOffset) {

/**
* Set the offset to apply after having computed buckets.
* @deprecated the preOffset option will be replaced by offset in future version.
* @deprecated the postOffset option will be replaced by offset in future version.
*/
@Deprecated
public DateHistogramBuilder postOffset(String postOffset) {
@@ -44,8 +44,14 @@ public class DateHistogramParser implements Aggregator.Parser {

static final ParseField EXTENDED_BOUNDS = new ParseField("extended_bounds");
static final ParseField OFFSET = new ParseField("offset");
static final ParseField PRE_OFFSET = new ParseField("", "pre_offset");
static final ParseField POST_OFFSET = new ParseField("", "post_offset");
static final ParseField PRE_OFFSET = new ParseField("pre_offset").withAllDeprecated("offset");
static final ParseField POST_OFFSET = new ParseField("post_offset").withAllDeprecated("offset");
static final ParseField PRE_ZONE = new ParseField("pre_zone").withAllDeprecated("time_zone");
static final ParseField POST_ZONE = new ParseField("post_zone").withAllDeprecated("time_zone");
static final ParseField TIME_ZONE = new ParseField("time_zone");
static final ParseField INTERVAL = new ParseField("interval");
static final ParseField PRE_ZONE_ADJUST = new ParseField("pre_zone_adjust_large_interval").withAllDeprecated("");


private final ImmutableMap<String, DateTimeUnit> dateFieldUnits;

@@ -102,11 +108,11 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se
} else if (vsParser.token(currentFieldName, token, parser)) {
continue;
} else if (token == XContentParser.Token.VALUE_STRING) {
if ("time_zone".equals(currentFieldName) || "timeZone".equals(currentFieldName)) {
if (TIME_ZONE.match(currentFieldName)) {
preZone = DateMathParser.parseZone(parser.text());
} else if ("pre_zone".equals(currentFieldName) || "preZone".equals(currentFieldName)) {
} else if (PRE_ZONE.match(currentFieldName)) {
preZone = DateMathParser.parseZone(parser.text());
} else if ("post_zone".equals(currentFieldName) || "postZone".equals(currentFieldName)) {
} else if (POST_ZONE.match(currentFieldName)) {
postZone = DateMathParser.parseZone(parser.text());
} else if (PRE_OFFSET.match(currentFieldName)) {
preOffset = parseOffset(parser.text());
@@ -115,7 +121,7 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se
} else if (OFFSET.match(currentFieldName)) {
postOffset = parseOffset(parser.text());
preOffset = -postOffset;
} else if ("interval".equals(currentFieldName)) {
} else if (INTERVAL.match(currentFieldName)) {
interval = parser.text();
} else {
throw new SearchParseException(context, "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "].");
@@ -131,11 +137,11 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se
} else if (token == XContentParser.Token.VALUE_NUMBER) {
if ("min_doc_count".equals(currentFieldName) || "minDocCount".equals(currentFieldName)) {
minDocCount = parser.longValue();
} else if ("time_zone".equals(currentFieldName) || "timeZone".equals(currentFieldName)) {
} else if (TIME_ZONE.match(currentFieldName)) {
preZone = DateTimeZone.forOffsetHours(parser.intValue());
} else if ("pre_zone".equals(currentFieldName) || "preZone".equals(currentFieldName)) {
} else if (PRE_ZONE.match(currentFieldName)) {
preZone = DateTimeZone.forOffsetHours(parser.intValue());
} else if ("post_zone".equals(currentFieldName) || "postZone".equals(currentFieldName)) {
} else if (POST_ZONE.match(currentFieldName)) {
postZone = DateTimeZone.forOffsetHours(parser.intValue());
} else {
throw new SearchParseException(context, "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "].");
@@ -1018,7 +1018,7 @@ public void emptyAggregation() throws Exception {
}

@Test
public void singleValue_WithPreZone() throws Exception {
public void singleValue_WithTimeZone() throws Exception {
prepareCreate("idx2").addMapping("type", "date", "type=date").execute().actionGet();
IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];
DateTime date = date("2014-03-11T00:00:00+00:00");
@@ -1032,7 +1032,7 @@ public void singleValue_WithPreZone() throws Exception {
.setQuery(matchAllQuery())
.addAggregation(dateHistogram("date_histo")
.field("date")
.preZone("-2:00")
.timeZone("-2:00")
.interval(DateHistogram.Interval.DAY)
.format("yyyy-MM-dd"))
.execute().actionGet();
@@ -1067,7 +1067,7 @@ public void singleValue_WithPreZone_WithAadjustLargeInterval() throws Exception
.setQuery(matchAllQuery())
.addAggregation(dateHistogram("date_histo")
.field("date")
.preZone("-2:00")
.timeZone("-2:00")
.interval(DateHistogram.Interval.DAY)
.preZoneAdjustLargeInterval(true)
.format("yyyy-MM-dd'T'HH:mm:ss"))
@@ -1233,7 +1233,7 @@ public void singleValue_WithMultipleDateFormatsFromMapping() throws Exception {

public void testIssue6965() {
SearchResponse response = client().prepareSearch("idx")
.addAggregation(dateHistogram("histo").field("date").preZone("+01:00").interval(DateHistogram.Interval.MONTH).minDocCount(0))
.addAggregation(dateHistogram("histo").field("date").timeZone("+01:00").interval(DateHistogram.Interval.MONTH).minDocCount(0))
.execute().actionGet();

assertSearchResponse(response);