feat(http/kafka-logger): support to log response body #5550

Merged · 11 commits · Nov 26, 2021
12 changes: 12 additions & 0 deletions apisix/plugins/http-logger.lua
@@ -44,6 +44,7 @@ local schema = {
        inactive_timeout = {type = "integer", minimum = 1, default = 5},
        batch_max_size = {type = "integer", minimum = 1, default = 1000},
        include_req_body = {type = "boolean", default = false},
        include_resp_body = {type = "boolean", default = false},
(A review conversation on this line was marked as resolved by spacewander.)
        concat_method = {type = "string", default = "json",
                         enum = {"json", "new_line"}}
    },
@@ -162,6 +163,17 @@ local function remove_stale_objects(premature)
end


function _M.body_filter(conf, ctx)
    if conf.include_resp_body then

Review comment (Member): Let's refactor this part into a method in log-util. (A hedged sketch of such a helper follows this file's diff.)

        local final_body = core.response.hold_body_chunk(ctx, true)
        if not final_body then
            return
        end
        ctx.resp_body = final_body
    end
end


function _M.log(conf, ctx)
    local metadata = plugin.plugin_metadata(plugin_name)
    core.log.info("metadata: ", core.json.delay_encode(metadata))
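Following up on the review comment above, a minimal sketch of how this body_filter logic might be factored into a shared helper in apisix/utils/log-util.lua. The helper name collect_body is an assumption for illustration, not necessarily what was merged:

```lua
-- Hypothetical helper in apisix/utils/log-util.lua (name and placement assumed).
-- It buffers the response body chunks and stashes the complete body on the
-- request context so the log phase can pick it up later.
local core = require("apisix.core")

local _M = {}

function _M.collect_body(conf, ctx)
    if conf.include_resp_body then
        -- hold_body_chunk returns the full body only once the last chunk arrives
        local final_body = core.response.hold_body_chunk(ctx, true)
        if not final_body then
            return
        end
        ctx.resp_body = final_body
    end
end

return _M
```

With such a helper, the body_filter functions in both plugins reduce to a single delegating call; a sketch of that call site follows the kafka-logger diff below.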
12 changes: 12 additions & 0 deletions apisix/plugins/kafka-logger.lua
@@ -75,6 +75,7 @@ local schema = {
        inactive_timeout = {type = "integer", minimum = 1, default = 5},
        batch_max_size = {type = "integer", minimum = 1, default = 1000},
        include_req_body = {type = "boolean", default = false},
        include_resp_body = {type = "boolean", default = false},
        include_req_body_expr = {
            type = "array",
            minItems = 1,
@@ -191,6 +192,17 @@ local function send_kafka_data(conf, log_message, prod)
end


function _M.body_filter(conf, ctx)
    if conf.include_resp_body then
        local final_body = core.response.hold_body_chunk(ctx, true)
        if not final_body then
            return
        end
        ctx.resp_body = final_body
    end
end


function _M.log(conf, ctx)
    local entry
    if conf.meta_format == "origin" then
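As referenced above, a hedged sketch of the call site once the duplicated body_filter logic lives in log-util; collect_body is the same illustrative name as before:

```lua
-- Hypothetical body_filter in apisix/plugins/http-logger.lua and
-- apisix/plugins/kafka-logger.lua after the suggested refactoring.
local log_util = require("apisix.utils.log-util")

local _M = {}  -- stands in for each plugin's existing module table

function _M.body_filter(conf, ctx)
    log_util.collect_body(conf, ctx)
end

return _M
```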
7 changes: 7 additions & 0 deletions apisix/utils/log-util.lua
@@ -154,6 +154,13 @@ local function get_full_log(ngx, conf)
end
end

    if conf.include_resp_body then
        local body = ctx.resp_body
        if body then
            log.response.body = body
        end
    end

    return log
end
_M.get_full_log = get_full_log
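For orientation, a sketch of roughly where the captured body lands in the entry built by get_full_log. Only response.body is added by this change; the other fields approximate the existing log shape and are not an exact schema:

```lua
-- Illustrative full-log entry with include_req_body and include_resp_body
-- enabled; field values are made up, and fields other than response.body
-- already existed before this PR.
local entry = {
    request = {
        method = "POST",
        uri = "/hello",
        body = '{"sample_payload":"hello"}',   -- only when include_req_body = true
    },
    response = {
        status = 200,
        body = "hello world\n",                -- only when include_resp_body = true
    },
}
```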
1 change: 1 addition & 0 deletions docs/en/latest/plugins/http-logger.md
@@ -50,6 +50,7 @@ This will provide the ability to send Log data requests as JSON objects to Monit
| max_retry_count | integer | optional | 0 | [0,...] | Maximum number of retries before removing from the processing pipe line. |
| retry_delay | integer | optional | 1 | [0,...] | Number of seconds the process execution should be delayed if the execution fails. |
| include_req_body | boolean | optional | false | [false, true] | Whether to include the request body. false: indicates that the requested body is not included; true: indicates that the requested body is included. |
| include_resp_body | boolean | optional | false | [false, true] | Whether to include the response body. The response body is included only when this option is set to `true`. |

Review comment (Member): Missing doc of include_resp_body_expr.

| concat_method | string | optional | "json" | ["json", "new_line"] | Enum type: `json` and `new_line`. **json**: use `json.encode` for all pending logs. **new_line**: use `json.encode` for each pending log and concat them with "\n" line. |

## How To Enable
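As a complement to the table above, a minimal hedged example of enabling the new option. It is written as the Lua conf table the http-logger plugin receives rather than a full route object, and the log endpoint URI is a placeholder:

```lua
-- Hypothetical http-logger configuration for a route; all values are
-- illustrative and the uri points at a placeholder log collector.
local http_logger_conf = {
    uri = "http://127.0.0.1:1980/log",
    batch_max_size = 1,
    include_req_body = true,    -- also log the request body
    include_resp_body = true,   -- new option from this PR: log the response body
}
```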
1 change: 1 addition & 0 deletions docs/en/latest/plugins/kafka-logger.md
@@ -57,6 +57,7 @@ For more info on Batch-Processor in Apache APISIX please refer.
| max_retry_count | integer | optional | 0 | [0,...] | Maximum number of retries before removing from the processing pipe line. |
| retry_delay | integer | optional | 1 | [0,...] | Number of seconds the process execution should be delayed if the execution fails. |
| include_req_body | boolean | optional | false | [false, true] | Whether to include the request body. false: indicates that the requested body is not included; true: indicates that the requested body is included. |
| include_resp_body | boolean | optional | false | [false, true] | Whether to include the response body. The response body is included only when this option is set to `true`. |
| include_req_body_expr | array | optional | | | Whether to logging request body, based on [lua-resty-expr](https://github.com/api7/lua-resty-expr), this option require to turn on `include_req_body` option. |
| cluster_name | integer | optional | 1 | [0,...] | the name of the cluster. When there are two or more kafka clusters, you can specify different names. And this only works with async producer_type.|

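Since the include_req_body_expr option above is based on lua-resty-expr, a hedged sketch of how such a rule is built and evaluated. The rule below (only log bodies of POST requests) is an arbitrary illustration, and the resty.expr.v1 module path and new/eval calls follow the lua-resty-expr README; treat the specifics as assumptions:

```lua
-- Illustrative lua-resty-expr rule: log the request body only for POST requests.
local expr = require("resty.expr.v1")

local rule, err = expr.new({
    {"request_method", "==", "POST"},
})
if not rule then
    error("bad expression: " .. err)
end

-- Inside APISIX the variables would come from the request context (ctx.var);
-- here a plain table stands in for them.
local should_log = rule:eval({request_method = "POST"})
```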
1 change: 1 addition & 0 deletions docs/zh/latest/plugins/http-logger.md
@@ -50,6 +50,7 @@ title: http-logger
| max_retry_count | integer | 可选 | 0 | [0,...] | 从处理管道中移除之前的最大重试次数。 |
| retry_delay | integer | 可选 | 1 | [0,...] | 如果执行失败,则应延迟执行流程的秒数。 |
| include_req_body | boolean | 可选 | false | [false, true] | 是否包括请求 body。false: 表示不包含请求的 body ; true: 表示包含请求的 body 。 |
| include_resp_body | boolean | 可选 | false | [false, true] | 是否包括响应体。当该选项为 `true` 时,日志中才会包含响应体。 |
| concat_method | string | 可选 | "json" | ["json", "new_line"] | 枚举类型: `json`、`new_line`。**json**: 对所有待发日志使用 `json.encode` 编码。**new_line**: 对每一条待发日志单独使用 `json.encode` 编码并使用 "\n" 连接起来。 |

## 如何开启
1 change: 1 addition & 0 deletions docs/zh/latest/plugins/kafka-logger.md
@@ -57,6 +57,7 @@ title: kafka-logger
| max_retry_count | integer | 可选 | 0 | [0,...] | 从处理管道中移除之前的最大重试次数。 |
| retry_delay | integer | 可选 | 1 | [0,...] | 如果执行失败,则应延迟执行流程的秒数。 |
| include_req_body | boolean | 可选 | false | [false, true] | 是否包括请求 body。false: 表示不包含请求的 body ; true: 表示包含请求的 body 。|

Review comment (Member): 注意:如果请求 body 没办法完全放在内存中,由于 Nginx 的限制,我们没有办法把它记录下来。 ("Note: if the request body cannot be held entirely in memory, we cannot log it, due to Nginx limitations.") This part is overridden.

Review comment (Member, Author): Sorry. I think there is the same limitation in the http-logger plugin. Right?

Review comment (Member): Yes.

| include_resp_body | boolean | 可选 | false | [false, true] | 是否包括响应体。当该选项为 `true` 时,日志中才会包含响应体。 |
| include_req_body_expr | array | 可选 | | | 是否采集请求body, 基于[lua-resty-expr](https://github.com/api7/lua-resty-expr)。 该选项需要开启 `include_req_body`|
| cluster_name | integer | 可选 | 1 | [0,...] | kafka 集群的名称。当有两个或多个 kafka 集群时,可以指定不同的名称。只适用于 producer_type 是 async 模式。|

71 changes: 70 additions & 1 deletion t/plugin/http-logger-json.t
@@ -42,7 +42,7 @@ run_tests;

__DATA__

=== TEST 1: json body
=== TEST 1: json body with request_body
--- apisix_yaml
routes:
  -
@@ -62,3 +62,72 @@ POST /hello
{"sample_payload":"hello"}
--- error_log
"body":"{\"sample_payload\":\"hello\"}"



=== TEST 2: json body with response_body
--- apisix_yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
    plugins:
      http-logger:
        batch_max_size: 1
        uri: http://127.0.0.1:1980/log
        include_resp_body: true
#END
--- request
POST /hello
{"sample_payload":"hello"}
--- error_log
"response":{"body":"hello world\n"



=== TEST 3: json body with request_body and response_body
--- apisix_yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
    plugins:
      http-logger:
        batch_max_size: 1
        uri: http://127.0.0.1:1980/log
        include_req_body: true
        include_resp_body: true
#END
--- request
POST /hello
{"sample_payload":"hello"}
--- error_log eval
qr/(.*"response":\{.*"body":"hello world\\n".*|.*\{\\\"sample_payload\\\":\\\"hello\\\"\}.*){2}/



=== TEST 4: json body without request_body or response_body
--- apisix_yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
    plugins:
      http-logger:
        batch_max_size: 1
        uri: http://127.0.0.1:1980/log
#END
--- request
POST /hello
{"sample_payload":"hello"}
--- error_log eval
qr/(.*"response":\{.*"body":"hello world\\n".*|.*\{\\\"sample_payload\\\":\\\"hello\\\"\}.*){0}/