v3.1 Output
The stdout log messages are filtered by log level: any message with a level less than "INFO" is not shown. Also, only the messages falling into one of the following categories are passed to stdout:
- Messages
- Error Messages
- Periodic Metrics Snapshots Per Load Job
- Total Metrics Per Load Job
The periodic/total metrics output depends on the load job configuration.
If more than one load type is configured for the load job (the mixed or weighted case), then the metrics are formatted as a table like the one below:
2016-11-18T10:55:12,041 I BasicLoadMonitor logCtxShutDownHook 2016.11.18.10.53.48.375 metrics:
______________________________________________________________________________________________________________________
Load | Concur| Driver| Count | Job | TP [op/s] | | BW [MB/s] |Latency [us]|Duration [us]
Type | rency | Count |-------------------| Time |-----------------| Size |-------------|------------|-------------
| | | Success |Failed| [s] | Mean | Last | | Mean | Last | Mean | Mean
------|-------|-------|------------|------|-------|--------|--------|--------|------|------|------------|-------------
CREATE| 10| 1| 367358| 0| 0|0.0 |5402.411| 3.503GB|0.0 |52.757| 1997| 2081
READ| 20| 1| 543398| 0| 0|0.0 |7861.061| 5.182GB|0.0 |76.768| 2414| 2434
----------------------------------------------------------------------------------------------------------------------
Here 2016.11.18.10.53.48.375 is the load job name.
If only one load type is configured for the load job (the default behavior), then the metrics are formatted in a manner similar to the 2.5.x versions:
2016-11-18T16:18:50,532 I BasicLoadMonitor 2016.11.18.16.18.27.786 2016.11.18.16.18.27.786
CREATE-1x1: n=(11518/0); t[s]=(20.0072/18.0382); size=(11.25GB); TP[op/s]=(575.691/566.195); BW[MB/s]=(575.69/566.19); dur[us]=(1564/446/9449); lat[us]=(267/37/7442)
The record pattern is:
<LOAD_TYPE>-<CONCURRENCY_LEVEL>x<DRIVER_COUNT>:
n=(<COUNT_SUCCESS>/<COUNT_FAIL>);
t[s]=(<ELAPSED_TIME>/<OPERATIONS_DURATION_SUM>);
size=(<SIZE_TRANSFERRED>);
TP[op/s]=(<RATE_MEAN>/<RATE_LAST>);
BW[MB/s]=(<BYTE_RATE_MEAN>/<BYTE_RATE_LAST>);
dur[us]=(<DURATION_AVG>/<DURATION_MIN>/<DURATION_MAX>);
lat[us]=(<LATENCY_AVG>/<LATENCY_MIN>/<LATENCY_MAX>)
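For illustration, a record in this pattern can be pulled apart with a small script. The Python sketch below parses the sample CREATE-1x1 line shown earlier; the regular expression covers only the first few fields and is not part of Mongoose itself.

```python
import re

# The sample summary record from the log output above
record = ("CREATE-1x1: n=(11518/0); t[s]=(20.0072/18.0382); size=(11.25GB); "
          "TP[op/s]=(575.691/566.195); BW[MB/s]=(575.69/566.19); "
          "dur[us]=(1564/446/9449); lat[us]=(267/37/7442)")

# Matches <LOAD_TYPE>-<CONCURRENCY_LEVEL>x<DRIVER_COUNT>, the success/fail
# counts and the elapsed-time/duration-sum pair; later fields are analogous
m = re.match(
    r"(?P<load_type>\w+)-(?P<concurrency>\d+)x(?P<drivers>\d+): "
    r"n=\((?P<count_success>\d+)/(?P<count_fail>\d+)\); "
    r"t\[s\]=\((?P<elapsed>[\d.]+)/(?P<dur_sum>[\d.]+)\)",
    record,
)
print(m.group("load_type"), m.group("count_success"), m.group("elapsed"))
```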
- The equation (CONCURRENCY_LEVEL * DRIVER_COUNT * ELAPSED_TIME / OPERATIONS_DURATION_SUM) may be used as an efficiency estimate.
- Mean rates are calculated as the current total count of items/bytes divided by the current elapsed time.
- Last rates are calculated as an exponentially decaying moving average, where "t" is the configured load-metrics-period parameter.
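The figures above can be related with a short Python sketch using the values from the sample CREATE-1x1 record. The decaying-average helper is a generic exponentially decaying moving average included only to illustrate the idea; it is not Mongoose's exact smoothing, and the 10 s period is assumed for illustration.

```python
import math

# Values taken from the sample record above:
# CREATE-1x1: n=(11518/0); t[s]=(20.0072/18.0382); ...
concurrency_level = 1
driver_count = 1
count_success = 11518
elapsed_time_s = 20.0072      # <ELAPSED_TIME>
op_duration_sum_s = 18.0382   # <OPERATIONS_DURATION_SUM>

# Efficiency estimate from the equation above
efficiency = concurrency_level * driver_count * elapsed_time_s / op_duration_sum_s

# Mean rate: current total count divided by current elapsed time
rate_mean = count_success / elapsed_time_s  # close to the 575.691 op/s in the record

# "Last" rate: a generic exponentially decaying moving average over periodic
# instantaneous rate samples; t_s plays the role of load-metrics-period
def decaying_avg(samples, period_s, t_s):
    alpha = 1.0 - math.exp(-period_s / t_s)  # smoothing factor per period
    avg = samples[0]
    for s in samples[1:]:
        avg += alpha * (s - avg)
    return avg

print(f"efficiency ~ {efficiency:.3f}, TP mean ~ {rate_mean:.2f} op/s")
```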
Note:
The metrics are differentiated by load type, so a user will see separate metrics for each load type used in a mixed load job.
File: config.log
File: messages.log
File: 3rdparty.log
File: errors.log
Different components of the application log error messages into the errors.log file. Also, both storage drivers and load monitors log the failed I/O tasks with the "ERR" log marker. A load monitor logs the results with a status other than "SUCCESS" at the "DEBUG" level (not visible in the stdout, appears in the log file only). The load monitor's record format:
[endpoint: <STORAGE_NODE_ADDR>, ]
[client: <STORAGE_DRIVER_ADDR>, ]
item: <ITEM_NAME_OR_PATH>,
operation: <IO_TYPE>,
status: <IO_STATUS>
A storage driver logs the results with a status other than "SUCCESS" at the "WARN" level (appears both in the stdout and in the log file).
Note:
If the load-metrics-precondition flag is set to true, the metrics are not persisted to files (they are printed to the console only).
By default, the periodic metrics dump occurs every 10 seconds and the metrics data is written to the CSV file metrics.csv
Field Name | Description |
---|---|
DateTimeISO8601 | Timestamp in the ISO8601 format |
TypeLoad | Load type (CREATE/READ/...) |
Concurrency | The concurrency level |
DriverCount | Count of the storage drivers used (may be more than 1 in the distributed mode) |
CountSucc | Total successful I/O tasks count |
CountFail | Total failed I/O tasks count |
Size | Total transferred byte count |
JobDuration[s] | Total job duration |
DurationSum[s] | Total sum of the I/O tasks durations |
TPAvg[op/s] | Total average throughput |
TPLast[op/s] | Last final moving average throughput |
BWAvg[MB/s] | Total average bandwidth |
BWLast[MB/s] | Last final moving average bandwidth |
DurationAvg[us] | Total average I/O tasks duration |
DurationMin[us] | Minimum I/O task duration |
DurationLoQ[us] | Low quartile of the I/O tasks duration distribution |
DurationMed[us] | Median of the I/O tasks duration distribution |
DurationHiQ[us] | High quartile of the I/O tasks duration distribution |
DurationMax[us] | Maximum I/O task duration |
LatencyAvg[us] | Total average I/O tasks latency |
LatencyMin[us] | Minimum I/O task latency |
LatencyLoQ[us] | Low quartile of the I/O tasks latency distribution |
LatencyMed[us] | Median of the I/O tasks latency distribution |
LatencyHiQ[us] | High quartile of the I/O tasks latency distribution |
LatencyMax[us] | Maximum I/O task latency |
The total metrics are accounted at the end of each load job and the results are written to the CSV file metrics.total.csv. The layout is the same as for the periodic metrics snapshots.
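Such CSV files can be post-processed with any standard CSV reader. The Python sketch below assumes a header row with the documented column names; for brevity only a few of the columns listed above are shown, with values mirroring the sample record earlier.

```python
import csv
import io

# In-memory stand-in for metrics.csv, truncated to a handful of the
# documented columns (a real file carries the full set listed above)
sample = io.StringIO(
    "DateTimeISO8601,TypeLoad,CountSucc,CountFail,TPAvg[op/s]\n"
    "2016-11-18T16:18:50,CREATE,11518,0,575.691\n"
)

for row in csv.DictReader(sample):
    attempts = int(row["CountSucc"]) + int(row["CountFail"])
    ratio = int(row["CountSucc"]) / attempts if attempts else float("nan")
    print(row["TypeLoad"], "success ratio:", ratio)
```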
TODO
The I/O trace records are generated for each I/O operation performed.
Note:
By default, the I/O trace logging is disabled. To enable it, modify the config/logging.json file: change the log level threshold from "INFO" to "DEBUG" for the ioTraceFile appender (near line 305).
Available Fields
Field Name | Description |
---|---|
StorageDriver | The address/hostname of the storage driver that executed the operation |
StorageNode | The target storage node address/hostname |
ItemPath | The resulting item path |
IoTypeCode | The I/O operation type code |
StatusCode | The I/O operation resulting status code |
ReqTimeStart | The I/O operation start timestamp in microseconds |
Duration | The I/O operation total duration in microseconds |
RespLatency | The I/O operation response latency in microseconds |
DataLatency | The I/O operation response data latency ("1st byte" of the response content) in microseconds |
TransferSize | The count of the bytes transferred within the I/O operation |
IoTypeCode
Code | I/O Type |
---|---|
0 | NOOP |
1 | CREATE |
2 | READ |
3 | UPDATE |
4 | DELETE |
StatusCode
Code | Description | HTTP response codes |
---|---|---|
0 | Pending (internal) | |
1 | Active (internal) | |
2 | Cancelled | |
3 | Unknown failure | all other codes |
4 | Success | 2xx |
5 | I/O Failure | |
6 | Timeout | 504 |
7 | Unrecognized storage response | |
8 | Client failure or invalid request | 100, 400, 405, 409, 411, 414, 416 |
9 | Internal storage failure | 413, 415, 429, 500, 501, 502, 503, 505 |
10 | Item not found on the storage | 404 |
11 | Authentication/access failure | 401, 403 |
12 | Data item corrupted | |
13 | Not enough space on the storage | 507 |
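When post-processing I/O trace records, the two code tables above can be transcribed into plain lookup maps, for example in Python:

```python
# Lookup tables transcribed from the IoTypeCode and StatusCode tables above
IO_TYPE = {0: "NOOP", 1: "CREATE", 2: "READ", 3: "UPDATE", 4: "DELETE"}

IO_STATUS = {
    0: "Pending (internal)",
    1: "Active (internal)",
    2: "Cancelled",
    3: "Unknown failure",
    4: "Success",
    5: "I/O Failure",
    6: "Timeout",
    7: "Unrecognized storage response",
    8: "Client failure or invalid request",
    9: "Internal storage failure",
    10: "Item not found on the storage",
    11: "Authentication/access failure",
    12: "Data item corrupted",
    13: "Not enough space on the storage",
}

def describe(io_type_code: int, status_code: int) -> str:
    """Render a trace record's numeric codes in a human-readable form."""
    return f"{IO_TYPE[io_type_code]}: {IO_STATUS[status_code]}"

print(describe(1, 4))
```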
Configuration
The I/O trace output layout is configurable. Specific output fields may be enabled or disabled with the corresponding configuration flags under the load-metrics-trace configuration section.
By default the section looks like:
"trace" : {
"storageDriver" : false,
"storageNode" : false,
"itemPath" : true,
"ioTypeCode" : true,
"statusCode" : true,
"reqTimeStart" : false,
"duration" : true,
"respLatency" : true,
"dataLatency" : false,
"transferSize" : true
}
With this configuration, the following fields are output:
- ItemPath
- IoTypeCode
- StatusCode
- Duration
- RespLatency
- TransferSize
This is the default list of the fields to output.
Warning:
It's not recommended to disable any field from this list unless you know exactly what you are doing. Disabling the ItemPath field will make the load job unable to persist the list of the processed items. Disabling any other default field may break the statistics accounting.
Note:
The empty values are written for the disabled fields.
It's possible to configure the load job to save the items list into a file regardless of the configured load type (Create/Read/etc). To do so, a user should use the item-output-file configuration parameter.
CLI example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-file=items.csv ...
This is useful to perform a load over the items used earlier (in a different run). For example, to read the items created on the storage some time ago:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-type=read --item-input-file=items.csv ...