An elasticsearch transport for the winston logging toolkit.
- Logstash compatible message structure, thus consumable with Kibana.
- Date pattern based index names.
- Custom transformer function to transform logged data into a different message structure.
- Buffering of messages while Elasticsearch is unavailable. The only limit is available memory, since all unwritten messages are kept there.
- Querying.
For Winston 3.x and Elasticsearch 6.0 and later, use version 0.7.0.
For Elasticsearch 6.0 and later, use version 0.6.0.
For Elasticsearch 5.0 and later, use version 0.5.9.
For earlier versions, use the 0.4.x series.
npm install --save winston winston-elasticsearch
var winston = require('winston');
var ElasticsearchTransport = require('winston-elasticsearch');
var esTransportOpts = {
level: 'info'
};
var logger = winston.createLogger({
transports: [
new ElasticsearchTransport(esTransportOpts)
]
});
The winston API for logging can be used with one restriction: only a single JS object can be logged and indexed as such. If multiple objects are provided as arguments, their contents are stringified.
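A short sketch of this restriction, using the logger configured above:
// One meta object: indexed as structured data.
logger.info('GET /sitemap.xml', { method: 'GET', url: '/sitemap.xml' });

// Several objects: their contents end up stringified instead.
logger.info('GET /sitemap.xml', { method: 'GET' }, { url: '/sitemap.xml' });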
- level [info] Messages logged with a severity greater than or equal to the given one are logged to ES; others are discarded.
- index [none] The index to be used. This option is mutually exclusive with indexPrefix.
- indexPrefix [logs] The prefix to use to generate the index name according to the pattern <indexPrefix>-<indexInterfix>-<indexSuffixPattern>. Can be a string or a function returning the string to use.
- indexSuffixPattern [YYYY.MM.DD] A Moment.js compatible date/time pattern.
- messageType [_doc] The type (path segment after the index path) under which the messages are stored in the index.
- transformer [see below] A transformer function to transform logged data into a different message structure.
- ensureMappingTemplate [true] If set to true, the given mappingTemplate is checked/uploaded to ES when the module sends the first log message, to make sure the log messages are mapped in a sensible manner.
- mappingTemplate [see file index-template-mapping.json] The mapping template to be ensured, as parsed JSON.
- flushInterval [2000] Time between bulk writes, in ms.
- client An elasticsearch client instance. If given, all following options are ignored.
- clientOpts An object hash passed to the ES client. See its docs for supported options.
- waitForActiveShards [1] Sets the number of shard copies that must be active before proceeding with the bulk operation.
- pipeline [none] Sets the pipeline id with which to pre-process incoming documents. See the bulk API docs.
- buffering [true] Boolean flag to enable or disable message buffering. The bufferLimit option is ignored if set to false.
- bufferLimit [null] Limit for the number of log messages in the buffer.
- apm [null] Inject an APM client to link elastic logs with Elastic APM traces.
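For illustration, a sketch combining several of these options (the prefix, interval, and limit values here are arbitrary):
var esTransportOpts = {
  level: 'info',
  indexPrefix: 'myapp-logs',   // indices will be named myapp-logs-YYYY.MM.DD
  flushInterval: 5000,         // bulk-write every 5 seconds
  buffering: true,
  bufferLimit: 1000,           // keep at most 1000 unwritten messages in memory
  clientOpts: { node: 'http://localhost:9200' }
};
var logger = winston.createLogger({
  transports: [new ElasticsearchTransport(esTransportOpts)]
});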
The default client and options will log through console.
When changing the indexPrefix and/or the transformer, make sure to provide a matching mappingTemplate.
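For orientation only, a minimal template matching the default transformer output could look roughly like the following sketch (the template actually shipped with the module is in index-template-mapping.json):
{
  "index_patterns": ["logs-*"],
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "severity": { "type": "keyword" },
      "message": { "type": "text" }
    }
  }
}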
The transformer function allows mutation of log data as provided by winston into a shape more appropriate for indexing in Elasticsearch.
The default transformer generates a @timestamp and rolls any meta objects into an object called fields.
Params:
- logdata An object with the data to log. Properties are:
  - timestamp [new Date().toISOString()] The timestamp of the log entry
  - level The log level of the entry
  - message The message for the log entry
  - meta The meta data for the log entry
Returns: an object with the following properties:
- @timestamp The timestamp of the log entry
- severity The log level of the entry
- message The message for the log entry
- fields The meta data for the log entry
- indexInterfix Optional; the interfix of the index to use for this entry
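A minimal transformer following this contract might look like the sketch below (the function name is illustrative):
var myTransformer = function (logData) {
  return {
    // Fall back to the current time if no timestamp was provided.
    '@timestamp': logData.timestamp ? logData.timestamp : new Date().toISOString(),
    message: logData.message,
    severity: logData.level,
    fields: logData.meta
  };
};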
The default transformer function's transformation is shown below.
Input A:
{
"message": "Some message",
"level": "info",
"meta": {
"method": "GET",
"url": "/sitemap.xml",
...
}
}
Output A:
{
"@timestamp": "2019-09-30T05:09:08.282Z",
"message": "Some message",
"severity": "info",
"fields": {
"method": "GET",
"url": "/sitemap.xml",
...
}
}
Note that in current logstash versions, the only "standard fields" are @timestamp and @version; anything else is just free.
A custom transformer function can be provided in the options hash.
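For example, reusing the myTransformer sketch from above:
var esTransportOpts = {
  level: 'info',
  transformer: myTransformer
};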
The transport emits an error event in case of any error.
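A minimal sketch of listening for it on the transport instance:
var esTransport = new ElasticsearchTransport(esTransportOpts);
esTransport.on('error', function (error) {
  console.error('Error in ElasticsearchTransport caught', error);
});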
An example assuming default settings.
logger.info('Some message', {});
Only JSON objects are logged from the meta
field. Any non-object is ignored.
The log message generated by this module has the following structure:
{
"@timestamp": "2019-09-30T05:09:08.282Z",
"message": "Some log message",
"severity": "info",
"fields": {
"method": "GET",
"url": "/sitemap.xml",
"headers": {
"host": "www.example.com",
"user-agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
"accept": "*/*",
"accept-encoding": "gzip,deflate",
"from": "googlebot(at)googlebot.com",
"if-modified-since": "Tue, 30 Sep 2019 11:34:56 GMT",
"x-forwarded-for": "66.249.78.19"
}
}
}
This message would be POSTed to the following endpoint:
http://localhost:9200/logs-2019.09.30/log/
So the default mapping uses an index pattern logs-*.
- Install the official Node.js client for Elastic APM:
yarn add elastic-apm-node
- or -
npm install elastic-apm-node
Then, before any other require in your code, do:
const apm = require("elastic-apm-node").start({
serverUrl: "<apm server http url>"
})
// Set up the logger
var winston = require('winston');
var ElasticsearchTransport = require('winston-elasticsearch');
var esTransportOpts = {
apm,
level: 'info',
clientOpts: { node: "<elastic server>" }
};
var logger = winston.createLogger({
transports: [
new ElasticsearchTransport(esTransportOpts)
]
});
logger.info('Some log message');
Will produce:
{
"@timestamp": "2020-03-13T20:35:28.129Z",
"message": "Some log message",
"severity": "info",
"fields": {},
"transaction": {
"id": "1f6c801ffc3ae6c6"
},
"trace": {
"id": "1f6c801ffc3ae6c6"
}
}
Some "custom" logs may not have the apm trace.
If that is the case, you can retreive traces using apm.currentTraceIds
like so:
logger.info("Some log message", { ...apm.currentTracesIds })
The transformer function (see above) will place the apm trace in the root object so that kibana can link Logs to APMs.
Note that custom traces WILL TAKE PRECEDENCE.
If you are using a custom transformer, you should add the following code into it:
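// In the snippet below, logData is the object passed to your transformer
// and transformed is the object your transformer returns: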
if (logData.meta['transaction.id']) transformed.transaction = { id: logData.meta['transaction.id'] };
if (logData.meta['trace.id']) transformed.trace = { id: logData.meta['trace.id'] };
if (logData.meta['span.id']) transformed.span = { id: logData.meta['span.id'] };
This scenario may happen on a server (e.g. restify) where you want to log the query
after it was sent to the client (e.g. using server.on('after', (req, res, route, error) => log.debug("after", { route, error }))
).
In that case the trace ids will not make it into the log entry because the traces will
already have stopped (the server has sent the response to the client).
In that scenario, you could do something like so:
server.use((req, res, next) => {
req.apm = apm.currentTraceIds
next()
})
server.on("after", (req, res, route, error) => log.debug("after", { route, error, ...req.apm }))