
logging methods no longer take a callback, can't reliably use it in some environments (AWS Lambda) #1250

Closed
natesilva opened this issue Mar 29, 2018 · 73 comments
Labels: Help Wanted, Important, Needs Investigation

Comments

@natesilva

In Winston 3, the logging functions no longer take a callback. Previously this could be used to wait until logging had completed:

logger.info('some message', { foo: 42 }, callback);  // Winston 2.x

Winston 3 does allow you to listen for the logged event on your transports, but that doesn't give me any easy way to tell when the message I am currently logging has completed. Since everything is async, the next logged event that occurs after you write your message may not be related to the message you just logged. And it's more complex if you are listening for the event on multiple transports.
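For illustration, a minimal sketch of what listening for the transport-level 'logged' event looks like, and why it is hard to correlate with a specific logging call (the transport and message here are only placeholders):

const winston = require('winston');

const consoleTransport = new winston.transports.Console();
const logger = winston.createLogger({ transports: [consoleTransport] });

// Fires once per info object the transport handles, but nothing ties the
// event back to the specific logger.info() call that produced it; you would
// have to match on message contents yourself, per transport.
consoleTransport.on('logged', (info) => {
  console.log('transport wrote:', info.message);
});

logger.info('some message', { foo: 42 });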

That makes Winston difficult to use in certain environments. For example, in AWS Lambda we set the callbackWaitsForEmptyEventLoop parameter to false (documentation). (Sidenote: you do this if you have a database connection or something else that cannot be unrefed, because otherwise Lambda waits for the event loop to empty and never freezes your process.) If set to false, Lambda freezes your process as soon as you return results to the caller and may even terminate your process. If your logging transport hasn't finished writing by the time that happens, then you either lose logs or (worse) the logs get written later, when Lambda unfreezes your process.
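For context, a minimal sketch of the Lambda setting being described (handler shape only; the surrounding work is a placeholder):

// Node.js Lambda handler sketch. With callbackWaitsForEmptyEventLoop = false,
// Lambda freezes the process as soon as the callback is invoked, even if
// asynchronous work (such as a pending log write) is still in flight.
exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false; // needed when e.g. a DB connection keeps the event loop busy

  // ... do work, write logs ...

  callback(null, { ok: true }); // anything still pending may be frozen with the container
};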

TL;DR: Our Lambda process would normally await (or the callback equivalent) on Winston before returning results to the caller, guaranteeing that logging is done before the process is frozen.

Is there another way you'd recommend to detect when logging is complete?

@natesilva
Author

We’re using a different solution.

@indexzero
Member

That doesn't make it not an issue. Thanks for bringing it up.

@indexzero indexzero reopened this Apr 12, 2018
@dpraul

dpraul commented Apr 19, 2018

EDIT: this does not work, see #1250 (comment) for solution


I'm not sure if this is the best way, nor whether it works in all cases, but so far we are handling this by doing the following:

I've created a wait-for-logger.js module as follows:

function waitForLogger(logger) {
  return new Promise((resolve) => {
    logger.on('close', resolve);
    logger.close();
  });
}

module.exports = waitForLogger;

At the end of our handler, we wait for the function output before returning:

const logger = require('./logger');
const waitForLogger = require('./wait-for-logger');

async function handler(event, context) {
  // your functionality
  logger.log('info', 'This should make it through all the transports');
  // ...

  await waitForLogger(logger);
}

We are using the nodejs8.10 Lambda runtime - if you are using an older runtime, you can probably do something like the following:

function handler(event, context, callback) {
  // your functionality
  logger.log('info', 'This should make it through all the transports');
  // ...

   waitForLogger(logger).then(() => callback(null));
}

@natesilva
Author

Thanks @dpraul. One of our transports logs to an external service, and that could take a few extra ms. Will logger.close() wait for that to complete?

@indexzero
Member

@natesilva thought about this some more

  1. In winston@3 both the Logger and the Transport are Node.js streams.
  2. Writable streams expose an .end method and a 'finish' event.

Therefore in theory this should work, but I have not tested it yet:

const winston = require('winston');
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console()
  ]
});

process.on('exit', function () {
  console.log('Your process is exiting');
});

logger.on('finish', function () {
  console.log('Your logger is done logging');
});

logger.log('info', 'Hello, this is a raw logging event',   { 'foo': 'bar' });
logger.log('info', 'Hello, this is a raw logging event 2', { 'foo': 'bar' });

logger.end();

@indexzero
Member

Verified:

{"foo":"bar","level":"info","message":"Hello, this is a raw logging event"}
{"foo":"bar","level":"info","message":"Hello, this is a raw logging event 2"}
Your logger is done logging
Your process is exiting

Going to add an example with this as a way to resolve the issue. Please feel free to reopen if this does not work for you in Lambda. Functions-as-a-service platforms are 100% in winston's target user base, so we care about this working correctly.

Would be great if you could contribute an end-to-end AWS Lambda example using winston if you have the time @natesilva.

@dpraul

dpraul commented Apr 19, 2018

I am reviewing this as we speak, and I believe I have verified that neither of these methods works. I'll try to share a full test case here shortly.

@dpraul

dpraul commented Apr 19, 2018

const winston = require('winston');

class CustomTransport extends winston.Transport {
  log({ message }, cb) {
    setTimeout(() => {
      console.log(`custom logger says: ${message}`);
      cb(null);
    }, 3000);
  }
}

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new CustomTransport(),
  ],
});

logger.on('finish', () => {
  console.log('done!');
});
logger.log('info', 'here is some content');
logger.end();

Expected output:

{"level":"info","message":"here is some content"}
custom logger says: here is some content
done!

Actual output:

{"level":"info","message":"here is some content"}
done!
custom logger says: here is some content

The finish event is firing before all the transports have finished. The same behavior occurs when using .on('close', fn) and the .close() method at the end.

@natesilva
Author

Sorry, I deleted my comment. My understanding of these stream events is limited but I can see how finish might fire at the wrong time, per @dpraul’s example.

@dpraul

dpraul commented Apr 19, 2018

I've gone through the same example with the end, finish, and close events, and utilizing both stream.Transform.close() and stream.Transform.end(); all have the same result, where the events trigger before CustomTransport has called its callback.

Might there be some issue in how events are passed between the Transports and Logger?

Regardless, @natesilva or @indexzero, you may want to reopen this issue.

@indexzero
Member

@dpraul that seems plausible, but I don't see where that would be:

  1. Transport in winston-transport implements the _write method to fulfill Node.js' Writable interface. This receives the callback and passes it to the log method.
  2. The Console transport in winston calls process.{stderr,stdout}.write and then invokes the callback. Afaik writing to process.stdout and process.stderr is synchronous, so there shouldn't be any risk of back pressure.

Clearly there is something being missed in this flow. Going to chat with some Node.js folks and see what I can dig up.
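For readers following along, a simplified sketch (not the actual winston-transport source) of the _write-to-log handoff described in point 1, using a plain object-mode Writable:

const { Writable } = require('stream');

// Simplified illustration of the flow above: the stream machinery calls
// _write with a callback, and the transport is expected to invoke that
// callback once it has finished handling the info object.
class SketchTransport extends Writable {
  constructor() {
    super({ objectMode: true });
  }

  _write(info, encoding, callback) {
    // winston-transport does level filtering and formatting here, then:
    this.log(info, callback);
  }

  log(info, callback) {
    process.stdout.write(`${info.level}: ${info.message}\n`);
    callback(); // signals the stream that this chunk is done
  }
}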

@indexzero added the Needs Investigation, Important, and Help Wanted labels Apr 19, 2018
@dpraul

dpraul commented Apr 20, 2018

I still don't know what's causing the issue, but I found a workaround that waits for all the transports to finish!

const winston = require('winston');

class CustomTransport extends winston.Transport {
  log({ message }, cb) {
    setTimeout(() => {
      console.log(`custom logger says: ${message}`);
      cb(null);
    }, 3000);
  }
}

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new CustomTransport(),
  ],
});

async function waitForLogger(l) {
  const transportsFinished = l.transports.map(t => new Promise(resolve => t.on('finish', resolve)));
  l.end();
  return Promise.all(transportsFinished);
}

logger.log('info', 'here is some content');

waitForLogger(logger).then(() => console.log('all done!'));

Output:

{"level":"info","message":"here is some content"}
custom logger says: here is some content
all done!

It seems that logger.end() properly propagates to the transports, but fires the finish event before all the transports have fired theirs. The workaround is to wait on each Transport to fire its own finish event and not rely on the Logger at all.

@indexzero
Member

Thanks for continuing to investigate @dpraul. The logger is piped to all of the transports; my understanding was that when the logger is closed it would not emit 'finish' until all of its pipe targets had, but I may be mistaken about that.

Going to verify with some folks.

@indexzero
Member

indexzero commented Apr 20, 2018

Thanks to some input from @mcollina @davidmarkclements & @mafintosh I learned that my understanding of pipe semantics was flawed.

  • 'error' and 'finish' events do not back propagate through the pipe chain.
  • 'error' events cause unpipe to occur.

Going to explore how to either back propagate these events or expose an explicit method on Logger that implements the technique you're using now @dpraul (maybe using end-of-stream).

@natesilva going to leave this issue open until the explicit fix is in, but in the meantime you should be able to use the technique above:

  • Listen for 'finish' events on all transports
  • Call logger.end()
  • When all 'finish' events have been emitted you are done logging.

@mcollina
Contributor

Important modules that might be handy: pump and end-of-stream.
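For illustration, a sketch of how end-of-stream could implement the per-transport wait discussed above (assumes the end-of-stream package is installed; this mirrors the workaround in this thread, not an official winston API):

const eos = require('end-of-stream');

// Wait for every transport to finish flushing. Same idea as the
// Promise-based workaround above, with end-of-stream handling the
// 'finish'/'error' cases in one place.
function waitForLogger(logger) {
  const pending = logger.transports.map(
    (transport) => new Promise((resolve) => eos(transport, resolve))
  );
  logger.end();
  return Promise.all(pending);
}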

@indexzero
Member

@mcollina theoretically from a Node.js streams perspective we should be able to implement _final on the logger to force finish events to be emitted later, right?

@mcollina
Contributor

@indexzero yes, that would be correct.
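To make the _final suggestion concrete in plain Node terms (no winston involved), a minimal sketch showing that 'finish' is not emitted until _final's callback is called:

const { Writable } = require('stream');

const sink = new Writable({
  objectMode: true,
  write(chunk, encoding, callback) {
    setTimeout(() => callback(), 100); // pretend each write is async
  },
  // final() runs after end(); 'finish' is not emitted until its callback is
  // called, which is exactly the hook proposed for delaying the logger's 'finish'.
  final(callback) {
    setTimeout(() => {
      console.log('flushing done');
      callback();
    }, 500);
  },
});

sink.on('finish', () => console.log('finish emitted'));
sink.write({ message: 'hello' });
sink.end();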

@TuxGit

TuxGit commented May 17, 2018

Please tell me: in winston@3, can I no longer use callbacks like in version 2?
(There is no answer in the documentation.)

// ... some check of the connection to the db ... and if it fails:
logger.error('message',  function() {
  process.exit(1);
});

And if I don't use a callback, my app exits before the message is logged =(
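Based on the approach discussed earlier in this thread, a hedged sketch of the winston@3 equivalent of that pattern; note that with slow asynchronous transports you may also need the per-transport 'finish' workaround described above:

// winston@3: there is no per-call callback, so end the logger and wait
// for 'finish' before exiting the process.
logger.on('finish', () => process.exit(1));
logger.error('message');
logger.end();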

@indexzero
Member

@TuxGit FYI this has been documented in UPGRADE-3.0 (see: the PR code)

@aidan-ayala

Is there a suggested approach to handling this yet? I see there's an unmerged PR #1909 that looks to address it?

@stuckj

stuckj commented Jun 8, 2021 via email

@Tilogorn

Tilogorn commented Jun 2, 2022

@dpraul I do not know enough about the stream/lambda internals. When the finish event comes before I attach this listener, will this promise ever be resolved?

Based on your example, consider something like this (some long operation before waitForLogger()):

async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    ...
  });
  logger.log('info', 'some message');
  ...
  await someThingThatTakesLongerThanEmittingLogs();

  // Finishing logic
  await waitForLogger(logger); // will this ever be resolved?
  return {}
}

or even simpler (no log() call at all):

async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    ...
  });

  // Finishing logic
  await waitForLogger(logger); // will this ever be resolved?
  return {}
}

@dpraul

dpraul commented Jun 2, 2022

@dpraul I do not know enough about the stream/lambda internals. When the finish event comes before I attach this listener, will this promise ever be resolved?

I'm not certain. At the time of writing (which was in 2019, so YMMV), the logger wouldn't fire the finish event until logger.end() was called. The waitForLogger function calls this after setting up the listener, so all is good. If you're ending the logger elsewhere, I'm not sure what the behaviour will be like.
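If the race is still a concern, one possible alternative (an assumption on my part, not something verified in this thread) is Node's stream.finished, which on recent Node versions should also call back if the stream has already finished; worth verifying on your runtime:

const { finished } = require('stream');

// Resolve when the logger stream has fully finished (or errored). On recent
// Node versions this should also fire if the stream finished before we
// attached the listener; verify on your runtime before relying on it.
function loggerFinished(logger) {
  return new Promise((resolve, reject) => {
    finished(logger, (err) => (err ? reject(err) : resolve()));
  });
}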

@darkwilly08

I know this is a long, old thread, but I will share how I managed to fix it. First, I want to thank @kasvtv and @dpraul for their shared approaches, which helped me define the following implementation:

// Note: `Transport` here is the base class from winston-transport; `createProducer`,
// `publish`, and `disconnect` come from the author's AMQP helper module (not shown).
class AMQPTransport extends Transport {
  constructor(opts) {
    super(opts);

    this.callsInProgress = new Map();
    this.index = 0;

    this.name = 'amqpTransport';

    const { amqp } = opts;

    const { exchange } = amqp;

    this._rabbitOptions = {
      appName: opts.appName,
      url: amqp.url,
      options: amqp.options,
      exchange: {
        name: exchange.name,
        type: exchange.type,
        options: exchange.options,
      },
    };

    this.init();
  }

  async _final(callback) {
    console.log('amqp final');
    await Promise.all(this.callsInProgress.values());
    await disconnect();
    callback();
  }

  log(info, callback) {
    const output = {
      level: info.level,
      message: info.message,
      timestamp: info.timestamp,
    };

    const { appName, exchange } = this._rabbitOptions;

    const i = this.index++;
    const promise = publish(exchange.name, `logs.${output.level}.${appName}`, output);

    this.callsInProgress.set(
      i,
      promise.finally(() => {
        console.count('amqp logged');
        this.callsInProgress.delete(i);
        callback();
      }),
    );
  }

  init() {
    const { url, options, exchange } = this._rabbitOptions;

    createProducer(url, options, (channel) => {
      return Promise.all([channel.assertExchange(exchange.name, exchange.type, exchange.options)]);
    });
  }
}

The key here is to define the _final function:

async _final(callback) {
    console.log('amqp final');
    await Promise.all(this.callsInProgress.values());
    await disconnect();
    callback();
  }

And in order to wait for the logger before closing the process, you can do:

const waitForLogger = async () => {
  const loggerFinished = new Promise((resolve) => winstonLogger.on('finish', resolve));
  winstonLogger.end();
  return loggerFinished;
};

The Map is still necessary because the finish event is fired once logger.end() is called and the first pending callback inside the log function fires. If you have multiple log calls whose callbacks are still pending (async logging), you have to wait for every pending log, and this is where the _final function comes into play.

I hope this helps!

@Tilogorn

Tilogorn commented Feb 16, 2023

@darkwilly08 Thanks a ton for your contribution! I implemented @dpraul 's approach and ran it in production (AWS Lambda) after initial testing looked good. But yesterday I noticed that emitting a lot of small logs right after each other, then directly calling waitForLogger and exiting the Lambda right away is still buggy (as you explained in your last paragraph).

Your implementation of a local "queue" is simple, elegant and works out of the box.

@Tilogorn

Tilogorn commented Feb 24, 2023

@darkwilly08 and all others

I have a strange effect with this solution. I stress-tested it by emitting 20 logs in a loop. If the logger queue (this.callsInProgress) has 15 or more elements at the point async _final(callback) gets called, I get an exception saying NodeError: write after end. If I delay waitForLogger by 1 second with setTimeout, this.callsInProgress.size then has fewer than 15 elements (usually 4-6) and the whole thing finishes without issues.

Any idea how to explain this? Is there some sort of "hard auto-close timeout" in winston that closes the write stream a certain amount of time after logger.end() has been called, ignoring the _final callback?

Some insights:

async _final (callback) {
    console.log('_final called. cIP length: ' + this.callsInProgress.size); // if this is >= 15, exception is thrown
    await Promise.all(this.callsInProgress.values());
    callback();
}

This helps as a workaround

async function waitForLogger (logger) {
    const loggerDone = new Promise(resolve => logger.on('finish', resolve));
    setTimeout(() => { // <--
        logger.end(); 
    }, 1000);
    return loggerDone;
}

Edit: I think I found one more piece of the puzzle: backpressure. It seems that Node.js streams buffer 16 objects before flushing. I guess when I have more than 16 items in my queue, the stream needs to empty its buffer more than once, but it gets closed before flushing the second portion, and that one then fails. I have no idea why, though...
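For reference, a small sketch of the buffering behaviour described here, using a plain object-mode Writable (nothing winston-specific): the default highWaterMark in object mode is 16 objects, and write() starts returning false once the internal buffer fills up:

const { Writable } = require('stream');

// Object-mode streams buffer up to highWaterMark (default 16) pending
// objects; write() returns false once the internal buffer is full, which
// is the backpressure signal referred to above.
const slowSink = new Writable({
  objectMode: true, // highWaterMark defaults to 16 objects in object mode
  write(chunk, encoding, callback) {
    setTimeout(callback, 50); // simulate a slow async transport
  },
});

for (let i = 0; i < 20; i++) {
  const ok = slowSink.write({ i });
  if (!ok) console.log(`backpressure after ${i + 1} writes`);
}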

@darkwilly08

Hi @Tilogorn! I am without my laptop until next week; then I will update the code to fix the bug you found. In the meantime, you could try moving the callback call outside the finally method: just call it in the log function and maybe it works 🤞

@Tilogorn

Hi @Tilogorn! I am without my laptop until next week; then I will update the code to fix the bug you found. In the meantime, you could try moving the callback call outside the finally method: just call it in the log function and maybe it works 🤞

That worked out of the box. 😮 Could you elaborate on that?

log(info, callback) {
  // ...
  const promise = doSomethingAsyncThatReturnsPromise();

  this.callsInProgress.set(
    i,
    promise.finally(() => {
      this.callsInProgress.delete(i);
    }),
  );

  callback();
}

@darkwilly08

@Tilogorn I am glad to hear it! Well, as you already discovered, buffering more than 15 log callbacks is potentially dangerous, so if you call the callback right after creating the promise (without waiting for the result), you flush the stream buffer immediately and you should never have 15 or more elements in the queue. On the other hand, the promise is still stored in the map so that _final can wait for its resolution.

Finally, to keep the callback async, I think it is better to wrap it with setImmediate(callback), for consistency.

@Tilogorn

@darkwilly08 Thanks, I understand it now!

Wrapping the callback inside setImmediate(callback) brings the error back, though. Seems like this needs to be synchronous for the stream buffer to flush correctly.

@paul-uz

paul-uz commented May 9, 2024

I keep getting alerts on this thread, so figure I'll share why I think this still isn't working for a lot of people. Background: our use-case (and the context wherein this issue was opened) is AWS Lambda, so what I talk about here only applies there.

Lambda runs Node under the Lambda Execution Context. Pertinent information being:

After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation.

and

  • Any declarations in your Lambda function code (outside the handler code, see Programming Model) remains initialized, providing additional optimization when the function is invoked again.

i.e. to speed up launches, Lambda "freezes" and "thaws" environments instead of fully shutting them down. To do this, Lambda doesn't wait for Node to exit, it waits for your handler function to exit. Any asynchronous processes are run outside of the handler function, and thus are apt to be frozen with the rest of the Lambda Execution Context if they are not waited for.

Now let's look at the suggested solution for waiting for Winston - adapted from UPGRADE-3.0.md, assuming we're running in Lambda:

async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    transports: [
      new winston.transports.Console(),
      new CustomAsyncTransport(),
    ],
  });
  logger.log('info', 'some message');
  logger.on('finish', () => process.exit());
  logger.end();
}

Spot the problem? logger fires logger.end() within the handler function context, but the function fired by logger.on('finish') runs outside the handler context. Any async processes tied-up by CustomAsyncTransport will stall the finish event from firing, making it likely that the Execution Context freezes before that event fires.

To solve this, lambdaHandler must wait for the logger to exit before resolving:

async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    transports: [
      new winston.transports.Console(),
      new CustomAsyncTransport(),
    ],
  });
  const loggerFinished = new Promise(resolve => logger.on('finish', resolve));
  logger.log('info', 'some message');
  logger.end();
  await loggerFinished;
}

Since lambdaHandler doesn't exit until logger fires the finish event, our CustomAsyncTransport should close before our lambda handler, saving those processes from being frozen (assuming the finish event is correctly implemented by @indexzero).

This can be generalized to something similar to the code I've shared previously:

async function waitForLogger(logger) {
  const loggerDone = new Promise(resolve => logger.on('finish', resolve));
  // alternatively, use end-of-stream https://www.npmjs.com/package/end-of-stream
  // although I haven't tested this
  // const loggerDone = new Promise(resolve => eos(logger, resolve));
  logger.end();
  return loggerDone;
}

async function lambdaHandler(event, context) {
  const logger = winston.createLogger({
    transports: [
      new winston.transports.Console(),
      new CustomAsyncTransport(),
    ],
  });
  logger.log('info', 'some message');
  await waitForLogger(logger);
}

Hope this helps some people.

You absolute legend! I've been battling this for weeks, but with limited activity in this repo, my hopes were dashed. I've been trying to implement Raygun as a custom transport, and it makes perfect sense, but it simply wouldn't work; I even got the Raygun team to do 2 years of updates to the Raygun library because I thought that was the problem.

This really needs to be documented properly!

@DABH
Contributor

DABH commented May 9, 2024

A PR that documents this would be greatly appreciated, in case anyone ever runs into this scenario again (I am sure that other people are using Lambda, etc.)

@paul-uz

paul-uz commented May 11, 2024

A PR that documents this would be greatly appreciated, in case anyone ever runs into this scenario again (I am sure that other people are using Lambda, etc.)

I'll see if I have time on Monday whilst in work.

Is there any particular reason this has never been fixed properly in the library? It'd be nice to have this work out of the box.

@DABH
Contributor

DABH commented May 26, 2024

No particular reason besides contributor (lack of) bandwidth :) PRs that make this work out of the box would be very welcome, too :)

@paul-uz

paul-uz commented Jun 4, 2024

So, I am getting the dreaded ERR_STREAM_WRITE_AFTER_END error. How do I resolve this for usage in AWS Lambda?

@darkwilly08 I tried your solution in my custom transport, but it's not working. It would be super great to get a full AWS Lambda example of usage with a custom transport.

@paul-uz

paul-uz commented Jun 4, 2024

OK, after much faffing, I think I have it working for my use case.

I have an abstract class, where the logger is a class property.

  getLogger = () => {
    if (this.options?.enableWinston && !this.logger?.writable) {
      this.logger = this.createLogger();
      if (this.options.raygunApiKey) {
        this.logger.add(
          new RaygunTransport(
            {
              apiKey: this.options.raygunApiKey,
              tags: this.options.raygunTags,
            },
            {
              level: 'error',
            },
          ),
        );
      }
    }
    return this.logger;
  };

  createLogger = () => {
    return winston.createLogger({
      level: 'debug',
      exitOnError: false,
      handleExceptions: false, // not set previously, but set now, docs don't really help me understand what disabling it does?
      handleRejections: false, // not set previously, but set now, docs don't really help me understand what disabling it does?
      format: winston.format.combine(
        winston.format.errors({cause: true, stack: true}),
        winston.format.json(),
      ),
      transports: [
        new winston.transports.Console(),
      ],
    });
  };

  closeLogger = async () => {
    if (this.logger) {
      if (!this.logger.writable) {
        return Promise.resolve();
      }
      const loggerDone = new Promise(resolve => this.logger?.on('finish', resolve));
      this.logger.end();
      return loggerDone;
    }

    return;
  };

In my custom Raygun transport class, I am using the solution from @darkwilly08:

  async _final(callback: () => void) {
    console.log('raygun final');
    await Promise.all(this.callsInProgress.values());
    callback();
  }

  async log(info: any, callback: () => void) {
    const error = new Error(info.message, {
      cause: new Error(info.stack),
    });
    const i = this.index++;
    const raygun = this.raygunClient.send(error); // previously called via `async` but now uses the `finally()` promise chain method below. Not sure what will happen if this function fails?
    this.callsInProgress.set(
      i,
      raygun.finally(() => {
        this.callsInProgress.delete(i);
      }),
    );
    callback();
  }

@DABH
Contributor

DABH commented Jun 4, 2024

Nice! If it seems like others are going to hit this issue, it would be great to add some documentation or examples to the readme, so users don’t just google and find this closed issue :) But whatever you think is best. Thanks for your patience and continued efforts, glad to see you got this working.

@paul-uz

paul-uz commented Jun 4, 2024

Nice! If it seems like others are going to hit this issue, it would be great to add some documentation or examples to the readme, so users don’t just google and find this closed issue :) But whatever you think is best. Thanks for your patience and continued efforts, glad to see you got this working.

Yup, I'll try and do a PR to update the docs. But it'd still be nice to get a library-native solution for this, though I have no idea what that would look like.

@paul-uz

paul-uz commented Jun 6, 2024

Back to square one for me. Refactored my logging into its own class, so I can keep it separate from other work. Implemented the various methods mentioned above, and now I get the dreaded "write after end" error again.

@DABH
Contributor

DABH commented Jun 6, 2024

So moving it to a separate class caused it to break? That seems suspicious...

@paul-uz

paul-uz commented Jun 6, 2024

Looks like it's AWS-related and how it reuses code across subsequent invocations of the function. Trying to find the best place to initialise my logger class.

Here is the class; it's not pretty, and I'm sure it's not correct. The issue is I want to be able to call this.logger.debug() from my main class, but on subsequent calls, this.logger is closed, and calling this.logger.debug() doesn't recreate the logger. It's all a bit of a jumble :(

I just want to initialise the logger, use it, close it afterwards, and rinse and repeat for subsequent Lambda invocations.

logger

import { createLogger, format, transports } from 'winston';
import { RaygunTransport } from './Transports/RaygunTransport.js';
export class Logger {
    logger;
    options;
    constructor(options) {
        if (options) {
            this.options = options;
        }
        this.logger = this.getLogger();
    }
    error = (...args) => {
        return this.getLogger().error([...args]);
    };
    warn = (...args) => {
        return this.getLogger().warn([...args]);
    };
    help = (...args) => {
        return this.getLogger().help([...args]);
    };
    data = (...args) => {
        return this.getLogger().data([...args]);
    };
    info = (...args) => {
        return this.getLogger().info([...args]);
    };
    debug = (...args) => {
        return this.getLogger().debug([...args]);
    };
    prompt = (...args) => {
        return this.getLogger().prompt([...args]);
    };
    http = (...args) => {
        return this.getLogger().http([...args]);
    };
    verbose = (...args) => {
        return this.getLogger().verbose([...args]);
    };
    input = (...args) => {
        return this.getLogger().input([...args]);
    };
    silly = (...args) => {
        return this.getLogger().silly([...args]);
    };
    getLogger = () => {
        if (!this.logger?.writable) {
            this.logger = this.createLogger();
            if (this.options?.raygunApiKey) {
                this.logger.add(new RaygunTransport({
                    apiKey: this.options.raygunApiKey,
                    tags: this.options.raygunTags,
                }, {
                    level: 'error',
                    handleExceptions: false,
                    handleRejections: false,
                }));
            }
        }
        return this.logger;
    };
    createLogger = () => {
        return createLogger({
            level: this.options?.logLevel ?? 'debug',
            exitOnError: false,
            format: format.combine(format.errors({ cause: true, stack: true }), format.json()),
            transports: [
                new transports.Console({
                    handleExceptions: true,
                    handleRejections: true,
                }),
            ],
        });
    };
    closeLogger = async () => {
        if (this.logger) {
            if (!this.logger.writable) {
                return Promise.resolve();
            }
            const loggerDone = new Promise((resolve) => {
                return this.logger?.on('finish', resolve);
            });
            this.logger.end();
            return loggerDone;
        }
        return;
    };
}

This all came about because I needed a way to pass some custom data to the Raygun transport. I can't pass it at the point of creating the logger, as the data isn't available yet, so I have to set it afterwards, which, because it's a transport, is a ball ache.
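For what it's worth, a hedged sketch of how a per-invocation lifecycle could look with a wrapper class like this (the file path and option names are assumptions based on the code above, not a tested setup):

// Hypothetical usage of the Logger wrapper above in a Lambda handler:
// create it per invocation, log, then await closeLogger() in a finally
// block so the transports flush before Lambda freezes the container.
import { Logger } from './logger.js';

export const handler = async (event, context) => {
  const log = new Logger({ raygunApiKey: process.env.RAYGUN_API_KEY });
  try {
    log.info('handling event');
    // ... business logic ...
    return { statusCode: 200 };
  } finally {
    await log.closeLogger();
  }
};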

@darkwilly08

darkwilly08 commented Jun 7, 2024

Hi @DABH @paul-uz! I did some tests:

I created a repository with different transport classes to test the issue.

  • asyncLogger is an implementation extending the winston-transport class. The transport is exercised in the file notWorking.js using the winston logger.
  • asyncStream is an implementation extending the stream.Writable class. The stream is exercised in the file working.js and is used directly, without winston.
  • asyncStreamWinston is an implementation extending the stream.Writable class. The stream is exercised in the file someChunksAreMissing.js and is used as a transport for the winston logger.

If you check each transport class, you will see that the implementation is almost the same.

Extending from the winston-transport class only works if the buffer is smaller than the highWaterMark value; otherwise, the program crashes. Another option to make it work is to remove the setImmediate call and invoke the callback directly.
Extending from the stream.Writable class with the same options works as expected, even with the callback called inside setImmediate.
Extending from the stream.Writable class with the same options, but used as a transport for the winston logger, drops the last chunk. The difference from the first scenario is that I can call the callback inside setImmediate, but the last chunk is missing.

The idea of these tests is to understand a bit better where the issue is. I have no problem opening a PR, but right now I am not sure what the best approach to fix it is.
The already-proposed solutions look a bit hacky to me, and I am afraid they could generate other issues.

Sorry for the long message; I hope it helps the discussion of the issue in some way.

@paul-uz

paul-uz commented Jun 7, 2024

I think I have a working implementation, based on your code examples. My issue is that I'm struggling to pass custom data to the custom transport.

My other issue with my custom transport is that the stack trace it reports is not from the error that caused the log, but from the transport code itself.

@paul-uz

paul-uz commented Jun 7, 2024

I'm a bit confused by all of this. Am I correct in thinking that the best way to write an async transport is to not use the winston transport class and just use a writable stream?

What is the current best solution for a custom async transport for use in AWS Lambda?

@darkwilly08

I'm a bit confused by all of this. Am I correct in thinking that the best way to write an async transport is to not use the winston transport class and just use a writable stream?

No, not at all. I will keep using winston-transport. Following the example repository, I would use asyncLogger.js, removing setImmediate from this line and calling the callback function directly.

I made some examples without winston-transport in order to better understand where the issue is.
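Putting that recommendation together, a hedged sketch of the resulting transport shape (sendSomewhere is a placeholder for whatever async delivery the transport does; this mirrors the pattern from earlier in this thread rather than an official API):

const Transport = require('winston-transport');

// Placeholder for the transport's actual async delivery (HTTP, AMQP, ...).
async function sendSomewhere(info) { /* ... */ }

class AsyncTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.inFlight = new Map();
    this.nextId = 0;
  }

  log(info, callback) {
    const id = this.nextId++;
    this.inFlight.set(
      id,
      sendSomewhere(info).finally(() => this.inFlight.delete(id))
    );
    callback(); // ack the stream immediately so the internal buffer never fills up
  }

  // Called when the logger/transport is ended; 'finish' waits for this callback.
  async _final(callback) {
    await Promise.all(this.inFlight.values());
    callback();
  }
}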

@paul-uz

paul-uz commented Jun 7, 2024

With that asyncLogger, do you still need to have a helper function to close the logger after your handler code is done?

@darkwilly08

Did you mean something like this? Yes, it is still necessary.

@dls314

dls314 commented Jun 10, 2024

Are there any transports, like console, for example, where this problem doesn't happen?

Having only used winston with the Console transport in Lambda, I don't think I've ever seen a missed log, despite not performing any explicit handling.
