Duplicate logs: structured and unstructured #115
Hey,
Thanks for opening an issue with us - I think this was accidentally
introduced with the Logger inheritance feature.
It's possible the parent Logger is propagating the message up to the root
handler that Lambda configures; I'm almost sure that's the case.
I’ll look into it tomorrow afternoon and circle back with you here.
Apologies for the inconvenience.
On Tue, 18 Aug 2020 at 19:10, Nxtra wrote:
My logs are appearing twice in CloudWatch: once in a structured format and
once unstructured.
I'd only like to have the structured logs.
I didn't see this behavior before so it might be something on my side.
Expected Behavior
Having the logs once in structured format.
Current Behavior
Having the logs both structured and unstructured.
Environment
- Powertools version used: 1.1.2
- AWS Lambda function runtime: Python
- Debugging logs
Powertools is set up like this:

from aws_lambda_powertools import Logger, Tracer, Metrics
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()

@metrics.log_metrics
@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handle(event, context):
    ...
    logger.info('validated')
I am seeing the logs twice:
[image: CloudWatch screenshot showing each log entry twice]
<https://user-images.githubusercontent.com/36597466/90543054-5ddebd80-e185-11ea-824c-98fe9666e263.png>
Seems like the root logger is still capturing and logging.
Why do I have duplicate logs? What am I missing here?
@Nxtra managed to replicate: it's what I suspected. Lambda configures the root logger with a basic unstructured handler (it shouldn't), and since you're configuring a new Logger the message is emitted twice because of how logging works in Python: the hierarchy goes root logger -> your logger, and a record traverses from your logger upwards to the root.

Why wasn't this happening before?
On 1.1.0, we introduced the Logger inheritance feature, which also made our Logger implementation fully compliant with how Python logging works. However, that also means changes you make in your parent Logger or child Logger propagate between them.

Solution
We remove the root logger handler set by Lambda, since you're using ours. An alternative would have been to disable log propagation when we create a Logger for your app; however, that would remove the ability to propagate structured logging changes between parent and child, so I chose the former.

This fix will land within the hour in 1.1.3.
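The duplication described above can be reproduced with the standard library alone. This is a minimal sketch, not Powertools internals: the handler and logger names are illustrative, and StringIO targets stand in for CloudWatch output so the effect is inspectable.

```python
import io
import logging

# Stand-in for what the Lambda runtime does: attach a plain,
# unstructured handler to the root logger.
root_capture = io.StringIO()
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
lambda_handler = logging.StreamHandler(root_capture)
root_logger.addHandler(lambda_handler)

# Stand-in for the Powertools Logger: a named logger with its own handler.
service_capture = io.StringIO()
service_logger = logging.getLogger("my-service")
service_logger.addHandler(logging.StreamHandler(service_capture))

# One call, two emissions: the record hits the service handler, then
# propagates up the hierarchy and hits the root handler as well.
service_logger.info("validated")
duplicate_count = root_capture.getvalue().count("validated")

# The fix described above: remove the handler Lambda set on the root
# logger, so propagated records have nowhere else to be written.
root_logger.removeHandler(lambda_handler)
service_logger.info("checked")
post_fix_count = root_capture.getvalue().count("checked")
```

With the root handler in place, `duplicate_count` is 1 (the second copy of the message); after removing it, `post_fix_count` is 0 while the service handler still receives every record.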
fix: remove root logger handler set by Lambda #115
1.1.3 is now live on PyPI. Could you please try it and let us know here, @Nxtra? I've deployed a fresh function, run it locally with SAM CLI too, and managed to replicate the issue in tests, but... just in case.
I updated my project to 1.1.3.
Just an FYI for everyone coming in here who is using AWS Chalice.
I had a similar issue. In my project the Powertools logger was configured before the main logging configuration. I ended up doing this:

pt_logger = aws_lambda_powertools.Logger(service="...")

# The Powertools Logger sets up its own handler to stdout and adds a filter
# to the root logger to prevent duplicates, but logging is not configured at
# this point, so the filter is not attached to any handler of the root
# logger. It's easier to just disable propagation.
pt_logger._logger.propagate = False  # noqa
Worked like a charm for me as well. It helped me solve a duplicate log issue caused by using multiprocessing.
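For anyone weighing the workaround above, the effect of disabling propagation can be sketched with the standard logging module alone (logger names and messages here are illustrative, not taken from Powertools):

```python
import io
import logging

# A root logger with a handler, e.g. one set up by the runtime or by a
# logging configuration applied elsewhere in the app.
root_capture = io.StringIO()
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(logging.StreamHandler(root_capture))

# An application logger with its own handler.
app_capture = io.StringIO()
app_logger = logging.getLogger("my-app")
app_logger.addHandler(logging.StreamHandler(app_capture))

# Disabling propagation stops the record at this logger's own handlers,
# so the root handler never sees it and nothing is written twice.
app_logger.propagate = False
app_logger.info("payment processed")

own_copies = app_capture.getvalue().count("payment processed")
root_copies = root_capture.getvalue().count("payment processed")
```

Here `own_copies` is 1 and `root_copies` is 0. The trade-off, as noted earlier in the thread, is that with propagation off, changes made on a parent logger no longer reach the child.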