---
title: Logger
description: Core utility
---

Logger provides an opinionated logger with output structured as JSON.

Key features

  • Capture key fields from Lambda context, cold start, and structure logging output as JSON
  • Log Lambda event when instructed (disabled by default)
  • Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
  • Append additional keys to structured log at any point in time

Getting started

Logger requires two settings:

| Setting | Description | Environment variable | Constructor parameter |
| ------- | ----------- | -------------------- | --------------------- |
| Logging level | Sets how verbose Logger should be (INFO, by default) | `LOG_LEVEL` | `level` |
| Service | Sets service key that will be present across all log statements | `POWERTOOLS_SERVICE_NAME` | `service` |
--8<-- "examples/logger/sam/template.yaml"

Standard structured keys

Logger will add the following keys to your structured logs:

| Key | Example | Note |
| --- | ------- | ---- |
| level: `str` | `INFO` | Logging level |
| location: `str` | `collect.handler:1` | Source code location where statement was executed |
| message: `Any` | `Collecting payment` | Unserializable JSON values are cast as `str` |
| timestamp: `str` | `2021-05-03 10:20:19,650+0200` | Timestamp with milliseconds; uses local timezone by default |
| service: `str` | `payment` | Service name defined; `service_undefined` by default |
| xray_trace_id: `str` | `1-5759e988-bd862e3fe1be46a994272793` | When tracing is enabled, shows the X-Ray Trace ID |
| sampling_rate: `float` | `0.1` | When enabled, shows sampling rate as a percentage, e.g. 10% |
| exception_name: `str` | `ValueError` | When `logger.exception` is used and there is an exception |
| exception: `str` | `Traceback (most recent call last)..` | When `logger.exception` is used and there is an exception |

Capturing Lambda context info

You can enrich your structured logs with key Lambda context information via inject_lambda_context.

=== "collect.py"

```python hl_lines="7"
--8<-- "examples/logger/src/inject_lambda_context.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="8-12 17-20"
--8<-- "examples/logger/src/inject_lambda_context_output.json"
```

When used, this will include the following keys:

| Key | Example |
| --- | ------- |
| cold_start: `bool` | `false` |
| function_name: `str` | `example-powertools-HelloWorldFunction-1P1Z6B39FLU73` |
| function_memory_size: `int` | `128` |
| function_arn: `str` | `arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73` |
| function_request_id: `str` | `899856cb-83d1-40d7-8611-9e78f15f32f4` |

Logging incoming event

When debugging in non-production environments, you can instruct Logger to log the incoming event with the log_event param or via the POWERTOOLS_LOGGER_LOG_EVENT env var.

???+ warning This is disabled by default to prevent sensitive info being logged

--8<-- "examples/logger/src/log_incoming_event.py"

Setting a Correlation ID

You can set a Correlation ID using the correlation_id_path param by passing a JMESPath expression.

???+ tip You can retrieve correlation IDs via get_correlation_id method

=== "collect.py"

```python hl_lines="7"
--8<-- "examples/logger/src/set_correlation_id.py"
```

=== "Example Event"

```json hl_lines="3"
--8<-- "examples/logger/src/set_correlation_id_event.json"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="12"
--8<-- "examples/logger/src/set_correlation_id_output.json"
```

set_correlation_id method

You can also use the set_correlation_id method to inject it anywhere else in your code. The example below uses the Event Source Data Classes utility to easily access event properties.

=== "collect.py"

```python hl_lines="11"
--8<-- "examples/logger/src/set_correlation_id_method.py"
```

=== "Example Event"

```json hl_lines="3"
--8<-- "examples/logger/src/set_correlation_id_method_event.json"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="7"
--8<-- "examples/logger/src/set_correlation_id_method_output.json"
```

Known correlation IDs

To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.

=== "collect.py"

```python hl_lines="2 8"
--8<-- "examples/logger/src/set_correlation_id_jmespath.py"
```

=== "Example Event"

```json hl_lines="3"
--8<-- "examples/logger/src/set_correlation_id_jmespath_event.json"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="12"
--8<-- "examples/logger/src/set_correlation_id_jmespath_output.json"
```

Appending additional keys

???+ info "Info: Custom keys are persisted across warm invocations" Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with clear_state=True.

You can append additional keys using either mechanism:

  • Persist new keys across all future log messages via append_keys method
  • Add additional keys on a per log message basis via extra parameter

append_keys method

???+ note append_keys replaces structure_logs(append=True, **kwargs) method. structure_logs will be removed in v2.

You can append your own keys to your existing Logger via append_keys(**additional_key_values) method.

=== "collect.py"

```python hl_lines="12"
--8<-- "examples/logger/src/append_keys.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="7"
--8<-- "examples/logger/src/append_keys_output.json"
```

???+ tip "Tip: Logger will automatically reject any key with a None value" If you conditionally add keys depending on the payload, you can follow the example above.

This example will add `order_id` if its value is not empty, and in subsequent invocations where `order_id` might not be present it'll remove it from the Logger.
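A minimal sketch of that conditional pattern (the `order_id` event field is illustrative):

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def handler(event, context):
    # A None value removes order_id from subsequent log statements,
    # so stale keys don't leak across warm invocations
    logger.append_keys(order_id=event.get("order_id"))
    logger.info("Collecting payment")
```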

extra parameter

The extra parameter is available for all log level methods, as implemented in the standard logging library - e.g. logger.info, logger.warning.

It accepts any dictionary, and all of its keys will be added as part of the root structure of the logs for that log statement.

???+ info Any keyword argument added using extra will not be persisted for subsequent messages.

=== "extra_parameter.py"

```python hl_lines="9"
--8<-- "examples/logger/src/append_keys_extra.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="7"
--8<-- "examples/logger/src/append_keys_extra_output.json"
```
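A minimal sketch contrasting the two behaviours (field names are illustrative):

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def handler(event, context):
    # request_id appears only in this one log statement...
    logger.info("Collecting payment", extra={"request_id": "1123"})
    # ...and is absent here
    logger.info("Payment collected")
```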

Removing additional keys

You can remove any additional key from Logger state using remove_keys.

=== "collect.py"

```python hl_lines="11"
--8<-- "examples/logger/src/remove_keys.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="7"
--8<-- "examples/logger/src/remove_keys_output.json"
```

Clearing all state

Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use the clear_state=True param in the inject_lambda_context decorator.

???+ tip "Tip: When is this useful?" It is useful when you add multiple custom keys conditionally, instead of setting a default None value if not present. Any key with None value is automatically removed by Logger.

???+ danger "Danger: This can have unintended side effects if you use Layers" Lambda Layers code is imported before the Lambda handler.

This means that `clear_state=True` will instruct Logger to remove any keys previously added before Lambda handler execution proceeds.

You can either avoid running any code as part of Lambda Layers global scope, or override keys with their latest value as part of handler's execution.

=== "collect.py"

```python hl_lines="7 10"
--8<-- "examples/logger/src/clear_state.py"
```

=== "#1 request"

```json hl_lines="7"
--8<-- "examples/logger/src/clear_state_event_one.json"
```

=== "#2 request"

```json hl_lines="7"
--8<-- "examples/logger/src/clear_state_event_two.json"
```
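A minimal sketch of clear_state in action (the `fan_out` event field is illustrative):

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

# All keys added via append_keys are dropped at the start of each invocation
@logger.inject_lambda_context(clear_state=True)
def handler(event, context):
    if event.get("fan_out"):
        logger.append_keys(fan_out=True)
    logger.info("Collecting payment")
```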

Logging exceptions

Use logger.exception method to log contextual information about exceptions. Logger will include exception_name and exception keys to aid troubleshooting and error enumeration.

???+ tip You can use your preferred Log Analytics tool to enumerate and visualize exceptions across all your services using exception_name key.

=== "collect.py"

```python hl_lines="15"
--8<-- "examples/logger/src/logging_exceptions.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="7-8"
--8<-- "examples/logger/src/logging_exceptions_output.json"
```
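A minimal sketch of the resulting keys:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def handler(event, context):
    try:
        1 / 0
    except ZeroDivisionError:
        # Adds exception_name ("ZeroDivisionError") and the full
        # traceback under the "exception" key
        logger.exception("Failed to calculate payment rate")
        raise
```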

Advanced

Built-in Correlation ID expressions

You can use any of the following built-in JMESPath expressions as part of inject_lambda_context decorator.

???+ note "Note: Any object key named with - must be escaped" For example, request.headers."x-amzn-trace-id".

| Name | Expression | Description |
| ---- | ---------- | ----------- |
| API_GATEWAY_REST | `"requestContext.requestId"` | API Gateway REST API request ID |
| API_GATEWAY_HTTP | `"requestContext.requestId"` | API Gateway HTTP API request ID |
| APPSYNC_RESOLVER | `'request.headers."x-amzn-trace-id"'` | AppSync X-Ray Trace ID |
| APPLICATION_LOAD_BALANCER | `'headers."x-amzn-trace-id"'` | ALB X-Ray Trace ID |
| EVENT_BRIDGE | `"id"` | EventBridge Event ID |
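For illustration, a minimal sketch using one of the built-in expressions via the correlation_paths module:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import correlation_paths

logger = Logger(service="payment")

# Resolves "requestContext.requestId" from the incoming event
@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
def handler(event, context):
    logger.info("Collecting payment")
```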

Reusing Logger across your code

Similar to Tracer, a new instance that uses the same service name (env var or explicit parameter) will reuse a previous Logger instance, just like logging.getLogger("logger_name") would in the standard library if called with the same logger name.

Notice in the CloudWatch Logs output how payment_id appeared as expected when logging in collect.py.

=== "collect.py"

```python hl_lines="1 9 11 12"
--8<-- "examples/logger/src/logger_reuse.py"
```

=== "payment.py"

```python hl_lines="3 7"
--8<-- "examples/logger/src/logger_reuse_payment.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="12"
--8<-- "examples/logger/src/logger_reuse_output.json"
```

???+ note "Note: About Child Loggers" Coming from standard library, you might be used to use logging.getLogger(__name__). This will create a new instance of a Logger with a different name.

In Powertools, you can have the same effect by using `child=True` parameter: `Logger(child=True)`. This creates a new Logger instance named after `service.<module>`. All state changes will be propagated bi-directonally between Child and Parent.

For that reason, there could be side effects depending on the order the Child Logger is instantiated, because Child Loggers don't have a handler.

For example, if you instantiated a Child Logger and immediately used `logger.append_keys/remove_keys/set_correlation_id` to update logging state, this might fail if the Parent Logger wasn't instantiated.

In this scenario, you can either ensure any calls manipulating state are only called when a Parent Logger is instantiated (example above), or refrain from using `child=True` parameter altogether.
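A minimal sketch of the child pattern described above (the `payment.py` module name is illustrative):

```python
# payment.py - imported by the module where the parent Logger lives
from aws_lambda_powertools import Logger

# Named "payment.<module>"; shares state bi-directionally with the
# parent Logger(service="payment"), which owns the handler
logger = Logger(service="payment", child=True)

def collect_payment():
    logger.info("Collecting payment")
```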

Sampling debug logs

Use sampling when you want to dynamically change your log level to DEBUG based on a percentage of your concurrent/cold start invocations.

You can use values ranging from 0.0 to 1 (100%) when setting POWERTOOLS_LOGGER_SAMPLE_RATE env var, or sample_rate parameter in Logger.

???+ tip "Tip: When is this useful?" Let's imagine a sudden spike increase in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and while you can adjust log levels it might not happen again.

This feature takes into account transient issues where additional debugging information can be useful.

The sampling decision happens at Logger initialization. This means sampling may happen significantly more or less often than expected, depending on your traffic patterns: for example, a steady low number of invocations and thus few cold starts.

???+ note Open a feature request if you want Logger to calculate sampling for every invocation

=== "collect.py"

```python hl_lines="6 10"
--8<-- "examples/logger/src/logger_reuse.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="3 5 13 16 25"
--8<-- "examples/logger/src/sampling_debug_logs_output.json"
```
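A minimal sketch of enabling sampling via the constructor (equivalent to setting `POWERTOOLS_LOGGER_SAMPLE_RATE="0.1"`):

```python
from aws_lambda_powertools import Logger

# Roughly 10% of cold starts will have their log level switched to DEBUG
logger = Logger(service="payment", sample_rate=0.1)

def handler(event, context):
    logger.debug("Only emitted when this execution environment was sampled")
    logger.info("Collecting payment")
```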

LambdaPowertoolsFormatter

Logger propagates a few formatting configurations to the built-in LambdaPowertoolsFormatter logging formatter.

If you prefer configuring it separately, or you'd want to bring this JSON Formatter to another application, these are the supported settings:

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| json_serializer | function to serialize `obj` to a JSON formatted `str` | `json.dumps` |
| json_deserializer | function to deserialize `str`, `bytes`, `bytearray` containing a JSON document to a Python `obj` | `json.loads` |
| json_default | function to coerce unserializable values, when no custom serializer/deserializer is set | `str` |
| datefmt | string directives (strftime) to format log timestamp | `%Y-%m-%d %H:%M:%S,%F%z`, where `%F` is a custom ms directive |
| use_datetime_directive | format the datefmt timestamps using `datetime`, not `time` (also supports the custom `%F` directive for milliseconds) | `False` |
| utc | set logging timestamp to UTC | `False` |
| log_record_order | set order of log keys when logging | `["level", "location", "message", "timestamp"]` |
| kwargs | key-value to be included in log messages | `None` |
--8<-- "examples/logger/src/powertools_formatter_setup.py"

Migrating from other Loggers

If you're migrating from other Loggers, there are a few key points to be aware of: Service parameter, Inheriting Loggers, Overriding Log records, and Logging exceptions.

The service parameter

Service is what defines the Logger name, including what the Lambda function is responsible for, or is part of (e.g. payment service).

For Logger, the service is the logging key customers can use to search log operations for one or more functions, for example: search for all errors, or messages like X, where service is payment.

Inheriting Loggers

??? tip "Tip: Prefer Logger Reuse feature over inheritance unless strictly necessary, see caveats."

Python Logging hierarchy happens via the dot notation: service, service.child, service.child_2

For inheritance, Logger uses a child=True parameter along with service being the same value across Loggers.

For child Loggers, we introspect the name of your module where Logger(child=True, service="name") is called, and we name your Logger as {service}.{filename}.

???+ danger A common issue when migrating from other Loggers is that service might be defined in the parent Logger (no child param), and not defined in the child Logger:

=== "incorrect_logger_inheritance.py"

```python hl_lines="1 9"
--8<-- "examples/logger/src/logging_inheritance_bad.py"
```

=== "my_other_module.py"

```python hl_lines="1 9"
--8<-- "examples/logger/src/logging_inheritance_module.py"
```

In this case, Logger will register a Logger named payment, and a Logger named service_undefined. The latter isn't inheriting from the parent, and will have no handler, resulting in no message being logged to standard output.

???+ tip This can be fixed by either ensuring both have the service value as payment, or by simply using the environment variable POWERTOOLS_SERVICE_NAME to ensure the service value will be the same across all Loggers when not explicitly set.

Do this instead:

=== "correct_logger_inheritance.py"

```python hl_lines="1 9"
--8<-- "examples/logger/src/logging_inheritance_good.py"
```

=== "my_other_module.py"

```python hl_lines="1 9"
--8<-- "examples/logger/src/logging_inheritance_module.py"
```

Overriding Log records

???+ tip Use datefmt for custom date formats. We honour standard logging library string formats.

Prefer using [datetime string formats](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes){target="_blank"}? Set `use_datetime_directive` at Logger constructor or at [Lambda Powertools Formatter](#lambdapowertoolsformatter).

You might want to continue to use the same date formatting style, or override location to display the package.function_name:line_number as you previously had.

Logger allows you to either change the format or suppress the following keys altogether at the initialization: location, timestamp, level, xray_trace_id.

=== "lambda_handler.py"

```python hl_lines="7 10"
--8<-- "examples/logger/src/overriding_log_records.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="3 5"
--8<-- "examples/logger/src/overriding_log_records_output.json"
```

Reordering log keys position

You can change the order of standard Logger keys or any keys that will be appended later at runtime via the log_record_order parameter.

=== "app.py"

```python hl_lines="5 8"
--8<-- "examples/logger/src/reordering_log_keys.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="3 10"
--8<-- "examples/logger/src/reordering_log_keys_output.json"
```

Setting timestamp to UTC

By default, Logger and the standard logging library emit records using the local timezone timestamp. You can override this behavior via the utc parameter:

=== "app.py"

```python hl_lines="6"
--8<-- "examples/logger/src/setting_utc_timestamp.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="6 13"
--8<-- "examples/logger/src/setting_utc_timestamp_output.json"
```

Custom function for unserializable values

By default, Logger uses str to handle values non-serializable by JSON. You can override this behavior via json_default parameter by passing a Callable:

=== "app.py"

```python hl_lines="6 17"
--8<-- "examples/logger/src/unserializable_values.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="4-6"
--8<-- "examples/logger/src/unserializable_values_output.json"
```

Bring your own handler

By default, Logger uses StreamHandler and logs to standard output. You can override this behavior via logger_handler parameter:

--8<-- "examples/logger/src/bring_your_own_handler.py"

Bring your own formatter

By default, Logger uses LambdaPowertoolsFormatter, which persists its custom structure between non-cold start invocations. There could be scenarios where the existing feature set isn't sufficient for your formatting needs.

???+ info The most common use cases are remapping keys by bringing your existing schema, and redacting sensitive information you know upfront.

For these, you can override the serialize method from LambdaPowertoolsFormatter.

=== "custom_formatter.py"

```python hl_lines="2 5-6 12"
--8<-- "examples/logger/src/bring_your_own_formatter.py"
```

=== "Example CloudWatch Logs excerpt" json hl_lines="6" --8<-- "examples/logger/src/bring_your_own_formatter_output.json"

The log argument is the final log record containing our standard keys, optionally Lambda context keys, and any custom key you might have added via append_keys or the extra parameter.
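A minimal sketch of such a serialize override; the `event` target key name is illustrative:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter

class CustomFormatter(LambdaPowertoolsFormatter):
    def serialize(self, log: dict) -> str:
        """Rename "message" to "event" before the record is serialized."""
        log["event"] = log.pop("message")
        return self.json_serializer(log)

logger = Logger(service="payment", logger_formatter=CustomFormatter())
logger.info("hello")
```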

For exceptional cases where you want to completely replace our formatter logic, you can subclass BasePowertoolsFormatter.

???+ warning You will need to implement append_keys, clear_state, override format, and optionally remove_keys to keep the same feature set Powertools Logger provides. This also means keeping state of logging keys added.

=== "collect.py"

```python hl_lines="6 9 11-12 15 19 23 26 38"
--8<-- "examples/logger/src/bring_your_own_formatter_from_scratch.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="2-4"
--8<-- "examples/logger/src/bring_your_own_formatter_from_scratch_output.json"
```

Bring your own JSON serializer

By default, Logger uses json.dumps and json.loads as serializer and deserializer respectively. There could be scenarios where you are making use of alternative JSON libraries like orjson.

As parameters don't always translate well between them, you can pass any callable that receives a dict and returns a str:

--8<-- "examples/logger/src/bring_your_own_json_serializer.py"

Testing your code

Inject Lambda Context

When unit testing your code that makes use of inject_lambda_context decorator, you need to pass a dummy Lambda Context, or else Logger will fail.

This is a Pytest sample that provides the minimum information necessary for Logger to succeed:

=== "fake_lambda_context_for_logger.py" Note that dataclasses are available in Python 3.7+ only.

```python
--8<-- "examples/logger/src/fake_lambda_context_for_logger.py"
```

=== "fake_lambda_context_for_logger_module.py"

```python
--8<-- "examples/logger/src/fake_lambda_context_for_logger_module.py"
```

???+ tip Check out the built-in Pytest caplog fixture to assert plain log messages.

Pytest live log feature

Pytest Live Log feature duplicates emitted log messages in order to style log statements according to their levels. For this to work, use the POWERTOOLS_LOG_DEDUPLICATION_DISABLED env var:

POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1

???+ warning This feature should be used with care, as it explicitly disables our ability to filter propagated messages to the root logger (if configured).

FAQ

How can I enable boto3 and botocore library logging?

You can enable the botocore and boto3 logs by using the set_stream_logger method. This method will add a stream handler for the given name and level to the logging module. By default, this logs all boto3 messages to stdout.

---8<-- "examples/logger/src/enabling_boto_logging.py"

How can I enable powertools logging for imported libraries?

You can copy the Logger setup to all or a subset of registered external loggers. Use the copy_config_to_registered_loggers method to do this.

By default, all registered loggers will be modified. You can change this behavior by providing include and exclude attributes. You can also provide an optional log_level attribute that external loggers will be configured with.

---8<-- "examples/logger/src/cloning_logger_config.py"

How can I add standard library logging attributes to a log record?

Python standard library log records contain a large set of attributes, but only a few are included in the Powertools log record by default.

If you need additional attributes, they can be included as kwargs when Logger or LambdaPowertoolsFormatter is instantiated, or managed later with the append_keys or remove_keys methods.

=== "collect.py"

```python hl_lines="3 8 10"
---8<-- "examples/logger/src/append_and_remove_keys.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="6 15-16"
---8<-- "examples/logger/src/append_and_remove_keys.json"
```

For log records originating from Powertools Logger, the name attribute will be the same as service; for log records coming from the standard library logger, it will be the name of the logger (i.e. what was used as the name argument to logging.getLogger).

What's the difference between append_keys and extra?

Keys added with append_keys will persist across multiple log messages while keys added via extra will only be available in a given log message operation.

Here's an example where we persist payment_id but not booking_id. Note that payment_id remains in both log messages while booking_id is only available in the first message.

=== "collect.py"

```python hl_lines="16 23"
---8<-- "examples/logger/src/append_keys_vs_extra.py"
```

=== "Example CloudWatch Logs excerpt"

```json hl_lines="9-10 19"
---8<-- "examples/logger/src/append_keys_vs_extra_output.json"
```

How do I aggregate and search Powertools logs across accounts?

As of now, Elasticsearch (ELK) or third-party solutions are best suited to this task. Please refer to this discussion for more details.