
App Insights


Note: This guide is now obsolete. See the official documentation.

Welcome!

This guide provides instructions for getting started with the Application Insights integration for Azure Functions.

Getting started

If you're making a new function app, all you have to do is check the box to add App Insights and we'll do the rest. If you have an existing function app, the process is still very easy:

  1. Create an Application Insights instance.
    1. Application type should be set to General
    2. Grab the instrumentation key
  2. Update your Function App’s settings
    1. Add App Setting – APPINSIGHTS_INSTRUMENTATIONKEY = {Instrumentation Key}

Once you’ve done this, your Function App should automatically start sending telemetry to Application Insights, without any code changes.

Application Insights + Azure Functions experience overview

Live Stream

If you open your Application Insights resource in the portal, you should see the option for “Live Metrics Stream” in the menu. Click on it and you’ll see a near-live view of what’s coming from your Function App. For executions, it shows the number per second, average duration, and failures per second. It also has information on resource consumption. You can pivot all of these by the “instance” your functions are running on, giving you insight into whether a specific instance is having an issue or all of your Functions are.

Known issues: there are no dependencies being tracked right now, so the middle section is mostly useless for now. If you send your own custom dependencies, they’re not likely to show up here either, since they won’t go through the Live Stream API (you’re normally using a different client today).

Metrics Explorer

This view gives you insight into the metrics coming from your Function App. You can add new charts to your dashboards and set up new alert rules from this page.

Failures

This view gives you insight into what is failing. It has pivots on “Operation” (which are your Functions), Dependencies, and exception messages.

Known issues: Dependencies will be blank unless you add custom dependency metrics.
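
If you prefer a query over the portal view, something along these lines pulls the failed executions. This is only a sketch against the requests table described in the Analytics section below; the 24-hour window is arbitrary.

requests
| where timestamp > ago(24h)
| where success == false               // failed executions only
| summarize failures = count() by name, resultCode
| order by failures desc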

Performance

Shows information on the count, latency, and other details of Function executions. You can customize this view fairly aggressively to make it more useful.
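
If you want exact numbers rather than charts, the same data can be queried in Analytics. Here’s a sketch that computes duration percentiles per function over the last day; the window and percentile values are just examples, and duration is in milliseconds.

requests
| where timestamp > ago(1d)
| summarize percentiles(duration, 50, 95, 99) by name   // duration is in milliseconds
| order by percentile_duration_99 desc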

Servers

Shows resource utilization and throughput per server. Useful for debugging Functions that might be bogging down your underlying resources. Putting the servers back in Serverless. 

Analytics

The analytics portal provides the ability to write custom queries against your data. This is one of the most powerful tools in your toolbox. Currently, the following tables are populated with data from the Functions runtime:

  • Requests – one of these is logged for each execution
  • Exceptions – tracks any exceptions thrown by the runtime
  • Traces – any traces written to context.log or ILogger show up here
  • PerformanceMetrics – Auto collected info about the performance of the servers the functions are running on
  • CustomEvents – Custom events from your functions and anything that the host sees that may or may not be tied to a specific request
  • CustomMetrics – Custom metrics from your functions and general performance and throughput info on your Functions. This is very helpful for high-throughput scenarios where you might not capture every request message (to save costs) but still want a full picture of your throughput, since the host attempts to aggregate these client-side before sending them to Application Insights.

The other tables are from availability tests and client/browser telemetry, which you can also add. The only thing that’s currently missing is dependencies. There are also more metrics and events we’ll add over the course of the preview (based on your generous feedback about what you need to see).

Example:

This will show us the distribution of requests/worker over the last 30 minutes.

requests
| where timestamp > ago(30m) 
| summarize count() by cloud_RoleInstance, bin(timestamp, 1m)
| render timechart
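
Another example, this time against the traces table: pull the recent log output for a single function. The function name "MyFunction" is a placeholder for one of your own; operation_Name corresponds to the function name for runtime-generated telemetry.

traces
| where timestamp > ago(30m)
| where operation_Name == "MyFunction"   // hypothetical function name - substitute your own
| order by timestamp desc
| project timestamp, operation_Id, severityLevel, message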

Scenarios

One way of breaking all these features up is to look at the scenarios they now enable. We’ll talk about them as “Measure”, “Detect”, and “Diagnose”.

Measure

The new Application Insights integration makes it easy to measure the performance and behavior of your application. Understanding how it typically behaves and where your trouble areas are is the first step to running an application successfully in production. For this, some of the best tools are the simplest. The Metrics Explorer, Failures, Performance, and Servers menus let you look at the core pieces of what makes your application tick. For instance, you can click on Servers and see the workload broken down by the underlying server it is running on. This lets you see how your application usually scales, or whether you’re frequently running out of resources. Fortunately, you don’t have to manage resource utilization yourself on the Consumption plan of Azure Functions, but since there are still limits, you should be mindful of how much you’re using, both from a reliability point of view and from a cost-savings point of view.

Detect

Once you know how your application should behave, the next step is to detect when it isn’t behaving that way. On most of the menus, you have the ability to configure alerts. For instance, let’s say I’ve built an application on Azure Functions and I’ve promised my customers an SLA of 3 minutes for any given request to be processed, end to end. I can set up alerts for when my Function crosses various latency boundaries. I could set up an alert for crossing 1 minute that sends an FYI to an alias. I could set up an alert for 2 minutes that calls a Logic App, which calls out to PagerDuty or texts the on-call engineer to investigate. I could set up an alert at 3 minutes that starts an emergency call and proactively updates a status dashboard.
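
Before wiring up those alerts, it’s worth checking how close you typically run to the boundary. Here’s a sketch for the 3-minute SLA example above, counting slow executions in 5-minute buckets; the window is illustrative, and duration is in milliseconds.

requests
| where timestamp > ago(1h)
| where duration > 3 * 60 * 1000       // executions slower than the 3 minute SLA
| summarize slowCount = count() by name, bin(timestamp, 5m)
| render timechart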

Diagnose

This is probably the coolest experience and one of the things we’re most excited about. If you click on the “Analytics” option at the top of your Overview page, you’ll get a new window containing a query editor. This lets you dive into the data and look for potential issues. You can also emit custom events/metrics to help better instrument your application beyond just invocation data and performance metrics.
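
For example, here’s a sketch that pulls the exceptions behind recent failed executions by joining the requests and exceptions tables on operation_Id; the one-hour window is arbitrary.

requests
| where timestamp > ago(1h)
| where success == false
| join kind=inner (exceptions) on operation_Id   // correlate each failed request with its exception
| project timestamp, name, resultCode, type, outerMessage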

TBD SAMPLE QUERIES/SCENARIOS

Reference Docs

host.json settings

The Azure Functions logger emits all logs with a specific category name. This category name can be used to fine-tune the amount of logging that comes from various parts of the Functions host and from functions themselves. This category filtering is configured by adding a “logger” property to the JSON in the function application’s host.json file with the desired configuration:

{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Information",
      "categoryLevels": {
        "Host.Results": "Error",
        "Function": "Error",
        "Host.Aggregator": "Information"
      }
    },
    "aggregator": {
      "batchSize": 1000,
      "flushTimeout": "00:00:30"
    }
  },
  "applicationInsights": {
    "sampling": {
      "isEnabled": true,
      "maxTelemetryItemsPerSecond" : 5
    }
  }
}

There are currently three categories being logged, each configurable with a value from the LogLevel enumeration: https://docs.microsoft.com/en-us/aspnet/core/api/microsoft.extensions.logging.loglevel#Microsoft_Extensions_Logging_LogLevel. Those categories are:

  • Function: These are traces emitted from within a function.
  • Host.Results: These are the individual function result logs. In Application Insights, these appear as Request Telemetry.
  • Host.Aggregator: These are aggregated metrics that the Functions host collects. By default, every 30 seconds or 1000 results, the metrics will be aggregated and sent to Application Insights as Metrics. You will see metrics such as Count, Success Rate, and Average Duration for every function if the Host.Aggregator category is enabled. The aggregator logs metrics at an “Information” level, so a category filter of Warning, Error, Critical, or None will disable aggregated metrics. To change the default aggregation settings, use the “aggregator” object as shown in the sample above.
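
To see what the aggregator is actually emitting, you can query the customMetrics table described earlier in Analytics. This is just a sketch; the metric names you see will vary by function.

customMetrics
| where timestamp > ago(1h)
| summarize sum(value) by name   // one row per aggregated metric name
| order by name asc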

.NET experience

Tracing

We’ve added a new “ILogger” interface for logging which will give more information than the current TraceWriter approach:

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK);
}

Custom telemetry

You can bring in the .NET App Insights SDK and create your own TelemetryClient. There aren’t any conflicts, but here is some advice:

  • Don’t call TrackRequest or use the StartOperation<RequestTelemetry> methods if you don’t want duplicate requests – we track requests automatically.
  • Use the ExecutionContext.InvocationId value as the operation id. You can then correlate info together.
  • You have to set the telemetry.Context.Operation.* items each time your function is started if you want to correlate the items together.

Example:

using System.Net;

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ExecutionContext context, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    DateTime start = DateTime.UtcNow;
           
    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    // Set name to query string or body data
    name = name ?? data?.name;
    
    telemetry.Context.Operation.Id = context.InvocationId.ToString();
    telemetry.Context.Operation.Name = "cs-http";
    if(!String.IsNullOrEmpty(name))
    {
        telemetry.Context.User.Id = name;
    }
    telemetry.TrackEvent("Function called");
    telemetry.TrackMetric("Test Metric", DateTime.Now.Millisecond);
    telemetry.TrackDependency("Test Dependency", "swapi.co/api/planets/1/", start, DateTime.UtcNow - start, true);

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

Node.js experience

Tracing

Just use context.log like normal.

context.log('JavaScript HTTP trigger function processed a request.' + context.invocationId);

Custom telemetry

The App Insights Node.js SDK works fine with your Functions as well, but there are some notes to consider:

  • The operation id is set at the client level right now. Unless you use the generic “track” method, this means you have to reset the operation id on the client right before every track call if you’ve made any callbacks.
    • I’ve sent a PR to the Application Insights SDK which will make this easier. It will be released with their 0.20.0 SDK release.
  • The operation id should be the invocation id, which is available on the context object.

Example:

https://github.com/christopheranderson/azure-functions-app-insights-sample/blob/master/process-item/index.js

Known issues

  • Dependencies don’t show up automatically
  • Default logging levels are probably too verbose - for example, if you're on an MSDN subscription you'll hit your App Insights data cap very quickly.