
Add Singleton support for Functions to ensure only one function running at a time #912

Open
mathewc opened this issue Nov 11, 2016 · 103 comments

@mathewc
Member

mathewc commented Nov 11, 2016

We should discuss whether we want to bring this functionality forward for Functions. However, it will be relatively simple for us to do - we just need to expose new properties via function.json for function level singletons. See here for singleton doc.

In addition to Function level singleton support, we might also consider supporting Listener level Singleton as well.

When we do this, we should make sure it works across multiple languages (not just C#)

@lindydonna lindydonna added this to the Next - Triaged milestone Nov 14, 2016
@vladkosarev

We would love to have that. Let us specify singleton in different scopes just like in web jobs.

@mathewc
Member Author

mathewc commented Feb 22, 2017

Thanks @vladkosarev. This issue has been on the back burner, since not many people have been asking for it yet. What specifically is your scenario?

@vladkosarev

Being able to process messages from multiple queues in series as long as they affect the same entity. So if we have two queues and both of them will affect Order/1 (which is encoded in queue messages) we want the two different functions to kick in in series and process messages from those two queues one by one instead of in parallel. The idea here is having serialized access to data storage for a particular entity.
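
The semantics being asked for can be sketched in-process; this is only an illustration of the desired behavior (the actual request is for the same guarantee across scaled-out instances, which requires shared state):

```javascript
// Illustration only: serialize async work per entity key (e.g. "Order/1").
// This gives the desired semantics within one process; the feature request
// is for the same guarantee across scaled-out Function instances.
const chains = new Map(); // entity key -> tail of that key's promise chain

function runSerialized(key, task) {
  const prev = chains.get(key) || Promise.resolve();
  // Chain the new task after the previous one; run it even if the
  // previous task failed.
  const next = prev.then(task, task);
  chains.set(key, next.catch(() => {})); // keep the chain alive on failure
  return next;
}
```

Two messages targeting Order/1 would then run one after the other, regardless of which function received them, while messages for other entities proceed in parallel.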

@ajukraine

@vladkosarev sounds like the actor model. I would love to see how efficient it is using scoped singletons with Azure Functions. Another option is to use Service Fabric, but that's a huge piece of infrastructure to be handled. I prefer to start with something as lightweight as Azure Functions.

@vladkosarev

That is exactly what I'm trying to achieve: an actor-like model using Azure Functions. Now that they announced that Functions will support the Serverless Framework this might not be as important, but I'd still like this ability in the 'raw'. Service Fabric is great, but it's still not serverless. I want to pay for compute resources, not for VMs. Consumption Functions + actors on top is my path to nirvana. Obviously you can't have in-memory state, etc., but it would still be a good start on the path to a properly, infinitely scalable architecture.

@WonderPanda

@lindydonna Hey I'm just wondering if there are any updates in regards to when we might expect support for singleton functionality with Azure Functions. Based on the conversation here it seemed like it might be low hanging fruit. It would be extremely helpful to be able to specify singleton behavior in function.json

@tmakin

tmakin commented Apr 30, 2017

I have a use case where this would be very helpful. I have long-running background processing tasks which are triggered via a storage queue. In order to prevent spikes in the database load, I need to ensure that there is not more than one queue item being processed at a time.

My current solution is to use a timer trigger with a short interval to manually poll the queue, but a singleton flag for queue triggers would be a much tidier option.

@ericleigh007

Yes, I didn't know what this was from the title. Perhaps rename the report to something more descriptive.

My case is just about the same. Would like to guarantee only one queue function running at a time. NOTE with the current time limitations we cannot just wait.

-thanks Donna
-e

@lindydonna lindydonna changed the title Add Singleton support for Functions Add Singleton support for Functions to ensure only one function running at a time Jun 22, 2017
@alohaninja

Seems like you could support locking on your own - you just need shared storage backing it - SQL Azure, Azure Blob/Table/Queue, Redis, etc. Would be great just to add a [SingletonAttribute] to our Azure Functions like webjobs has. host.json has configurable options for singletons, but I don't know if it supports Azure functions (AF) versus Web Jobs. Could just add an environment key which has the storage connection string etc, and assign it in our host.json.
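
The DIY approach described here boils down to lease semantics over shared storage. A minimal in-memory model of those semantics (in Azure the store would be a blob lease or a table entity with an ETag; `createLeaseStore` and its method names are hypothetical):

```javascript
// In-memory model of lease-based locking, for illustration only.
// A real implementation would back this with shared storage
// (e.g. an Azure blob lease), not a local Map.
function createLeaseStore() {
  const leases = new Map(); // key -> { owner, expires }
  return {
    // Returns true if `owner` now holds the lease on `key`.
    tryAcquire(key, owner, ttlMs, now = Date.now()) {
      const current = leases.get(key);
      if (current && current.expires > now && current.owner !== owner) {
        return false; // someone else holds an unexpired lease
      }
      leases.set(key, { owner, expires: now + ttlMs });
      return true;
    },
    release(key, owner) {
      const current = leases.get(key);
      if (current && current.owner === owner) leases.delete(key);
    },
  };
}
```

The TTL matters: if a function instance dies mid-execution, the lease expires and another instance can take over, which is also how the WebJobs SDK's blob-lease singleton behaves.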

@alexjebens

@lindydonna any updates on a timeline for this feature?

I have a couple of Projects where I would like to switch from WebJobs to Azure Functions as well as some new Projects that need serial processing for queues, which require this functionality.

From my understanding, the way this works for WebJobs is a lock blob with a lease in the storage account. Azure Functions appear to already use this mechanism for the Timer trigger.

@alexjebens

alexjebens commented Aug 3, 2017

@alohaninja Supporting locking on our own is not trivial. For example, in a Queue-Trigger function you could only throw an exception so that the message is put back in the queue; this may, however, lead to the message being marked as poison and therefore lost if you cannot process it in time. Additionally, the function will still be invoked, leading to extra costs.

According to this issue there is currently no support for the Singleton Attribute in Azure Functions and the host.json options are therefore moot.

Possible Workarounds:

  • Timer-Trigger Functions appear to run as Singletons. They produce a blob in the storage account under "locks/" and create a lease. This requires implementing the input on your own.
  • ServiceBus-Trigger Functions with serviceBus.maxConcurrentCalls set to 1 in host.json. This is however a global setting, and I would like to use this on a per-function basis.
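
For the second workaround, the host.json shape (for the v1 host schema current at the time) would presumably look something like this; treat it as a sketch rather than confirmed guidance:

```json
{
  "serviceBus": {
    "maxConcurrentCalls": 1
  }
}
```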

@lindydonna is it possible to confirm that Timer-Trigger Functions run as singletons?

@alohaninja

@aboersch - came here because we are using Timer-Trigger functions and they can run simultaneously, I was looking for a way to ensure you cannot have concurrent timer events - seems to occur during app restart (DLLs change in /bin, app restart via portal).

Configured function.json to ignore trigger firing @ restarts via "runOnStartup": false, but we still see concurrent executions if a function was running before the cycle. Seems like the locking doesn't account well for function updates (restart events) - it fires the trigger even though an existing process is already running. To verify this - use kudu process explorer and you'll see multiple processes for the same function.

For now - I just use kudu process explorer to kill any existing processes before making any app updates or restarting the function host - would be great if the portal allowed you to kill the process for a running azure function.

@alexjebens

alexjebens commented Aug 3, 2017

@alohaninja I am aware of the troubles with the restart. I usually stop the app before updating because of it, however all my functions are designed to be interrupted at any time.

If you look into your storage account you will see a container called azure-webjob-hosts. There will be several folders here {hostname}-{random-number} which contain a host.Functions.{function-name}.listener file for each timer-trigger function. This file is being used to lock with a blob lease.

Every time your app is (re)started a new folder is created ({hostname}-{random-number}). Since the new folder is empty there is no blob and no lease to check for, hence the parallel execution.

This should perhaps be a separate issue though.

@rossdargan

I could really do with this feature too. The issue I have is that I need to call an external API to get more data any time a message gets added to a queue. I tend to get around 300+ messages over a few minutes every 8 hours. The issue I'm having is Azure spins up 16 servers to handle the spike in messages (which is cool...) however this is utterly destroying the server I'm calling.

I've set batch size and singleton in the host.json but that appears to have no impact (setting batch size just results in more servers being started).

@BowserKingKoopa

BowserKingKoopa commented Sep 1, 2017

I'm in desperate need of this as well. I have a bunch of webjobs I'd like to move to Azure Functions, but I can't because they need to run as singletons. Some are queue based and need to be run in order. Others call external apis that are very sensitive about how rapidly I call them.

@rossdargan

To work around this I made a buffer queue and use a scheduled function to see how many items are in the processing queue and move a few items over depending on the count.

@WonderPanda

@BowserKingKoopa @rossdargan I haven't had time to experiment with it yet, but in the app settings there's an option for WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT. Sounds like it's still being worked on, but using this in conjunction with batch size might help you achieve singleton semantics in the meantime.
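
Concretely, that combination would be the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting (set to 1 to keep the app on a single instance) together with the storage-queue batch settings in host.json (v1 schema); a sketch, not confirmed guidance:

```json
{
  "queues": {
    "batchSize": 1,
    "newBatchThreshold": 0
  }
}
```

With batchSize 1 and newBatchThreshold 0, a single instance fetches one queue message at a time, while the app setting stops the scale controller from adding instances.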

@bwwilliam

Any updates on this please? Has anyone found a way to enforce that Functions run as a single instance at most?

@keenfann

We also have a need for this. Both singleton and scale out limitation at function level.

@ajukraine

ajukraine commented Oct 11, 2017

I guess, if you can't wait for it to be implemented in the SDK, you can already use it in Durable Azure Functions (https://docs.microsoft.com/en-us/azure/azure-functions/durable-functions-overview)

See section Stateful singletons

@AntonChernysh

+1.
Use case: Simple slack bot event handler, should only send message once.

@WonderPanda

@AntonChernysh I think you might be confused about singleton behavior... There is nothing today preventing you from building a Slack bot that responds to messages only once.

@AntonChernysh

AntonChernysh commented Oct 12, 2017

@WonderPanda looks like my function is scaling and running multiple times, so I get a reply as many times as functions were started. I have the same function (Python 3.6) running on AWS Lambda with no problem.
I'd appreciate it if you could advise something to make the function run only once.

@WonderPanda

@AntonChernysh What triggers your function?

@bwwilliam

Any idea when singleton can be made available please? I'm implementing CQRS pattern with functions. My event publisher needs to be singleton so it can process the events in the right order/sequence. Thanks

@AntonChernysh

AntonChernysh commented Oct 12, 2017

@WonderPanda post message to function's HTTPs endpoint. Can we continue in skype? anton.chernysh

@Schaemelhout

I have 1 function app (on the consumption plan) with 3 queue-triggered functions.
I need a way to make sure only 1 queue message is processed at a time (from each queue, so the separate functions can run in parallel). I don't care about the order in which the messages are processed.

Is this possible? I see a lot of different options here like [Singleton], WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, batchSize but since this issue is still open, I guess there is no official guidance on this?

I am using JavaScript but I am willing to switch to C# if it is easier to support this behavior.

@Tron1978

Dropping by in 2020 to see if this has been worked on yet...

@joaoantunes

Still no news about this subject?

@withinboredom

#912 (comment) -- goes in-depth, not sure what you're looking for

@Tron1978

WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT was what I used to limit parallelism.

@NgSekLong

Would like to add to this discussion, I saw this document today:
https://docs.microsoft.com/en-us/azure/logic-apps/send-related-messages-sequential-convoy

which seems to suggest that we can use an Azure Function with a Service Bus session trigger to achieve sequential processing of messages.

I have set up the following:

  • Node 12
  • Azure Function Premium
  • Service Bus Queue, session enabled, other default
  • No WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
  • Other setting default

And a basic Function that will take a message out, wait 10 seconds, and end.

function.json

{
  "bindings": [
    {
      "queueName": "my-queue",
      "connection": "QueueRetriver",
      "name": "message",
      "type": "serviceBusTrigger",
      "direction": "in",
      "isSessionsEnabled": true
    }
  ]
}

index.js

const helpers = require("../common/helpers");

module.exports = async function (context, message) {
    // Session ID: context.bindingData.messageSession.sessionId
    const sessionId = context.bindingData.messageSession.sessionId;

    await helpers.sleep(10000); // this is just to wait 10 seconds

    return;
};

/* 
module.exports.sleep = (milliseconds) => {
    return new Promise(resolve => setTimeout(resolve, milliseconds));
}
*/

From my testing result, messages with the same Session ID are processed one by one sequentially!

Any idea if there are any drawbacks to my design that I'm not aware of? This seems like a very viable method for sequential processing. Cheers!

@paulbatum
Member

Yes I believe this is a viable approach. Messages sharing a session ID are processed in order, sequentially.

One thing I'll note - I am not sure whether this interacts correctly with the FUNCTIONS_WORKER_PROCESS_COUNT setting. So to be safe, I would make sure that setting is absent (or set to 1, the default).
https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings#functions_worker_process_count

@rohitashwachaks

Guys...
Any update on this?
I tried adding the [Singleton] attribute on my Functions, but nope, orchestration duplicates are still being created!
This is what my code looks like

string orchestrationIdentifier = Guid.NewGuid().ToString();
response += $"Orchestration Identifier: {orchestrationIdentifier}\n";

DurableOrchestrationStatus existingInstance = await starter.GetStatusAsync(instanceId);
if (existingInstance == null)
{
    await starter.StartNewAsync("QueueTriggerOrchestrator", instanceId, orchestrationIdentifier);
    response += "Started New Instance";
}
else if (existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Completed ||
         existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Failed ||
         existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Canceled ||
         existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Terminated)
{
    //await starter.PurgeInstanceHistoryAsync(instanceId);
    await starter.StartNewAsync("QueueTriggerOrchestrator", instanceId, orchestrationIdentifier);
    response += $"Restarted Earlier Instance. Ended with RuntimeStatus: {existingInstance.RuntimeStatus}";
}
else
{
    response = $"FeedId: {obj.FeedId} Status: {obj.Status}.\nInstance of OrchestrationClient with ID '{instanceId}' already running. Cannot process request\n";
}

The instanceId is a constant in this case and this snippet is in a QueueTrigger Function.

The problem is that the QueueTrigger (despite batch size being set to 1) scales up and starts consuming multiple messages simultaneously.
Now, both of these instances reach the GetStatusAsync call at the same time, both see existingInstance == null, and both try to start the orchestrator.

This is NUTS!
How can you have two instances with the same instance ID? Shouldn't there be a clash or something in the InstanceHistory table, causing one to fail? But nothing of that sort happens when the two instances are exactly simultaneous.

I added the [Singleton] tag to both the QueueTriggered Function and the Orchestrator. But it doesn't work.

@stap123

stap123 commented Oct 22, 2020

@rohitashwachaks Does limiting the scale-out not work for you? You can do it in the Azure Portal to prevent the function app scaling to more than 1 instance.

https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale#limit-scale-out

@rohitashwachaks

@stap123 I have several other functions in the same Functions App. Setting the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1 will limit all the constituent functions.
I do not wish to do that (scalability was the primary reason why we shifted to Azure). Is there a way to restrict JUST THAT ONE function from scaling up?

@stap123

stap123 commented Oct 23, 2020

@rohitashwachaks Yeah we wanted the same thing. We just created a second function app and put the functions that were limited to 1 instance in that app and the other functions with default scaling in another app.

@UmairB

UmairB commented Nov 18, 2020

@rohitashwachaks according to the docs: https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-singletons?tabs=csharp that should not be happening. Specifically, the note:

There is a potential race condition in this sample. If two instances of HttpStartSingle execute concurrently, both function calls will report success, but only one orchestration instance will actually start. Depending on your requirements, this may have undesirable side effects. For this reason, it is important to ensure that no two requests can execute this trigger function concurrently.

@ragzzy

ragzzy commented Sep 29, 2021

checking in on a long open issue/discussion... was there a solution for this?

@dev-bre

dev-bre commented Oct 29, 2021

Is there any update on this?
The Consumption plan is great, but if the function executes code which is meant to run from only one source, that plan is unmanageable and forces a move to an App Service plan, which is way more expensive.

@josejohny3

is there any solution for the singleton issue @rohitashwachaks?

@rohitashwachaks

Hi everyone.
Sorry for being MIA.
I joined a masters course and left .NET development.

I remember Microsoft fixed this race condition in their latest package version.
So, I just upgraded my version and it worked.

Bit of a bummer, but I don't remember much else.

@jaltin

jaltin commented Mar 4, 2022

Hi @paulbatum, @fabiocav, @jeffhollan, @mathewc, @lindydonna and anyone else at MS that has engaged in this discussion throughout the years.

I am sorry to include you all, but this discussion is over 5 years in the making. Hopefully one of you is the right person to respond?

It is such a pity that it is not possible to handle this easily in Azure functions (especially on consumption plan). Is there any chance that this will ever get implemented properly?

What I feel is needed

There are A LOT of comments in this thread and it has kind of gone in different directions with a spread of focus. Therefore I will repeat my need, to clarify what I think would be a GREAT improvement (also the need expressed by a lot of other users in this issue):

  • Running a function app on consumption plan

  • Functions are triggered by Azure storage queue input bindings

  • Different functions in the app have different scalability/parallel execution needs. Here are hypothetical examples:

    • Function_A

      • An image resizing func that saves a resized image in a new blob
      • Has no real limit on execution/load
        • can be executing multiple times in parallel on one instance
        • can scale out to as many more instances as needed (relying on Azure auto instance scale-out)
    • Function_B

      • Calling an external API to fetch some data, then storing that in a blob
      • The external API can't handle more than a certain load (for this example let's say 3)
        • can be executing a maximum of 3 in parallel (or else external API will choke)
        • can scale out to multiple instances, but it is not resource heavy so can also run all 3 executions on the same instance (= Azure infrastructure can decide what is most suitable)
    • Function_C

      • Calling an external API to fetch some data, then doing some heavy processing of that data
      • The external API can't handle more than a certain load (for this example let's say 2)
        • can be executing a maximum of 2 in parallel (or else external API will choke)
        • is resource heavy so can not run more than one execution at a time per instance
    • Function_D

      • Doing an operation that MUST be called as a singleton, e.g. updating an account balance in a database record
        • Must ensure only one execution happens at a time

There are options to control some of these things in host.json, but that will affect all functions in the app, so is too blunt of a tool.

Also, the suggestion to have separate function apps to control execution has been mentioned, but that becomes very hard if you have more functions, with different config needs (code reuse/duplication, increased deployment aspect of creating many func apps etc.). Therefore not a good alternative in my mind.

An example of how this could be handled on a function level could be in function.json

Example for Function_A case

{
  // Other function.json values...

  "maxParallelExecutionsPerInstance": 0, // 0 = no set limit
  "maxParallelExecutionsAcrossInstances": 0,
  "maxInstances": 0
}

Example for Function_B case

{
  // Other function.json values...

  "maxParallelExecutionsPerInstance": 0,
  "maxParallelExecutionsAcrossInstances": 3,
  "maxInstances": 0
}

Example for Function_C case

{
  // Other function.json values...

  "maxParallelExecutionsPerInstance": 1,
  "maxParallelExecutionsAcrossInstances": 2
}

Example for Function_D case

{
  // Other function.json values...

  "singleton": true
}

These are just conceptual ideas but you get the idea.

Thanks!

@bartlannoeye

@jaltin I hope this gets picked up some day indeed. I've been bugging support for over a month now to achieve your Function_C case for .NET (on an event grid binding). Every other language has FUNCTIONS_WORKER_PROCESS_COUNT to work with.

@jaltin

jaltin commented Oct 6, 2022

Hi all wonderful people at Microsoft and other contributors who work on this repository (@paulbatum, @fabiocav, @mathewc, @NickCraver, @kaibocai, @heppersonmicrosoft, @pragnagopa, @siddharth-ms, @liliankasem, @alrod, @balag0 among others)!

I'm posting on this issue one more time, hoping that one of you can have a look at my previous comment #912 (comment) and respond on whether you are considering this (it was, after all, created by one of your colleagues, @mathewc).

This is becoming more and more of a PITA for us to manage in scenarios where we have functions needing strict singleton controls, and at the moment it forces us to create separate function projects all over the place.

Many of the posters here would loooooove to see something done to make our lives easier, so I'm really hoping you can consider looking into it and try to address it in line with my suggested ideas.

Thanks!

@andrei-tofan

It would be great if something like this were added; I don't think it's normal to have to do workarounds for functionality that should be there.

@mr-davidc

I too would love to have this kind of functionality to ensure Timer functions in particular only ever have a single instance running at a time...

@drdamour

drdamour commented Apr 6, 2023

I find it very problematic that the docs https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#singleton link to this issue to explain singleton behaviour... and this issue is 100 comments, none of which do a great job of explaining any of this very well.

The best I found regarding singleton is https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#singleton-attribute but this is NOT in the context of Functions, just the underlying WebJobs... which I know are related.

The most useful comment in this thread is #912 (comment), which is buried in the GitHub UX.

The relevant info most likely needs to make it into the docs as its own page.

@MelGrubb

We have several queued functions, and most of them should be running in parallel as much as possible. There is one function in particular that can't, though. It communicates with an outside service that handles concurrency... poorly. I need just that one function to be a singleton, and I thought that the Singleton attribute was going to save me. Then I read here that it doesn't work on functions. But this comment says they DO work, albeit with billing implications.

That's not what I'm seeing in my testing, though. I queued up multiple messages to my problem function and my logs show four of them starting before the first one finishes. It's a queue-triggered function, and I've tried the Singleton attribute with and without specifying a scope. It seems to just rip through the queue no matter what.

I can't just change the host.json to a batch size of one because that will kill the performance of all the other functions that are perfectly happy running in parallel. What can I do?

@drdamour

FYI, the way I've achieved singleton success is leveraging durable entities. My triggers queue the work via SignalEntityAsync. Entities will process one signal at a time.

If you need an HTTP-triggered singleton with a response, you have to get tricky and loop GetEntityState() on the entity for work completion (and when queueing, probably throw in an ID), and if it's going to take a while (10 mins), respond with a 301 to a different HTTP trigger that keeps looking for the work to be done or loops again.

It's definitely no [Singleton], but it's been very successful.

@bakes82

bakes82 commented Sep 13, 2023

This doesn't seem to work for "ServiceBus" triggers; it's not auto-extending the Service Bus lock. While it is running in singleton mode, it's pointless if the lock isn't being extended on the messages waiting.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace TestFunctions;

public static class ServiceBusQueueTrigger1
{
    [FunctionName("ServiceBusQueueTrigger1")]
    [Singleton(Mode = SingletonMode.Function)]
    public static async Task RunAsync([ServiceBusTrigger("testqueue", Connection = "ServiceBusConnection")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
        await Task.Delay(30000);
        log.LogInformation($"C# waited 30sec processed message: {myQueueItem}");
    }
}

[2023-09-13T21:02:13.168Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=58f691b8-d49d-49ec-887a-22eb940eb7ba)
[2023-09-13T21:02:13.168Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=0afba31e-6f8b-4a48-8c11-5e5d095d810d)
[2023-09-13T21:02:13.171Z] Trigger Details: MessageId: 83a96849befa409a82a81718d8ac0656, SequenceNumber: 12, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:03:08.6450000+00:00, LockedUntilUtc: 2023-09-13T21:04:12.5360000+00:00, SessionId: (null)
[2023-09-13T21:02:13.171Z] Trigger Details: MessageId: 5bc5995342fe40ca8714568729165ae6, SequenceNumber: 11, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:03:03.7220000+00:00, LockedUntilUtc: 2023-09-13T21:04:12.5360000+00:00, SessionId: (null)
[2023-09-13T21:02:13.172Z] Trigger Details: MessageId: 42e4ba60d32844f6b18d5699442ac0d0, SequenceNumber: 9, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:02:54.8630000+00:00, LockedUntilUtc: 2023-09-13T21:04:12.5360000+00:00, SessionId: (null)
[2023-09-13T21:02:13.174Z] Trigger Details: MessageId: 42f0a372ca164624bdb714c0502e89f5, SequenceNumber: 10, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:02:59.0660000+00:00, LockedUntilUtc: 2023-09-13T21:04:12.5360000+00:00, SessionId: (null)
[2023-09-13T21:02:13.259Z] C# ServiceBus queue trigger function processed message: 2
[2023-09-13T21:02:43.271Z] C# waited 30sec processed message: 2
[2023-09-13T21:02:43.326Z] C# ServiceBus queue trigger function processed message: 4
[2023-09-13T21:02:43.331Z] Executed 'ServiceBusQueueTrigger1' (Succeeded, Id=0afba31e-6f8b-4a48-8c11-5e5d095d810d, Duration=30220ms)
[2023-09-13T21:02:56.207Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=c012fa20-fe1d-476f-b8a1-2284ec920537)
[2023-09-13T21:02:56.209Z] Trigger Details: MessageId: 083163eaf0d64b7591f906569c87f1d2, SequenceNumber: 13, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:03:55.7240000+00:00, LockedUntilUtc: 2023-09-13T21:04:55.7400000+00:00, SessionId: (null)
[2023-09-13T21:02:59.727Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=cccf9108-34e2-4634-9382-752220b598e1)
[2023-09-13T21:02:59.729Z] Trigger Details: MessageId: 5a4b38d22a294826a5cb99a31d7213cd, SequenceNumber: 14, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:03:59.2570000+00:00, LockedUntilUtc: 2023-09-13T21:04:59.2570000+00:00, SessionId: (null)
[2023-09-13T21:03:02.479Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=3848f97d-6784-490c-b848-12cd1583c457)
[2023-09-13T21:03:02.486Z] Trigger Details: MessageId: b59d47a878ff4ca1a49ac89a8d85fa1d, SequenceNumber: 15, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:04:01.9920000+00:00, LockedUntilUtc: 2023-09-13T21:05:02.0070000+00:00, SessionId: (null)
[2023-09-13T21:03:06.040Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=d99f977c-582c-439b-a0ae-2d649ca4e7fc)
[2023-09-13T21:03:06.048Z] Trigger Details: MessageId: 1fc2e570decf4150b0f01a215645bea0, SequenceNumber: 16, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:04:05.5540000+00:00, LockedUntilUtc: 2023-09-13T21:05:05.5700000+00:00, SessionId: (null)
[2023-09-13T21:03:08.590Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=bce075e0-8e49-42b6-b271-28d40cd9fe8d)
[2023-09-13T21:03:08.592Z] Trigger Details: MessageId: b86abfa1f9b64a6682e5509c1cc1f60e, SequenceNumber: 17, DeliveryCount: 1, EnqueuedTimeUtc: 2023-09-13T21:04:08.1010000+00:00, LockedUntilUtc: 2023-09-13T21:05:08.1170000+00:00, SessionId: (null)
[2023-09-13T21:03:12.926Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=e434b194-9322-4625-b7d4-29a978625268)
[2023-09-13T21:03:12.927Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=840b2cf2-ace7-42c4-9c43-4fa06a240955)
[2023-09-13T21:03:12.931Z] Trigger Details: MessageId: 42e4ba60d32844f6b18d5699442ac0d0, SequenceNumber: 9, DeliveryCount: 2, EnqueuedTimeUtc: 2023-09-13T21:02:54.8630000+00:00, LockedUntilUtc: 2023-09-13T21:05:12.4450000+00:00, SessionId: (null)
[2023-09-13T21:03:12.932Z] Trigger Details: MessageId: 83a96849befa409a82a81718d8ac0656, SequenceNumber: 12, DeliveryCount: 2, EnqueuedTimeUtc: 2023-09-13T21:03:08.6450000+00:00, LockedUntilUtc: 2023-09-13T21:05:12.4610000+00:00, SessionId: (null)
[2023-09-13T21:03:12.926Z] Executing 'ServiceBusQueueTrigger1' (Reason='(null)', Id=eb0d75a0-a57d-4ebc-8f20-e1a19dfcc62c)
[2023-09-13T21:03:12.939Z] Trigger Details: MessageId: 5bc5995342fe40ca8714568729165ae6, SequenceNumber: 11, DeliveryCount: 2, EnqueuedTimeUtc: 2023-09-13T21:03:03.7220000+00:00, LockedUntilUtc: 2023-09-13T21:05:12.4450000+00:00, SessionId: (null)
[2023-09-13T21:03:13.362Z] C# waited 30sec processed message: 4
[2023-09-13T21:03:13.368Z] Executed 'ServiceBusQueueTrigger1' (Succeeded, Id=ca9ee9b5-77db-424e-99ee-28d14bf822c2, Duration=60269ms)
[2023-09-13T21:03:13.466Z] C# ServiceBus queue trigger function processed message: 1
[2023-09-13T21:03:13.605Z] Message processing error (Action=Complete, EntityPath=testqueue, Endpoint=MMCSharedServiceBus.servicebus.windows.net)
[2023-09-13T21:03:13.607Z] Azure.Messaging.ServiceBus: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. For more information please see https://aka.ms/ServiceBusExceptions . Reference:ced3aedb-c4da-4f07-9924-c49f4b566262, TrackingId:c87a4b560000048f082601b265022390_G8_B25, SystemTracker:G8:444878248:amqps://mmcsharedservicebus.servicebus.windows.net/-390fda99;0:5:6:source(address:/testqueue,filter:[]), Timestamp:2023-09-13T21:04:13 (MessageLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
[2023-09-13T21:03:43.471Z] C# waited 30sec processed message: 1
[2023-09-13T21:03:43.476Z] Executed 'ServiceBusQueueTrigger1' (Succeeded, Id=58f691b8-d49d-49ec-887a-22eb940eb7ba, Duration=90377ms)
[2023-09-13T21:03:43.591Z] C# ServiceBus queue trigger function processed message: 3
[2023-09-13T21:03:43.779Z] Message processing error (Action=Complete, EntityPath=testqueue, Endpoint=MMCSharedServiceBus.servicebus.windows.net)
[2023-09-13T21:03:43.785Z] Azure.Messaging.ServiceBus: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. For more information please see https://aka.ms/ServiceBusExceptions . Reference:ce4299f7-3876-4074-8cf6-39f5f1a56e93, TrackingId:c87a4b560000048f082601b265022390_G8_B25, SystemTracker:G8:444878248:amqps://mmcsharedservicebus.servicebus.windows.net/-390fda99;0:5:6:source(address:/testqueue,filter:[]), Timestamp:2023-09-13T21:04:43 (MessageLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.

@BorgPrincess

Wow.
A year since the last new contribution to the discussion, and more than two years since anybody from Microsoft answered.

Dear MS devs (@paulbatum, @fabiocav, @mathewc, @NickCraver, @kaibocai, @heppersonmicrosoft, @pragnagopa, @siddharth-ms, @liliankasem, @alrod, @balag0 among others) - have another look at this - also for non-.Net FnApps?
Pretty please? 🥺
