
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda: #226

Open · jandppw opened this issue Sep 16, 2021 · 49 comments · May be fixed by #230 or #239

@jandppw commented Sep 16, 2021

In our project, the Lambda was last deployed successfully by CI with claudia on 2021-09-14 ~16:17 CET. There had been no issues before that.

The next CI attempt, at 2021-09-15 ~16:49 CET, failed. Retry attempts failed. Manual attempts via the CLI failed. Manual upload, publish, and creation of an alias did work via the console (but no working version was produced, because we did not invest in getting the package right).

Nothing of relevance was changed (always a strong statement, I know). There was no update of claudia or related packages between the two deploys. A retry of the previously successful deploy failed too.

Retries at 2021-09-16 ~10:10 CET failed again.

The reported error is always the same:

updating configuration	lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNN:function:XXXXXXXX
    at Object.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/json.js:52:27)
    at Request.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at callNextListener (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
    at IncomingMessage.onEnd (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/event_listeners.js:313:13)
    at IncomingMessage.emit (events.js:412:35)
    at IncomingMessage.emit (domain.js:470:12)
    at endReadableNT (internal/streams/readable.js:1317:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  code: 'ResourceConflictException',
  time: 2021-09-16T08:19:05.924Z,
  requestId: 'cf98db8a-0457-4f92-9a68-19b37f326508',
  statusCode: 409,
  retryable: false,
  retryDelay: 45.98667333028396
}

But this sometimes happens quickly, before the package is even built, and sometimes only after.

We've felt for a while that claudia does some things twice, first checking and then doing. When the error appears
late, we see several mentions of lambda.setupRequestListeners:

loading Lambda config
loading Lambda config	sts.getCallerIdentity
loading Lambda config	sts.setupRequestListeners
loading Lambda config	sts.optInRegionalEndpoint
loading Lambda config	lambda.getFunctionConfiguration	FunctionName=XXXXXXXX
loading Lambda config	lambda.setupRequestListeners
packaging files
packaging files	npm pack -q /opt/atlassian/pipelines/agent/build
packaging files	npm install -q --no-audit --production
[…]
validating package
validating package	removing optional dependencies
validating package	npm install -q --no-package-lock --no-audit --production --no-optional
[…]
validating package	npm dedupe -q --no-package-lock
updating configuration
updating configuration	lambda.updateFunctionConfiguration	FunctionName=XXXXXXXX
updating configuration	lambda.setupRequestListeners
updating configuration	lambda.updateFunctionConfiguration	FunctionName=XXXXXXXX
updating configuration	lambda.setupRequestListeners
ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:eu-west-1:NNNNNNNN:function:XXXXXXXX
    at Object.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/json.js:52:27)
    at Request.extractError (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
    at callNextListener (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
    at IncomingMessage.onEnd (/opt/atlassian/pipelines/agent/build/node_modules/claudia/node_modules/aws-sdk/lib/event_listeners.js:313:13)
    at IncomingMessage.emit (events.js:412:35)
    at IncomingMessage.emit (domain.js:470:12)
    at endReadableNT (internal/streams/readable.js:1317:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  code: 'ResourceConflictException',
  time: 2021-09-16T08:19:05.924Z,
  requestId: 'cf98db8a-0457-4f92-9a68-19b37f326508',
  statusCode: 409,
  retryable: false,
  retryDelay: 45.98667333028396
}

Resources on the internet are barely any help.

AWS Lambda - Troubleshoot invocation issues in Lambda
mentions ResourceConflictException, but with a different message, and refers to VPCs, which we are not using.

UpdateFunctionConfiguration, PublishVersion, UpdateFunctionCode and others mention more generally:

ResourceConflictException

The resource already exists, or another operation is in progress.

HTTP Status Code: 409

Other resources are no help:

Terraform Error publishing version when lambda using container updates code #17153
(Jan. 2021) mentions a "lock" / "last update status", which we can watch during execution using

> watch aws --profile YYYYYYY --region eu-west-1 lambda get-function-configuration --function-name XXXXXXXX

The output looks like

{
  "FunctionName": "XXXXXXXX",
  "FunctionArn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:function:XXXXXXXX
  "Runtime": "nodejs14.x",
  "Role": "arn:aws:iam::NNNNNNNNNNN:role/execution/lambda-execution-XXXXXXXX",
  "Handler": "lib/service.handler",
  "CodeSize": 76984324,
  "Description": "[…]",
  "Timeout": 30,
  "MemorySize": 2048,
  "LastModified": "2021-09-16T08:19:05.000+0000",
  "CodeSha256": "zQb6Vss0Zlug46HRjA8+bNe0i1TP6NWfrm70hC6zC90=",
  "Version": "$LATEST",
  "Environment": {
    "Variables": {
      "NODE_ENV": "production"
    }
  },
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "9d5f5431-6f2f-4d39-9794-d86778b34446",
  "Layers": [
    {
      "Arn": "arn:aws:lambda:eu-west-1:NNNNNNNNNNN:layer:chrome-aws-lambda:25",
      "CodeSize": 51779390
    }
  ],
  "State": "Active",
  "LastUpdateStatus": "Successful",
  "PackageType": "Zip"
}

most of the time, but we see LastUpdateStatus change for a moment before the error occurs.

Terraform aws_lambda_function ResourceConflictException due to a concurrent update operation #5154
says, in 2018,

OK, I've figured out what's happening here based on a comment here: AWS has some sort of limit on how many concurrent
modifications you can make to a Lambda function.

serverless 'Concurrent update operation' error for multi-function service results in both deployment and rollback failure. #4964
reports the same issue in 2018, and remarks:

I just heard back from AWS Premium Support, and they offered up a solution and the cause of the issue. It's not so
much an issue with too many functions, as it is trying to do too many updates with a single function.

So, this appears to be a timing issue. Claudia should take it slower?

@jandppw commented Sep 16, 2021

In a further attempt to get this working again, I added --aws-delay 10000 --aws-retry 30 to the update command.

No joy. Fails just as fast.

Monitoring with

> watch -n1 aws --profile YYYYYYY --region eu-west-1 lambda get-function-configuration --function-name XXXXXXXX

this time showed no other LastUpdateStatus than "Successful". It might have happened in between polls.

LastModified and RevisionId did change once, but the CodeSha256 and CodeSize did not.

So something (the configuration?) did change, but the code did not.

@jandppw commented Sep 16, 2021

Created branches, which only deploy in CI, for the last 2 successful deploys, which had worked a gazillion times before.

Both experiments now hit the same problem.

Since package-lock.json was not changed, and the deploy happens with npm ci, this can only mean something changed on the AWS side, either in general or for this particular instance.

@choeller

We just ran into the same issue and found the reason here:

https://aws.amazon.com/de/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/

@spencer-aerian

FYI: We upgraded the version of the aws provider in one of our terraform sets to the latest version and that seems to have cleared the problem.

@jandppw commented Sep 24, 2021

@choeller thx for the response

That is consistent with our observations.

> We just ran into the same issue and found the reason here:
>
> https://aws.amazon.com/de/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/

@jandppw commented Sep 24, 2021

> FYI: We upgraded the version of the aws provider in one of our terraform sets to the latest version and that seems to have cleared the problem.

@spencer-aerian that cleared the problem in claudia?!?

Or are you suggesting updating the version of the AWS SDK used inside claudia?

@jandppw commented Sep 24, 2021

Dear Claudiajs,

Thx for creating and maintaining this project. We've been using it for deployment for a number of years now (though not its other features).

This issue is blocking for us. There has been little activity here over the last few months. Can you give us an indication of your intentions for this project?

I would like to be able to determine whether it is worth waiting for further progress, or whether it is more appropriate to look for another long term solution.

@spencer-aerian

> FYI: We upgraded the version of the aws provider in one of our terraform sets to the latest version and that seems to have cleared the problem.
>
> @spencer-aerian that cleared the problem in claudia?!?
>
> Or are you suggesting updating the version of the AWS SDK used inside claudia?

No, a different package but the same error.

@jandppw commented Sep 24, 2021

I just checked out the project, and ran the tests (this takes ages).

There are some errors.

Notably, there are frequent mentions of

error cleaning up TooManyRequestsException: Too Many Requests

so I guess these tests are leaving behind some flotsam in our AWS account now.

Also, there are failures with update / environment variables. As far as I can see, there are more keys in the object returned than expected. AWS_LAMBDA_INITIALIZATION_TYPE seems unexpected.

But the heart of this issue is The operation cannot be performed at this time. An update is in progress …. And that message is exactly what we get with tests that work with layers. All those tests fail with this message.

The update / layer support tests all fail with this message.

I ran the tests both with the AWS-SDK version defined in the project and with the most recent version, and they also fail with the latest AWS-SDK. In other words, updating the AWS SDK is not the answer in this case (it was fairly recent to start with).

@nonnib commented Oct 5, 2021

@jandppw This PR fixes it for me: zappa/Zappa#992

@lehno commented Oct 13, 2021

Getting this problem here, unable to deploy an update of my lambda in production

@gojko (Member) commented Oct 18, 2021

v5.14.0 should fix this

@gojko closed this as completed Oct 18, 2021

@freaker2k7 commented Nov 18, 2021

@gojko - Still getting this on v5.14.0

@james-s-turner commented Nov 18, 2021

@gojko - Also, still getting this on v5.14.0
I am using api-gateway and have a temporary workaround by creating the gateway under a different name (that works just fine).
This just started happening for me today - I successfully updated yesterday. Also, I just upgraded to the latest aws-sdk, v2.1031.0.

@kotlinski

Maybe irrelevant and not applicable, but I solved this by adding the env var to my CloudFormation template instead of during deploy.

@catamphetamine

Unrelated: I don't use this library, but it looks like the code now has to waitFor() the previous function update before running further updates.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html

For example, first the function is created and then its configuration is updated — it has to wait in between. The same goes for updating the function's *.zip file and then reconfiguring it — it has to wait there too.

    log('Updating lambda function');

    // Update lambda code
    await lambdaApi.updateFunctionCode({
      FunctionName: getFunctionName(lambdaConfig, stage),
      ZipFile: zipFile
    }).promise();

    log('Updated code');

    // Has to wait for the function update status to transition to "Updated"
    // until making further re-configuration.
    // https://github.com/claudiajs/claudia/issues/226
    // https://aws.amazon.com/de/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
    await lambdaApi.waitFor('functionUpdated', {
      FunctionName: getFunctionName(lambdaConfig, stage)
    }).promise();

    // Update lambda configuration
    await lambdaApi.updateFunctionConfiguration(lambdaConfiguration).promise();
    log('Updated configuration');

@ruddct commented Nov 18, 2021

Also just started happening to me today, both 5.14.0 and 5.13.0

@aghazy commented Nov 18, 2021

Started happening to me today as well. Updating to the latest ClaudiaJS and AWS-SDK didn't help.
I was, however, able to mitigate the error by adding aws:states:opt-out to the description of the lambda function, or by not passing the environment variables during deploy.

@ppatelcodal commented Nov 19, 2021

Started for us as well. If someone wants a quick fix until it's fixed in ClaudiaJS, use the below around the Claudia commands:

aws lambda update-function-configuration --function-name  ${FuncName} --description "aws:states:opt-out"
claudia update
aws lambda update-function-configuration --function-name  ${FuncName} --description ""

@PriyankaPGowdaa

Upgrading zappa to the latest version worked for me.

@kubamika16

I've been making skills for Alexa for about a month, using the ASK CLI. This morning this error popped up:

[Error]: CliError: The lambda deploy failed for Alexa region "default": ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:454044463163:function:ask-reading-buses-default-default-1636455775666

Now I can't deploy any skill.

@ronl commented Nov 19, 2021

I all of a sudden had the same problem, using the same program I have always used to deploy. I was updating the lambda code followed by a publish version, and would get the error "An update is in progress for resource". I fixed it by calling getFunctionConfiguration and waiting until State was Active and LastUpdateStatus was Successful, and then continuing with the publish. The State was Active right away, but it now takes about 30 seconds or so for LastUpdateStatus to become Successful.
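
For reference, a minimal sketch of that wait loop using the aws-sdk v2 JavaScript client (the region, function name, delay and retry count are placeholders, not my exact code):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({region: 'eu-west-1'});

// Poll getFunctionConfiguration until the previous update has finished,
// i.e. State is "Active" and LastUpdateStatus is "Successful".
async function waitUntilUpdated(functionName, delayMs = 2000, maxAttempts = 30) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const config = await lambda.getFunctionConfiguration({FunctionName: functionName}).promise();
    if (config.State === 'Active' && config.LastUpdateStatus === 'Successful') {
      return config;
    }
    if (config.LastUpdateStatus === 'Failed') {
      throw new Error(`Update failed: ${config.LastUpdateStatusReason}`);
    }
    // Still pending or in progress: wait a bit and poll again.
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`Function ${functionName} did not finish updating in time`);
}

// Usage: call between updateFunctionCode and publishVersion.
// await waitUntilUpdated('my-function');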

@hghodasara

I was hitting the same error on Node 12.
Just updating the Node version to 14 made it work for me.

@hexid commented Nov 22, 2021

It looks like this is still an issue only if updateConfiguration is called with any options.

For me, removing the --runtime argument allowed the function to deploy correctly.

@gojko Something similar to the following should be enough to fix this (just copied the existing wait logic up into the updateConfiguration function)

--- a/src/commands/update.js
+++ b/src/commands/update.js
@@ -145,7 +145,11 @@ module.exports = function update(options, optionalLogger) {
                                        },
                                        () => logger.logStage('waiting for IAM role propagation'),
                                        Promise
-                               );
+                               ).then(result => {
+                                       logger.logStage('waiting for lambda resource allocation');
+                                       return waitUntilNotPending(lambda, lambdaConfig.name, awsDelay, awsRetries)
+                                       .then(() => result);
+                               });
                        }
                },
                cleanup = function () {

@danomatic

This is still broken when using --set-env-from-json

@magoyo commented Dec 1, 2021

This is still broken for me as well. I temporarily bypassed the situation by adding the opt-out clause (aws:states:opt-out) to the Lambda description field, but this trick will only work until December 6th.

@ambigus9 commented Dec 1, 2021

@magoyo Same here. However, I'm using CodeBuild, which is similar. So, is the solution to add --description "aws:states:opt-out" on every update?

@whatwg6 commented Dec 2, 2021

The opt-out is only a temporary plan.

@magoyo commented Dec 2, 2021

@ambigus9 I didn't see a --description option in the update documentation (it was only on create), so I added it to the project description in package.json and also to the description field in the Lambda configuration screen. It worked like a charm, but it's obviously temporary and we don't have much time.

@maltahs commented Dec 2, 2021

I was able to fix this locally as follows:

  1. Following @hexid's suggestion about updating the src/commands/update.js file.
  2. Amending the src/tasks/wait-until-not-pending.js file to include a new line:

await new Promise(resolve => setTimeout(resolve, timeout));

and making waitUntilNotPending async.

So now it looks like this:

const retry = require("oh-no-i-insist");
module.exports = async function waitUntilNotPending(lambda, functionName, timeout, retries) {
	'use strict';
	await new Promise(resolve => setTimeout(resolve, timeout));

	return retry(
		() => {
			return lambda.getFunctionConfiguration({FunctionName: functionName}).promise()
				.then(result => {					
					if (result.state === 'Failed') {
						throw `Lambda resource update failed`;
					}
					if (result.state === 'Pending') {
						throw 'Pending';
					}
				});
		},
		timeout,
		retries,
		failure => failure === 'Pending',
		() => console.log('Lambda function is in Pending state, waiting...'),
		Promise
	);
};

I am not sure if this is the correct approach, but it works without the need to update the description for now.

@ambigus9 commented Dec 2, 2021

@magoyo OK, but I'm using CodeBuild on AWS, so my solution was to add sleep 90, which on the Linux instance means a delay of 90 seconds; a temporary solution.

@magoyo commented Dec 2, 2021

@ambigus9 I don't use CodeBuild, but I understand your approach.

@maltahs Thank you for expanding on @hexid's suggestion. I will be implementing your fix in my code unless this issue is resolved in the next couple of days.

@madve2 linked a pull request Dec 2, 2021 that will close this issue

@madve2 commented Dec 2, 2021

@maltahs Your suggestion worked for us too, thanks! To make things easier for others, I also submitted our hotfix branch as a PR to this project (as you can see above).

@jandppw commented Dec 20, 2021

@gojko, given the feedback above, shouldn't this issue be re-opened?

@gojko reopened this Dec 20, 2021

@chrislim

Are there any updates on a new release that would fix this issue (perhaps by accepting PR #230)?

@randytate

+1 for accepting #230. Resolved the issue for me as well. Thanks @madve2 !!

@cathalelliott1

+1 for accepting #230

@therussiankid92 commented Jan 13, 2022

+1 for accepting #230

We're considering other alternatives to claudia at the moment, and we hope this is merged soon!
Thanks @madve2

@JL102 commented Jan 15, 2022

I'm trying to upload my Node function to Lambda with the vanilla aws-sdk package, and I'm getting the same error for my resource. It must be something to do with my Lambda configuration itself, not Claudia, but I have no idea how to fix it.

const aws = require('aws-sdk');
const lambda = new aws.Lambda({
  region: 'us-east-1'
});

var params = {
  FunctionName: functionName,
  ZipFile: zipBuffer
};

console.log('Uploading function code...');

lambda.updateFunctionCode(params, (err, data) => {
  if (err) cb(err);
  else {
    console.log(`Uploaded function code:\n\t FunctionName=${data.FunctionName}\n\t Role=${data.Role}\n\t CodeSha256=${data.CodeSha256}`);
    
    console.log('Publishing new version...');
    var params = {
      CodeSha256: data.CodeSha256,
      Description: `${time}`,
      FunctionName: functionName,
    };
    lambda.publishVersion(params, (err, data) => {
      if (err) cb(err);
      else {
        // continue

Output:

Uploading function code...
Uploaded function code:
         FunctionName=ScoutradiozPrimaryStack-PrimaryFunction-1N6C440CXO15P
         Role=arn:aws:iam::243452333432:role/ScoutradiozPrimaryStack-LambdaExecutionRole-RAGPEHPPHKZ3
         CodeSha256=r6GcoXGmMp4Zg4V6MqDE/02X5T9PSQ4bTn3oe8VMgHQ=
Publishing new version...
/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/request.js:31
            throw err;
            ^

ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:243452333432:function:ScoutradiozPrimaryStack-PrimaryFunction-1N6C440CXO15P
    at Object.extractError (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/protocol/json.js:51:27)
    at Request.extractError (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
    at Request.callListeners (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/media/drak/DATA/OneDrive/Projects/Programming/ScoringApp-Serverless/node_modules/aws-sdk/lib/request.js:685:12) {
  code: 'ResourceConflictException',
  time: 2022-01-15T18:10:12.031Z,
  requestId: 'f42cf16f-ca70-41a8-95a6-55a9751717e5',
  statusCode: 409,
  retryable: false,
  retryDelay: 0.2627167491526139
}

Any idea what to do to fix it?

EDIT

My bad! I didn't fully read through the comments. After adding @maltahs' waitUntilNotPending function to my own script, it works. I'm going to keep my comment here just in case someone else is confused in a similar way to how I was.
I misunderstood this: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html, thinking that the "update in progress" was something major on the back end. But no, it was just the update that I had started a few ms before I attempted to publish a new version.
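
For anyone else with a plain aws-sdk script like mine: the v2 client also has a built-in 'functionUpdated' waiter, so a rough sketch of the fix (using the same lambda, functionName and cb as in the snippet above; not my exact code) is simply:

// After updateFunctionCode succeeds, wait until the function has finished
// updating (LastUpdateStatus becomes "Successful") before publishing.
lambda.waitFor('functionUpdated', { FunctionName: functionName }, (err) => {
  if (err) return cb(err);
  console.log('Publishing new version...');
  lambda.publishVersion({ FunctionName: functionName }, (err, data) => {
    if (err) return cb(err);
    console.log(`Published version ${data.Version}`);
  });
});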

@krishna-koushik

This needs to be fixed, otherwise we need to think about not using claudiajs anymore.

@cathalelliott1 commented Feb 10, 2022

I unfortunately had to find another solution, now that it's been months waiting on the PR.

@JL102 commented Feb 10, 2022

> I unfortunately had to find another solution, now that it's been months waiting on the PR.

@cathalelliott1 It's actually not too difficult to manage your Lambda functions manually with the aws-sdk package. If you need a reference, feel free to take a look at the scripts I made for my own project: https://github.com/FIRSTTeam102/ScoringApp-Serverless/tree/master/scripts - they are called from NPM scripts as defined in https://github.com/FIRSTTeam102/ScoringApp-Serverless/blob/master/primary/package.json. The code is licensed under GPLv3, so feel free to use it if it helps. It's highly specific to the one project, but you can adapt the same concepts to your own use.

The scripts don't have many comments, but if you are looking into it and want an explainer on how they work, open an issue on our repo and I can answer any questions.

@eyespies commented Mar 11, 2022

We are facing this issue as well, and the code changes are small, so we were able to implement them locally (although it's a pain when you switch workstations and have to apply them again).

So, a question: looking at the version history, it seems like after 2019 there have only been three releases (early 2020, early 2021, and October 2021), and there has been no activity here by the maintainers since the Oct 18 changes. Is ClaudiaJS no longer maintained, or is it just sporadically maintained now?

@magoyo commented Mar 11, 2022

If ClaudiaJS is no longer maintained, then it would be good to know which packages are the easiest to transition to. Does anyone have any good suggestions?

@eyespies

@magoyo Serverless Framework is the only option of which I'm aware. I'm sure there are others, I'm just not up to speed.

@dmackinn commented Apr 7, 2022

For those mentioning this is still open, did you try updating your package to 5.14.1? It looks like @gojko put in a commit (6284972) that contains the suggestions by @hexid above.

@eyespies commented Apr 7, 2022

Thank you @dmackinn, I hadn't used that version yet. It looks like he just patched it 21 days ago (after our commentary), in March 2022, not March 2021 like the date in the README.md says (and indeed there is a follow-up commit where the year was changed from 2021 to 2022). I'll give it a try.

@aarsilv commented Aug 12, 2022

I believe this was caused by AWS sending back State with a capital "S", while Claudia checks for a lower-case state, so it doesn't wait for the state to change correctly. I encountered this while running tests for #239, and included a fix in that PR.
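
In other words, the wait logic only behaves as intended when it reads the capitalised fields that getFunctionConfiguration actually returns. A minimal sketch of the kind of check involved (not necessarily the exact patch in #239):

// `result` is the response of lambda.getFunctionConfiguration({FunctionName: functionName}).promise().
// It contains "State" and "LastUpdateStatus" (capitalised), so a check against a
// lower-case `result.state` never matches and the wait loop falls through immediately.
const checkUpdateFinished = result => {
  if (result.State === 'Failed' || result.LastUpdateStatus === 'Failed') {
    throw 'Lambda resource update failed';
  }
  if (result.State === 'Pending' || result.LastUpdateStatus === 'InProgress') {
    throw 'Pending'; // signals the retry helper to keep polling
  }
};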
