
[SOL-38491] queueMaxMsgRedelivery not working #13

Closed
FWinkler79 opened this issue Jul 6, 2020 · 2 comments · Fixed by #46
Labels: bug (Something isn't working), tracked (Internally tracked by Solace's internal issue tracking system)
Milestone: SCSt 3.0.0
@FWinkler79 commented Jul 6, 2020

My use case is the following:

  • A producer sends a message.
  • The consumer processes the message (by calling another service); in case of an error, no (Spring) retry should take place. Instead, the message should immediately be re-queued and re-delivered.
  • If the message has been re-queued 3 times and still cannot be processed, it should no longer be re-queued or re-delivered.

I am using the following configuration:

```yaml
spring:
  cloud:
    stream:
      default:
        group: defaultConsumers
        consumer:
          concurrency: 3
      bindings:
        jobTriggers:
          group: jobTriggerConsumers
          # Effectively disable internal retry of failed message processing attempts.
          # Instead we will re-queue them in the Solace broker.
          # See: https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.2.1.RELEASE/spring-cloud-stream.html#_re_queue_failed_messages
          consumer:
            concurrency: 3
            max-attempts: 1
      # Make sure failed messages are requeued in the Solace broker.
      # See: https://github.com/SolaceProducts/solace-spring-cloud/tree/master/solace-spring-cloud-starters/solace-spring-cloud-stream-starter#failed-message-error-handling
      solace:
        bindings:
          jobTriggers:
            consumer:
              dmq-max-msg-redelivery: 3   # has no effect
              queue-max-msg-redelivery: 3 # has no effect
              requeue-rejected: true
```

with the following @StreamListener code:

```java
@StreamListener(JobTriggerEventConsumerBinding.INPUT)
protected void onJobTriggerEvent(org.springframework.messaging.Message<JobExecutionTriggerEvent> message,
                                 MessageHeaders headers,
                                 @Validated JobExecutionTriggerEvent event) throws InterruptedException {
    throw new RuntimeException("This error should re-queue the message. After three re-queue attempts no more re-queuing should happen");
}
```

Expected behaviour: Message processing should not be retried (i.e. via RetryTemplate); instead, the message should be re-queued and immediately re-delivered to the @StreamListener. After 3 re-delivery attempts (spring.cloud.stream.solace.bindings.jobTriggers.consumer.queue-max-msg-redelivery: 3), the message should no longer be re-queued and no further re-delivery should take place.

Observed behaviour: The message gets re-delivered indefinitely, regardless of whether the property is written as queue-max-msg-redelivery or queueMaxMsgRedelivery. The dmq-max-msg-redelivery property has no effect either.

This is a crucial defect, as it leaves the application without any way to recover from failure scenarios if it relies on message re-delivery as described above.

Mrc0113 (Contributor) commented Jul 6, 2020

Thanks for opening this ticket @FWinkler79! You did indeed find a bug.

It looks like the queue-max-msg-redelivery setting itself works properly. If you look at the created queue (make sure it's not already created before running your app), you can see that its "Maximum Redelivery Count" matches the value set in this config.
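As a quick way to verify that, here is a minimal sketch that reads the queue's provisioned maxRedeliveryCount over SEMP v2. The host, credentials, VPN, and queue name below are all assumptions; substitute the values for your environment (in particular, the queue name is a hypothetical stand-in for whatever the binder actually provisioned):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch: query a queue's provisioned "Maximum Redelivery Count" via SEMP v2.
// Host, credentials, VPN, and queue name are assumptions -- adjust for your broker.
public class QueueRedeliveryCheck {
    public static void main(String[] args) throws Exception {
        String host = "http://localhost:8080";            // assumed SEMP v2 endpoint
        String vpn = "default";                           // assumed message VPN
        String queue = "jobTriggers.jobTriggerConsumers"; // hypothetical binder-provisioned queue name
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/SEMP/v2/config/msgVpns/" + vpn
                        + "/queues/" + queue + "?select=maxRedeliveryCount"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expect "maxRedeliveryCount": 3 in the JSON body if the binding config took effect.
        System.out.println(response.body());
    }
}
```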

The bug seems to be in the re-queue logic that is enabled by requeue-rejected: true. Instead of unbinding the flow and essentially "nacking" the message so that the redelivery count gets incremented, the binder is actually republishing a new message, which is a bug. You can see that here:

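As an aside, to make the distinction concrete: the following is a plain JCSMP sketch (assumed connection settings and a hypothetical queue name, not the binder code referenced above) of requeue-by-rebinding. Receiving a message without acknowledging it and then closing the flow makes the broker redeliver the same message with its redelivery count incremented; republishing instead creates a brand-new message whose count starts back at zero.

```java
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.ConsumerFlowProperties;
import com.solacesystems.jcsmp.FlowReceiver;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;
import com.solacesystems.jcsmp.Queue;

// Plain JCSMP sketch (not the binder's code): "requeue" a failed message by
// closing the flow without acking, so the broker redelivers it with an
// incremented redelivery count, letting max-redelivery / DMQ settings apply.
public class RebindRequeueSketch {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555"); // assumed
        props.setProperty(JCSMPProperties.VPN_NAME, "default");           // assumed
        props.setProperty(JCSMPProperties.USERNAME, "default");           // assumed
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        Queue queue = JCSMPFactory.onlyInstance().createQueue("jobTriggers.jobTriggerConsumers"); // hypothetical
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(queue);
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT); // manual (client) acks

        FlowReceiver flow = session.createFlow(null, flowProps); // null listener => synchronous receive
        flow.start();
        BytesXMLMessage msg = flow.receive(5000);
        if (msg != null) {
            try {
                process(msg);
                msg.ackMessage(); // success: settle the message
            } catch (RuntimeException e) {
                // Failure: do NOT ack and do NOT republish. Closing the flow
                // "nacks" the in-flight message; the broker redelivers it with
                // an incremented redelivery count.
                flow.close();
            }
        }
        session.closeSession();
    }

    private static void process(BytesXMLMessage msg) {
        throw new RuntimeException("simulated processing failure");
    }
}
```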
Heads up @Nephery! We can discuss this tomorrow as well :)

@FWinkler79 as an FYI, I ran out of time today but I'm hoping to take a closer look at your other ticket tomorrow.

Nephery added the bug label Jul 15, 2020
Nephery added this to the SCSt 3.0.0 milestone Nov 17, 2020
Nephery changed the title from "queueMaxMsgRedelivery not working" to "[SOL-38491] queueMaxMsgRedelivery not working" Nov 27, 2020
Nephery added the tracked label Nov 27, 2020
Nephery added a commit that referenced this issue Jan 14, 2021
… error-handling (#46)

* Renamed "Binder DMQ" to "Error Queue"
* Fix requeuing logic (closes #13) 
  * requeuing is no longer supported for anonymous consumer groups (i.e. temporary queues) since these cannot be rebound.
* Add support for manual acknowledgments (closes #14)
* Removed the message discard error handling strategy from defined consumer groups. The new default for these will be requeuing.
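As a side note on the manual-acknowledgment support mentioned in the commit above (#14): the following is a hedged sketch of what a consumer could look like after this change, using Spring Integration's AcknowledgmentCallback header to requeue failed messages explicitly. Exact names and defaults may differ by binder version.

```java
import org.springframework.integration.StaticMessageHeaderAccessor;
import org.springframework.integration.acks.AckUtils;
import org.springframework.integration.acks.AcknowledgmentCallback;
import org.springframework.messaging.Message;

// Hedged sketch: manually settle messages instead of relying on requeue-rejected.
public class ManualAckListener {
    public void onJobTriggerEvent(Message<?> message) {
        AcknowledgmentCallback ack = StaticMessageHeaderAccessor.getAcknowledgmentCallback(message);
        ack.noAutoAck(); // take ownership of the acknowledgment
        try {
            process(message);
            AckUtils.accept(ack);  // success: remove the message from the queue
        } catch (RuntimeException e) {
            AckUtils.requeue(ack); // failure: broker redelivers and increments the redelivery count
        }
    }

    private void process(Message<?> message) {
        // business logic; throws on failure
    }
}
```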
Nephery mentioned this issue Feb 24, 2021
Nephery added a commit that referenced this issue Feb 24, 2021
### Global
* Major version bump to `2.0.0`
* Upgrade to `spring-cloud` `2020.0.1`
* Upgrade to `spring-boot` `2.4.3`
* Upgrade to `sol-jcsmp` `10.10.0`
* Upgrade to `sol-jms` `10.10.0`
* Fix Java 11 build (#38)
* Migrate CI from Travis to Github Actions
* Use Maven Failsafe plugin to run integration tests

### Solace Spring Cloud Stream Binder
* Major version bump to `3.0.0`
* Add Solace Spring Message Headers (#50)
  * Add `SolaceHeaders` and `SolaceBinderHeaders`
  * Bidirectionally map `SolaceHeaders` to JCSMP properties so message handlers can read/write Solace properties
* Renamed "Binder DMQ" to "Error Queue"
* Fix requeuing logic (#13)
  * requeuing is no longer supported for anonymous consumer groups (i.e. temporary queues) since these cannot be rebound.
* Add support for manual acknowledgments (#14)
* Removed the message discard error handling strategy from defined consumer groups. The new default for these will be requeuing.
* Add support for wildcard destinations (#3)
* Add consumer config options to omit the group name from the consumer group queue or error queue names (#28)
* Add `errorQueueNameOverride` consumer config option to override the generated error queue name with a custom config-provided one (#28)
* Add `headerExclusions` producer config option to exclude headers from published messages
* Add `nonserializableHeaderConvertToString` producer config option to convert non-serializable headers to strings
* Override the default DMQ eligibility when publishing to be true (#9)
* Remove `solace_raw_message` error-channel message header in favor of `sourceData` header
* Fix JMS interoperability
* Fix `null` payload error handling (#54)
* Fix error handling failures (#36)
* Add `errorMsgDmqEligibility` consumer config option to override failed input messages' DMQ eligibility property when republishing to error queues
* Refactor default generated queue names to be more similar to Solace's shared subscriptions feature
* Fix asynchronous publishing exceptions to be sent to error channels (#34)
* Properly construct the `ErrorMessage` for publisher failures
* Configure client info provider to display Solace SCSt Binder release details
* Reduce warning levels from WARN to INFO when provisioning is disabled or when subscriptions already exist on queues
* Document ACL Profile tips when using error queues (#60)

### Solace Spring Cloud Connector
* Upgrade to `spring-cloud-connectors` `2.2.13.RELEASE` (version managed separately from spring cloud BOM)
Nephery (Collaborator) commented Feb 25, 2021

Closed with #75

Nephery closed this as completed Feb 25, 2021