feat: Add concurrent update handling and optimize the update processing logic of TelegramBotReceiverService #168
Conversation
Claude encountered an error (View job): "I'll analyze this and get back to you."
Pull Request Overview
This PR introduces concurrency control for Telegram bot update processing by implementing a semaphore-based throttling mechanism. The changes ensure that no more than 8 updates are processed simultaneously, preventing resource exhaustion during high-traffic scenarios.
- Adds a SemaphoreSlim to limit concurrent update processing to 8 updates
- Refactors HandleUpdateAsync to fire-and-forget processing tasks while respecting cancellation
- Introduces a new ProcessUpdateAsync method that enforces the semaphore and handles all update processing logic
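As a rough illustration of the pattern described above, here is a minimal sketch of the relevant members of TelegramBotReceiverService; the member names match the PR, but the exact signature of HandleUpdateAsync is an assumption:

```csharp
// Sketch only: throttle update processing with a semaphore capped at 8 slots.
private const int MaxConcurrentUpdates = 8;
private readonly SemaphoreSlim _updateProcessingSemaphore =
    new(MaxConcurrentUpdates, MaxConcurrentUpdates);

private Task HandleUpdateAsync(Update update, CancellationToken cancellationToken)
{
    // Start processing without awaiting so the receiver keeps pulling updates;
    // ProcessUpdateAsync (shown in the diffs below) acquires a semaphore slot,
    // so at most 8 updates are processed concurrently.
    _ = ProcessUpdateAsync(update, cancellationToken);
    return Task.CompletedTask;
}
```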
_botClient = botClient;
_serviceProvider = serviceProvider;
_logger = logger;
_updateProcessingSemaphore = new SemaphoreSlim(MaxConcurrentUpdates, MaxConcurrentUpdates);
    
      
    
Copilot AI commented on Nov 3, 2025
The SemaphoreSlim resource is not being disposed. Since TelegramBotReceiverService inherits from BackgroundService which implements IDisposable, you should override Dispose to properly release the semaphore. Add a Dispose method that calls _updateProcessingSemaphore?.Dispose() to prevent resource leaks.
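A minimal sketch of the suggested override, assuming the field name used in this PR (BackgroundService.Dispose is virtual, so it can be overridden):

```csharp
// Sketch only: release the throttling semaphore when the service is disposed.
public override void Dispose()
{
    _updateProcessingSemaphore?.Dispose();
    base.Dispose();
}
```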
}

private async Task ProcessUpdateAsync(Update update, CancellationToken cancellationToken) {
    await _updateProcessingSemaphore.WaitAsync().ConfigureAwait(false);
    
      
    
Copilot AI commented on Nov 3, 2025
The WaitAsync call does not pass the cancellationToken parameter. If the service is shutting down, this could cause the semaphore wait to hang indefinitely instead of respecting the cancellation request. Change to await _updateProcessingSemaphore.WaitAsync(cancellationToken).ConfigureAwait(false);
Suggested change:
- await _updateProcessingSemaphore.WaitAsync().ConfigureAwait(false);
+ await _updateProcessingSemaphore.WaitAsync(cancellationToken).ConfigureAwait(false);
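Once the token is passed, WaitAsync can throw OperationCanceledException during shutdown. A hedged sketch of how ProcessUpdateAsync could account for that; the early return is an assumption, not something this PR contains:

```csharp
// Sketch only: honor cancellation while waiting for a processing slot.
private async Task ProcessUpdateAsync(Update update, CancellationToken cancellationToken)
{
    try
    {
        await _updateProcessingSemaphore.WaitAsync(cancellationToken).ConfigureAwait(false);
    }
    catch (OperationCanceledException)
    {
        // Shutdown was requested before a slot became free; skip this update.
        return;
    }

    try
    {
        // ... existing update dispatch and error logging as in this PR ...
    }
    finally
    {
        _updateProcessingSemaphore.Release();
    }
}
```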
var processingTask = ProcessUpdateAsync(update, cancellationToken);

if (cancellationToken.IsCancellationRequested) {
    return processingTask;
}

_ = processingTask;
    
      
    
Copilot AI commented on Nov 3, 2025
The fire-and-forget pattern with _ = processingTask; discards exceptions that occur after HandleUpdateAsync returns, making them unobservable. Consider using Task.Run(() => processingTask) or ensuring exceptions are logged within ProcessUpdateAsync. While line 84 does catch and log exceptions, this pattern could be clearer by documenting the intentional discard or using a more explicit fire-and-forget pattern.
Suggested change:
- var processingTask = ProcessUpdateAsync(update, cancellationToken);
- if (cancellationToken.IsCancellationRequested) {
-     return processingTask;
- }
- _ = processingTask;
+ // Fire-and-forget: intentionally not awaiting the task, but ensure exceptions are logged.
+ var processingTask = ProcessUpdateAsync(update, cancellationToken);
+ if (cancellationToken.IsCancellationRequested) {
+     return processingTask;
+ }
+ Task.Run(() => processingTask).ContinueWith(t => {
+     if (t.Exception != null) {
+         _logger.LogError(t.Exception, "Unhandled exception in fire-and-forget update processing for UpdateId {UpdateId}", update.Id);
+     }
+ }, TaskContinuationOptions.OnlyOnFaulted);
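One note on the suggestion above: Task.Run(() => processingTask) does not re-execute the already-started task, it only schedules a wrapper that awaits it, so the continuation could be attached to processingTask directly. A minimal sketch of that simpler variant, using the same member names as this PR:

```csharp
// Sketch only: observe faults on the already-running task without the extra Task.Run hop.
var processingTask = ProcessUpdateAsync(update, cancellationToken);

_ = processingTask.ContinueWith(
    t => _logger.LogError(t.Exception,
        "Unhandled exception processing update {UpdateId}", update.Id),
    CancellationToken.None,
    TaskContinuationOptions.OnlyOnFaulted,
    TaskScheduler.Default);
```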
} catch (Exception ex) {
    _logger.LogError(ex, "Error handling update {UpdateId}", update.Id);
} finally {
    
      
    
Copilot AI commented on Nov 3, 2025
Generic catch clause.
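The comment presumably flags the catch (Exception ex) shown above. A hedged sketch of how the body of ProcessUpdateAsync could narrow it; ApiRequestException comes from the Telegram.Bot package, and whether this project wants that distinction is an assumption:

```csharp
// Sketch only: handle expected failures explicitly, keep the generic catch as a last resort.
try
{
    // ... dispatch the update to its handler ...
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
    // Normal shutdown, not an error.
}
catch (Telegram.Bot.Exceptions.ApiRequestException apiEx)
{
    _logger.LogWarning(apiEx, "Telegram API error while handling update {UpdateId}", update.Id);
}
catch (Exception ex)
{
    _logger.LogError(ex, "Error handling update {UpdateId}", update.Id);
}
finally
{
    _updateProcessingSemaphore.Release();
}
```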
🔍 PR Check Report
📋 Check Overview
🧪 Test Results
📊 Code Quality
📁 Test Artifacts
🔗 Related Links
This report was generated automatically by GitHub Actions.
    
This pull request improves the concurrency and reliability of TelegramBotReceiverService by introducing a semaphore to cap the number of updates processed in parallel, refactoring the update-handling logic to better manage cancellation and error logging, and ensuring resources are released correctly.

Concurrency improvements
- Adds a SemaphoreSlim (_updateProcessingSemaphore) and a MaxConcurrentUpdates constant, limiting concurrent update-processing tasks to 8 to avoid resource exhaustion and improve service stability. [1] [2]

Refactoring and reliability
- HandleUpdateAsync now delegates processing to the new ProcessUpdateAsync method, which is responsible for semaphore acquisition/release and error logging. This change ensures updates are processed within the concurrency limit and that resources are released even when exceptions occur. [1] [2]
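For a rough sense of what the cap means in practice (illustrative numbers, not from this PR): with MaxConcurrentUpdates = 8 and an average per-update processing time of T seconds, steady-state throughput is at most 8 / T updates per second, for example about 32 updates per second if T is around 0.25 s; updates arriving faster than that wait on the semaphore instead of being processed concurrently.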