perf: optimize upload #554
Conversation
Pull Request Overview
This PR adds a configurable buffer limit to optimize memory usage during uploads and improves memory/storage efficiency across multiple drivers. The main change introduces a max_buffer_limitMB configuration option that defaults to 5% of total memory, can be disabled with 0, or can be set to a custom value in MB.
- Added max_buffer_limitMB configuration with automatic memory-based sizing
- Optimized upload operations for 11 drivers to use streaming readers instead of memory buffers
- Replaced string HTTP method literals with constants for consistency (a small illustrative example follows this list)
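
As a small illustration of the last point (not the PR's actual diff; the URL is a placeholder), this is what replacing raw method strings with the constants from net/http looks like:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// http.MethodPut replaces the string literal "PUT"; a misspelled method string
	// would compile silently, whereas a misspelled constant fails to build.
	req, err := http.NewRequest(http.MethodPut, "https://example.com/upload", nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method) // PUT
}
```
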
Reviewed Changes
Copilot reviewed 43 out of 43 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| internal/conf/config.go | Added MaxBufferLimit configuration field with default -1 |
| internal/bootstrap/config.go | Implemented automatic buffer limit calculation based on system memory (see the sketch after this table) |
| internal/stream/util.go | Added StreamSectionReader for efficient memory management during uploads |
| drivers/*/util.go | Replaced memory buffering with streaming section readers for upload operations |
| pkg/errgroup/errgroup.go | Enhanced error group with ordered execution and lifecycle management |
| pkg/singleflight/var.go | Changed from ErrorGroup to AnyGroup for better type flexibility |
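
A minimal sketch of how such a memory-based default could be computed. `resolveBufferLimit` and the use of gopsutil here are assumptions for illustration, not necessarily the repository's actual code:

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/v3/mem"
)

// resolveBufferLimit is a hypothetical helper illustrating the documented semantics of
// max_buffer_limitMB: -1 (default) auto-sizes to 5% of total memory, 0 disables the
// limit, and any positive value is taken as an explicit size in MB.
func resolveBufferLimit(maxBufferLimitMB int) (int64, error) {
	switch {
	case maxBufferLimitMB > 0:
		return int64(maxBufferLimitMB) * 1024 * 1024, nil
	case maxBufferLimitMB == 0:
		return 0, nil // 0 means no buffer limit
	default: // -1 (or any negative value): 5% of total system memory
		vm, err := mem.VirtualMemory()
		if err != nil {
			return 0, err
		}
		return int64(vm.Total / 20), nil
	}
}

func main() {
	limit, err := resolveBufferLimit(-1)
	if err != nil {
		panic(err)
	}
	fmt.Printf("upload buffer limit: %d MiB\n", limit/1024/1024)
}
```
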
Comments suppressed due to low confidence (1)
internal/stream/util.go:199
- [nitpick] The parameter name 'bufMaxLen' is unclear. Consider renaming to 'maxBufferSize' or 'bufferSizeLimit' to better indicate it represents the maximum buffer size.
func NewStreamSectionReader(file model.FileStreamer, bufMaxLen int) (*StreamSectionReader, error) {
    var buf []byte
    if cache == nil {
        if off != ss.off {
            return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
Copilot AI · Aug 5, 2025
[nitpick] The error message could be more descriptive about why this condition matters. Consider adding context about the streaming nature of the reader and why sequential access is required.
Suggested change:
- return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
+ return nil, fmt.Errorf("stream not cached: sequential access required for streaming reader (requested offset %d, current offset %d). Random access is only supported when the stream is fully cached.", off, ss.off)
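
For context, here is a minimal sketch of the sequential-access pattern this comment refers to. `sectionedStream` and its `GetSectionReader` method are hypothetical names standing in for the PR's StreamSectionReader, not its exact API:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// sectionedStream is a hypothetical stand-in for the PR's StreamSectionReader:
// it hands out one buffered section at a time and only supports sequential offsets.
type sectionedStream struct {
	src io.Reader
	off int64
	buf []byte
}

// GetSectionReader returns a reader over [off, off+length). off must equal the
// current position, mirroring the "sequential access required" check in the diff.
func (ss *sectionedStream) GetSectionReader(off, length int64) (io.Reader, error) {
	if off != ss.off {
		return nil, fmt.Errorf("stream not cached: sequential access required (requested offset %d, current offset %d)", off, ss.off)
	}
	if int64(cap(ss.buf)) < length {
		ss.buf = make([]byte, length) // one reusable buffer instead of one allocation per chunk
	}
	n, err := io.ReadFull(ss.src, ss.buf[:length])
	if err != nil && err != io.ErrUnexpectedEOF {
		return nil, err
	}
	ss.off += int64(n)
	return bytes.NewReader(ss.buf[:n]), nil
}

func main() {
	ss := &sectionedStream{src: strings.NewReader("0123456789abcdef")}
	const chunk = 4
	for off := int64(0); off < 16; off += chunk {
		r, err := ss.GetSectionReader(off, chunk)
		if err != nil {
			panic(err)
		}
		data, _ := io.ReadAll(r)
		fmt.Printf("uploading chunk at offset %d: %q\n", off, data)
	}
}
```
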
Added configuration option max_buffer_limitMB:
- Auto: -1 (default), 5% of total memory
- Off: 0
- Custom: greater than 0, in MB

Optimized memory and storage usage during upload for the following drivers:
115open, 123, 123open, aliyun_open, google_drive, cloudreve, cloudreve_v4, onedrive, onedrive_app, doubao, 189pc