Conversation

@ILoveScratch2
Member

@ILoveScratch2 ILoveScratch2 commented Sep 21, 2025

Description / 描述

Add the MediaFire driver.
Cherry-picked from alist#9319 alist#9321.

Originally written by D@' 3z K!7, modified by @ILoveScratch2.

Motivation and Context / 背景

Add the MediaFire driver, adapted to work with OpenList.

How Has This Been Tested? / 测试

Tested; MediaFire mounts and works successfully.

Checklist / 检查清单

  • I have read the CONTRIBUTING document.
  • I have formatted my code with go fmt or prettier.
  • I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
  • I have requested review from relevant code authors using the "Request review" feature when applicable.
  • I have updated the repository accordingly (if needed).

Da3zKi7 and others added 6 commits September 14, 2025 19:11
- Implement complete MediaFire storage driver
- Add authentication via session_token and cookie
- Support all core operations: List, Get, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include thumbnail generation for media files
- Handle MediaFire's resumable upload API with multi-unit transfers
- Add proper error handling and progress reporting

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
- Implement automatic session token renewal every 6-9 minutes
- Add validation for required SessionToken and Cookie fields in Init
- Handle session expiration by calling renewToken on validation failure
- Prevent storage failures due to MediaFire session timeouts

Fixes session closure issues that occur after server restarts or extended periods.

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
Signed-off-by: ILoveScratch <ilovescratch@foxmail.com>
Member

@KirCute KirCute left a comment


It feels like most of the problems come from the original author not being familiar with the codebase. Fixing the d.cron and instant-upload (hash-based quick upload) issues is enough; the rest doesn't matter much.

@ILoveScratch2
Member Author

It feels like most of the problems come from the original author not being familiar with the codebase. Fixing the d.cron and instant-upload (hash-based quick upload) issues is enough; the rest doesn't matter much.

(Actually it's still basically all the original author's code; I only tweaked it for compatibility.)

…m processing

- Remove forced caching to *os.File type
- Support generic model.File interface for better flexibility
- Improve upload efficiency by avoiding unnecessary file conversions
- Fix return type to use model.Object instead of model.ObjThumb
- Ensure all API methods properly use context for rate limiting
- Fix context parameter usage in getDirectDownloadLink, getActionToken, getFileByHash
- Maintain consistent rate limiting across all MediaFire API calls
…humb

- Change MakeDir, Rename, Copy methods to return model.Object instead of model.ObjThumb
- Remove empty Thumbnail fields where not meaningful
- Keep ObjThumb only for fileToObj (List operations) which provides actual thumbnail URLs
- Improve code consistency and reduce unnecessary wrapper objects
- Add checkAPIResult helper function to reduce code duplication
- Replace repetitive MediaFire API error checks with centralized function
- Maintain specific error messages for unique cases (token, upload, search)
- Improve code maintainability and consistency
- Add null check for existingFile to prevent potential issues
- Improve error handling in quick upload - continue normal upload if search fails
- Add detailed comments explaining quick upload logic
- Optimize getExistingFileInfo with clearer fallback strategy
- Ensure upload reliability even when file search encounters issues
@j2rong4cn j2rong4cn dismissed stale reviews from Suyunmeng and KirCute via 13a1ec8 September 24, 2025 11:39
@j2rong4cn
Member

The upload can be optimized further: retries, multi-threading. For details, refer to other code that uses stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext.

@Da3zKi7
Contributor

Da3zKi7 commented Sep 26, 2025

@ILoveScratch2 Hello, what is pending?

@j2rong4cn the upload is already optimized with small chunks and resumable logic, all handled by the original code and MediaFire's API.

@Da3zKi7
Contributor

Da3zKi7 commented Sep 26, 2025

@j2rong4cn I'll check stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext; I never used them here.

@KirCute
Member

KirCute commented Sep 26, 2025

@j2rong4cn I'll check stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext; I never used them here.

Using file.GetHash().GetHash(utils.SHA256) may allow you to obtain the SHA256 of the file. If the file already has the required hash value, we recommend avoiding fully caching the file stream locally. Instead, as the frontend provides data segment by segment, OpenList should transfer it piece by piece to the driver. Please keep this in mind during your check. Thank you for your contributions to AList and OpenList.

@Da3zKi7
Contributor

Da3zKi7 commented Sep 26, 2025

@j2rong4cn I'll check stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext; I never used them here.

Using file.GetHash().GetHash(utils.SHA256) may allow you to obtain the SHA256 of the file. If the file already has the required hash value, we recommend avoiding fully caching the file stream locally. Instead, as the frontend provides data segment by segment, OpenList should transfer it piece by piece to the driver. Please keep this in mind during your check. Thank you for your contributions to AList and OpenList.

Got it, thanks. I'm not a master in Golang)) but I try to help.

So...

If stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext are committed, is the driver ready to be merged?

@KirCute
Member

KirCute commented Sep 26, 2025

If stream.NewStreamSectionReader and errgroup.NewOrderedGroupWithContext are committed, is the driver ready to be merged?

Right.

@Da3zKi7
Contributor

Da3zKi7 commented Sep 26, 2025

@KirCute I will try, do not expect too much 😂😂😂

@KirCute
Member

KirCute commented Sep 26, 2025

@KirCute I will try, do not expect too much 😂😂😂

ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
if err != nil {
	return err
}
uploadNums := (size + chunkSize - 1) / chunkSize
thread := min(int(uploadNums), d.UploadThread)
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
	retry.Attempts(3),
	retry.Delay(time.Second),
	retry.DelayType(retry.BackOffDelay))
for partIndex := range uploadNums {
	if utils.IsCanceled(uploadCtx) {
		break
	}
	partIndex := partIndex
	partNumber := partIndex + 1 // part numbers start at 1
	offset := partIndex * chunkSize
	size := min(chunkSize, size-offset)
	var reader *stream.SectionReader
	var rateLimitedRd io.Reader
	sliceMD5 := ""
	// multipart form buffer
	b := bytes.NewBuffer(make([]byte, 0, 2048))
	threadG.GoWithLifecycle(errgroup.Lifecycle{
		Before: func(ctx context.Context) error {
			if reader == nil {
				var err error
				// one reader per part
				reader, err = ss.GetSectionReader(offset, size)
				if err != nil {
					return err
				}
				// compute this part's MD5
				sliceMD5, err = utils.HashReader(utils.MD5, reader)
				if err != nil {
					return err
				}
			}
			return nil
		},
		Do: func(ctx context.Context) error {
			// Rewind the part reader: HashReader (or a previous failed
			// attempt) has already read it to EOF.
			reader.Seek(0, io.SeekStart)
			b.Reset()
			w := multipart.NewWriter(b)
			// add the form fields
			err = w.WriteField("preuploadID", createResp.Data.PreuploadID)
			if err != nil {
				return err
			}
			err = w.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
			if err != nil {
				return err
			}
			err = w.WriteField("sliceMD5", sliceMD5)
			if err != nil {
				return err
			}
			// write the file content
			_, err = w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
			if err != nil {
				return err
			}
			headSize := b.Len()
			err = w.Close()
			if err != nil {
				return err
			}
			head := bytes.NewReader(b.Bytes()[:headSize])
			tail := bytes.NewReader(b.Bytes()[headSize:])
			rateLimitedRd = driver.NewLimitedUploadStream(ctx, io.MultiReader(head, reader, tail))
			// build the request and set its headers
			req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", rateLimitedRd)
			if err != nil {
				return err
			}
			req.Header.Add("Authorization", "Bearer "+d.AccessToken)
			req.Header.Add("Content-Type", w.FormDataContentType())
			req.Header.Add("Platform", "open_platform")
			res, err := base.HttpClient.Do(req)
			if err != nil {
				return err
			}
			defer res.Body.Close()
			if res.StatusCode != 200 {
				return fmt.Errorf("slice %d upload failed, status code: %d", partNumber, res.StatusCode)
			}
			var resp BaseResp
			respBody, err := io.ReadAll(res.Body)
			if err != nil {
				return err
			}
			err = json.Unmarshal(respBody, &resp)
			if err != nil {
				return err
			}
			if resp.Code != 0 {
				return fmt.Errorf("slice %d upload failed: %s", partNumber, resp.Message)
			}
			progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
			up(progress)
			return nil
		},
		After: func(err error) {
			ss.FreeSectionReader(reader)
		},
	})
}
if err := threadG.Wait(); err != nil {
	return err
}

You can refer to the upload logic of the 123open driver. Don't stress about the code; if there are any issues, we'll fix them then.

@Da3zKi7
Contributor

Da3zKi7 commented Sep 26, 2025

@KirCute Thanks mate, I was already working on it; I hope I caught you in time.

Da3zKi7 added a commit to Da3zKi7/OpenList that referenced this pull request Sep 26, 2025
- Implement complete MediaFire storage driver with session token authentication
- Support all core operations: List, Get, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include thumbnail generation for media files
- Handle MediaFire's resumable upload with intelligent and multi-unit transfers
- Support concurrent chunk uploads using errgroup.NewOrderedGroupWithContext, with split-file caching for large files
- Optimize memory usage with adaptive buffer sizing (10 MB-100 MB by default)
- Include rate limiting and retry logic for API requests
- Add proper error handling and progress reporting
- Handle MediaFire's bitmap-based resumable upload protocol

Closes PR OpenListTeam#1322
Da3zKi7 added three more commits to Da3zKi7/OpenList referencing this pull request Sep 26, 2025, each with the same message as above; the final commit additionally adds:
- Implement automatic session token renewal
@KirCute
Member

KirCute commented Sep 29, 2025

I noticed that all API calls eventually funnel into d.apiRequest, which already calls d.limiter.Wait, so I removed the outermost WaitLimit.

ILoveScratch2 and others added 3 commits September 29, 2025 20:40
* feat(drivers): add MediaFire driver with concurrent upload support

- Implement complete MediaFire storage driver with session token authentication
- Support all core operations: List, Get, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include thumbnail generation for media files
- Handle MediaFire's resumable upload with intelligent and multi-unit transfers
- Support concurrent chunk uploads using errgroup.NewOrderedGroupWithContext, with split-file caching for large files
- Optimize memory usage with adaptive buffer sizing (10 MB-100 MB by default)
- Include rate limiting and retry logic for API requests
- Add proper error handling and progress reporting
- Handle MediaFire's bitmap-based resumable upload protocol

Closes PR #1322

* feat(stream): add DiscardSection method to StreamSectionReader for skipping data

* feat(mediafire): refactor resumableUpload logic for improved upload handling and error management

* fix(mediafire): stop cron job and clear action token in Drop method

* .

* fix(mediafire): optimize buffer sizing logic in uploadUnits method

* fix(docs): remove duplicate MediaFire

* fix(mediafire): revert 'optimization'; large files should not be fully cached.

---------

Signed-off-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
Co-authored-by: Da3zKi7 <da3zki7@duck.com>
Co-authored-by: D@' 3z K!7 <99719341+Da3zKi7@users.noreply.github.com>
Co-authored-by: j2rong4cn <j2rong@qq.com>
Co-authored-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
* feat(drivers): add MediaFire driver with concurrent upload support (same message and change list as the previous commit)

---------

Signed-off-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
Signed-off-by: D@' 3z K!7 <99719341+Da3zKi7@users.noreply.github.com>
Co-authored-by: j2rong4cn <j2rong@qq.com>
Co-authored-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
Member

@KirCute KirCute left a comment


I'm not sure what's going on with the multi-threaded upload part now; everything else looks fine.

@j2rong4cn
Member

I'm not sure what's going on with the multi-threaded upload part now; everything else looks fine.

What are the symptoms?

@KirCute
Member

KirCute commented Sep 30, 2025

What are the symptoms?

No symptoms; I just don't know how far your discussion got. If you've tested this version and it works, let's merge it.

@ILoveScratch2 ILoveScratch2 merged commit 189cebe into main Sep 30, 2025
12 checks passed
@ILoveScratch2 ILoveScratch2 deleted the mediafire branch September 30, 2025 13:55