Load all data before training for pool_size=-1 #59
Closed
emailweixu wants to merge 2 commits into PaddlePaddle:master from emailweixu:PyDataProvider2_loadAll
Conversation
This is to ensure there is no problem with shuffling when only partial data is loaded.
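For context, pool_size controls how many samples the data provider buffers before shuffling: with a finite pool, only samples that sit in the buffer at the same time can be reordered, so the shuffle is only local, whereas pool_size=-1 loads the whole dataset before training and the shuffle covers every sample. The sketch below illustrates that difference in plain Python; it is a generic illustration under these assumptions, not the actual PyDataProvider2 implementation, and the name pooled_shuffle is made up for this example.

```python
import random

def pooled_shuffle(samples, pool_size):
    """Shuffle a sample stream through a bounded pool: only samples that
    sit in the pool at the same time can change order relative to each other."""
    pool = []
    for sample in samples:
        pool.append(sample)
        if pool_size > 0 and len(pool) >= pool_size:
            random.shuffle(pool)
            for s in pool:
                yield s
            pool = []
    # Flush the remainder; with pool_size=-1 the pool holds the entire
    # dataset, so the shuffle covers all samples at once.
    random.shuffle(pool)
    for s in pool:
        yield s

# Finite pool: sample 0 is always emitted before sample 7.
print(list(pooled_shuffle(range(8), pool_size=4)))
# pool_size=-1: all data is loaded first, so any ordering is possible.
print(list(pooled_shuffle(range(8), pool_size=-1)))
```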
qingqing01 pushed a commit to qingqing01/Paddle that referenced this pull request on Apr 30, 2020
XiaoguangHu01 pushed a commit that referenced this pull request on Sep 18, 2021
* 1. add interface for fft; 2. add data type predicate; 3. fix paddle.roll. * add fft c2c cufft kernel * implement argument checking & op calling parts for fft_c2c and fftn_c2c * add operator and opmaker definitions * only register float and double for cpu. * add common code for implementing FFT, add pocketfft as a dependency * add fft c2c cufft kernel function * fix bugs in python interface * add support for c2r, r2c operators, op makers, kernels and kernel functors. * test and fix bugs * 1. fft_c2c function: add support for onesided=False; 2. add complex<float>, complex<double> support for concat and flip. * 1. fft: fix python api bugs; 2. shape_op: add support for complex data types. * fft c2c cufft kernel done with complie and link * fix shape_op, add mkl placeholder * remove mkl * complete fft c2c in gpu * 1. implement mkl-based fft, FFTC2CFunctor and common function exec_fft; 2. change the design, add input and output typename as template parameter for all FFTFunctors, update pocketfft-based implementation. * complete fft c2c on gpu in ND * complete fft c2c on gpu in ND * complete fft c2c backward in ND * fix MKL-based implementation * Add frame op and CPU/GPU kernels. * Add frame op forward unittest. * Add frame op forward unittest. * Remove axis parameter in FrameFunctor. * Add frame op grad CPU/GPU kernels and unittest. * Add frame op grad CPU/GPU kernels and unittest. * Update doc string. * Update after review and remove librosa requirement in unittest. * Update grad kernel. * add fft_c2r op * Remove data allocation in TransCompute function. * add fft r2c onesided with cpu(pocketfft/mkl) and gpu * last fft c2r functor * fix C2R and R2C for cufft, becase the direction is not an option in these cases. * add fft r2c onesided with cpu(pocketfft/mkl) and gpu * fix bugs in python APIs * fix fft_c2r grad kernal * fix bugs in python APIs * add cuda fft c2r grad kernal functor * clean code * fix fft_c2r python API * fill fft r2c result with conjugate symmetry (#19) fill fft r2c result with conjugate symmetry * add placeholder for unittests (#24) * simple parameterize test function by auto generate test case from parm list (#25) * miscellaneous fixes for python APIs (#26) * add placeholder for unittests * resize fft inputs before computation is n or s is provided. * add complex kernels for pad and pad_grad * simplify argument checking. * add type promotion * add int to float or complex promotion * fix output data type for static mode * fix fft's input dtype dispatch, import fft to paddle * fix typos in axes checking (#27) * fix typos in axes checking * fix argument checking (#28) * fix argument checking * Add C2R Python layer normal and abnormal use cases (#29) * documents and single case * test c2r case * New C2R Python layer normal and exception use cases * complete rfft,rfft2,rfftn,ihfft,ihfft2,ihfftn unittest and doc string (#30) * Documentation of the common interfaces of c2r and c2c (#31) * Documentation of the common interfaces of c2r and c2c * clean c++ code (#32) * clean code * Add numpy-based implementation of spectral ops (#33) * add numpy reference implementation of spectral ops * Add fft_c2r numpy based implementation for unittest. (#34) * add fft_c2r numpy implementation * Add deframe op and stft/istft api. (#23) * Add frame api * Add deframe op and kernels. * Add stft and istft apis. * Add deframe api. Update stft and istft apis. * Fix bug in frame_from_librosa function when input dims >= 3 * Rename deframe to overlap_add. * Update istft. * Update after code review. 
* Add overlap_add op and stft/istft api unittest (#35) * Add overlap_add op unittest. * Register complex kernels of squeeze/unsquuze op. * Add stft/istft api unittest. * Add unittest for fft helper functions (#36) * add unittests for fft helper functions. add complex kernel for roll op. * complete static graph unittest for all public api (#37) * Unittest of op with FFT C2C, C2R and r2c added (#38) * documents and single case * test c2r case * New C2R Python layer normal and exception use cases * Documentation of the common interfaces of c2r and c2c * Unittest of op with FFT C2C, C2R and r2c added Co-authored-by: lijiaqi <lijiaqi0612@163.com> * add fft related options to CMakeLists.txt * fix typos and clean code (#39) * fix invisible character in mkl branch and fix error in error message * clean code: remove docstring from unittest for signal.py. * always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. (#40) * always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. * fix CI Errors: numpy dtype comparison, thrust when cuda is not available (#41) 1. always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. 2. promote floating point tensor to complex tensor ior fft_c2c and fft_c2r; 3. fix unittest to catch UnImplementedError and RuntimeError; 4. fix compile error by avoid using thrust when cuda is not available. 5. fix sample code, use paddle.fft instead of paddle.tensor.fft * remove inclusion of thrust, add __all__ list for fft (#42) * Add api doc and update unittest. (#43) * Add doc strings. * Update overlap_add op unittest * fix MKL-based FFT implementation (#44) * fix MKL-based FFT implementation, MKL CDFT's FORWARD DOMAIN is always REAL for R2C and C2R * remove code for debug (#45) * use dynload for cufft (#46) * use std::ptrdiff_t as datatype of stride (instead of int64_t) to avoid argument mismatch on some platforms. * add complex support for fill_zeros_like * use dynload for cufft * Update doc and unittest. (#47) * Add doc of frame op and overlap_add op. * Update unittest. * use dynload for cufft (#48) 1. use dynload for cufft 2. fix unittest; 3. temporarily disable Rocm. * fix conflicts and merge upstream (#49) fix conflicts and merge upstream * fix compile error: only link dyload_cuda when cuda is available (#50) * fix compile error: only link dyload_cuda when cuda is available * fix dynload for cufft on windows (#51) 1. fix dynload for cufft on windows; 2. fix unittests. * add NOMINMAX to compile on windows (#52) add NOMINMAX to compile on windows * explicitly specify capture mode for lambdas (#55) explicitly specify capture mode for lambdas * fix fft sample (#53) * fix fft sample * update scipy and numpy version for unittests of fft (#56) update scipy and numpy version for unittests of fft * Add static graph unittests of frame and overlap_add api. (#57) * Remove cache of cuFFT & Disable ONEMKL (#59) 1. replace numpy.fft with scipy.fft as numpy<1.20 not support ortho norm 2. remove cache of cufft plans; 3. enhance error checking. 4. default WITH_ONEMKL to OFF Co-authored-by: jeff41404 <jeff41404@gmail.com> Co-authored-by: root <root@bjyz-sys-gpu-kongming9.bjyz.baidu.com> Co-authored-by: KP <109694228@qq.com> Co-authored-by: lijiaqi <lijiaqi0612@163.com> Co-authored-by: Xiaoxu Chen <chenxx_id@163.com> Co-authored-by: lijiaqi0612 <33169170+lijiaqi0612@users.noreply.github.com>
AnnaTrainingG pushed a commit to AnnaTrainingG/Paddle that referenced this pull request on Sep 29, 2021
* 1. add interface for fft; 2. add data type predicate; 3. fix paddle.roll. * add fft c2c cufft kernel * implement argument checking & op calling parts for fft_c2c and fftn_c2c * add operator and opmaker definitions * only register float and double for cpu. * add common code for implementing FFT, add pocketfft as a dependency * add fft c2c cufft kernel function * fix bugs in python interface * add support for c2r, r2c operators, op makers, kernels and kernel functors. * test and fix bugs * 1. fft_c2c function: add support for onesided=False; 2. add complex<float>, complex<double> support for concat and flip. * 1. fft: fix python api bugs; 2. shape_op: add support for complex data types. * fft c2c cufft kernel done with complie and link * fix shape_op, add mkl placeholder * remove mkl * complete fft c2c in gpu * 1. implement mkl-based fft, FFTC2CFunctor and common function exec_fft; 2. change the design, add input and output typename as template parameter for all FFTFunctors, update pocketfft-based implementation. * complete fft c2c on gpu in ND * complete fft c2c on gpu in ND * complete fft c2c backward in ND * fix MKL-based implementation * Add frame op and CPU/GPU kernels. * Add frame op forward unittest. * Add frame op forward unittest. * Remove axis parameter in FrameFunctor. * Add frame op grad CPU/GPU kernels and unittest. * Add frame op grad CPU/GPU kernels and unittest. * Update doc string. * Update after review and remove librosa requirement in unittest. * Update grad kernel. * add fft_c2r op * Remove data allocation in TransCompute function. * add fft r2c onesided with cpu(pocketfft/mkl) and gpu * last fft c2r functor * fix C2R and R2C for cufft, becase the direction is not an option in these cases. * add fft r2c onesided with cpu(pocketfft/mkl) and gpu * fix bugs in python APIs * fix fft_c2r grad kernal * fix bugs in python APIs * add cuda fft c2r grad kernal functor * clean code * fix fft_c2r python API * fill fft r2c result with conjugate symmetry (#19) fill fft r2c result with conjugate symmetry * add placeholder for unittests (#24) * simple parameterize test function by auto generate test case from parm list (#25) * miscellaneous fixes for python APIs (#26) * add placeholder for unittests * resize fft inputs before computation is n or s is provided. * add complex kernels for pad and pad_grad * simplify argument checking. * add type promotion * add int to float or complex promotion * fix output data type for static mode * fix fft's input dtype dispatch, import fft to paddle * fix typos in axes checking (#27) * fix typos in axes checking * fix argument checking (#28) * fix argument checking * Add C2R Python layer normal and abnormal use cases (#29) * documents and single case * test c2r case * New C2R Python layer normal and exception use cases * complete rfft,rfft2,rfftn,ihfft,ihfft2,ihfftn unittest and doc string (PaddlePaddle#30) * Documentation of the common interfaces of c2r and c2c (PaddlePaddle#31) * Documentation of the common interfaces of c2r and c2c * clean c++ code (PaddlePaddle#32) * clean code * Add numpy-based implementation of spectral ops (PaddlePaddle#33) * add numpy reference implementation of spectral ops * Add fft_c2r numpy based implementation for unittest. (PaddlePaddle#34) * add fft_c2r numpy implementation * Add deframe op and stft/istft api. (#23) * Add frame api * Add deframe op and kernels. * Add stft and istft apis. * Add deframe api. Update stft and istft apis. 
* Fix bug in frame_from_librosa function when input dims >= 3 * Rename deframe to overlap_add. * Update istft. * Update after code review. * Add overlap_add op and stft/istft api unittest (PaddlePaddle#35) * Add overlap_add op unittest. * Register complex kernels of squeeze/unsquuze op. * Add stft/istft api unittest. * Add unittest for fft helper functions (PaddlePaddle#36) * add unittests for fft helper functions. add complex kernel for roll op. * complete static graph unittest for all public api (PaddlePaddle#37) * Unittest of op with FFT C2C, C2R and r2c added (PaddlePaddle#38) * documents and single case * test c2r case * New C2R Python layer normal and exception use cases * Documentation of the common interfaces of c2r and c2c * Unittest of op with FFT C2C, C2R and r2c added Co-authored-by: lijiaqi <lijiaqi0612@163.com> * add fft related options to CMakeLists.txt * fix typos and clean code (PaddlePaddle#39) * fix invisible character in mkl branch and fix error in error message * clean code: remove docstring from unittest for signal.py. * always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. (PaddlePaddle#40) * always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. * fix CI Errors: numpy dtype comparison, thrust when cuda is not available (PaddlePaddle#41) 1. always convert numpy array to paddle.Tensor to avoid comparing numpy dtype with paddle dtype. 2. promote floating point tensor to complex tensor ior fft_c2c and fft_c2r; 3. fix unittest to catch UnImplementedError and RuntimeError; 4. fix compile error by avoid using thrust when cuda is not available. 5. fix sample code, use paddle.fft instead of paddle.tensor.fft * remove inclusion of thrust, add __all__ list for fft (PaddlePaddle#42) * Add api doc and update unittest. (PaddlePaddle#43) * Add doc strings. * Update overlap_add op unittest * fix MKL-based FFT implementation (PaddlePaddle#44) * fix MKL-based FFT implementation, MKL CDFT's FORWARD DOMAIN is always REAL for R2C and C2R * remove code for debug (PaddlePaddle#45) * use dynload for cufft (PaddlePaddle#46) * use std::ptrdiff_t as datatype of stride (instead of int64_t) to avoid argument mismatch on some platforms. * add complex support for fill_zeros_like * use dynload for cufft * Update doc and unittest. (PaddlePaddle#47) * Add doc of frame op and overlap_add op. * Update unittest. * use dynload for cufft (PaddlePaddle#48) 1. use dynload for cufft 2. fix unittest; 3. temporarily disable Rocm. * fix conflicts and merge upstream (PaddlePaddle#49) fix conflicts and merge upstream * fix compile error: only link dyload_cuda when cuda is available (PaddlePaddle#50) * fix compile error: only link dyload_cuda when cuda is available * fix dynload for cufft on windows (PaddlePaddle#51) 1. fix dynload for cufft on windows; 2. fix unittests. * add NOMINMAX to compile on windows (PaddlePaddle#52) add NOMINMAX to compile on windows * explicitly specify capture mode for lambdas (PaddlePaddle#55) explicitly specify capture mode for lambdas * fix fft sample (PaddlePaddle#53) * fix fft sample * update scipy and numpy version for unittests of fft (PaddlePaddle#56) update scipy and numpy version for unittests of fft * Add static graph unittests of frame and overlap_add api. (PaddlePaddle#57) * Remove cache of cuFFT & Disable ONEMKL (PaddlePaddle#59) 1. replace numpy.fft with scipy.fft as numpy<1.20 not support ortho norm 2. remove cache of cufft plans; 3. enhance error checking. 4. 
default WITH_ONEMKL to OFF Co-authored-by: jeff41404 <jeff41404@gmail.com> Co-authored-by: root <root@bjyz-sys-gpu-kongming9.bjyz.baidu.com> Co-authored-by: KP <109694228@qq.com> Co-authored-by: lijiaqi <lijiaqi0612@163.com> Co-authored-by: Xiaoxu Chen <chenxx_id@163.com> Co-authored-by: lijiaqi0612 <33169170+lijiaqi0612@users.noreply.github.com>
jeff41404 pushed a commit to jeff41404/Paddle that referenced this pull request on Oct 8, 2021
1. replace numpy.fft with scipy.fft as numpy<1.20 not support ortho norm 2. remove cache of cufft plans; 3. enhance error checking. 4. default WITH_ONEMKL to OFF
thisjiang pushed a commit to thisjiang/Paddle that referenced this pull request on Oct 28, 2021
…with-transform add test of transform to codegen_c_test
gglin001 added a commit to graphcore/Paddle-fork that referenced this pull request on Dec 8, 2021
* add elementwise_op_handler * fix pow_handler
wangxicoding pushed a commit to wangxicoding/Paddle that referenced this pull request on Dec 9, 2021
* Add new glue dataset and example. update dataset. * update load_dataet() args name
zhoutianzi666 pushed a commit to zhoutianzi666/Paddle that referenced this pull request on May 23, 2022
[DOC] add quick_start section, test=develop
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request on Jul 7, 2022
Optimize CUDA thread parallelism in MergeGrad phase,6.5h-3.25h
zmxdream pushed a commit to zmxdream/Paddle that referenced this pull request on Aug 31, 2022
…e#59) * fix async alloc bug * use stream safe alloc * alloc fix & reuse scope mem
zmxdream pushed a commit to zmxdream/Paddle that referenced this pull request on Nov 2, 2022
…e#59) * fix async alloc bug * use stream safe alloc * alloc fix & reuse scope mem
jack603047588 referenced this pull request in jack603047588/Paddle on Nov 9, 2022
add norm async update in BoxPSAsynDenseTable
zmxdream pushed a commit to zmxdream/Paddle that referenced this pull request on Oct 10, 2023
add merge multi models interface
hanhaowen-mt pushed a commit to hanhaowen-mt/Paddle that referenced this pull request on Feb 29, 2024
* [MTAI-484] feat(ci): download 3rd_party form oss * [MT-484] fix(ci): update eigen3 commit * [MTAI-484] use eigen3 patches to replace extra third_party
feifei-111 added a commit to feifei-111/Paddle that referenced this pull request on Mar 12, 2024
Aurelius84 pushed a commit that referenced this pull request on Mar 26, 2024
* implement FuseFilteredStmtPatterns * update * split trivial op into a single file. * fix compiler complaints * rename StmtIter to StmtPtr * declare group_pattern.InferShardableAxes * refine signature of group_pattern.InferShardableAxes * move group_pattern.InferShardableAxes to group_pattern_util.InferShardableAxes * implement group_pattern_util.InferShardableAxes * add group_pattern_util.InferShardableAxesFromSink * ReversedInferShardableAxes support sinks * update op lower * support multiple sinks in group_pattern_util.InferShardableAxes * update * fix link error * update * remove FusionOp to OpList * update * update * update * update * declare group_pattern_util.h * fix compiler complains * declare group_pattern_util.ClusteringHelper * refine signature of group_pattern_util.ClusterIntoGroupPatternsFromOpList * update op lowr * add todo * minor refine by group_pattern_util.OpSet * update * update * update (#57) * update * update * Cinn trivalop fuse (#58) * fix * refactor StmtFusionHelper by OpTopo * Complete: CreateReduceExpr function. * update * recursive done. * update * Cinn trivalop fuse (#59) * clean all the TODO. * update * fix cluster * remove unused OpTopo.downstream_disconnected_ops * Cinn trivalop fuse (#60) * fix compile rror * update * Cinn trivalop fuse (#61) * add R + T skeleon * add search utils. * update * Cinn trivalop fuse (#62) * push * update * fix * fix transformer * fix * Implement iterator vars fetching in ReduceOp * small fix * add GetOuterIterVars API * fix * fix compile complain * modify GetOutputIters of TrivialOp * remove dumplicate code in visit * implement ClusterIntoGroupPatternsFromOpList * Fix most error in trivial_op.cc. * CreateReduceExpr is OK! * fix * add CheckIterEq * implement group_pattern_util.ClusteringEngine and groupp_pattern_util.ClusteringPolicy * SinkTrivialTransform OK! * update * fix init_tensor name problem. * update * fix compiler complains * refactor ShardableAxesSignature by group_pattern.SoleOutputShardableAxes * split trivial_op.cc * update * implement group_pattern_util.MakeShardableAxesSignature4ReduceOp * update * implement group_pattern_util.MakeEmptyShardableAxesSignature * add helper class group_pattern_util.ShardableAxesProvider * implement group_pattern_util.MakeShardableAxesSignature4BroadcastOp * update * update * fix softmax error.! * fix * update * merge * fix * Implement new OpMergeWithOp and add a relevant flag * update * update * fix reduce_load error. 
add splitReduceTransform * fix conflict * update * update * update * disable horizontal fusion * fix * Add some VLOG * Fix group cluster bug (#71) * fix * fix dyshape * fix * init split cluster files * update * update * update * spliting * update * spliting * spliting * pattern utils * update * update * clean cmake * update * update * update * fix clustering_engine * fix fusion_helper * update * fix * update * update * update * update * fix * fix some erros * update * update * fix split with num problem * update * fix * fix static issues * fix * init split cluster files (#72) * update * update * update * update * update * update * update * update * update * split shardable axes provider (#73) * update * update * fix broadcast (#75) * update * update * fix * fix code format * fix code format * remove unittest * update * update (#77) * update * update * update --------- Co-authored-by: tc20042008 <156998525+tc20042008@users.noreply.github.com> Co-authored-by: feifei-111 <2364819892@qq.com> Co-authored-by: jiahy0825 <jiahongyu@baidu.com> Co-authored-by: zhangbaizhou <zhangbaizhou@baidu.com> Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
co63oc pushed a commit to co63oc/Paddle that referenced this pull request on Mar 26, 2024
* implement FuseFilteredStmtPatterns * update * split trivial op into a single file. * fix compiler complaints * rename StmtIter to StmtPtr * declare group_pattern.InferShardableAxes * refine signature of group_pattern.InferShardableAxes * move group_pattern.InferShardableAxes to group_pattern_util.InferShardableAxes * implement group_pattern_util.InferShardableAxes * add group_pattern_util.InferShardableAxesFromSink * ReversedInferShardableAxes support sinks * update op lower * support multiple sinks in group_pattern_util.InferShardableAxes * update * fix link error * update * remove FusionOp to OpList * update * update * update * update * declare group_pattern_util.h * fix compiler complains * declare group_pattern_util.ClusteringHelper * refine signature of group_pattern_util.ClusterIntoGroupPatternsFromOpList * update op lowr * add todo * minor refine by group_pattern_util.OpSet * update * update * update (PaddlePaddle#57) * update * update * Cinn trivalop fuse (PaddlePaddle#58) * fix * refactor StmtFusionHelper by OpTopo * Complete: CreateReduceExpr function. * update * recursive done. * update * Cinn trivalop fuse (PaddlePaddle#59) * clean all the TODO. * update * fix cluster * remove unused OpTopo.downstream_disconnected_ops * Cinn trivalop fuse (PaddlePaddle#60) * fix compile rror * update * Cinn trivalop fuse (PaddlePaddle#61) * add R + T skeleon * add search utils. * update * Cinn trivalop fuse (PaddlePaddle#62) * push * update * fix * fix transformer * fix * Implement iterator vars fetching in ReduceOp * small fix * add GetOuterIterVars API * fix * fix compile complain * modify GetOutputIters of TrivialOp * remove dumplicate code in visit * implement ClusterIntoGroupPatternsFromOpList * Fix most error in trivial_op.cc. * CreateReduceExpr is OK! * fix * add CheckIterEq * implement group_pattern_util.ClusteringEngine and groupp_pattern_util.ClusteringPolicy * SinkTrivialTransform OK! * update * fix init_tensor name problem. * update * fix compiler complains * refactor ShardableAxesSignature by group_pattern.SoleOutputShardableAxes * split trivial_op.cc * update * implement group_pattern_util.MakeShardableAxesSignature4ReduceOp * update * implement group_pattern_util.MakeEmptyShardableAxesSignature * add helper class group_pattern_util.ShardableAxesProvider * implement group_pattern_util.MakeShardableAxesSignature4BroadcastOp * update * update * fix softmax error.! * fix * update * merge * fix * Implement new OpMergeWithOp and add a relevant flag * update * update * fix reduce_load error. 
add splitReduceTransform * fix conflict * update * update * update * disable horizontal fusion * fix * Add some VLOG * Fix group cluster bug (PaddlePaddle#71) * fix * fix dyshape * fix * init split cluster files * update * update * update * spliting * update * spliting * spliting * pattern utils * update * update * clean cmake * update * update * update * fix clustering_engine * fix fusion_helper * update * fix * update * update * update * update * fix * fix some erros * update * update * fix split with num problem * update * fix * fix static issues * fix * init split cluster files (PaddlePaddle#72) * update * update * update * update * update * update * update * update * update * split shardable axes provider (PaddlePaddle#73) * update * update * fix broadcast (PaddlePaddle#75) * update * update * fix * fix code format * fix code format * remove unittest * update * update (PaddlePaddle#77) * update * update * update --------- Co-authored-by: tc20042008 <156998525+tc20042008@users.noreply.github.com> Co-authored-by: feifei-111 <2364819892@qq.com> Co-authored-by: jiahy0825 <jiahongyu@baidu.com> Co-authored-by: zhangbaizhou <zhangbaizhou@baidu.com> Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
zmxdream pushed a commit to zmxdream/Paddle that referenced this pull request on Apr 2, 2024
abacus-aibox-991 fix the bug of add_float_mask_data's error place