
[Hackathon 7th] Fix the fastspeech2 0D tensor issue #3951

Merged 1 commit into PaddlePaddle:develop on Dec 16, 2024

Conversation

megemini (Contributor)

PR types

Bug fixes

PR changes

Models

Describe

Fix the fastspeech2 0D tensor issue.

In the tts_finetune/tts3 recipe, the pretrained fastspeech2_aishell3 checkpoint stores scalar values such as `encoder.embed.1.alpha` and `decoder.embed.0.alpha`, and their Adam moment accumulators, with shape [1], while the current model expects 0D tensors of shape []. Loading the model weights merely skips those parameters with a warning, but the optimizer accumulators are filled lazily on the first optimizer.step(), where the shape check fails and training aborts, as shown below:

aistudio@jupyter-942478-8626068:~/PaddleSpeech/examples/other/tts_finetune/tts3$ ./run.sh --stage 5 --stop-stage 5
finetune...
rank: 0, pid: 4229, parent_pid: 4217
multiple speaker fastspeech2!
spk_num: 174
samplers done!
dataloaders done!
vocab_size: 306
W1211 04:50:06.753571  4229 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 12.0, Runtime API Version: 11.8
W1211 04:50:06.755182  4229 gpu_resources.cc:149] device: 0, cuDNN Version: 8.9.
I1211 04:50:08.957983  4229 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify  'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.
I1211 04:50:08.958451  4229 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify  'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.
model done!
optimizer done!
/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py:1897: UserWarning: Skip loading for encoder.embed.1.alpha. encoder.embed.1.alpha receives a shape [1], but the expected shape is [].
  warnings.warn(f"Skip loading for {key}. " + str(err))
/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py:1897: UserWarning: Skip loading for decoder.embed.0.alpha. decoder.embed.0.alpha receives a shape [1], but the expected shape is [].
  warnings.warn(f"Skip loading for {key}. " + str(err))
/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/norm.py:777: UserWarning: When training, we now always track global mean and variance.
  warnings.warn(
Exception in main training loop: Variable Shape not match, Variable [ create_parameter_3.w_0_moment1_0 ] need tensor with shape [] but load set tensor with shape [1]
Traceback (most recent call last):
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 149, in run
    update()
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/training/updaters/standard_updater.py", line 110, in update
    self.update_core(batch)
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/models/fastspeech2/fastspeech2_updater.py", line 120, in update_core
    optimizer.step()
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 335, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/framework.py", line 462, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 446, in step
    optimize_ops = self._apply_optimize(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 1243, in _apply_optimize
    optimize_ops = self._create_optimization_pass(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 995, in _create_optimization_pass
    self._create_accumulators(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 278, in _create_accumulators
    self._add_moments_pows(p)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 231, in _add_moments_pows
    self._add_accumulator(self._moment1_acc_str, p, dtype=acc_dtype)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 800, in _add_accumulator
    var.set_value(self._accumulators_holder.pop(var_name))
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/framework.py", line 449, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/tensor_patch_methods.py", line 196, in set_value
    assert self.shape == list(
Trainer extensions will try to handle the extension. Then all extensions will finalize.
Traceback (most recent call last):
  File "local/finetune.py", line 269, in <module>
    train_sp(train_args, config)
  File "local/finetune.py", line 202, in train_sp
    trainer.run()
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 203, in run
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 149, in run
    update()
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/training/updaters/standard_updater.py", line 110, in update
    self.update_core(batch)
  File "/home/aistudio/PaddleSpeech/paddlespeech/t2s/models/fastspeech2/fastspeech2_updater.py", line 120, in update_core
    optimizer.step()
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 335, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/framework.py", line 462, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 446, in step
    optimize_ops = self._apply_optimize(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 1243, in _apply_optimize
    optimize_ops = self._create_optimization_pass(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 995, in _create_optimization_pass
    self._create_accumulators(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 278, in _create_accumulators
    self._add_moments_pows(p)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 231, in _add_moments_pows
    self._add_accumulator(self._moment1_acc_str, p, dtype=acc_dtype)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 800, in _add_accumulator
    var.set_value(self._accumulators_holder.pop(var_name))
  File "/home/aistudio/.local/lib/python3.8/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/framework.py", line 449, in __impl__
    return func(*args, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/tensor_patch_methods.py", line 196, in set_value
    assert self.shape == list(
AssertionError: Variable Shape not match, Variable [ create_parameter_3.w_0_moment1_0 ] need tensor with shape [] but load set tensor with shape [1]
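The assertion at the end of the traceback compares the live 0D shape [] against the shape [1] stored in the old checkpoint. Below is a minimal, self-contained sketch of that mismatch and of the squeeze that makes a legacy shape-[1] value loadable again; it assumes a Paddle build with 0D tensor support (as in the log above) and is an illustration only, not the code changed by this PR:

```python
import numpy as np
import paddle

# A 0D ("scalar") tensor, standing in for encoder.embed.1.alpha or the
# create_parameter_3.w_0_moment1_0 accumulator in the traceback above.
alpha = paddle.to_tensor(0.0)                      # shape: []

# A value restored from an old checkpoint, saved when scalars still had shape [1].
legacy_value = np.array([1.5], dtype="float32")    # shape: (1,)

print(alpha.shape, list(legacy_value.shape))       # [] vs [1]: the mismatch above

# Tensor.set_value asserts that the shapes match, so loading the raw legacy
# value fails exactly like the optimizer-state restore in the traceback.
try:
    alpha.set_value(legacy_value)
except AssertionError as err:
    print("shape mismatch:", err)

# Squeezing the legacy value to 0D before set_value makes the shapes agree.
alpha.set_value(legacy_value.reshape([]))
print(float(alpha))                                # 1.5
```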

After the fix, fine-tuning (stage 5) and synthesis (stage 6) run normally:

aistudio@jupyter-942478-8657745:~/PaddleSpeech/examples/other/tts_finetune/tts3$ ./run.sh --stage 5 --stop-stage 5
...
y_loss: 0.382000, loss: 1.298470, avg_reader_cost: 0.00078 sec, avg_batch_cost: 0.42779 sec, avg_samples: 64, avg_ips: 149.60747 sequences/sec, max_mem_reserved: 8704 MB, max_mem_allocated: 6250 MB
[2024-12-13 15:28:30,218] [    INFO] trainer.py:172 -  iter: 96654/900, Rank: 0, l1_loss: 0.652985, duration_loss: 0.047881, pitch_loss: 0.174919, energy_loss: 0.369634, loss: 1.245419, avg_reader_cost: 0.00043 sec, avg_batch_cost: 0.37254 sec, avg_samples: 64, avg_ips: 171.79539 sequences/sec, max_mem_reserved: 8704 MB, max_mem_allocated: 6250 MB
[2024-12-13 15:28:30,995] [    INFO] fastspeech2_updater.py:238 - Evaluate: l1_loss: 0.671683, duration_loss: 0.076784, pitch_loss: 0.117268, energy_loss: 0.514914, loss: 1.380648
[2024-12-13 15:28:34,138] [    INFO] trainer.py:172 -  iter: 96655/900, Rank: 0, l1_loss: 0.651513, duration_loss: 0.048853, pitch_loss: 0.192229, energy_loss: 0.354329, loss: 1.246924, avg_reader_cost: 1.10752 sec, avg_batch_cost: 1.55327 sec, avg_samples: 64, avg_ips: 41.20339 sequences/sec, max_mem_reserved: 8704 MB, max_mem_allocated: 6250 MB
[2024-12-13 15:28:34,516] [    INFO] trainer.py:172 -  iter: 96656/900, Rank: 0, l1_loss: 0.650512, duration_loss: 0.055414, pitch_loss: 0.192810, energy_loss: 0.379957, loss: 1.278693, avg_reader_cost: 0.00063 sec, avg_batch_cost: 0.37560 sec, avg_samples: 64, avg_ips: 170.39464 sequences/sec, max_mem_reserved: 8704 MB, max_mem_allocated: 6250 MB
[2024-12-13 15:28:34,934] [    INFO] trainer.py:172 -  iter: 96657/900, Rank: 0, l1_loss: 0.651290, duration_loss: 0.051236, pitch_loss: 0.185067, energy_loss: 0.417840, loss: 1.305433, avg_reader_cost: 0.00042 sec, avg_batch_cost: 0.41518 sec, avg_samples: 64, avg_ips: 154.15161 sequences/sec, max_mem_reserved: 8704 MB, max_mem_allocated: 6250 MB


aistudio@jupyter-942478-8657745:~/PaddleSpeech/examples/other/tts_finetune/tts3$ ./run.sh --stage 6 --stop-stage 6
in hifigan syn_e2e
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:686: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
  warnings.warn(warning_message)
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/_distutils_hack/__init__.py:26: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
========Args========
am: fastspeech2_aishell3
am_ckpt: ./exp/default/checkpoints/snapshot_iter_96654.pdz
am_config: ./pretrained_models/fastspeech2_aishell3_ckpt_1.1.0/default.yaml
am_stat: ./pretrained_models/fastspeech2_aishell3_ckpt_1.1.0/speech_stats.npy
inference_dir: null
lang: zh
ngpu: 1
nmlu: 0
nnpu: 0
nxpu: 0
output_dir: ./test_e2e/
phones_dict: ./dump/phone_id_map.txt
pinyin_phone: null
speaker_dict: ./dump/speaker_id_map.txt
speech_stretchs: null
spk_id: 0
text: /home/aistudio/PaddleSpeech/paddlespeech/t2s/exps/fastspeech2/../../assets/sentences.txt
tones_dict: null
use_rhy: false
voc: hifigan_aishell3
voc_ckpt: pretrained_models/hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz
voc_config: pretrained_models/hifigan_aishell3_ckpt_0.2.0/default.yaml
voc_stat: pretrained_models/hifigan_aishell3_ckpt_0.2.0/feats_stats.npy

========Config========
batch_size: 64
f0max: 400
f0min: 80
fmax: 7600
fmin: 80
fs: 24000
max_epoch: 200
model:
  adim: 384
  aheads: 2
  decoder_normalize_before: True
  dlayers: 4
  dunits: 1536
  duration_predictor_chans: 256
  duration_predictor_kernel_size: 3
  duration_predictor_layers: 2
  elayers: 4
  encoder_normalize_before: True
  energy_embed_dropout: 0.0
  energy_embed_kernel_size: 1
  energy_predictor_chans: 256
  energy_predictor_dropout: 0.5
  energy_predictor_kernel_size: 3
  energy_predictor_layers: 2
  eunits: 1536
  init_dec_alpha: 1.0
  init_enc_alpha: 1.0
  init_type: xavier_uniform
  pitch_embed_dropout: 0.0
  pitch_embed_kernel_size: 1
  pitch_predictor_chans: 256
  pitch_predictor_dropout: 0.5
  pitch_predictor_kernel_size: 5
  pitch_predictor_layers: 5
  positionwise_conv_kernel_size: 3
  positionwise_layer_type: conv1d
  postnet_chans: 256
  postnet_filts: 5
  postnet_layers: 5
  reduction_factor: 1
  spk_embed_dim: 256
  spk_embed_integration_type: concat
  stop_gradient_from_energy_predictor: False
  stop_gradient_from_pitch_predictor: True
  transformer_dec_attn_dropout_rate: 0.2
  transformer_dec_dropout_rate: 0.2
  transformer_dec_positional_dropout_rate: 0.2
  transformer_enc_attn_dropout_rate: 0.2
  transformer_enc_dropout_rate: 0.2
  transformer_enc_positional_dropout_rate: 0.2
  use_scaled_pos_enc: True
n_fft: 2048
n_mels: 80
n_shift: 300
num_snapshots: 5
num_workers: 4
optimizer:
  learning_rate: 0.001
  optim: adam
seed: 10086
updater:
  use_masking: True
win_length: 1200
window: hann
batch_max_steps: 8400
batch_size: 16
discriminator_adv_loss_params:
  average_by_discriminators: False
discriminator_grad_norm: -1
discriminator_optimizer_params:
  beta1: 0.5
  beta2: 0.9
  weight_decay: 0.0
discriminator_params:
  follow_official_norm: True
  period_discriminator_params:
    bias: True
    channels: 32
    downsample_scales: [3, 3, 3, 3, 1]
    in_channels: 1
    kernel_sizes: [5, 3]
    max_downsample_channels: 1024
    nonlinear_activation: leakyrelu
    nonlinear_activation_params:
      negative_slope: 0.1
    out_channels: 1
    use_spectral_norm: False
    use_weight_norm: True
  periods: [2, 3, 5, 7, 11]
  scale_discriminator_params:
    bias: True
    channels: 128
    downsample_scales: [4, 4, 4, 4, 1]
    in_channels: 1
    kernel_sizes: [15, 41, 5, 3]
    max_downsample_channels: 1024
    max_groups: 16
    nonlinear_activation: leakyrelu
    nonlinear_activation_params:
      negative_slope: 0.1
    out_channels: 1
  scale_downsample_pooling: AvgPool1D
  scale_downsample_pooling_params:
    kernel_size: 4
    padding: 2
    stride: 2
  scales: 3
discriminator_scheduler_params:
  gamma: 0.5
  learning_rate: 0.0002
  milestones: [200000, 400000, 600000, 800000]
discriminator_train_start_steps: 0
eval_interval_steps: 1000
feat_match_loss_params:
  average_by_discriminators: False
  average_by_layers: False
  include_final_outputs: False
fmax: 7600
fmin: 80
fs: 24000
generator_adv_loss_params:
  average_by_discriminators: False
generator_grad_norm: -1
generator_optimizer_params:
  beta1: 0.5
  beta2: 0.9
  weight_decay: 0.0
generator_params:
  bias: True
  channels: 512
  in_channels: 80
  kernel_size: 7
  nonlinear_activation: leakyrelu
  nonlinear_activation_params:
    negative_slope: 0.1
  out_channels: 1
  resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
  resblock_kernel_sizes: [3, 7, 11]
  upsample_kernel_sizes: [10, 10, 8, 6]
  upsample_scales: [5, 5, 4, 3]
  use_additional_convs: True
  use_weight_norm: True
generator_scheduler_params:
  gamma: 0.5
  learning_rate: 0.0002
  milestones: [200000, 400000, 600000, 800000]
generator_train_start_steps: 1
lambda_adv: 1.0
lambda_aux: 45.0
lambda_feat_match: 2.0
mel_loss_params:
  fft_size: 2048
  fmax: 12000
  fmin: 0
  fs: 24000
  hop_size: 300
  log_base: None
  num_mels: 80
  win_length: 1200
  window: hann
n_fft: 2048
n_mels: 80
n_shift: 300
num_snapshots: 10
num_workers: 2
save_interval_steps: 5000
seed: 42
train_max_steps: 2500000
use_feat_match_loss: True
use_mel_loss: True
use_stft_loss: False
win_length: 1200
window: hann
[2024-12-13 15:32:06,632] [    INFO] - tokenizer config file saved in /home/aistudio/.paddlenlp/models/bert-base-chinese/tokenizer_config.json
[2024-12-13 15:32:06,632] [    INFO] - Special tokens file saved in /home/aistudio/.paddlenlp/models/bert-base-chinese/special_tokens_map.json
frontend done!
W1213 15:32:07.127918 265297 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 12.0, Runtime API Version: 11.8
W1213 15:32:07.130271 265297 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
acoustic model done!
voc done!
001 凯莫瑞安联合体的经济崩溃,迫在眉睫。
Building prefix dict from the default dictionary ...
[2024-12-13 15:32:11,704] [   DEBUG] __init__.py:113 - Building prefix dict from the default dictionary ...
Dumping model to file cache /tmp/jieba.cache
[2024-12-13 15:32:12,356] [   DEBUG] __init__.py:146 - Dumping model to file cache /tmp/jieba.cache
Loading model cost 0.705 seconds.
[2024-12-13 15:32:12,410] [   DEBUG] __init__.py:164 - Loading model cost 0.705 seconds.
Prefix dict has been built successfully.
[2024-12-13 15:32:12,410] [   DEBUG] __init__.py:166 - Prefix dict has been built successfully.
001, mel: [77, 80], wave: (84300, 1), time: 2269s, Hz: 37.1529308065227, RTF: 645.9786476868327.
001 done!
002 对于所有想要离开那片废土,去寻找更美好生活的人来说。
002, mel: [202, 80], wave: (119700, 1), time: 403s, Hz: 297.0223325062035, RTF: 80.80200501253132.
002 done!
003 克哈,是你们所有人安全的港湾。
003, mel: [171, 80], wave: (67200, 1), time: 262s, Hz: 256.48854961832063, RTF: 93.57142857142857.
003 done!
004 为了保护尤摩扬人民不受异虫的残害,我所做的,比他们自己的领导委员会都多。
004, mel: [203, 80], wave: (156300, 1), time: 575s, Hz: 271.82608695652175, RTF: 88.29174664107485.
004 done!
005 无论他们如何诽谤我,我将继续为所有泰伦人的最大利益,而努力奋斗。
005, mel: [93, 80], wave: (137400, 1), time: 457s, Hz: 300.65645514223195, RTF: 79.82532751091703.
005 done!
006 身为你们的元首,我带领泰伦人实现了人类统治领地和经济的扩张。
006, mel: [334, 80], wave: (132900, 1), time: 406s, Hz: 327.3399014778325, RTF: 73.31828442437924.
006 done!
007 我们将继续成长,用行动回击那些只会说风凉话,不愿意和我们相向而行的害群之马。
007, mel: [224, 80], wave: (168900, 1), time: 559s, Hz: 302.14669051878354, RTF: 79.43161634103019.
007 done!
008 帝国武装力量,无数的优秀儿女,正时刻守卫着我们的家园大门,但是他们孤木难支。
008, mel: [139, 80], wave: (166500, 1), time: 627s, Hz: 265.55023923444975, RTF: 90.37837837837839.
008 done!
009 凡是今天应征入伍者,所获的所有刑罚罪责,减半。
009, mel: [45, 80], wave: (109200, 1), time: 357s, Hz: 305.88235294117646, RTF: 78.46153846153847.
009 done!
010 激进分子和异见者希望你们一听见枪声,就背弃多年的和平与繁荣。
010, mel: [188, 80], wave: (136200, 1), time: 447s, Hz: 304.6979865771812, RTF: 78.76651982378854.
010 done!
011 他们没有勇气和能力,带领人类穿越一个充满危险的星系。
011, mel: [237, 80], wave: (114300, 1), time: 343s, Hz: 333.23615160349857, RTF: 72.02099737532808.
011 done!
012 法治是我们的命脉,然而它却受到前所未有的挑战。
012, mel: [217, 80], wave: (104700, 1), time: 333s, Hz: 314.4144144144144, RTF: 76.33237822349571.
012 done!
013 我将恢复我们帝国的荣光,绝不会向任何外星势力低头。
013, mel: [201, 80], wave: (111600, 1), time: 326s, Hz: 342.3312883435583, RTF: 70.10752688172043.
013 done!
014 我已经驯服了异虫,荡平了星灵。如今它们的创造者,想要夺走我们拥有的一切。
014, mel: [171, 80], wave: (156900, 1), time: 616s, Hz: 254.7077922077922, RTF: 94.22562141491396.
014 done!
015 永远记住,谁才是最能保护你们的人。
015, mel: [174, 80], wave: (73500, 1), time: 266s, Hz: 276.3157894736842, RTF: 86.85714285714286.
015 done!
016 不要听信别人的谗言,我不是什么克隆人。
016, mel: [125, 80], wave: (81300, 1), time: 291s, Hz: 279.3814432989691, RTF: 85.9040590405904.
016 done!
generation speed: 225.00878528757175Hz, RTF: 106.6625019522099
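For reference, the per-utterance Hz and RTF figures above follow directly from the logged sample count, the logged time, and the 24 kHz output rate in the config. A quick check against utterance 002 (plain arithmetic, not PaddleSpeech code):

```python
# Values taken from the log line:
# "002, mel: [202, 80], wave: (119700, 1), time: 403s, Hz: 297.02..., RTF: 80.80..."
fs = 24000           # output sample rate, "fs: 24000" in the acoustic model config
wave_len = 119700    # generated samples, from "wave: (119700, 1)"
elapsed = 403        # logged synthesis time for this utterance

speed_hz = wave_len / elapsed        # samples generated per logged time unit
rtf = elapsed / (wave_len / fs)      # logged time divided by audio duration

print(speed_hz, rtf)                 # matches the log: 297.0223..., 80.8020...
```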

@zxcd @Liyulingyue @GreatV @enkilee @yinfan98

paddle-bot (bot) commented on Dec 13, 2024:

Thanks for your contribution!

mergify (bot) added the T2S label on Dec 13, 2024
zxcd (Collaborator) left a comment:

LGTM

zxcd merged commit 8ee3a7e into PaddlePaddle:develop on Dec 16, 2024. 5 checks passed.