Add attention_bias argument in transformer block and transformer layer modules, addressing change in MCore #6344

L2_Megatron_GPT_with_Drop_Optimizer_States_TP2  /  main

succeeded Nov 15, 2024 in 3m 20s
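The PR title above describes threading an `attention_bias` argument through the transformer block and layer forward passes. As a minimal, hypothetical sketch of the underlying idea (this is not Megatron-Core's actual implementation; the class name `TinyTransformerLayer` and its internals are invented for illustration), an optional bias can be added to the raw attention scores before the softmax:

```python
import torch

class TinyTransformerLayer(torch.nn.Module):
    """Toy self-attention layer whose forward accepts an optional
    attention_bias added to the pre-softmax attention scores."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = torch.nn.Linear(hidden_size, 3 * hidden_size)
        self.proj = torch.nn.Linear(hidden_size, hidden_size)

    def forward(self, x, attention_bias=None):
        # x: [batch, seq, hidden]
        b, s, h = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split_heads(t):
            # [batch, seq, hidden] -> [batch, heads, seq, head_dim]
            return t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        if attention_bias is not None:
            # bias broadcasts against [batch, heads, seq, seq]
            scores = scores + attention_bias
        attn = torch.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, s, h)
        return self.proj(out)
```

Plumbing the argument through both the block and each layer (as the PR title suggests) lets callers supply per-head or per-position biases without changing the layer's default behavior, since `attention_bias=None` leaves the scores untouched.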