
Unify the English translation of static graph mode and dynamic graph mode in the documentation #49170

Merged: 15 commits, Dec 30, 2022
@@ -206,7 +206,8 @@ def __init__(
elif core.is_compiled_with_cuda():
self._device = "gpu"
assert self._device, "Only gpu and npu are supported."
-        assert not _non_static_mode(), "Only static graph mode is supported."
+        assert not in_dygraph_mode(), "Only static mode is supported."
Member:
After resolving the conflicts here, did "static graph mode" come back as well? You can fix it again once the conflicts are resolved~

Two of the three changes in fd96ebe (#49170) seem to have the same problem; please fix those too~

Contributor Author:
Got it, the fixes are done.

Member:
Yeah, once CI passes we can ping @Ligoml like crazy [doge]. This PR touches a lot of files, so it's quite prone to conflicts.

I remember a few CI jobs had issues earlier: the Coverage failure is unrelated to this PR's changes and should be exemptible; Static-Check and APPROVAL just need someone to come approve; the rest usually pass after a re-run, since this PR doesn't change any logic.

Contributor Author:
Right... if someone else's changes land first, CI would have to run all over again. QWQ, I'll @ you once it finishes.

Contributor:
> Yeah, once CI passes we can ping @Ligoml like crazy [doge]. This PR touches a lot of files, so it's quite prone to conflicts.

Thank you!


op_maker = core.op_proto_and_checker_maker
self._op_role = op_maker.OpRole
22 changes: 13 additions & 9 deletions python/paddle/tensor/layer_function_generator.py
@@ -334,15 +334,19 @@ def generate_inplace_fn(inplace_op_type):
origin_op_type = inplace_op_type[:-1]

def func(x, name=None):
-        if in_dygraph_mode() and hasattr(_C_ops, inplace_op_type):
-            op = getattr(_C_ops, inplace_op_type)
-            return op(x)
-        if _non_static_mode():
-            op = getattr(_legacy_C_ops, inplace_op_type)
-            return op(x)
-        warnings.warn(
-            "In static graph mode, {}() is the same as {}() and does not perform inplace operation.".format(
-                inplace_op_type, origin_op_type
-            )
-        )
+        if in_dygraph_mode():
+            if hasattr(_C_ops, inplace_op_type):
+                op = getattr(_C_ops, inplace_op_type)
+                return op(x)
+            else:
+                op = getattr(_legacy_C_ops, inplace_op_type)
+                return op(x)
+        else:
+            warnings.warn(
+                "In static mode, {}() is the same as {}() and does not perform inplace operation.".format(
+                    inplace_op_type, origin_op_type
+                )
+            )
return generate_activation_fn(origin_op_type)(x, name)
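The restructured branch reads as a three-way dispatch: prefer the new `_C_ops` kernel in dynamic mode, fall back to `_legacy_C_ops` otherwise, and in static mode warn that no inplace operation happens. A runnable sketch of that control flow, with dict-based stand-ins for the real op registries (all names here are illustrative, not paddle's actual objects):

```python
import warnings

# Hypothetical registries standing in for paddle's _C_ops / _legacy_C_ops.
_C_ops = {"relu_": lambda x: [max(v, 0.0) for v in x]}
_legacy_C_ops = {"sigmoid_": lambda x: x}  # placeholder legacy kernel


def dispatch_inplace(inplace_op_type, x, dygraph=True):
    """Mirror the restructured branch: new kernel first, legacy fallback,
    and a static-mode warning that no inplace operation is performed."""
    origin_op_type = inplace_op_type[:-1]  # strip the trailing "_"
    if dygraph:
        if inplace_op_type in _C_ops:
            return _C_ops[inplace_op_type](x)
        return _legacy_C_ops[inplace_op_type](x)
    warnings.warn(
        "In static mode, {}() is the same as {}() and does not perform "
        "inplace operation.".format(inplace_op_type, origin_op_type)
    )
    return x  # placeholder for generate_activation_fn(origin_op_type)(x)
```

Note the wording change this PR reviews is visible in the warning string: "static mode" rather than "static graph mode".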

42 changes: 4 additions & 38 deletions python/paddle/tensor/manipulation.py
@@ -1138,44 +1138,10 @@ def concat(x, axis=0, name=None):
                    ],
                    'concat',
                )
-    else:
-        input = [input]
-    check_type(axis, 'axis', (int, Variable), 'concat')
-
-    if isinstance(axis, Variable):
-        check_dtype(
-            axis.dtype,
-            'axis',
-            ['int32', 'int64'],
-            'concat',
-            "The data type of axis must be int32 or int64 when axis is a Tensor",
-        )
-
-    helper = LayerHelper('concat', **locals())
-    out = helper.create_variable_for_type_inference(dtype=helper.input_dtype())
-
-    if input[0].desc.type() == core.VarDesc.VarType.LOD_TENSOR_ARRAY:
-        # NOTE(liym27): Don't remove this if branch!
-        # This feature is supported for Dynamic-to-Static, because after transformed, the type of inputs[0]
-        # is LOD_TENSOR_ARRAY in some scenarios. And this feature can be used in static graph mode.
-
-        assert len(input) == 1, (
-            "If the elements of 'input' in concat are Variable(LoDTensorArray), "
-            "number of the elements must be 1, but received %s." % len(input)
-        )
-        out_index = helper.create_variable_for_type_inference(dtype="int32")
-        helper.append_op(
-            type='tensor_array_to_tensor',
-            inputs={'X': input[0]},
-            outputs={'Out': [out], 'OutIndex': [out_index]},
-            attrs={'axis': axis, 'use_stack': False},
-        )
-    else:
-        inputs = {'X': input}
-        attrs = {}
-        if isinstance(axis, Variable):
-            axis.stop_gradient = True
-            inputs['AxisTensor'] = axis
+                if x.dtype != input[0].dtype:
+                    raise TypeError(
+                        "All the Tensors in the input must have the same data type."
+                    )
+        else:
+            input = [input]
+        check_type(axis, 'axis', (int, Variable), 'concat')
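The net effect of this hunk is that concat's static-graph branch normalizes a lone tensor into a list and rejects mixed dtypes up front, with the same TypeError message the diff adds. A runnable sketch of that check, using plain dicts as hypothetical tensor stand-ins (the helper name is illustrative, not paddle's API):

```python
def normalize_concat_inputs(x):
    """Normalize `x` to a list of tensors and ensure they share one dtype,
    mirroring the TypeError added in the diff above."""
    inputs = list(x) if isinstance(x, (list, tuple)) else [x]
    ref_dtype = inputs[0]["dtype"]  # dicts stand in for real tensors here
    for t in inputs:
        if t["dtype"] != ref_dtype:
            raise TypeError(
                "All the Tensors in the input must have the same data type."
            )
    return inputs
```

A single tensor comes back as a one-element list; a list with a dtype mismatch fails before any op is appended to the graph.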