
debug/format protobuf to human-readable codes #8086

Merged

Conversation

Superjomn
Contributor

@Superjomn Superjomn commented Feb 2, 2018

Called with:

from paddle.v2.fluid import debuger

print debuger.pprint_program_codes(framework.default_main_program().desc)

It is easy to customize the debug information for a specific operator: just add a new handler to op_repr_handlers.

The code sample adds a handler for fill_constant, whose output looks like aval = 1. [shape=[1]].

// block-0  parent--1
// variables
Tensor tmp_19 (tensor(type=float32, shape=[1L]))
Tensor tmp_17 (tensor(type=float32, shape=[1L]))
Tensor tmp_15 (tensor(type=float32, shape=[1L]))
Tensor tmp_14 (tensor(type=float32, shape=[1L]))
Tensor tmp_6 (tensor(type=float32, shape=[1L]))
Tensor tmp_5 (tensor(type=float32, shape=[1L]))
Tensor tmp_3 (tensor(type=float32, shape=[1L]))
Tensor tmp_2 (tensor(type=float32, shape=[1L]))
Tensor tmp_0 (tensor(type=float32, shape=[1L]))
Tensor moment_8 (tensor(type=float32, shape=[32L, 32L]))
Tensor moment_7 (tensor(type=float32, shape=[1L, 224L]))
Tensor tmp_9 (tensor(type=float32, shape=[1L]))
Tensor moment_6 (tensor(type=float32, shape=[30000L, 16L]))
Tensor tmp_13 (tensor(type=float32, shape=[1L]))
Tensor moment_4 (tensor(type=float32, shape=[32L, 30000L]))
Tensor tmp_12 (tensor(type=float32, shape=[1L]))
Tensor moment_2 (tensor(type=float32, shape=[32L, 128L]))
Tensor tmp_11 (tensor(type=float32, shape=[1L]))
Tensor tmp_18 (tensor(type=float32, shape=[1L]))
Tensor moment_5 (tensor(type=float32, shape=[16L, 128L]))
Tensor tmp_7 (tensor(type=float32, shape=[1L]))
Tensor moment_3 (tensor(type=float32, shape=[128L]))
Tensor lstm_0.tmp_3 (tensor(type=float32, shape=[-1L, 32L]))
Tensor tmp_4 (tensor(type=float32, shape=[1L]))
Tensor dynamic_rnn_0.tmp_0 (tensor(type=bool, shape=[1L]))
Tensor lstm_0.tmp_2 (tensor(type=float32, shape=[-1L, 128L]))
LoDTensor lstm_0.tmp_1 (level=1, tensor(type=float32, shape=[-1L, 32L]))
LoDTensor embedding_1.tmp_0 (level=1, tensor(type=float32, shape=[-1L, 16L]))
LoDTensor target_language_next_word (level=1, tensor(type=int64, shape=[-1L, 1L]))
Tensor tmp_8 (tensor(type=float32, shape=[1L]))
LoDTensor embedding_0.tmp_0 (level=1, tensor(type=float32, shape=[-1L, 16L]))
Tensor tmp_1 (tensor(type=float32, shape=[1L]))
Tensor vemb (tensor(type=float32, shape=[30000L, 16L]))
LoDTensor fc_0.tmp_1 (level=1, tensor(type=float32, shape=[-1L, 128L]))
LoDTensor target_language_word (level=1, tensor(type=int64, shape=[-1L, 1L]))
Tensor moment_1 (tensor(type=float32, shape=[30000L]))
Tensor fc_0.w_0 (tensor(type=float32, shape=[16L, 128L]))
Tensor sequence_pool_0.tmp_1 (tensor(type=float32, shape=[]))
Tensor moment_0 (tensor(type=float32, shape=[32L]))
Tensor cross_entropy_0.tmp_0 (tensor(type=float32, shape=[-1L, 1L]))
Tensor lstm_0.b_0 (tensor(type=float32, shape=[1L, 224L]))
LoDTensor src_word_id (level=1, tensor(type=int64, shape=[-1L, 1L]))
Tensor lod_rank_table_0 (tensor(type=bool, shape=[]))
Tensor fc_2.b_0 (tensor(type=float32, shape=[30000L]))
Tensor fc_0.b_0 (tensor(type=float32, shape=[128L]))
Tensor tmp_10 (tensor(type=float32, shape=[1L]))
Tensor fc_1.b_0 (tensor(type=float32, shape=[32L]))
LoDTensor fc_0.tmp_0 (level=1, tensor(type=float32, shape=[-1L, 128L]))
Tensor _generated_var_0 (tensor(type=bool, shape=[]))
LoDTensor fc_0.tmp_2 (level=1, tensor(type=float32, shape=[-1L, 128L]))
Tensor fill_constant_1.tmp_0 (tensor(type=int64, shape=[1L]))
Tensor lstm_0.w_0 (tensor(type=float32, shape=[32L, 128L]))
Tensor tmp_16 (tensor(type=float32, shape=[1L]))
Tensor dynamic_rnn_mem_array_0 (tensor(type=bool, shape=[]))
Tensor mean_0.tmp_0 (tensor(type=float32, shape=[1L]))
Tensor dynamic_rnn_max_seq_len_0 (tensor(type=int64, shape=[1L]))
Tensor dynamic_rnn_input_array_0 (tensor(type=bool, shape=[]))
Tensor fill_constant_0.tmp_0 (tensor(type=int64, shape=[1L]))
Tensor array_to_lod_tensor_0.tmp_0 (tensor(type=float32, shape=[-1L, 30000L]))
Tensor fc_1.w_0 (tensor(type=float32, shape=[16L, 32L]))
Tensor fc_1.w_1 (tensor(type=float32, shape=[32L, 32L]))
Tensor fc_2.w_0 (tensor(type=float32, shape=[32L, 30000L]))
Tensor moment_9 (tensor(type=float32, shape=[16L, 32L]))
Tensor dynamic_rnn_0_output_array_fc_2.tmp_2_0 (tensor(type=bool, shape=[]))
Tensor learning_rate_0 (tensor(type=float32, shape=[1L]))
Tensor sequence_pool_0.tmp_0 (tensor(type=float32, shape=[-1L, 32L]))
LoDTensor lstm_0.tmp_0 (level=1, tensor(type=float32, shape=[-1L, 32L]))

// operators
embedding_0.tmp_0 = lookup_table(Ids=src_word_id, W=vemb) [{padding_idx=-1,is_sparse=True}]
fc_0.tmp_0 = mul(X=embedding_0.tmp_0, Y=fc_0.w_0) [{y_num_col_dims=1,x_num_col_dims=1}]
fc_0.tmp_1 = elementwise_add(X=fc_0.tmp_0, Y=fc_0.b_0) [{axis=1}]
fc_0.tmp_2 = tanh(X=fc_0.tmp_1) [{}]
lstm_0.tmp_3, lstm_0.tmp_2, lstm_0.tmp_1, lstm_0.tmp_0 = lstm(Bias=lstm_0.b_0, C0=[], H0=[], Input=fc_0.tmp_2, Weight=lstm_0.w_0) [{candidate_activation=tanh,use_peepholes=True,is_reverse=False,gate_activation=sigmoid,cell_activation=tanh}]
sequence_pool_0.tmp_1, sequence_pool_0.tmp_0 = sequence_pool(X=lstm_0.tmp_0) [{pooltype=LAST}]
embedding_1.tmp_0 = lookup_table(Ids=target_language_word, W=vemb) [{padding_idx=-1,is_sparse=True}]
fill_constant_0.tmp_0 = 0.0 [shape=[1]]
fill_constant_1.tmp_0 = 0.0 [shape=[1]]
lod_rank_table_0 = lod_rank_table(X=embedding_1.tmp_0) [{level=0}]
dynamic_rnn_max_seq_len_0 = max_sequence_len(RankTable=lod_rank_table_0) [{}]
dynamic_rnn_0.tmp_0 = less_than(X=fill_constant_1.tmp_0, Y=dynamic_rnn_max_seq_len_0) [{axis=-1}]
dynamic_rnn_input_array_0 = lod_tensor_to_array(RankTable=lod_rank_table_0, X=embedding_1.tmp_0) [{}]
dynamic_rnn_mem_array_0 = write_to_array(I=fill_constant_0.tmp_0, X=sequence_pool_0.tmp_0) [{}]
[u'fill_constant_1.tmp_0', u'dynamic_rnn_0_output_array_fc_2.tmp_2_0', u'dynamic_rnn_mem_array_0', u'dynamic_rnn_0.tmp_0'], _generated_var_0 = while(Condition=dynamic_rnn_0.tmp_0, X=[u'fc_1.w_1', u'fc_1.w_0', u'fill_constant_1.tmp_0', u'dynamic_rnn_input_array_0', u'fc_2.w_0', u'dynamic_rnn_mem_array_0', u'dynamic_rnn_max_seq_len_0', u'fc_1.b_0', u'lod_rank_table_0', u'fc_2.b_0']) [{sub_block=1}]
array_to_lod_tensor_0.tmp_0 = array_to_lod_tensor(RankTable=lod_rank_table_0, X=dynamic_rnn_0_output_array_fc_2.tmp_2_0) [{}]
cross_entropy_0.tmp_0 = cross_entropy(Label=target_language_next_word, X=array_to_lod_tensor_0.tmp_0) [{soft_label=False}]
mean_0.tmp_0 = mean(X=cross_entropy_0.tmp_0) [{}]
tmp_0 = 1.0 [shape=[1]]
tmp_1 = elementwise_mul(X=learning_rate_0, Y=tmp_0) [{axis=-1}]
tmp_2 = 1.0 [shape=[1]]
tmp_3 = elementwise_mul(X=learning_rate_0, Y=tmp_2) [{axis=-1}]
tmp_4 = 1.0 [shape=[1]]
tmp_5 = elementwise_mul(X=learning_rate_0, Y=tmp_4) [{axis=-1}]
tmp_6 = 1.0 [shape=[1]]
tmp_7 = elementwise_mul(X=learning_rate_0, Y=tmp_6) [{axis=-1}]
tmp_8 = 1.0 [shape=[1]]
tmp_9 = elementwise_mul(X=learning_rate_0, Y=tmp_8) [{axis=-1}]
tmp_10 = 1.0 [shape=[1]]
tmp_11 = elementwise_mul(X=learning_rate_0, Y=tmp_10) [{axis=-1}]
tmp_12 = 1.0 [shape=[1]]
tmp_13 = elementwise_mul(X=learning_rate_0, Y=tmp_12) [{axis=-1}]
tmp_14 = 1.0 [shape=[1]]
tmp_15 = elementwise_mul(X=learning_rate_0, Y=tmp_14) [{axis=-1}]
tmp_16 = 1.0 [shape=[1]]
tmp_17 = elementwise_mul(X=learning_rate_0, Y=tmp_16) [{axis=-1}]
tmp_18 = 1.0 [shape=[1]]
tmp_19 = elementwise_mul(X=learning_rate_0, Y=tmp_18) [{axis=-1}]

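The op_repr_handlers mechanism described above can be sketched as a registry that maps an op type to a custom pretty-printer, with a generic fallback for everything else. This is not the PR's actual implementation: the Op class below is a toy stand-in for the real protobuf OpDesc, and the function names are illustrative.

```python
# Minimal sketch of handler-based op formatting (hypothetical; the real
# code works on protobuf OpDesc objects).
class Op:
    def __init__(self, type, outputs, attrs):
        self.type = type
        self.outputs = outputs
        self.attrs = attrs

# Registry: op type -> custom repr handler.
op_repr_handlers = {}

def register_handler(op_type):
    def deco(fn):
        op_repr_handlers[op_type] = fn
        return fn
    return deco

@register_handler("fill_constant")
def repr_fill_constant(op):
    # Render e.g. "aval = 1. [shape=[1]]" instead of the generic form.
    return "%s = %s [shape=%s]" % (
        op.outputs[0], op.attrs["value"], op.attrs["shape"])

def repr_op(op):
    # Dispatch to a registered handler if one exists, else fall back.
    handler = op_repr_handlers.get(op.type)
    if handler:
        return handler(op)
    return "%s = %s(...)" % (", ".join(op.outputs), op.type)

op = Op("fill_constant", ["aval"], {"value": "1.", "shape": [1]})
print(repr_op(op))  # the fill_constant handler produces the short form
```

Adding debug output for a new operator is then a matter of registering one more handler; ops without a handler still get the generic rendering.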


def repr_var(vardesc):
Member

What is the meaning of repr?

Contributor Author

Represent, similar to the repr method in Python.
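For context, Python's built-in repr is an object-to-string conversion meant for debugging, which is the analogy the helper names draw on. A minimal illustration:

```python
# Python's repr protocol: __repr__ returns a debugging-oriented string.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        return "Point(x=%r, y=%r)" % (self.x, self.y)

print(repr(Point(1, 2)))  # Point(x=1, y=2)
```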

Member

Can these repr methods be used independently?

Contributor Author

Yes, just pass in a proto and get a str back.
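The standalone proto-in, string-out shape can be sketched as follows. The real functions take the protobuf desc objects from framework.default_main_program().desc; the namedtuple here is a hypothetical stand-in so the example is self-contained, and the output mirrors the Tensor/LoDTensor lines in the block dump above (minus Python 2's L suffix on longs).

```python
# Hypothetical stand-in for a protobuf VarDesc.
from collections import namedtuple

VarDesc = namedtuple("VarDesc", "name dtype shape lod_level")

def repr_var(vardesc):
    # Render the "Tensor name (tensor(type=..., shape=[...]))" form,
    # or the LoDTensor variant when the var carries LoD information.
    tensor = "tensor(type=%s, shape=%s)" % (vardesc.dtype, list(vardesc.shape))
    if vardesc.lod_level > 0:
        return "LoDTensor %s (level=%d, %s)" % (
            vardesc.name, vardesc.lod_level, tensor)
    return "Tensor %s (%s)" % (vardesc.name, tensor)

print(repr_var(VarDesc("tmp_0", "float32", (1,), 0)))
print(repr_var(VarDesc("lstm_0.tmp_1", "float32", (-1, 32), 1)))
```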

Member

@jacquesqiao jacquesqiao left a comment

Great work!

@Superjomn Superjomn merged commit 6f28084 into PaddlePaddle:develop Feb 5, 2018
@Superjomn Superjomn deleted the feature/add_human_readable_debuginfo branch February 5, 2018 02:53