
Swapping the order of the inputs of layer.fc will cause an error. #2141

Closed
xinghai-sun opened this issue May 15, 2017 · 0 comments
xinghai-sun commented May 15, 2017

It's weird that an error occurs when we just swap the order of the input layers of layer.fc.

For example:

import paddle.v2 as paddle


def main():
    hidden_size = 128
    dict_size = 30000
    paddle.init(use_gpu=False, trainer_count=1)

    words = paddle.layer.data(
        name="words",
        type=paddle.data_type.integer_value_sequence(dict_size))
    next_words = paddle.layer.data(
        name='next_words',
        type=paddle.data_type.integer_value_sequence(dict_size))

    def recurrent_step(embedding):
        # Read the output of the layer named "memory" from the previous
        # time step.
        last_memory = paddle.layer.memory(name="memory", size=hidden_size)
        # Update the hidden state; naming this fc layer "memory" is what
        # binds it to the memory layer above.
        memory_update = paddle.layer.fc(
            name="memory", input=[last_memory, embedding], size=hidden_size)
        predict = paddle.layer.fc(
            input=[embedding, memory_update],
            size=dict_size,
            act=paddle.activation.Softmax())
        return predict

    predict_seq = paddle.layer.recurrent_group(
        step=recurrent_step,
        input=[paddle.layer.embedding(input=words, size=hidden_size)])
    cost = paddle.layer.classification_cost(
        input=predict_seq, label=next_words)

    parameters = paddle.parameters.create(cost)


if __name__ == '__main__':
    main()

This works OK. But if we swap the order of the fc layer's inputs, from

        predict = paddle.layer.fc(
            input=[embedding, memory_update],
            size=dict_size,
            act=paddle.activation.Softmax())

to

        predict = paddle.layer.fc(
            input=[memory_update, embedding],
            size=dict_size,
            act=paddle.activation.Softmax())

we get the following error:

Traceback (most recent call last):
  File "bug.py", line 39, in <module>
    main()
  File "bug.py", line 32, in main
    parameters = paddle.parameters.create(cost)
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/parameters.py", line 19, in create
    topology = Topology(layers)
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/topology.py", line 69, in __init__
    layers, extra_layers=extra_layers)
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/layer.py", line 96, in parse_network
    return __parse__(__real_func__)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer_config_helpers/config_parser_utils.py", line 32, in parse_network_config
    config = config_parser.parse_config(network_conf, config_arg_str)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer/config_parser.py", line 3597, in parse_config
    trainer_config()
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/layer.py", line 89, in __real_func__
    real_output = [each.to_proto(context=context) for each in output_layers]
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 109, in to_proto
    context=context)
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 112, in to_proto
    self.__parent_layers__[layer_name])
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 111, in <lambda>
    v1_layer = map(lambda x: x.to_proto(context=context),
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 112, in to_proto
    self.__parent_layers__[layer_name])
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 111, in <lambda>
    v1_layer = map(lambda x: x.to_proto(context=context),
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 112, in to_proto
    self.__parent_layers__[layer_name])
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 111, in <lambda>
    v1_layer = map(lambda x: x.to_proto(context=context),
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/config_base.py", line 116, in to_proto
    ret_val = self.to_proto_impl(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/paddle/v2/layer.py", line 197, in to_proto_impl
    return conf_helps.memory(name=self.name, **args)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer_config_helpers/default_decorators.py", line 53, in __wrapper__
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer_config_helpers/layers.py", line 2927, in memory
    memory_name=memory_name)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer/config_parser.py", line 2337, in Memory
    agent_layer = AgentLayer(agent_name, size)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer/config_parser.py", line 2243, in __init__
    name, 'agent', size, inputs=[], device=device)
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer/config_parser.py", line 1441, in __init__
    height = self.get_input_layer(0).height
  File "/usr/local/lib/python2.7/site-packages/paddle/trainer/config_parser.py", line 1447, in get_input_layer
    return g_layer_map[self.config.inputs[input_index].input_layer_name]
  File "/usr/local/lib/python2.7/site-packages/google/protobuf/internal/containers.py", line 204, in __getitem__
    return self._values[key]
IndexError: list index out of range

This is not the expected behavior. Judging from the last frames, the memory's agent layer is constructed with an empty inputs list (AgentLayer passes inputs=[]), and LayerBase.__init__ then calls get_input_layer(0), which indexes past the end of that list, hence the IndexError.
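As a possible debugging aid (an assumption on my part: paddle.layer.parse_network can be called directly on an output layer, as the frames in paddle/v2/layer.py above suggest), dumping the parsed protobuf for both input orderings might show where the "memory" layer ends up relative to its agent layer:

# Hypothetical check, not from the original report: print the parsed
# network config instead of creating parameters.
print(paddle.layer.parse_network(cost))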

Notice that this happens within a recurrent_group.
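
For comparison, here is a minimal sketch that keeps a swapped input order but drops the recurrent_group and memory layer entirely (the layer names and sizes here are arbitrary, and this snippet is my own addition, not from the original report); as far as I can tell it parses without error:

import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

a = paddle.layer.data(name="a", type=paddle.data_type.dense_vector(16))
b = paddle.layer.data(name="b", type=paddle.data_type.dense_vector(16))

# The same kind of swap as above (b before a), but with no memory layer
# involved.
predict = paddle.layer.fc(
    input=[b, a], size=8, act=paddle.activation.Softmax())
cost = paddle.layer.classification_cost(
    input=predict,
    label=paddle.layer.data(
        name="label", type=paddle.data_type.integer_value(8)))

# This appears to parse fine, which points the bug at the recurrent_group /
# memory-resolution path rather than at layer.fc itself.
parameters = paddle.parameters.create(cost)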

Version: docker.paddlepaddle.org/paddle:0.10.0rc3
