When attempting to use the AlexNet output with TimeloopFE, parsing fails on the problem's "instance" attribute:
Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1
So how am I supposed to actually use pytorch2timeloop-converter?
Full error is:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:144, in TypeSpecifier.cast_check_type(self, value, node, key)
    143 try:
--> 144     casted = self.cast(value)
    145 except Exception as exc:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:197, in TypeSpecifier.cast(self, value, _TypeSpecifier__node_skip_parse)
    196 else:
--> 197     value = self.callfunc(value)
    198 if not primitive:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:106, in TypeSpecifier.__init__.<locals>.callfunc(x, _TypeSpecifier__node_skip_parse)
    105     return x
--> 106 return rt(x, __node_skip_parse=__node_skip_parse)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/problem.py:143, in Instance.__init__(self, *args, **kwargs)
    142 def __init__(self, *args, **kwargs):
--> 143     super().__init__(*args, **kwargs)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1207, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1205 super().__init__(*args, **kwargs)
-> 1207 self.update(self._to_dict(args))
   1208 for a in args:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1238, in DictNode._to_dict(x)
   1237 for y in x:
-> 1238     result.update(DictNode._to_dict(y))
   1239 return result

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1241, in DictNode._to_dict(x)
   1240 else:
-> 1241     raise TypeError(f"Can not convert non-dict to dict: {x}")

TypeError: Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1

The above exception was the direct cause of the following exception:

ParseError                                Traceback (most recent call last)
File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:144, in TypeSpecifier.cast_check_type(self, value, node, key)
    143 try:
--> 144     casted = self.cast(value)
    145 except Exception as exc:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:197, in TypeSpecifier.cast(self, value, _TypeSpecifier__node_skip_parse)
    196 else:
--> 197     value = self.callfunc(value)
    198 if not primitive:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:106, in TypeSpecifier.__init__.<locals>.callfunc(x, _TypeSpecifier__node_skip_parse)
    105     return x
--> 106 return rt(x, __node_skip_parse=__node_skip_parse)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/problem.py:24, in Problem.__init__(self, *args, **kwargs)
     23 def __init__(self, *args, **kwargs):
---> 24     super().__init__(*args, **kwargs)
     25 self.version: str = self["version"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1229, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1228 if not __node_skip_parse:
-> 1229     self._parse_elems()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:564, in Node._parse_elems(self)
    563 for k, check in self._get_index2checker().items():
--> 564     self._parse_elem(k, check)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:541, in Node._parse_elem(self, key, check, value_override)
    540 if check is not None:
--> 541     v = check.cast_check_type(v, self, key)
    543 if isinstance(v, Node):

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:181, in TypeSpecifier.cast_check_type(self, value, node, key)
    180 new_exc._last_non_node_exception = last_non_node_exception
--> 181 raise new_exc from exc
    183 # self.check_type(casted, node, key)

ParseError: Error calling cast function "Instance" for value "0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1" in Problem[instance]. Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1

The above exception was the direct cause of the following exception:

ParseError                                Traceback (most recent call last)
Cell In[5], line 1
----> 1 spec = tl.Specification.from_yaml_files(
      2     ARCH_PATH,
      3     COMPONENTS_PATH,
      4     MAPPER_PATH,
      5     #PROBLEM_PATH,
      6     ALEXNET_PATH,
      7     VARIABLES_PATH,
      8 ) # Gather YAML files into a Python object
      9 tl.call_mapper(spec, output_dir=f"{os.curdir}/outputs") # Run the Timeloop mapper
     10 stats = open("outputs/timeloop-mapper.stats.txt").read()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/base_specification.py:179, in BaseSpecification.from_yaml_files(cls, *args, **kwargs)
    167 @classmethod
    168 def from_yaml_files(cls, *args, **kwargs) -> "Specification":
    169     """
    170     Create a Specification object from YAML files.
    171 (...)
    177     Specification: The created Specification object.
    178     """
--> 179     return super().from_yaml_files(*args, **kwargs)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1362, in DictNode.from_yaml_files(cls, jinja_parse_data, *files, **kwargs)
   1359     key2file[k] = f
   1360     rval[k] = v
-> 1362 c = cls(**rval, **kwargs)
   1363 logging.info(
   1364     "Parsing extra attributes %s", ", ".join([x[0] for x in extra_elems])
   1365 )
   1366 c._parse_extra_elems(extra_elems)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/specification.py:61, in Specification.__init__(self, *args, **kwargs)
     59 assert "_required_processors" not in kwargs, "Cannot set _required_processors"
     60 kwargs["_required_processors"] = REQUIRED_PROCESSORS
---> 61 super().__init__(*args, **kwargs)
     62 self.architecture: arch.Architecture = self["architecture"]
     63 self.constraints: constraints.Constraints = self["constraints"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/base_specification.py:73, in BaseSpecification.__init__(self, *args, **kwargs)
     69 self.spec = self
     71 self._early_init_processors(**kwargs)  # Because processors define declare_attrs
---> 73 super().__init__(*args, **kwargs)
     74 TypeSpecifier.reset_id2casted()
     76 self.processors: ListNode = self["processors"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1229, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1227     self[k] = default_unspecified_
   1228 if not __node_skip_parse:
-> 1229     self._parse_elems()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:564, in Node._parse_elems(self)
    562 self.spec = parent.spec if parent is not None else Node.get_global_spec()
    563 for k, check in self._get_index2checker().items():
--> 564     self._parse_elem(k, check)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:541, in Node._parse_elem(self, key, check, value_override)
    539 tag = Node._get_tag(v)
    540 if check is not None:
--> 541     v = check.cast_check_type(v, self, key)
    543 if isinstance(v, Node):
    544     v.tag = tag

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:181, in TypeSpecifier.cast_check_type(self, value, node, key)
    175 new_exc = ParseError(
    176     f'Error calling cast function "{callname}" '
    177     f'for value "{value}" in {node.get_name()}[{key}]. '
    178     f"{self.removed_by_str()}{estr}"
    179 )
    180 new_exc._last_non_node_exception = last_non_node_exception
--> 181 raise new_exc from exc
    183 # self.check_type(casted, node, key)
    184 return casted

ParseError: Error calling cast function "Problem" for value "[{'instance': '0 <= features_0_G < 1 and 0 <= features_0_C < 3 and 0 <= features_0_M < 64 and 0 <= features_0_N < 2 and 0 <= features_0_P < 55 and 0 <= features_0_Q < 55 and 0 <= features_0_R < 11 and 0 <= features_0_S < 11', 'shape': {'data-spaces': [{'name': 'features_0_filter', 'projection': '[ features_0_G, features_0_C, features_0_M, features_0_R, features_0_S ]'}, {'name': 'x_out', 'projection': '[ features_0_N, features_0_G*3 + features_0_C, features_0_R + features_0_P*4, features_0_S + features_0_Q*4 ]'}, {'name': 'features_0_out', 'projection': '[ features_0_N, features_0_G*64 + features_0_M, features_0_P, features_0_Q ]', 'read-write': True}], 'dimensions': ['features_0_G', 'features_0_C', 'features_0_M', 'features_0_R', 'features_0_S', 'features_0_N', 'features_0_P', 'features_0_Q'], 'name': 'features_0'}}, {'instance': '0 <= features_2_C < 64 and 0 <= features_2_N < 2 and 0 <= features_2_P < 27 and 0 <= features_2_Q < 27 and 0 <= features_2_R < 3 and 0 <= features_2_S < 3', 'shape': {'data-spaces': [{'name': 'features_0_out', 'projection': '[ features_2_N, features_2_C, features_2_R + features_2_P*2, features_2_S + features_2_Q*2 ]'}, {'name': 'features_2_out', 'projection': '[ features_2_N, features_2_C, features_2_P, features_2_Q ]', 'read-write': True}], 'dimensions': ['features_2_C', 'features_2_R', 'features_2_S',
'features_2_N', 'features_2_P', 'features_2_Q'], 'name': 'features_2'}}, {'instance': '0 <= features_3_G < 1 and 0 <= features_3_C < 64 and 0 <= features_3_M < 192 and 0 <= features_3_N < 2 and 0 <= features_3_P < 27 and 0 <= features_3_Q < 27 and 0 <= features_3_R < 5 and 0 <= features_3_S < 5', 'shape': {'data-spaces': [{'name': 'features_3_filter', 'projection': '[ features_3_G, features_3_C, features_3_M, features_3_R, features_3_S ]'}, {'name': 'features_2_out', 'projection': '[ features_3_N, features_3_G*64 + features_3_C, features_3_R + features_3_P*1, features_3_S + features_3_Q*1 ]'}, {'name': 'features_3_out', 'projection': '[ features_3_N, features_3_G*192 + features_3_M, features_3_P, features_3_Q ]', 'read-write': True}], 'dimensions': ['features_3_G', 'features_3_C', 'features_3_M', 'features_3_R', 'features_3_S', 'features_3_N', 'features_3_P', 'features_3_Q'], 'name': 'features_3'}}, {'instance': '0 <= features_5_C < 192 and 0 <= features_5_N < 2 and 0 <= features_5_P < 13 and 0 <= features_5_Q < 13 and 0 <= features_5_R < 3 and 0 <= features_5_S < 3', 'shape': {'data-spaces': [{'name': 'features_3_out', 'projection': '[ features_5_N, features_5_C, features_5_R + features_5_P*2, features_5_S + features_5_Q*2 ]'}, {'name': 'features_5_out', 'projection': '[ features_5_N, features_5_C, features_5_P, features_5_Q ]', 'read-write': True}], 'dimensions': ['features_5_C', 'features_5_R', 'features_5_S', 'features_5_N', 'features_5_P', 'features_5_Q'], 'name': 'features_5'}}, {'instance': '0 <= features_6_G < 1 and 0 <= features_6_C < 192 and 0 <= features_6_M < 384 and 0 <= features_6_N < 2 and 0 <= features_6_P < 13 and 0 <= features_6_Q < 13 and 0 <= features_6_R < 3 and 0 <= features_6_S < 3', 'shape': {'data-spaces': [{'name': 'features_6_filter', 'projection': '[ features_6_G, features_6_C, features_6_M, features_6_R, features_6_S ]'}, {'name': 'features_5_out', 'projection': '[ features_6_N, features_6_G*192 + features_6_C, features_6_R + 
features_6_P*1, features_6_S + features_6_Q*1 ]'}, {'name': 'features_6_out', 'projection': '[ features_6_N, features_6_G*384 + features_6_M, features_6_P, features_6_Q ]', 'read-write': True}], 'dimensions': ['features_6_G', 'features_6_C', 'features_6_M', 'features_6_R', 'features_6_S', 'features_6_N', 'features_6_P', 'features_6_Q'], 'name': 'features_6'}}, {'instance': '0 <= features_8_G < 1 and 0 <= features_8_C < 384 and 0 <= features_8_M < 256 and 0 <= features_8_N < 2 and 0 <= features_8_P < 13 and 0 <= features_8_Q < 13 and 0 <= features_8_R < 3 and 0 <= features_8_S < 3', 'shape': {'data-spaces': [{'name': 'features_8_filter', 'projection': '[ features_8_G, features_8_C, features_8_M, features_8_R, features_8_S ]'}, {'name': 'features_6_out', 'projection': '[ features_8_N, features_8_G*384 + features_8_C, features_8_R + features_8_P*1, features_8_S + features_8_Q*1 ]'}, {'name': 'features_8_out', 'projection': '[ features_8_N, features_8_G*256 + features_8_M, features_8_P, features_8_Q ]', 'read-write': True}], 'dimensions': ['features_8_G', 'features_8_C', 'features_8_M', 'features_8_R', 'features_8_S', 'features_8_N', 'features_8_P', 'features_8_Q'], 'name': 'features_8'}}, {'instance': '0 <= features_10_G < 1 and 0 <= features_10_C < 256 and 0 <= features_10_M < 256 and 0 <= features_10_N < 2 and 0 <= features_10_P < 13 and 0 <= features_10_Q < 13 and 0 <= features_10_R < 3 and 0 <= features_10_S < 3', 'shape': {'data-spaces': [{'name': 'features_10_filter', 'projection': '[ features_10_G, features_10_C, features_10_M, features_10_R, features_10_S ]'}, {'name': 'features_8_out', 'projection': '[ features_10_N, features_10_G*256 + features_10_C, features_10_R + features_10_P*1, features_10_S + features_10_Q*1 ]'}, {'name': 'features_10_out', 'projection': '[ features_10_N, features_10_G*256 + features_10_M, features_10_P, features_10_Q ]', 'read-write': True}], 'dimensions': ['features_10_G', 'features_10_C', 'features_10_M', 'features_10_R', 
'features_10_S', 'features_10_N', 'features_10_P', 'features_10_Q'], 'name': 'features_10'}}, {'instance': '0 <= features_12_C < 256 and 0 <= features_12_N < 2 and 0 <= features_12_P < 6 and 0 <= features_12_Q < 6 and 0 <= features_12_R < 3 and 0 <= features_12_S < 3', 'shape': {'data-spaces': [{'name': 'features_10_out', 'projection': '[ features_12_N, features_12_C, features_12_R + features_12_P*2, features_12_S + features_12_Q*2 ]'}, {'name': 'features_12_out', 'projection': '[ features_12_N, features_12_C, features_12_P, features_12_Q ]', 'read-write': True}], 'dimensions': ['features_12_C', 'features_12_R', 'features_12_S', 'features_12_N', 'features_12_P', 'features_12_Q'], 'name': 'features_12'}}, {'instance': '0 <= avgpool_C < 256 and 0 <= avgpool_N < 2 and 0 <= avgpool_P < 6 and 0 <= avgpool_Q < 6 and 0 <= avgpool_R < 1 and 0 <= avgpool_S < 1', 'shape': {'data-spaces': [{'name': 'features_12_out', 'projection': '[ avgpool_N, avgpool_C, avgpool_R + avgpool_P*1, avgpool_S + avgpool_Q*1 ]'}, {'name': 'avgpool_out', 'projection': '[ avgpool_N, avgpool_C, avgpool_P, avgpool_Q ]', 'read-write': True}], 'dimensions': ['avgpool_C', 'avgpool_R', 'avgpool_S', 'avgpool_N', 'avgpool_P', 'avgpool_Q'], 'name': 'avgpool'}}, {'instance': '0 <= A < 2 and 0 <= B < 9216', 'shape': {'data-spaces': [{'name': 'avgpool_out', 'projection': '[ floor(B*1 + A*9216/9216)%2, floor(B*1 + A*9216/36)%256, floor(B*1 + A*9216/6)%6, floor(B*1 + A*9216/1)%6 ]'}, {'name': 'flatten_out', 'projection': '[ A, B ]', 'read-write': True}], 'dimensions': ['A', 'B'], 'name': 'flatten'}}, {'instance': '0 <= classifier_1_G < 1 and 0 <= classifier_1_C < 9216 and 0 <= classifier_1_M < 4096 and 0 <= classifier_1_N < 2 and 0 <= classifier_1_P < 1 and 0 <= classifier_1_Q < 1 and 0 <= classifier_1_R < 1 and 0 <= classifier_1_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_1_filter', 'projection': '[ classifier_1_G, classifier_1_C, classifier_1_M, classifier_1_R, classifier_1_S ]'}, {'name': 
'flatten_out', 'projection': '[ classifier_1_N, classifier_1_G*9216 + classifier_1_C, classifier_1_R + classifier_1_P*1, classifier_1_S + classifier_1_Q*1 ]'}, {'name': 'classifier_1_out', 'projection': '[ classifier_1_N, classifier_1_G*4096 + classifier_1_M, classifier_1_P, classifier_1_Q ]', 'read-write': True}], 'dimensions': ['classifier_1_G', 'classifier_1_C', 'classifier_1_M', 'classifier_1_R', 'classifier_1_S', 'classifier_1_N', 'classifier_1_P', 'classifier_1_Q'], 'name': 'classifier_1'}}, {'instance': '0 <= classifier_4_G < 1 and 0 <= classifier_4_C < 4096 and 0 <= classifier_4_M < 4096 and 0 <= classifier_4_N < 2 and 0 <= classifier_4_P < 1 and 0 <= classifier_4_Q < 1 and 0 <= classifier_4_R < 1 and 0 <= classifier_4_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_4_filter', 'projection': '[ classifier_4_G, classifier_4_C, classifier_4_M, classifier_4_R, classifier_4_S ]'}, {'name': 'classifier_1_out', 'projection': '[ classifier_4_N, classifier_4_G*4096 + classifier_4_C, classifier_4_R + classifier_4_P*1, classifier_4_S + classifier_4_Q*1 ]'}, {'name': 'classifier_4_out', 'projection': '[ classifier_4_N, classifier_4_G*4096 + classifier_4_M, classifier_4_P, classifier_4_Q ]', 'read-write': True}], 'dimensions': ['classifier_4_G', 'classifier_4_C', 'classifier_4_M', 'classifier_4_R', 'classifier_4_S', 'classifier_4_N', 'classifier_4_P', 'classifier_4_Q'], 'name': 'classifier_4'}}, {'instance': '0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_6_filter', 'projection': '[ classifier_6_G, classifier_6_C, classifier_6_M, classifier_6_R, classifier_6_S ]'}, {'name': 'classifier_4_out', 'projection': '[ classifier_6_N, classifier_6_G*4096 + classifier_6_C, classifier_6_R + classifier_6_P*1, classifier_6_S + classifier_6_Q*1 ]'}, 
{'name': 'classifier_6_out', 'projection': '[ classifier_6_N, classifier_6_G*1000 + classifier_6_M, classifier_6_P, classifier_6_Q ]', 'read-write': True}], 'dimensions': ['classifier_6_G', 'classifier_6_C', 'classifier_6_M', 'classifier_6_R', 'classifier_6_S', 'classifier_6_N', 'classifier_6_P', 'classifier_6_Q'], 'name': 'classifier_6'}}]" in Specification[problem]. Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1
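The root cause visible in the trace: pytorch2timeloop-converter emits each layer's instance as an ISL-style constraint string, while timeloopfe's v4 Instance node is dict-typed, so the cast fails. One possible workaround is to parse each string into a mapping from dimension name to bound before handing the problem to timeloopfe. This is only a sketch under my own assumption that a `{dimension: bound}` dict is what the v4 schema accepts; check it against the problem spec you are running against:

```python
import re

def instance_str_to_dict(instance_str):
    """Parse '0 <= C < 4096 and 0 <= M < 1000' into {'C': 4096, 'M': 1000}.

    The converter emits each bound as '0 <= <dim> < <upper>'; the upper
    bound is exclusive, so it equals the dimension size directly.
    """
    bounds = {}
    for lo, name, hi in re.findall(r"(\d+)\s*<=\s*(\w+)\s*<\s*(\d+)", instance_str):
        if lo != "0":
            raise ValueError(f"unexpected lower bound {lo} for {name}")
        bounds[name] = int(hi)
    return bounds

# Example with (part of) the string from the error above:
s = ("0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and "
     "0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2")
print(instance_str_to_dict(s))
# {'classifier_6_G': 1, 'classifier_6_C': 4096, 'classifier_6_M': 1000, 'classifier_6_N': 2}
```

The same rewrite could be applied to every layer's 'instance' entry in the converted YAML before calling Specification.from_yaml_files.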
Same question. Can someone provide a working example of arch + mapper with a pytorch2timeloop converted yaml?
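A related wrinkle in the trace: the converted YAML holds a list of per-layer problems, while the Specification expects a single problem, and the mapper runs one layer at a time. A speculative sketch of splitting the converted list into one spec fragment per layer (the dict layout and the "0.4" version string are my assumptions, not confirmed against the converter's output):

```python
# Hypothetical per-layer instance dicts, as they might look after rewriting
# the converter's constraint strings (sizes taken from the error output above).
layers = [
    {"name": "classifier_4", "instance": {"C": 4096, "M": 4096, "N": 2}},
    {"name": "classifier_6", "instance": {"C": 4096, "M": 1000, "N": 2}},
]

# Build one single-problem spec fragment per layer, since the Timeloop
# mapper consumes one problem at a time rather than a list of layers.
per_layer_specs = {
    layer["name"]: {"problem": {"version": "0.4", "instance": layer["instance"]}}
    for layer in layers
}
print(sorted(per_layer_specs))
# ['classifier_4', 'classifier_6']
```

Each fragment could then be written to its own YAML file and passed to the mapper alongside the arch and mapper files.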