Replies: 2 comments
-
When switching from TI to TM1py and Python, you also need to change your approach slightly. In Turbo Integrator you can do individual cell-read and cell-write operations (e.g., an attribute value lookup) while looping through a view cell by cell. With TM1py you must avoid individual or small read and write operations; you need to approach the problem differently:
1. Read everything you need (source data, lookup cubes, attributes) in bulk, with as few queries as possible.
2. Do all lookups and calculations in memory, e.g. in Python dictionaries.
3. Write all results back in one bulk operation at the end.
If you follow this approach, there is a good chance that the Python script is actually faster than the original TI script. I hope this helps
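A minimal sketch of that pattern, assuming a source cube "Source" with a view "Source View" and a target cube "Target" (all names and connection details are placeholders):

```python
from TM1py import TM1Service
from TM1py.Utils import element_names_from_element_unique_names

# Connection details are placeholders; adjust to your environment.
with TM1Service(address="localhost", port=12354, user="admin",
                password="apple", ssl=True) as tm1:

    # 1. One call to read the whole source view.
    source = tm1.cubes.cells.execute_view(
        cube_name="Source", view_name="Source View", private=False)

    # 2. All lookups and calculations happen in memory.
    cells_to_write = {}
    for unique_names, cell in source.items():
        if cell["Value"] is None:
            continue
        coordinates = element_names_from_element_unique_names(unique_names)
        # ... derive target coordinates and the new value here ...
        cells_to_write[coordinates] = cell["Value"]

    # 3. One bulk write-back at the end.
    tm1.cubes.cells.write_values(
        cube_name="Target", cellset_as_dict=cells_to_write)
```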
-
OK. The source view I could import all at once with a single piece of code, no problem.
But the distribution view is only defined after I have one record of the source. Is there a way to export the entire distribution cube and then define the subsets and build the "views" on that dataset in Python itself? Writing back should not be a problem, because I could collect the values internally and then use one write statement. But when something goes wrong, e.g. a write attempt to a consolidated element, a type mismatch, or a missing element, how can I manage this if I only get one "Error" for 120,000 records?
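For illustration, a sketch of that approach; the cube "Distribution", the view "All Data", the "Month" dimension, and the dimension order are all assumptions:

```python
from TM1py import TM1Service
from TM1py.Utils import element_names_from_element_unique_names

with TM1Service(address="localhost", port=12354, user="admin",
                password="apple", ssl=True) as tm1:

    # Export the distribution cube once, via one large view over all of it.
    raw = tm1.cubes.cells.execute_view(
        cube_name="Distribution", view_name="All Data", private=False)
    dist_data = {
        element_names_from_element_unique_names(key): cell["Value"]
        for key, cell in raw.items() if cell["Value"] is not None
    }

    # Per source record, filter in Python instead of building a new view,
    # e.g. all months for one product (dimension order is an assumption):
    per_product = {k: v for k, v in dist_data.items() if k[0] == "Product1"}

    # Validate target coordinates before the bulk write, so a problem can
    # be pinned to specific records instead of one opaque error:
    leaf_months = set(
        tm1.dimensions.hierarchies.elements.get_leaf_element_names(
            dimension_name="Month", hierarchy_name="Month"))
    invalid = [k for k in per_product if k[1] not in leaf_months]
```

Checking leaf elements and element existence per dimension up front, or writing in smaller chunks wrapped in try/except, narrows a failing bulk write down to identifiable records.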
-
I just started with TM1py and I'm very satisfied with the functionality. I built a small process which dynamically creates a view by using subsets for each dimension.
For each source record I look up keys in another cube, let's say by month. So for one record in the source I get 12 values from the distribution cube. I then write back to another cube the partial values of the one source value in relation to the sum of the 12 distribution values, e.g. 1/12 of the source value if all twelve are equal.
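For illustration, the split per source record in plain Python (the variable names are made up):

```python
# dist_values: the 12 monthly values looked up for one source record,
# e.g. {"Jan": 1.0, "Feb": 1.0, ...}; source_value: the value to split.
total = sum(dist_values.values())
shares = {month: source_value * value / total
          for month, value in dist_values.items()}
```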
This works fine, but it is slow.
My question: how can I speed up the process?
When I get a record from the source view (let's assume I get 10,000 records), I have to define a new view to get the corresponding values from the distribution cube. So if this cube has 5 dimensions, I have to define a subset 10,000 (records) x 5 (dimensions) = 50,000 times.
After this I get only 12 records from the newly created view, so it results in writing back 10,000 (source values) x 12 (distribution parts) = 120,000 destination records.
I'm writing it back value by value, so that I get informed if something fails and, if so, when and where.
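A minimal sketch of such a value-by-value write, assuming a target cube "Target" and an already prepared list of (coordinates, value) records:

```python
# One REST call per cell: easy to diagnose, but the main reason it is slow.
for coordinates, value in records:
    try:
        tm1.cubes.cells.write_value(
            value=value, cube_name="Target", element_tuple=coordinates)
    except Exception as exc:
        print(f"Write failed at {coordinates}: {exc}")
```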
So how can I speed this up with TM1py functionality? I already try to avoid unnecessary definitions, e.g. defining a subset identical to the previous one, but it is still terribly slow.