Merge/json improvements #650
Conversation
Force-pushed from d06afac to f7fa5d4
Hi @gbryant-dev, I rebased your branch on the master branch and solved the few conflicts. Before we merge it into master, I think there are a few small tasks left.
I will continue working in this branch for the moment and keep the remote up to date. Here is my simple benchmarking script:

```python
import time

from TM1py import TM1Service

with TM1Service(address="", port=12297, user="admin", password="apple", ssl=True) as tm1:
    mdx = """
    SELECT
    {[Big Cube Measure].[Big Cube Measure].Members} ON COLUMNS,
    {[Big Dimension 1].[Big Dimension 1].Members} *
    {[Big Dimension 2].[Big Dimension 2].[000001],
    [Big Dimension 2].[Big Dimension 2].[000002]} ON ROWS
    FROM [Big Cube]
    """

    times = list()
    for _ in range(10):
        before = time.time()
        data = tm1.cells.execute_mdx(mdx=mdx, use_compact_json=True)
        elapsed_time = time.time() - before
        times.append(elapsed_time)
    print(f"execute_mdx with use_compact_json=True: {sum(times) / len(times)}")

    times = list()
    for _ in range(10):
        before = time.time()
        data = tm1.cells.execute_mdx(mdx=mdx, use_compact_json=False)
        elapsed_time = time.time() - before
        times.append(elapsed_time)
    print(f"execute_mdx with use_compact_json=False: {sum(times) / len(times)}")

    times = list()
    for _ in range(10):
        before = time.time()
        data = tm1.cells.execute_mdx_values(mdx=mdx, use_compact_json=True)
        elapsed_time = time.time() - before
        times.append(elapsed_time)
    print(f"execute_mdx_values with use_compact_json=True: {sum(times) / len(times)}")

    times = list()
    for _ in range(10):
        before = time.time()
        data = tm1.cells.execute_mdx_values(mdx=mdx, use_compact_json=False)
        elapsed_time = time.time() - before
        times.append(elapsed_time)
    print(f"execute_mdx_values with use_compact_json=False: {sum(times) / len(times)}")
```

And the output:

(output table not preserved)
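As an aside, the timing loop above can be factored into a small reusable helper that also reports the spread across runs, which makes it easier to judge whether a difference between two configurations is real. This is a sketch with a stand-in workload, not part of the PR; `benchmark` is a hypothetical helper name and no TM1 server is needed to run it.

```python
import statistics
import time


def benchmark(fn, runs=10):
    """Time fn over several runs; return (mean, stdev) in seconds."""
    times = []
    for _ in range(runs):
        before = time.time()
        fn()
        times.append(time.time() - before)
    return statistics.mean(times), statistics.stdev(times)


# Stand-in workload so the sketch runs without a TM1 connection.
mean, stdev = benchmark(lambda: sum(range(100_000)))
print(f"mean={mean:.5f}s stdev={stdev:.5f}s")
```

In the script above, each `for _ in range(10)` block could then become a single `benchmark(lambda: tm1.cells.execute_mdx(mdx=mdx, use_compact_json=True))` call.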
Hi @MariusWirtz,

Great, thanks for looking at this. As for performance, the two key things I would expect compact JSON to improve are network latency and payload size.

From your benchmarking script, I can see you are running an instance on localhost, so that will eliminate any improvement you would gain from the network latency element. In terms of payload size, this will depend on how big the cellset is. From the initial testing I did, the payload size can shrink by up to 70%. I suspect the reason you're not seeing any difference (besides testing on a local instance) is that the cellset isn't large enough to make a noticeable difference. How big are the dimensions in the cube used in the script?

In addition, as this MR only uses compact JSON for cells, the performance benefit from network latency / payload size would be netted off to a varying degree, because there would be two requests instead of one: one to get the axes, tuples and member information, and a second to get the cell information. This wouldn't apply to functions that eventually call […].

I'll try and do some benchmarking, but I suspect that won't be until the latter half of next week.
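The payload-size argument above can be illustrated with a toy comparison: a verbose cellset repeats property names for every cell, while a compact cellset sends only an ordered array of values. The field names below are illustrative only and do not reflect the actual TM1 REST response schema.

```python
import json

# Verbose shape: one JSON object per cell, property names repeated each time.
# (Field names are illustrative; real TM1 REST payloads differ.)
cells = [
    {"Ordinal": i, "Value": i * 1.5, "FormattedValue": f"{i * 1.5:.2f}"}
    for i in range(1000)
]
verbose = json.dumps({"Cells": cells})

# Compact shape: just the ordered values, positions implied by the axes.
compact = json.dumps({"Cells": [c["Value"] for c in cells]})

reduction = 1 - len(compact) / len(verbose)
print(f"verbose: {len(verbose)} bytes, compact: {len(compact)} bytes, "
      f"reduction: {reduction:.0%}")
```

Even in this toy case the reduction is substantial, which is consistent with the "up to 70%" observation, though the real figure depends on how many properties each cell carries.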
Hi @gbryant-dev,

You are absolutely right. When I re-run the test against a remote server, I get a different picture; see below.

Agree. By default, we should probably set it to […].

I'm curious to see benchmarking results in other environments.
Force-pushed from 0f30ee7 to 4224866
I renamed a few things and refactored the code a bit here and there. @gbryant-dev, please take a final look and approve the pull request.
Thanks @MariusWirtz, looks great. In hindsight, extracting the logic in the decorator into a core extract function seems like a no-brainer; I don't know why I didn't do it. I'll keep it in mind for future contributions! 🙂
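The refactoring pattern mentioned here — moving the extraction logic out of the decorator into a plain function that the decorator merely delegates to — might look like the following sketch. All names (`extract_cellset_cells`, `extract_cells`, `execute_query`) are hypothetical, not TM1py's actual API.

```python
import functools


def extract_cellset_cells(response_json, compact=False):
    """Core extraction logic, callable directly without the decorator."""
    if compact:
        # Compact responses already carry a plain array of values.
        return response_json["Cells"]
    return [cell["Value"] for cell in response_json["Cells"]]


def extract_cells(func):
    """Decorator that delegates to the core function instead of inlining it."""
    @functools.wraps(func)
    def wrapper(*args, use_compact_json=False, **kwargs):
        response_json = func(*args, use_compact_json=use_compact_json, **kwargs)
        return extract_cellset_cells(response_json, compact=use_compact_json)
    return wrapper


@extract_cells
def execute_query(use_compact_json=False):
    # Stand-in for a REST call; returns a fake response body.
    if use_compact_json:
        return {"Cells": [1, 2, 3]}
    return {"Cells": [{"Value": 1}, {"Value": 2}, {"Value": 3}]}


print(execute_query())                       # [1, 2, 3]
print(execute_query(use_compact_json=True))  # [1, 2, 3]
```

The benefit is that callers who already have a parsed response can reuse `extract_cellset_cells` directly, while decorated functions get the same behaviour for free.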
Rebased #644 on the master branch to solve conflicts