Benchmark Performance #7
Hi, are you referring to the contents of https://github.com/viatra/viatra-cps-benchmark/wiki/Performance-evaluation ? Those results are quite outdated and also specific to a given hardware-software configuration. You can see up-to-date results here: https://build.incquerylabs.com/jenkins/job/viatra-cps-benchmark/lastSuccessfulBuild/artifact/benchmark/cpsBenchmarkReport.html
If you are running the benchmark in a different way, then I cannot guess what the issue is, since I don't know your hardware setup, software versions, or what you run and how. If you need help running the benchmark the same way we do, I would be happy to help. You can check the build output and the benchmark and reporting scripts.
Thank you for your reply.
Please clarify: does your computer have 256 MB of RAM?
The recommended way for getting the benchmark up is described on the main page: https://github.com/viatra/viatra-cps-benchmark#getting-started What you are doing adds two additional levels of runtimes and a huge number of unnecessary plugins. If you just want to run the benchmark, you can download the prepared products and execute them the same way the Python script does.
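For illustration only, a prepared-product run has roughly the following shape. The executable path is a hypothetical placeholder; the -case and -scale options are the ones discussed later in this thread, and -vmargs is the standard Eclipse launcher way to pass JVM options:

```
./benchmark-product/eclipse -case CLIENT_SERVER -scale 4 -vmargs -Xmx4G
```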
That is not a benchmark test; as its name says, it is an integration test to ensure the toolchain works before actually running benchmarks. If you really want to run benchmarks from the runtime Eclipse, either run the …
Also, what is the "expected performance" that you are referring to?
Thanks for your reply.
Yes, sorry, that would have been a more sensible guess. Did you also make sure to set the memory limits of the JVMs with -Xmx at least in the run configuration where you execute the tests?
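For reference, the heap limit goes into the VM arguments of the run configuration; the sizes below are only illustrative, so pick values that fit your machine:

```
-Xmx4G -Xms1G
```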
If you describe where you get stuck, I may be able to help.
No problem. It is simply used during the Maven build of the benchmark code to execute, on small inputs, all the steps that will be executed in the real benchmark through the prepared product. This can catch errors in the tool implementations that might not surface when running the benchmark.
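For completeness, under standard Maven conventions such integration tests run as part of the regular build, so a plain build invocation exercises the whole chain (the exact goals the benchmark build uses may differ):

```
mvn clean verify
```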
Great! If you have VIATRA-specific questions, it may be best to ask them on the Eclipse Forums: https://www.eclipse.org/forums/index.php/f/147/
The way you have set it up is also OK, as long as everything compiles. You can import the benchmark projects into the same Eclipse where you have the CPS example projects imported. Simply unzip and copy the .launch file to the …
Hi,
you accidentally replied to the e-mail instead of to GitHub.
Can I copy-paste your e-mail into the issue, or could you do that yourself?
Best regards,
-----
Ábel Hegedüs
IncQuery Labs Ltd.
On 2017. 03. 08. 11:06:29, penghzhang <notifications@github.com> wrote:
The way you have set it up is also OK
Thanks for your reminder. I have run the benchmark successfully in this way.
When running the benchmark, I still encountered several problems.
* I tried to change the case to "CLIENT_SERVER" by setting the "-case" option in the run configuration parameters to "CLIENT_SERVER". After execution, I saw in the log file that EObjects was 196 and EReferences was 340, and when I changed the "-scale" option to 4, 16, or 32, the EObjects and EReferences counts didn't change. Am I doing this right?
* Have you run the benchmark at large scale (e.g., EObjects > 1,000,000)? In your benchmark performance report I only see the "STATISTICS_BASED" case. Have you tested other cases (like "CLIENT_SERVER" or "LOW_SYNCH")?
* If you haven't done item 2, could you share a guide with me to help me run the benchmark with a large-scale model (e.g., EObjects > 1,000,000)?
Hoping for your reply; thanks in advance!
Well, that's a moot question now 😄 I will answer your question soon.
You are probably seeing the statistics for the "warmup" phase, which always generates with scale 1. Later in the log you should see a second "CPS Stats:" part that lists the size for the scale you entered.
We are running the statistics based case only in our CI, although it would be trivial to run it with the other options. If you want, I can perform a comparative run on our server with one of the other cases. Similarly, we would be very interested to receive the JSON or CSV results from a run on your server.
We should do some digging to see which case would be most suitable. As you can see, the number of references (I'm not sure whether it includes attribute values as well) scales more steeply than the number of EObjects in the statistics-based case, so some other case would probably be better. I will look around to see whether there is a test to generate CPS stats more easily without running the full benchmark. Looking at https://github.com/viatra/viatra-cps-benchmark/wiki/Performance-evaluation, running the Low-synch scenario with scales higher than 512 already produces the numbers you want. As for running a benchmark at these sizes: we would be very interested in running some comparisons with the same set of benchmarks on your machine and ours.
I am sorry I didn't answer you in time; I was running the benchmark per your suggestions with different cases, scales, and scenarios, and there is also a time difference between the two of us. Now I have run several scenarios on my Linux server, and part of the report is as follows (the unit of consumed time is ns): …
Besides the test results, I still have a question:
You can see that it says …
It is possible that the Low-synch scenario generator implementation creates duplicate identifiers. I will have to check that (opened #8). However, this should not cause problems in the built-in transformation variants.
These are different implementations of the same transformation (see https://github.com/viatra/viatra-docs/blob/master/cps/Alternative-transformation-methods.adoc) and can be used for any case and any scale (though they scale differently). I have also opened #10 to track the need to make model size statistics generation easier.
Yeah, I just scanned the benchmark source code and saw there were only two modification phase files, for the CLIENT_SERVER and STATISTICS_BASED cases. So for now, I think I can only run these two cases, and if I want to run a large-scale model (EObjects > 1,000,000), I should change the "scale" parameter based on the existing two cases. Am I right?
In the meantime, I have added the modification phases to all cases, so you can use LOW_SYNCH as well after pulling the changes. Also, yes, unfortunately you will have to change the scales for some cases, as the generator rules are not synchronised between cases.
Thanks for updating the benchmark source code so quickly; I have downloaded it. The latest report is as follows (time unit is ns): …
Although I don't know exactly what output you are looking at, the values look reasonable.
We haven't explicitly measured other modifications, but in general, the type of the change is not relevant for most variants. For the incremental variants, the size of the change is usually what matters (deleting an app instance would still be rather small; deleting an app type with a big state machine and lots of instances is another matter).
Thanks for your reply; I understand what you said.
For your information, I have run a benchmark with the Low-synch case with some of the transformations on our build server: https://build.incquerylabs.com/jenkins/job/viatra-cps-benchmark/76/artifact/benchmark/cpsBenchmarkReport.html As you can see, with the -Xmx10G limit we cannot run the incremental transformation at scale 1024, although the local-search-based variant can still complete due to its lower memory requirements (of course, it is not incremental).
Yeah. At the same time, I have several questions about CPS: …
In the CPS-to-deployment transformation, defined here, building the traceability model is required. However, there have been previous discussions about this (see #4 as a result), since some transformation tools build internal traceability models that can be used instead of this explicit one. That said, you can definitely write M2M incremental transformations in VIATRA without an explicit traceability model. One option is to link the target model and the source model directly; the other is to use some inherent data for correspondence (e.g., in the CPS example it would be possible to use the id, IP address, and other mapped data to pair CPS and deployment elements, although maybe not entirely). Finally, while the CPS-to-deployment functional test suite enforces the traceability model, the performance benchmarks do not check transformation correctness (since we assume the transformation passed the functional tests). Of course, directly comparing approaches that build the traceability model to approaches that don't is not really fair from a benchmarking point of view, since the performed tasks are different.
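To make the second option concrete, here is a minimal, self-contained sketch of pairing elements by inherent data instead of an explicit traceability model; SourceHost and TargetHost are hypothetical stand-ins for CPS and deployment element types, keyed by a shared value such as an IP address:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CorrespondenceById {
    // Hypothetical stand-ins for CPS and deployment element types.
    record SourceHost(String ip) {}
    record TargetHost(String ip) {}

    // Pair source and target elements through an inherent key instead of
    // maintaining an explicit traceability model.
    static Map<SourceHost, TargetHost> pair(List<SourceHost> sources,
                                            List<TargetHost> targets) {
        Map<String, TargetHost> targetsByIp = new HashMap<>();
        for (TargetHost t : targets) {
            targetsByIp.put(t.ip(), t);
        }
        Map<SourceHost, TargetHost> correspondence = new HashMap<>();
        for (SourceHost s : sources) {
            TargetHost t = targetsByIp.get(s.ip());
            if (t != null) {
                correspondence.put(s, t);
            }
        }
        return correspondence;
    }

    public static void main(String[] args) {
        List<SourceHost> sources =
            List.of(new SourceHost("10.0.0.1"), new SourceHost("10.0.0.2"));
        List<TargetHost> targets = List.of(new TargetHost("10.0.0.1"));
        System.out.println(pair(sources, targets)); // only 10.0.0.1 is paired
    }
}
```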
Well, they are necessary to define a simple traceability metamodel (see the same page). Additionally, they contain common features (id and description) that many other types inherit.
And now, from my point of view, if I want to use VIATRA to implement the incremental transformation with a model I define myself, the traceability model and the two abstract objects are all optional. Am I right?
You could change the …
Yes.
Yes, you can implement incremental transformations with VIATRA on arbitrary models. If you need more detailed information, we should set up a Skype call or similar teleconference to discuss your use cases and any future industrial or academic collaboration.
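As a minimal sketch, assuming the standard VIATRA query runtime API, initializing an incremental query engine on an arbitrary EMF model looks roughly like this (the class name is hypothetical; the engine then keeps match sets up to date as the model changes, which is the basis for incremental transformations):

```java
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.viatra.query.runtime.api.ViatraQueryEngine;
import org.eclipse.viatra.query.runtime.emf.EMFScope;

public class EngineSetup {
    public static void main(String[] args) throws Exception {
        // Any EMF resource set -- including instances of your own
        // metamodel -- can be indexed by the engine.
        ResourceSet resourceSet = new ResourceSetImpl();

        // The engine incrementally maintains the match sets of registered
        // queries as the model changes.
        ViatraQueryEngine engine = ViatraQueryEngine.on(new EMFScope(resourceSet));
        System.out.println("Query engine ready: " + engine);
    }
}
```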
I have installed the CPS benchmark projects in Eclipse, but the performance is not as good as described in the performance evaluation, and I don't know why. Could you share a specific user manual with me to help find out where my problem is, or give me any suggestions? Hoping for your reply.