Specify setup for MSL Regression Testing #1392

Closed
modelica-trac-importer opened this issue Jan 14, 2017 · 26 comments

modelica-trac-importer commented Jan 14, 2017

Reported by leo.gall on 13 Jan 2014 11:07 UTC
As decided at the 81st Modelica Design Meeting, I worked on the details of MSL regression tests.
You can find a draft proposal attached to this ticket.

SetupForMSLRegressionTesting_2014-01-13.pdf

This is an early draft. Any comments on the planned file structure, naming, and the overall test procedure are highly welcome!

@tool vendors: Are you able to submit the proposed file structure?
@Library officers / MAP-LIB: Any information missing? Should the full result file (Dymola: .mat) be stored or not?
@dietmar: Any improvements needed in order to make generation of the overview table easier?

After deciding on file structure and simulation settings, I'm going to create a first set of reference result files.


Migrated-From: https://trac.modelica.org/Modelica/ticket/1392

modelica-trac-importer added the discussion label (Discussion issue that is not necessarily related to a concrete bug or feature) on Jan 14, 2017

Comment by dietmarw on 13 Jan 2014 14:03 UTC
Before people complain, the correct link in footnote 7 should read:
"gen_fmi_web.py from https://svn.fmi-standard.org/fmi/branches/FMISite/dev/"

Also, for section "3.4 Result file sizes", I would say that versioning the reference result data is enough. If people would like to check the exact tool result, they can simply rerun the simulation.


Comment by jmattsson on 16 Jan 2014 09:16 UTC
It seems a bit unclear which files under TestResults are to be submitted by tool vendors and which are to be generated by MA.


Comment by leo.gall on 16 Jan 2014 11:16 UTC
Replying to [comment:2 jmattsson]:

It seems a bit unclear which files under TestResults are to be submitted by tool vendors and which are to be generated by MA.

You are right, tool vendors only need to submit a subset.

A minimum set of files from tool vendors would be:

  • ModelName.csv
  • creation.txt
  • simulate_*.log

It would be nice to have (additionally):

  • ModelName.mat (a full result file in a tool-specific format; it does not need to be .mat)
  • translate_*.log (translation diagnostics, if not included in simulation log)

I don't have a strong opinion on file names and the structure of log files, so feel free to propose improvements if that would facilitate your work of generating results.
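
To make the expectation concrete, here is a minimal sketch of a check for the minimum set of files listed above (not part of the proposal; the directory layout, file names, and the script itself are assumptions for illustration):

import glob
import os
import sys

def check_submission(result_dir, model_name):
    # Mandatory files according to the minimum set listed above.
    missing = []
    for name in [model_name + ".csv", "creation.txt"]:
        if not os.path.isfile(os.path.join(result_dir, name)):
            missing.append(name)
    # At least one simulation log matching simulate_*.log is expected.
    if not glob.glob(os.path.join(result_dir, "simulate_*.log")):
        missing.append("simulate_*.log")
    return missing

if __name__ == "__main__":
    # Hypothetical usage: python check_submission.py TestResults/ToolX/ModelName ModelName
    missing = check_submission(sys.argv[1], sys.argv[2])
    print("missing:", ", ".join(missing) if missing else "nothing")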


Comment by sjoelund.se on 21 Jan 2014 11:32 UTC
Note:
Sections 2.2 and 2.3 do not line up if the Option 1 (flat) structure is used; the flat structure is my preferred choice.
And it's a little odd to have ModelName.csv, but simulate_*.log rather than result.csv.


Comment by sjoelund.se on 21 Jan 2014 11:35 UTC
Making comparisonSignals.txt mandatory would be a nice option, since then the tool vendors only need to download these files to generate their results (the reference results are not needed unless you want to check your results).


Comment by leo.gall on 22 Jan 2014 16:50 UTC
Replying to [comment:4 sjoelund.se]:

Note:
Sections 2.2 and 2.3 do not line up if the Option 1 (flat) structure is used; the flat structure is my preferred choice.
And it's a little odd to have ModelName.csv, but simulate_*.log rather than result.csv.

Section 2.3 assumes the hierarchical structure, which is the option preferred in section 2.2.
As already discussed, I don't have a strong opinion on the folder structure.

I'm aware that it is odd to have ModelName.csv instead of result.csv.
In the beginning I used the name result.csv but switched to ModelName.csv.
Reasons:

  1. The compare tool uses the CSV file name for generating report file names. So, result.csv leads to stupid report names.
  2. When searching for comparison problems (i.e. opening files manually and comparing them, e.g. in Dymola, Matlab, Python), it's more convenient to work with a useful file name instead of a generic name like result.csv.

If more people prefer the flat folder structure, we could switch to that.


Comment by leo.gall on 22 Jan 2014 16:59 UTC
Replying to [comment:5 sjoelund.se]:

Making comparisonSignals.txt mandatory would be a nice option, since then the tool vendors only need to download these files to generate their results (the reference results are not needed unless you want to check your results).

I agree, comparisonSignals.txt could be mandatory for reference files.
During automatic generation, this file is created anyway.
For maintaining the reference files, it would be a little more work for the library developer if new references have to be generated. The process for how references will be created in the future isn't defined yet.
But this could be solved by a simple script extracting the comparisonSignals from the reference CSV.
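
A minimal sketch of such a script (assuming the reference CSV has the variable names in its first line and that comparisonSignals.txt lists one variable per line; both are assumptions on my side):

import csv
import sys

def extract_comparison_signals(reference_csv, signals_file="comparisonSignals.txt"):
    # Read only the header line of the reference CSV.
    with open(reference_csv, newline="") as f:
        header = next(csv.reader(f))
    # Keep every column name except "time" and empty trailing fields.
    signals = [name.strip() for name in header if name.strip() and name.strip() != "time"]
    with open(signals_file, "w") as f:
        f.write("\n".join(signals) + "\n")
    return signals

if __name__ == "__main__":
    extract_comparison_signals(sys.argv[1])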

For tool vendors, it is not necessary to submit comparisonSignals.txt together with each set of CSV files. That's just redundant.


Comment by jmattsson on 23 Jan 2014 08:30 UTC
Replying to [comment:6 leo.gall]:

Replying to [comment:4 sjoelund.se]:
I'm aware that it is odd to have ModelName.csv instead of result.csv.
In the beginning I used the name result.csv but switched to ModelName.csv.
Reasons:

  1. The compare tool uses the CSV file name for generating report file names. So, result.csv leads to stupid report names.
  2. When searching for comparison problems (i.e. opening files manually and comparing them, e.g. in Dymola, Matlab, Python), it's more convenient to work with a useful file name instead of a generic name like result.csv.

Seems reasonable.

If more people prefer the flat folder structure, we could switch to that.

I prefer the hierarchical structure.


Comment by leo.gall on 18 Jun 2014 10:50 UTC
Status update:


Comment by sjoelund.se on 23 Jun 2014 10:16 UTC
rsync does not seem to work for the read access. This makes updating the results slow even with a 100 Mbit connection.


Comment by sjoelund.se on 24 Jun 2014 09:28 UTC
Some tests lack reference results (because they would be empty). Wouldn't it make sense to just keep "time" as the comparison signal (it is listed for examples with more signals)? Just need a file containing (for stopTime=1.0):

time
0.0
1.0


Comment by leo.gall on 25 Jun 2014 09:09 UTC
Replying to [comment:10 sjoelund.se]:

rsync does not seem to work for the read access. This makes updating the results slow even with a 100 Mbit connection.

Does write access help?
https://svn.modelica.org/projects/RegressionTesting/AccessInfo/write-access/README.html


Comment by leo.gall on 25 Jun 2014 09:21 UTC
Replying to [comment:11 sjoelund.se]:

Some tests lack reference results (because they would be empty). Wouldn't it make sense to just keep "time" as the comparison signal (it is listed for examples with more signals)? Just need a file containing (for stopTime=1.0):

time
0.0
1.0

What would be the purpose of "empty" reference files?

I tried with the compare tool and it accepts csv files with time as the only column if we add "," to the end of each line:

time,
0.0,
1.0,

We could add these empty files if that helps.
Currently, we sort them out during mat2csv conversion if we recognize that there are no comparison signals available.
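
For illustration, a sketch of how such an "empty" reference file could be written so that the compare tool accepts it (the trailing comma per line is the workaround from above; the file name, start time, and stop time are assumptions):

def write_time_only_reference(path, start_time=0.0, stop_time=1.0):
    # Write a CSV with "time" as the only column; the trailing comma on each
    # line is what makes the compare tool accept a single-column file.
    with open(path, "w") as f:
        f.write("time,\n")
        f.write(str(start_time) + ",\n")
        f.write(str(stop_time) + ",\n")

# Example (hypothetical file name):
# write_time_only_reference("Modelica.Some.Example.csv", 0.0, 1.0)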


Comment by sjoelund.se on 25 Jun 2014 10:15 UTC
Replying to [comment:12 leo.gall]:

Does write access help?
https://svn.modelica.org/projects/RegressionTesting/AccessInfo/write-access/README.html

You don't have permission to access /projects/RegressionTesting/AccessInfo/write-access/README.html on this server. (using my MA credentials)


Comment by sjoelund.se on 25 Jun 2014 10:17 UTC
Replying to [comment:13 leo.gall]:

What would be the purpose of "empty" reference files?

Consistency. Right now I would need to add some special cases to run this test and not upload a result-file. It makes little sense to have a regression testing suite with models that you don't check anything for anyway.


Comment by leo.gall on 25 Jun 2014 14:05 UTC
Replying to [comment:15 sjoelund.se]:

Right now I would need to add some special cases to run this test and not upload a result-file.

I don't know how your scripts work, but as you have to read the comparison signals file, it should be easy to not store csv results if there are no comparison signals defined.

But if it's tedious I can create empty reference files. It shouldn't be too many cases.

It makes little sense to have a regression testing suite with models that you don't check anything for anyway.

I agree, and therefore I tried to define comparison signals whenever possible. But as we used a tagged version for creating the reference results, I couldn't change the test cases.

Even if there are no comparison signals defined, we can test if the model simulates or not.


Comment by sjoelund.se on 25 Jun 2014 14:09 UTC
Replying to [comment:16 leo.gall]:

I don't know how your scripts work, but as you have to read the comparison signals file [...]

I do? It was simpler to just read the first line of the csv-file ;)


Comment by anonymous on 9 Feb 2015 14:27 UTC
@leo: When can we expect to see the newly generated reference results of MSL 3.2.1+build.3 at the RegressionTesting SFTP server? One reason this is necessary is the changed behavior of Modelica.Mechanics.Translational.Examples.InitialConditions.


Comment by frenkel on 17 Feb 2015 13:32 UTC
For the model Modelica.Blocks.Examples.BooleanNetwork1 I discovered a problem regarding the first output step, for the variable rSFlipFlop.pre.u, which is calculated in the following way:

rSFlipFlop.Qini = false;
rSFlipFlop.pre.u.start = not rSFlipFlop.Qini;
pre(rSFlipFlop.pre.u) = rSFlipFlop.pre.u.start;
rSFlipFlop.QI = pre(rSFlipFlop.pre.u);
rSFlipFlop.R = false;
rSFlipFlop.Q = not (rSFlipFlop.QI or rSFlipFlop.R);
rSFlipFlop.S = false;
rSFlipFlop.pre.u = not (rSFlipFlop.S or rSFlipFlop.Q)

After solving the initial equation system, rSFlipFlop.pre.u=true holds; after the first step with the equation system for transient simulation at the start time, it switches to rSFlipFlop.pre.u=false.

In the reference results from Dymola, only rSFlipFlop.pre.u=false is stored. But if a tool like OpenModelica or SimulationX generates results, then rSFlipFlop.pre.u=true is stored first and, at the same time instant but as a later row, rSFlipFlop.pre.u=false:

DataSet: rSFlipFlop.pre.u (time, value)
0, 1
0, 1
0, 0
0.02, 0

There should be a decision on how to handle this, for example:
a. store the results after solving the initial system and for each time step
b. do not store the results of the initial system, only those for each time step
c. ignore the first time steps with equal values of time in the comparison

From my point of view I prefer a., because then the results of the initial system are also comparable. But then maybe Dymola is not the right choice for generating reference results.
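
To illustrate option c. (just a sketch of one possible comparison rule, not an agreed procedure; the function and data layout are assumptions), leading result rows that repeat the start time could be collapsed to the last one before comparison:

def drop_repeated_start_rows(rows):
    # rows: list of (time, value) pairs sorted by time.
    # Keep only the last row for the repeated start time.
    if not rows:
        return rows
    t0 = rows[0][0]
    i = 0
    while i + 1 < len(rows) and rows[i + 1][0] == t0:
        i += 1
    return rows[i:]

# With the rSFlipFlop.pre.u data set above:
# drop_repeated_start_rows([(0, 1), (0, 1), (0, 0), (0.02, 0)]) -> [(0, 0), (0.02, 0)]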


Comment by frenkel on 20 Feb 2015 12:28 UTC
The reference results from Modelica.Electrical.Analog.Examples.AD_DA_conversion seem to be wrong.
For example, the result for aD_Converter.y[1] switches from '1' to '0' at time=0.151. But if the length of the output interval is changed from 4e-05 to 4e-04, aD_Converter.y[1] switches at time=0.15, in other words one sample(pulse.startTime, pulse.period) earlier.
This is not explainable, because the value for aD_Converter.y[1] is calculated from:

z:=if u>VRefLow then integer((u-VRefLow)/(VRefHigh-VRefLow)*(2^N - 1) + 0.5) else 0;
y[i] :=if mod(z, 2) > 0 then L.'1' else L.'0';

The strange point is that the value of (u-VRefLow)/(VRefHigh-VRefLow)*(2^N - 1) + 0.5 is 64 in both cases. Unfortunately, the value of integer((u-VRefLow)/(VRefHigh-VRefLow)*(2^N - 1) + 0.5) is 64 in one case and 63 in the other.


Comment by jmattsson on 20 Feb 2015 12:39 UTC
Replying to [comment:20 frenkel@…]:

The strange point is that the value of (u-VRefLow)/(VRefHigh-VRefLow)*(2^N - 1) + 0.5 is 64 in both cases. Unfortunately, the value of integer((u-VRefLow)/(VRefHigh-VRefLow)*(2^N - 1) + 0.5) is 64 in one case and 63 in the other.

I suspect that the expression evaluates to a number x such that 63.9999995 <= x < 64, which is then printed with 6 decimals as 64.000000.
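
A small numeric illustration of this suspicion (a sketch; the concrete value below is made up, and I assume integer() truncates to the largest integer not greater than its argument):

import math

x = 63.99999951234     # hypothetical value of the expression + 0.5
print("%.6f" % x)      # prints 64.000000, so the logged value looks like exactly 64
print(math.floor(x))   # prints 63, which is what integer(...) would then return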


Comment by frenkel on 20 Feb 2015 14:38 UTC
I checked this, and for:

aD_Converter.u := sineVoltage.signalSource.offset+(if time < sineVoltage.signalSource.startTime
     then 0 else sineVoltage.signalSource.amplitude*sin(6.283185307179586*(
    sineVoltage.signalSource.freqHz*(time-sineVoltage.signalSource.startTime))+
    sineVoltage.signalSource.phase));

different values are calculated for different output interval lengths.


Comment by jfrenkel on 24 Feb 2015 11:55 UTC
The results for Modelica.Electrical.Machines.Examples.Transformers.AsymmetricalLoad do not seem to be correct. The comparison signal is transformer.l2sigma.inductor[1].i.

It looks like the results were generated with a version prior to 0414edc, d5a38e5.
If you take a look at transformer.l2sigma.inductor.i, it has the same value as the current at the p pin, which should have the same value as at Rot2.plug_p but is negated.

The same holds for the other Modelica.Electrical.Machines.Examples.


Comment by jfrenkel on 25 Feb 2015 12:02 UTC
Also, the reference files for Modelica.Media.Examples.Tests.MediaTestModels.Air.SimpleAir were generated with an old (< 7ed004c) version of the library.


Comment by leo.gall on 16 Mar 2015 08:13 UTC
Sorry for not answering all these messages since comment:18. Somehow I didn't receive email notifications for this ticket.

The current (MSL 3.2.1+build.2) CSV reference results have been committed here:
https://svn.modelica.org/projects/RegressionTesting/ReferenceResults/MSL/trunk
The existing MA file server has the exact same structure, but additionally contains full MAT files. The CSV files only contain the variables which are used for automatic comparison.

Now, as the reference files are under version control, we can start to update and improve them. But this isn't part of the current contract with MA, so I have to see when I can squeeze this in.


Comment by otter on 22 Jun 2015 13:04 UTC
Seems to be finalized
