
Use Scale-independent Pixels in more parts of Simulation carousel #567

Conversation

@blurfl (Collaborator) commented Jan 13, 2018

Using scale-independent pixels (sp) helps solve the text-size issues in the labels. The labels over the sliders and the text in the upper screen are addressed.
I haven't found a Linux system to test on yet...
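For readers unfamiliar with the unit: scale-independent pixels behave like density-independent pixels but additionally follow the user's font-size preference. A minimal sketch of the idea in plain Python (an illustrative model only, not Kivy's actual implementation; the function names and defaults here are hypothetical):

```python
# Illustrative model of pixel units (hypothetical helpers, for explanation only).

def dp_to_px(value, density=1.0):
    """Density-independent pixels -> raw pixels: scales with screen density."""
    return value * density

def sp_to_px(value, density=1.0, fontscale=1.0):
    """Scale-independent pixels -> raw pixels: scales with screen density
    AND the user's system font-size preference."""
    return value * density * fontscale

# On a 2x-density display, 15sp with the default font scale is 30px;
# if the user bumps their font size to 150%, the same 15sp becomes 45px.
print(dp_to_px(15, density=2.0))                   # → 30.0
print(sp_to_px(15, density=2.0, fontscale=1.5))    # → 45.0
```

In Kivy, which GroundControl uses, the corresponding real facilities are `kivy.metrics.sp()` in Python code and the `'15sp'` suffix in kv files.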

Screenshots after the fix:

[Screenshot: simulation carousel on macOS with a Retina display]

[Screenshot: simulation carousel on Windows 10 at 1368×768]

@BarbourSmith (Member) commented:
This looks beautiful!

I'm seeing some conflicts (probably because this branch was created before your last PR was merged).

[Screenshot: merge-conflict view]

In all of these cases where it's asking me to pick one version or the other, I just want to pick the one that uses sp, right?

@blurfl (Collaborator, Author) commented Jan 13, 2018

The 'sp' one, right. Sorry for the confusion!

@davidelang (Contributor) commented Jan 13, 2018 via email

@blurfl (Collaborator, Author) commented Jan 13, 2018

Here's a screenshot from an Ubuntu Linux box at 1920×1200:

[Screenshot: simulation carousel on Ubuntu Linux at 1920×1200]

Trying Linux at 800x600 is a mess, but then so is the calibration carousel 😟

@BarbourSmith (Member) commented:

It looks beautiful! 👍 👍

@BarbourSmith BarbourSmith merged commit 14cb36a into MaslowCNC:master Jan 13, 2018
@blurfl (Collaborator, Author) commented Jan 13, 2018

I've done some more work in the Simulator and added a "Bar's score", though I don't think I'm calculating it correctly:

Average the long-horizontal-cut errors, then average the long-vertical-cut errors, and average those two for the first digit. Average the box-horizontal-cut errors, then average the box-vertical-cut errors, and average those two for the second digit.

Is that the idea?

@BarbourSmith (Member) commented:

Wow! That was fast!

I think that sounds right, but I'm not 100% sure I understand, so I'm going to repeat it.

For a score like 5.43-2.10, the 5.43 is the average of the absolute values of the errors in all of the long cuts (the 900 mm cuts and the 1905 mm cuts). The second number, 2.10, is the average of the absolute values of the errors in the smaller 100 mm square cuts.

For example, if the long measurements were 899, 901, 900, 1905, 1905, 1906, that would give absolute errors of 1, 1, 0, 0, 0, 1, which average to 0.50.
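The calculation described above can be sketched in a few lines of Python (a hypothetical helper for illustration, not the code that was merged; the function names and the nominal lengths used below are assumptions based on this thread):

```python
def mean_abs_error(measured, nominal):
    """Average of the absolute errors between measured and nominal lengths."""
    return sum(abs(m - n) for m, n in zip(measured, nominal)) / len(measured)

def benchmark_score(long_measured, long_nominal, box_measured, box_nominal):
    """Two-part score: mean |error| of the long cuts, then of the box cuts."""
    return (mean_abs_error(long_measured, long_nominal),
            mean_abs_error(box_measured, box_nominal))

# The worked example from the comment above: three 900 mm cuts and
# three 1905 mm cuts, measured as 899, 901, 900, 1905, 1905, 1906.
long_nominal = [900, 900, 900, 1905, 1905, 1905]
long_measured = [899, 901, 900, 1905, 1905, 1906]
print(mean_abs_error(long_measured, long_nominal))  # → 0.5
```

A score of 0.5-0.0 would then read as "half a millimetre of average error on the long cuts, none on the 100 mm squares".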

@blurfl (Collaborator, Author) commented Jan 13, 2018

Thanks for the clarification! What do you want to call the figure?

@BarbourSmith (Member) commented:

Oh, I don't know. We don't have a name yet. What about "Calibration Benchmark Test", or is that too boring?

@blurfl (Collaborator, Author) commented Jan 13, 2018

How about "Calibration Benchmark Score"?

@BarbourSmith (Member) commented:

That sounds excellent!
