
First algorithm fails on E5-26xx #78

Open
ADD-eNavarro opened this issue Aug 3, 2023 · 7 comments

@ADD-eNavarro

We are using your tool at my company, automatically testing on a pool of computers.
We've found that the first search algorithm always fails on some computers, making our tests flaky. At first we thought it might be related to the workload on those machines, but tinkering with the time limit parameters didn't change a thing. In the end, we realized it always fails on the same machines and works fine on the rest.
The only thing we've found that those machines have in common is that they all have Intel processors from the E5-26xx family (2640, 2650, 2680).

Can you shed some light about this issue, or share a thought?
Thanks.

@aarondandy
Owner

That sounds pretty strange. Maybe a place to start would be some more details:

  • What do you mean by "fail"?
  • Are you getting an exception or does the result differ from expectations?
  • Does this happen consistently for a specific word? Is it possible to create a minimum reproduction by using a specific word with a specific TimeLimit value?
  • Is the difference with Check or Suggest?
  • For the machines where the results don't meet expectations, does it always not meet expectations or is the result intermittent even on those machines?

My guess is you are running into the time limits but you mentioned you tinkered with the time limits already. The design of Hunspell has, in my opinion, an awkward timing mechanism to prevent overuse of CPU resources. You might have tried this already, but increasing the MinTimer to a larger value and increasing the TimeLimit values might help ensure you get more consistent results during testing. See:

/// <summary>
/// Countdown of operations before the wall clock is consulted
/// (corresponds to Hunspell's MINTIMER constant).
/// </summary>
public int MinTimer { get; set; } = 100;
/// <summary>
/// The time limit for some long running steps during suggestion generation.
/// </summary>
/// <remarks>
/// Timelimit: max ~1/4 sec (process time on Linux) for a time consuming function.
/// </remarks>
public TimeSpan TimeLimitSuggestStep { get; set; } = TimeSpan.FromMilliseconds(250);
/// <summary>
/// The time limit for each compound suggestion iteration.
/// </summary>
public TimeSpan TimeLimitCompoundSuggest { get; set; } = TimeSpan.FromMilliseconds(100);
/// <summary>
/// The time limit for each compound word check operation.
/// </summary>
public TimeSpan TimeLimitCompoundCheck { get; set; } = TimeSpan.FromMilliseconds(50);
/// <summary>
/// A somewhat overall time limit for the suggestion algorithm.
/// </summary>
public TimeSpan TimeLimitSuggestGlobal { get; set; } = TimeSpan.FromMilliseconds(250);
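Putting that advice into practice might look like the sketch below. It assumes the properties above live on a `QueryOptions` object and that `Suggest` accepts one; the dictionary paths and misspelled word are placeholders, so adjust to the actual API surface of your WeCantSpell.Hunspell version.

```csharp
using System;
using WeCantSpell.Hunspell;

var wordList = WordList.CreateFromFiles("en_US.dic", "en_US.aff");

// Raise the limits well above the defaults so the first algorithm
// has plenty of headroom during testing.
var options = new QueryOptions
{
    MinTimer = 500, // more iterations before the clock is consulted
    TimeLimitSuggestStep = TimeSpan.FromSeconds(1),
    TimeLimitSuggestGlobal = TimeSpan.FromSeconds(3),
};

var suggestions = wordList.Suggest("recieve", options);
```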

@ADD-eNavarro
Author

That sounds pretty strange. Maybe a place to start would be some more details:

  • What do you mean by "fail"?

I mean that it fails to use the right algorithm to give an answer.

  • Are you getting an exception or does the result differ from expectations?

Different result, seems to be using the second algorithm.

  • Does this happen consistently for a specific word? Is it possible to create a minimum reproduction by using a specific word with a specific TimeLimit value?

Yes, our tests use single words and multiple words (a phrase), but always the same ones, so we know the expected result. We have played around with the TimeLimit value, all the way from 1 ms (to force the second algorithm) to 3000 ms (12 times the base time limit, to be sure it's the first one solving the query). That's how we realized that this issue happens on some machines, and on those only.
I'm not sure if you're asking for minimum-reproduction example code; if that's the case, please say so and I'll gladly write it.
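For reference, a minimal reproduction along the lines described might look like this. It is only a sketch: the dictionary paths, the test word, and the `QueryOptions` usage are assumptions, not taken from the actual test suite.

```csharp
using System;
using System.Linq;
using WeCantSpell.Hunspell;

var wordList = WordList.CreateFromFiles("en_US.dic", "en_US.aff");

// A 1 ms global limit should force the fallback (second) algorithm
// almost immediately...
var fast = new QueryOptions { TimeLimitSuggestGlobal = TimeSpan.FromMilliseconds(1) };

// ...while 3000 ms should comfortably let the first algorithm finish.
var slow = new QueryOptions { TimeLimitSuggestGlobal = TimeSpan.FromMilliseconds(3000) };

var forcedFallback = wordList.Suggest("mispelt", fast);
var expected = wordList.Suggest("mispelt", slow);

// On the affected E5-26xx machines the two lists come out identical,
// suggesting the first algorithm is cut short regardless of the limit.
Console.WriteLine(forcedFallback.SequenceEqual(expected));
```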

  • Is the difference with Check or Suggest?

To be precise, for the multi-word input I run a Check on each word first and Suggest only on those not present in the dictionary. Since that part works fine, I haven't looked into the inner workings of Check: does it use the same timed algorithm-switching mechanism?

  • For the machines where the results don't meet expectations, does it always not meet expectations or is the result intermittent even on those machines?

Seems to be consistent on those machines. I'll check more thoroughly, though.

My guess is you are running into the time limits but you mentioned you tinkered with the time limits already. The design of Hunspell has, in my opinion, an awkward timing mechanism to prevent overuse of CPU resources. You might have tried this already, but increasing the MinTimer to a larger value and increasing the TimeLimit values might help ensure you get more consistent results during testing.

As I said, I've played quite a bit with the different time configurations available, changed nothing.

@aarondandy
Owner

It sounds like you are saying there is a timing issue: something in the first part of some algorithm is going too slow on some machines, which prevents the following part of Suggest from returning results. If I got that right, this is starting to make some sense to me. To debug this, you could create a test case for your specific words and dictionaries to find out which code in the codebase is returning results and which code is not being executed. Setting breakpoints on or around opLimiter usages might reveal which specific code is going slow for your specific words and dictionary.

var opLimiter = new OperationTimedLimiter(Options.TimeLimitSuggestGlobal, _query.CancellationToken);

@ADD-eNavarro
Author

ADD-eNavarro commented Aug 29, 2023

Let me explain a bit better.
In #40 (comment) you said that at some point in the code, the suggestion algorithm switches from the one it begins with (MapRelated) to NGram, which I call the first and second algorithms, respectively.
Now, the issue is that on some machines, even with a long TimeLimit, I'm getting the same results as when I use a TimeLimit of 1 ms (which forces NGram internally). The only thing those machines have in common is the processor family, as stated.
I will try debugging opLimiter and let you know the results.

@aarondandy
Owner

@ADD-eNavarro, I made a new release that might fix your issue. Give it a try and let me know. I was previously using Environment.TickCount, which wasn't really a great choice. This new release changes that and may behave differently.

https://github.com/aarondandy/WeCantSpell.Hunspell/releases/tag/5.0.0

@aarondandy
Owner

This version also has a fix for a timer issue: https://www.nuget.org/packages/WeCantSpell.Hunspell/5.0.1

@ADD-eNavarro
Author

Cool, thanks for the good work.
For now, we're using the parameter that limits the number of returned results to work around this issue, but we'll probably switch to the new version at some point.
