
Owl Hotfixes and proxy defaults fix #107

Merged (5 commits, Nov 28, 2024)
Conversation

@JensOgorek (Contributor) commented Nov 28, 2024

This is the hotfix for #99; in the long run we should fork owl and modify it before parsing.
Why does the bug happen? The tokenizer run allocates at least 40 KB of RAM, at least twice per string. Since this is always very close to the heap limit of the ESP32, the bug occurs more often when the script is too long or Lizard has grown again.

The fix has two main parts (plus the proxy defaults fix).

First, limit the size of the tokenizer run:
The tokenizer run splits the statement string into tokens. It has a limit of 4096 tokens per run because it is designed to parse whole files. We only feed it one line of the string at a time, so it does not need to be that big.
Reducing the limit from 4096 to 256 shrinks the allocation from 40 KB to 2.5 KB per run.
256 tokens still gives us a lot of headroom, and we can make the limit configurable once we have our own owl fork.

Message token breakdown

"tornado_ref_knife_stop.active=true;tornado_ref_knife_stop.change=0;tornado_ref_knife_stop.inverted=false;tornado_ref_knife_stop.level=1;"

First statement:

  • tornado_ref_knife_stop (identifier)
  • . (dot)
  • active (identifier)
  • = (equals)
  • true (boolean literal)
  • ; (semicolon)

Second statement:

  • tornado_ref_knife_stop (identifier)
  • . (dot)
  • change (identifier)
  • = (equals)
  • 0 (integer literal)
  • ; (semicolon)

Third statement:

  • tornado_ref_knife_stop (identifier)
  • . (dot)
  • inverted (identifier)
  • = (equals)
  • false (boolean literal)
  • ; (semicolon)

Fourth statement:

  • tornado_ref_knife_stop (identifier)
  • . (dot)
  • level (identifier)
  • = (equals)
  • 1 (integer literal)
  • ; (semicolon)

Total: 24 tokens (6 tokens per statement × 4 statements)

Second, the tokenizer run creates another instance of itself at the end of each message just to find the '\0'. To optimize this, we now check for '\0' at the end directly, without creating a new tokenizer run instance.

Last (this has nothing directly to do with the #99 bug): some modules had no defaults for their properties because they have no properties at all. The proxy module checks them anyway, which led to an exception. They now return an empty list.

A 1.5 h test run with "b1.liz" (the startup script of a field friend robot) showed these outputs:

>> size of token run: 2548
>> available heap: 84776
>> largest free block: 69632

@JensOgorek JensOgorek added the bug Something isn't working label Nov 28, 2024
@JensOgorek JensOgorek added this to the 0.6.0 milestone Nov 28, 2024
@JensOgorek JensOgorek self-assigned this Nov 28, 2024
@JensOgorek JensOgorek linked an issue Nov 28, 2024 that may be closed by this pull request
Successfully merging this pull request may close these issues.

Core gets unresponsive (long startup scripts)