Owl Hotfixes and proxy defaults fix #107
Merged
This is the hotfix for #99. In the long run, we should fork Owl and modify it before parsing.
Why does the bug happen? For each string, the tokenizer run allocates at least 40 KB of RAM at least twice. This is always very close to the ESP32's memory limit, so the bug occurs more often whenever the script is too long or the Lizard firmware has grown again.
The fix consists mainly of two parts (plus the proxy defaults fix).
First, limit the size of the token run:
The tokenizer run splits a statement string into tokens. It has a limit of 4096 tokens per run because it is built to parse whole files. We only feed it lines of a string one by one, so it does not need to be that big.
By reducing the limit from 4096 to 256 tokens, we shrink each run from 40 KB to 2.5 KB.
256 tokens still give us plenty of headroom, and we can make the limit configurable with our own Owl fork.
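To make the arithmetic concrete, here is a minimal sketch. The struct layout below is an assumption for illustration, not Owl's actual generated code; what matters is that each token slot costs roughly 10 bytes, so 4096 slots come to about 40 KB and 256 slots to about 2.5 KB:

```cpp
#include <cstdint>
#include <cstdio>

// Mirrors Owl's per-run token limit: 4096 before this fix, 256 after.
constexpr std::size_t RUN_LENGTH = 256;

// Simplified stand-in for the generated token run buffer; the real
// struct has different fields, but its size scales the same way.
struct TokenRun {
    std::uint16_t number_of_tokens;
    std::uint16_t tokens[RUN_LENGTH];      // token ids, 2 bytes each
    std::uint32_t offsets[RUN_LENGTH];     // source offsets, 4 bytes each
    std::uint16_t lengths[RUN_LENGTH * 2]; // length encoding, 4 bytes per token
};

int main() {
    // ~10 bytes per token slot: 4096 slots -> ~40 KB, 256 slots -> ~2.5 KB
    std::printf("sizeof(TokenRun) = %zu bytes\n", sizeof(TokenRun));
    return 0;
}
```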
Message token breakdown
"tornado_ref_knife_stop.active=true;tornado_ref_knife_stop.change=0;tornado_ref_knife_stop.inverted=false;tornado_ref_knife_stop.level=1;"
- First statement: `tornado_ref_knife_stop` `.` `active` `=` `true` `;` → 6 tokens
- Second statement: `tornado_ref_knife_stop` `.` `change` `=` `0` `;` → 6 tokens
- Third statement: `tornado_ref_knife_stop` `.` `inverted` `=` `false` `;` → 6 tokens
- Fourth statement: `tornado_ref_knife_stop` `.` `level` `=` `1` `;` → 6 tokens

Total: 24 tokens (6 tokens per statement × 4 statements)
Second, the tokenizer run creates another instance of itself at the end of each message just to find the terminating '\0'. To optimize this, we simply check for '\0' at the end without creating a new tokenizer run instance.
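A minimal sketch of the idea, with an invented `TokenizerState` and field names (Owl's generated code uses different identifiers and structure):

```cpp
#include <cstddef>

// Illustrative tokenizer state; names are assumptions, not Owl's.
struct TokenizerState {
    const char *text;   // the message being tokenized
    std::size_t offset; // current read position
};

// Before: the generated code effectively started one more token run at the
// end of each message, allocating another multi-KB buffer just to learn
// that no tokens were left.
//
// After: a single character comparison answers the same question
// without any allocation.
bool at_end_of_message(const TokenizerState &state) {
    return state.text[state.offset] == '\0';
}
```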
Last (this is not directly related to the #99 bug): some modules had no defaults for properties because they don't have any properties. The proxy module queried them anyway, which led to an exception. Now they return an empty list.
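A minimal sketch of the behavior, assuming an illustrative `Module` base class with a `get_defaults()` method (names and types are assumptions, not Lizard's exact API):

```cpp
#include <map>
#include <string>

// Illustrative module interface; names and types are assumptions.
struct Module {
    // Previously, modules without properties provided no defaults at all,
    // so the proxy module's unconditional query raised an exception.
    // Returning an empty map makes that query safe for every module.
    virtual std::map<std::string, std::string> get_defaults() const {
        return {}; // no properties -> empty list, no exception
    }
    virtual ~Module() = default;
};
```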
A 1.5 h test run with "b1.liz" (the startup script of a Field Friend robot) showed these outputs: