Wordlists and compilation speed #422
8 comments · 36 replies
-
Two potential changes:
-
Turning the existing dictionary structure into a linked list would cost about 700 bytes of data. Unclear what the code size cost would be. Edit: Actually, navigating a linked list would take 50% more cycles than navigating by addition, 27 vs. 18. I'm starting to think there are no savings to be had without a radical change to the dictionary.
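To make the comparison concrete, here is a minimal C sketch of the two traversal styles. The constants and field layout are assumptions made up for illustration, not the actual durexforth header format (which lives in 6502 assembly):

```c
#include <stdint.h>

#define LEN_MASK     0x1F  /* assumed: low bits of the length byte hold the name length */
#define FIXED_FIELDS 3     /* assumed: length byte plus a 16-bit execution token */

/* (a) "Navigating by addition": entries are contiguous, so the next header is
 * the current one plus its name length and fixed fields.  On the 6502 this is
 * roughly an add per step (~18 cycles). */
static const uint8_t *next_by_addition(const uint8_t *entry) {
    return entry + FIXED_FIELDS + (entry[0] & LEN_MASK);
}

/* (b) Linked-list layout: each header carries a link to the next entry (on the
 * 6502, two extra bytes per word, which is where the ~700 bytes of data go),
 * and each step fetches that pointer from memory instead of computing it
 * (~27 cycles, i.e. 50% more). */
typedef struct linked_header {
    const struct linked_header *link;   /* the extra per-word link field */
    uint8_t flags_len;                  /* flags + name length; name and XT follow */
} linked_header;

static const linked_header *next_by_link(const linked_header *entry) {
    return entry->link;
}
```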
-
A fixed-size array is still pretty flexible and cool. In the unlikely event the array is full, swap in another array.
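A rough C sketch of that idea, with invented sizes and field names (CHUNK_CAPACITY, dict_chunk and so on are illustrative, not anything from durexforth):

```c
#include <stdint.h>
#include <string.h>

#define CHUNK_CAPACITY 64   /* assumed size of each fixed array */
#define MAX_NAME 31

typedef struct dict_chunk {
    struct {
        char     name[MAX_NAME + 1];
        uint16_t xt;                 /* execution token */
    } entries[CHUNK_CAPACITY];
    uint8_t            used;
    struct dict_chunk *prev;         /* the array that was swapped out */
} dict_chunk;

/* Append a word to the current array; if it is full, chain in a fresh one
 * (the "swap in another array" case) and continue there. */
static dict_chunk *dict_add(dict_chunk *top, dict_chunk *spare,
                            const char *name, uint16_t xt) {
    if (top->used == CHUNK_CAPACITY) {
        spare->used = 0;
        spare->prev = top;
        top = spare;
    }
    strncpy(top->entries[top->used].name, name, MAX_NAME);
    top->entries[top->used].name[MAX_NAME] = '\0';
    top->entries[top->used].xt = xt;
    top->used++;
    return top;
}
```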
-
I dare say that if there is a faster way to traverse a dictionary, then even Chuck Moore hasn't found it yet! 😄
-
A silly idea to speed up FIND: there's one bit free in the "name length" byte. It could be used to store whether the sum of the characters in the name is even or odd. By comparing that bit together with the string length, we could dismiss about half of the "wrong" strings of the same length without comparing the actual characters.
Does it matter? Probably not much. But it could be done.
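A C sketch of how that screening could work, assuming the free bit is the top bit of the length byte and that the "sum of characters" is the 8-bit sum of the name bytes; both the bit position and the masks are made up for illustration, not the actual durexforth header:

```c
#include <stdint.h>
#include <stdbool.h>

#define LEN_MASK    0x1F   /* assumed: low 5 bits hold the name length */
#define PARITY_BIT  0x80   /* assumed: the otherwise-free top bit */

/* At definition time, build the length byte: the length plus one bit that
 * records whether the sum of the name's characters is odd. */
static uint8_t make_length_byte(const uint8_t *name, uint8_t len) {
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++)
        sum += name[i];
    return (uint8_t)((len & LEN_MASK) | ((sum & 1) ? PARITY_BIT : 0));
}

/* At FIND time: if length or parity differs, skip the entry without touching
 * the name's characters.  For names of equal length, the parity bit alone
 * rejects roughly half of the mismatches. */
static bool worth_comparing(uint8_t entry_length_byte,
                            const uint8_t *sought, uint8_t sought_len) {
    uint8_t sought_byte = make_length_byte(sought, sought_len);
    return (entry_length_byte & (LEN_MASK | PARITY_BIT)) == sought_byte;
}
```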
On Sun, 9 Jan 2022 at 05:30, Poindexter Frink wrote:
There are two things the dictionary provides: names and associated execution tokens. We have demonstrated, I think you'll agree, that looking up names is significantly slower. Arrays as described are flat-out slower. Unquestionably.
Which leaves fetching execution tokens. The current idiom is to add an 8-bit number to the 16-bit name pointer already available in FIND-NAME, fetch that, and put it on the stack.
The array design would need to update base addresses at the top of FIND-NAME even if the string didn't match, so that you wouldn't need a bunch of compare/branches here to reuse your convenient index. You would save a half-dozen cycles fetching the XT here over the current version, at the cost of a dozen cycles for every single lookup to cache the base address: a net loss in the search algorithm, since we only have to look up the XT for the matching entry.
It's not faster than the current design: demonstrably, on paper, not just in concept. Whatever savings "could" be there are overshadowed by the costs, given the current design goal: compilation speed.
You could prove me wrong by writing and profiling the code, though. :)
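For concreteness, a rough C sketch of the two XT fetches being weighed against each other; the XT_OFFSET value and field layout are invented, since the real code is 6502 assembly inside FIND-NAME:

```c
#include <stdint.h>

#define XT_OFFSET 4   /* assumed 8-bit offset from the name pointer to the XT */

/* Current idiom: FIND-NAME already holds a 16-bit name pointer, so the XT is
 * one add and one fetch away, and only the matching entry ever pays for it. */
static uint16_t xt_from_name_pointer(const uint8_t *name_ptr) {
    const uint8_t *p = name_ptr + XT_OFFSET;
    return (uint16_t)(p[0] | (p[1] << 8));
}

/* Array design: the XT lives in a parallel array, so the search must keep an
 * index (or a cached base address) up to date on every step, even for entries
 * whose names don't match. */
static uint16_t xt_from_array(const uint16_t *xt_base, uint16_t index) {
    return xt_base[index];
}
```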
-
I hate to admit it, but
-
To measure compilation speed, I am going to time how long it takes to include the following files:
-
What if we kept track of the first unique feature of each word?
-
durexforth has never had wordlists. The biggest reasons not to are brevity and ease of implementation; the feature is optional and not strictly needed.
It now crossed my mind that there could be a concrete benefit to having optional wordlists: compilation speed. In practice, it might help FIND-NAME speed considerably if assembler words could be skipped during the search. (Exactly how much it would help would need to be measured; see the sketch below.)
On a tangent, I realize now that the traditional Ragsdale assembler, where e.g. # ,X ,Y are separate words, is helpful for reducing word count and thus compilation speed.
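A C sketch of the wordlist idea; the wordlist names, the linked-list representation, and find_name itself are illustrative (chosen for brevity), not the durexforth implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct word_header {
    struct word_header *link;
    const char         *name;
    uint16_t            xt;    /* execution token */
} word_header;

/* One list head per wordlist; FORTH and ASSEMBLER are kept separate so the
 * assembler vocabulary can be left out of ordinary searches. */
typedef struct {
    word_header *forth_words;
    word_header *assembler_words;
} dictionary;

/* Search only the wordlists in the given order, most recent definitions first. */
static const word_header *find_name(word_header *const *order, size_t n,
                                    const char *name) {
    for (size_t i = 0; i < n; i++)
        for (const word_header *w = order[i]; w != NULL; w = w->link)
            if (strcmp(w->name, name) == 0)
                return w;
    return NULL;
}
```

During normal compilation the search order would contain only forth_words, so the assembler words never cost FIND-NAME anything; only inside CODE definitions would assembler_words be added to the order.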