[RFC] Add a tokenizer as intermediate step #178
Comments
Instead of implementing a new lexer, isn't it possible to actually use Rust's lexer? (https://crates.io/crates/)
Specifically, rustc's lexer is now available as … The other option is to use …
I have way too much experience writing syn-based parsers. Here's a parser for the RON syntax using syn/proc_macro2: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=18dc90ed4b5244507cb6660fd2f39de7 Feel free to use it. Also, double-check me, I could've missed something. (EDIT: Added …)
This issue has had no activity in the last 180 days and will be closed in 7 days if no further activity occurs.
I think we can leave this closed; it seems outside the scope of this repository, and I've started something like this in ron-rs/ron-reboot.
Description
Add a tokenizer which emits `Token`s given a byte sequence. Those tokens will then be fed into the deserializer.
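A minimal sketch of what such a tokenizer could look like; the `Token` variants and the `tokenize` signature here are hypothetical illustrations, not the proposed API:

```rust
// Hypothetical token type for a RON-like syntax (sketch, not the real design).
#[derive(Debug, PartialEq)]
enum Token {
    Ident(String),
    Number(String),
    Punct(char),
}

// Tokenize a byte sequence into a flat list of tokens.
fn tokenize(input: &[u8]) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut i = 0;
    while i < input.len() {
        let c = input[i] as char;
        if c.is_whitespace() {
            i += 1;
        } else if c.is_ascii_alphabetic() || c == '_' {
            // Identifier: consume alphanumerics and underscores.
            let start = i;
            while i < input.len()
                && ((input[i] as char).is_ascii_alphanumeric() || input[i] == b'_')
            {
                i += 1;
            }
            tokens.push(Token::Ident(
                String::from_utf8_lossy(&input[start..i]).into_owned(),
            ));
        } else if c.is_ascii_digit() {
            // Number: consume consecutive digits (no floats/signs in this sketch).
            let start = i;
            while i < input.len() && (input[i] as char).is_ascii_digit() {
                i += 1;
            }
            tokens.push(Token::Number(
                String::from_utf8_lossy(&input[start..i]).into_owned(),
            ));
        } else {
            // Everything else becomes a single punctuation token.
            tokens.push(Token::Punct(c));
            i += 1;
        }
    }
    tokens
}
```

The deserializer would then match on `Token` values instead of inspecting raw bytes, which is where the simplification comes from.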
Motivation
This would simplify the logic in the deserializer, make testing easier, and make bugs less likely.
Additionally, as a second step, we could refactor the
Serializer
to emit tokens, too. If we do that, we might be able to address #175 by formatting the tokens; the pretty config would then be mostly irrelevant for serialization. And in a third step, we could even use this architecture to implement a formatter, simply by tokenizing an input file and pretty-printing the tokens.
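Such a formatter could be a single pass over the token stream; here is a minimal sketch, using a hypothetical, reduced `Token` type (not the proposed API):

```rust
// Hypothetical, reduced token type for this sketch.
#[derive(Debug)]
enum Token {
    Ident(String),
    Punct(char),
}

// Pretty-print a token stream: break and indent after '(' and ',',
// dedent before ')'. Purely illustrative formatting rules.
fn pretty(tokens: &[Token]) -> String {
    let mut out = String::new();
    let mut indent = 0usize;
    for tok in tokens {
        match tok {
            Token::Punct('(') => {
                out.push('(');
                indent += 1;
                out.push('\n');
                out.push_str(&"    ".repeat(indent));
            }
            Token::Punct(')') => {
                indent = indent.saturating_sub(1);
                out.push('\n');
                out.push_str(&"    ".repeat(indent));
                out.push(')');
            }
            Token::Punct(',') => {
                out.push(',');
                out.push('\n');
                out.push_str(&"    ".repeat(indent));
            }
            Token::Punct(c) => {
                out.push(*c);
                out.push(' ');
            }
            Token::Ident(s) => out.push_str(s),
        }
    }
    out
}
```

The point is that the pretty config reduces to a token-to-text rendering policy, shared by the serializer and a standalone formatter.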