
Split Reader into SliceReader and BufferedReader #425

Closed
wants to merge 8 commits

Conversation

999eagle
Contributor

This PR was split from #417.

This splits Reader into two new structs, SliceReader and IoReader, to better separate which kind of byte source the Reader uses to read bytes. The changes are based on #417 (comment).
A Reader<SliceReader> also explicitly doesn't have methods for buffered access anymore.
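For illustration, the shape of the split can be sketched like this. This is a minimal std-only sketch: the struct names mirror the PR, but the bodies and method names are invented for illustration and are not quick-xml's real code.

```rust
use std::io::BufRead;

/// Source wrapping a borrowed slice: reads can borrow directly from the
/// input, so no external buffer is needed (hypothetical simplified sketch).
struct SliceReader<'a> {
    input: &'a [u8],
    pos: usize,
}

impl<'a> SliceReader<'a> {
    /// Borrow everything up to the next delimiter directly from the slice.
    fn read_until(&mut self, delim: u8) -> Option<&'a [u8]> {
        if self.pos >= self.input.len() {
            return None;
        }
        let rest = &self.input[self.pos..];
        let end = rest.iter().position(|&b| b == delim).unwrap_or(rest.len());
        self.pos += end + 1;
        Some(&rest[..end])
    }
}

/// Source wrapping an arbitrary `BufRead`: data must be copied into a
/// caller-provided buffer, because the underlying reader owns its storage.
struct BufferedReader<R: BufRead> {
    reader: R,
}

impl<R: BufRead> BufferedReader<R> {
    fn read_until_into<'b>(&mut self, delim: u8, buf: &'b mut Vec<u8>) -> Option<&'b [u8]> {
        let start = buf.len();
        let n = self.reader.read_until(delim, buf).ok()?;
        if n == 0 {
            return None;
        }
        let mut end = buf.len();
        if buf[end - 1] == delim {
            end -= 1; // strip the delimiter, matching the slice variant
        }
        Some(&buf[start..end])
    }
}

fn main() {
    // Borrowed access: no buffer argument at all.
    let mut slice = SliceReader { input: &b"<a><b>"[..], pos: 0 };
    assert_eq!(slice.read_until(b'>'), Some(&b"<a"[..]));
    assert_eq!(slice.read_until(b'>'), Some(&b"<b"[..]));

    // Buffered access: the caller supplies the buffer, as with read_event_into().
    let mut buffered = BufferedReader { reader: &b"<a><b>"[..] };
    let mut buf = Vec::new();
    assert_eq!(buffered.read_until_into(b'>', &mut buf), Some(&b"<a"[..]));
}
```

The key point of the split is visible in the signatures: the slice variant returns data borrowed from the input itself, while the buffered variant can only return data borrowed from a buffer the caller provides.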

@999eagle 999eagle mentioned this pull request Jul 18, 2022
Collaborator

@Mingun Mingun left a comment


In the first commit (0fa2a8b) you've added a new XmlSource implementation which was then removed in the next commit -- it seems this impl could have not been introduced at all? Anyway, it is completely wrong to create an implementation just to be able to write test code. Tests should test real code that will be executed. I think you could just squash the first two commits.

The third commit does too much work -- it replaces usages of read_event_into() with read_event() and also does some other things. I would prefer to split those two actions into separate commits. Actually, I plan to make a PR with the replacements tonight, so you won't need to do the first part of that.

Also, after introducing the helper functions (input_from_bytes, etc.), the comments explaining positions in the input no longer point to the correct place in the input. That should be fixed.

@999eagle
Contributor Author

@Mingun Yes, I know. I added this implementation purely so the check! macro can use &mut () as a buffer instead of () to unify how buffers are passed to the underlying implementation. As you can see, this implementation just forwards to the real XmlSource implementation for &[u8] and thus does test real code.
I had the choice of either making this commit not compile at all, making the check! macro more complex and then removing that complexity in a later commit, or introducing this forwarding implementation of XmlSource. I thought the latter option was the best way to split my original larger commit into several smaller ones.

@codecov-commenter

codecov-commenter commented Jul 18, 2022

Codecov Report

Merging #425 (c972101) into master (ebbcce0) will increase coverage by 2.42%.
The diff coverage is 77.04%.

@@            Coverage Diff             @@
##           master     #425      +/-   ##
==========================================
+ Coverage   49.51%   51.93%   +2.42%     
==========================================
  Files          22       25       +3     
  Lines       13847    13449     -398     
==========================================
+ Hits         6856     6985     +129     
+ Misses       6991     6464     -527     
Flag Coverage Δ
unittests 51.93% <77.04%> (+2.42%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
benches/macrobenches.rs 0.00% <0.00%> (ø)
benches/microbenches.rs 0.00% <0.00%> (ø)
examples/read_buffered.rs 0.00% <0.00%> (ø)
examples/read_texts.rs 0.00% <0.00%> (ø)
src/lib.rs 12.33% <0.00%> (+0.06%) ⬆️
src/reader/buffered_reader.rs 68.11% <68.11%> (ø)
src/reader/slice_reader.rs 86.82% <86.82%> (ø)
src/reader/mod.rs 90.70% <96.56%> (ø)
src/de/mod.rs 77.77% <100.00%> (-0.46%) ⬇️
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@Mingun
Collaborator

Mingun commented Jul 18, 2022

Yes, I see that now. It is really hard to follow refactorings where some code is moved to another file and changed at the same time. I suggest rebasing this and following this pattern:

  • in the first commit, move the relevant parts of the code with minimal changes to reader/io_reader.rs and reader/slice_reader.rs. Make only the changes needed to keep the code compilable and the tests running. You can even make two separate commits (one for reader/io_reader.rs, one for reader/slice_reader.rs) if the changes are hard to read at once.
  • in the second and subsequent commits, do what you've done now

With such a structure, all the actual changes would be visible in the second and subsequent commits, which would make review much simpler.

Anyway, try not to mix different kinds of changes in one commit. If you need to change something that is not directly related to the changes in your commit, stash your work, make the necessary commit(s), pop the stash, and continue working on the original changes.
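The stash workflow described above looks like this in practice (the file names here are placeholders):

```shell
# In-progress work is on the tree; an unrelated fix is needed first.
git stash                 # set the uncommitted work aside; the tree is now clean
# ...make the unrelated fix on the clean tree...
git add src/reader/mod.rs
git commit -m "Unrelated fix, split into its own commit"
git stash pop             # restore the in-progress work and continue
```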

Very frequently the main work is not making the changes, but splitting them into several commits in the correct order. I personally have many private branches (~20) in my local working copy which contain working code, but they're just not ready for a PR yet. Preparing them will take several weeks of adaptation.

@dralley
Collaborator

dralley commented Jul 18, 2022

Code-wise, this all looks fine to me, I don't have any complaints that block the PR. I also checked out the branch, ran tests, went through the code for a few hours while experimenting with some changes on top of it.

re: commit structure, I agree about trying to do it that way in the first place, but I don't feel so strongly that it needs to block the PR, especially since fixing it at this point would essentially require redoing the work of the entire commit. It is probably easier to review than to untangle at this point.

@dralley
Collaborator

dralley commented Jul 18, 2022

So for the stuff I'm about to say, wait on an acknowledgement and 👍 from @Mingun before putting any effort into addressing it. I have a patch that you can start with or use as reference.

I think the abstraction is (very slightly, in a fairly easy-to-fix way) incorrect for the architecture we might want to have going forwards [0]. Basically IoReader should be BufferedReader, and SliceReader only used for &str.

The idea is that in the near future all decoding will be done internally rather than forcing the user to handle it, which aside from being much easier to use and maintain, would have better performance in the majority of cases. That means that if you want to parse a byte slice of an unknown encoding, you would need to either decode the data as UTF-8 before parsing it (and the SliceReader can then reliably assume that no buffer is necessary), or else use a BufferedReader that would do so internally.

Does that make sense?
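That decode-before-parse flow could be sketched roughly like this. Everything here is illustrative: `count_tags` is a stand-in for the borrowing slice parser, not a quick-xml function.

```rust
// Stand-in for the borrowing SliceReader: works on already-decoded &str.
fn count_tags(xml: &str) -> usize {
    xml.bytes().filter(|&b| b == b'<').count()
}

/// Parse a byte slice of unknown encoding: validate/decode to UTF-8 first,
/// then hand the borrowed &str to the slice parser (no buffer needed).
fn parse_unknown_bytes(raw: &[u8]) -> Result<usize, std::str::Utf8Error> {
    let xml = std::str::from_utf8(raw)?;
    // A real implementation would transcode non-UTF-8 input into an
    // internal buffer instead (the BufferedReader path).
    Ok(count_tags(xml))
}

fn main() {
    assert_eq!(parse_unknown_bytes(b"<a><b/></a>"), Ok(3));
    // Non-UTF-8 input is rejected rather than parsed from the raw slice:
    assert!(parse_unknown_bytes(&[0xFF, 0xFE]).is_err());
}
```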

And on a related note, the check! macro makes the assumption that SliceReader can be used with raw bytes. This, I haven't yet figured out how to address.

[0] @Mingun you never actually gave a direct thumbs-up or thumbs-down

@Mingun
Collaborator

Mingun commented Jul 18, 2022

The explanatory comments in tests that shifted in this PR need to be fixed anyway, so there should be a rebase regardless. Also, I hope that it will be rebased on top of #426.

I don't feel so strongly that it needs to block the PR, especially since fixing it at this point would essentially require redoing the work of entire commit

I think it will be better to spend some time polishing things rather than discovering in the future that some error was made and we were unable to detect it in time. In the end, we are in no particular hurry and can spend time on this. In the long run, an understandable history is more important.

I think the abstraction is (very slightly, in a fairly easy-to-fix way) incorrect for the architecture we might want to have going forwards [0]. Basically IoReader should be BufferedReader, and SliceReader only used for &str.

If you think so, that is another reason to wait some time before merging. The efforts we have been making recently to properly support encoding can affect a lot of things. I also suspended my work on namespaces because of this. I have not forgotten about your PRs, @dralley, I just haven't had time yet to write up my thoughts.

@dralley
Collaborator

dralley commented Jul 18, 2022

I just meant the overall idea, rather than any specific PR.

Anyway, #426 ought to address the concerns about commit #3, which just leaves #2.

I really do appreciate how much attention you give to clean commits, it does make your PRs very easy to review, but in this specific instance I'm not sure if the benefits are worth the significant time investment to break up commit #2. Our test coverage for this functionality is very good (88% for slice reader, 78% for io reader). If you go through the codecov report line by line, the only uncovered code is:

  • some trivial trait implementations e.g. Deref
  • read_to_end_into() and read_text_into() on io_reader, both of which are identical to the currently existing code
  • read_text() on slice_reader, which is trivial and almost identical

Would it be an acceptable compromise if, instead of spending a bunch of time splitting apart commit #2, @999eagle added more testing for these internal functions and got coverage above 90%? I think this might have a more significant long-term benefit.

@dralley
Collaborator

dralley commented Jul 18, 2022

I promised some patches:

With the second commit, you can see the error caused by check!

@Mingun
Collaborator

Mingun commented Jul 18, 2022

I really do appreciate how much attention you give to clean commits, it does make your PRs very easy to review, but in this specific instance I'm not sure if the benefits are worth the significant time investment to break up commit #2.

The main advantage of the suggested approach is that it simplifies rebases a lot. Instead of trying to manually apply patches to the copied and changed code (error-prone!), you would just need to copy the upstream code into the first move-only commit and then solve any conflicts with your subsequent changes. Because of competing changes from the switch to UTF-8, I think this PR may need to be rebased several times. So it makes sense to rewrite it in a way that makes this work easier.

@dralley
Collaborator

dralley commented Jul 18, 2022

That's true - but it's partly why I'm OK with letting the issues I mentioned be addressed as followups. I'd rather not force @999eagle to go through a bunch of rebasing when we can just get this merged, build on top of it and avoid the effort entirely.

So basically, IMO, these are the only 2 things that really ought to happen before merge

  • Rebase one last time on top of your PR that split off the read_event() / read_event_into() swap
  • Rename IoReader -> BufferedReader including file name.

Once it's merged the breakages stop being so much of an issue, and the other issues can be fixed independently

Collaborator

@Mingun Mingun left a comment


I've rebased it myself, applied my suggestions, introduced some intermediate commits for big changes that were originally in one commit, reordered code slightly to minimize the diff, and here is the result: master...Mingun:split-reader

I do not like it much. There are several reasons:

  • first, as I said, the diff is very hard to read. This is fixable and I fixed some of the problems, but it was a quick fix. I did not set myself the task of ensuring that every commit compiles, although I think this is a mandatory requirement. Tests are allowed to fail if they will be fixed in the next commit (although this is still extremely undesirable!), but the code must compile. This makes it easier to rebase and explore the code
  • the current implementation contains commits that do too much work. Again, I tried to split some of them into separate commits to simplify the diff and make it clear what is happening, but not all of them
  • the main concern is that after this change a lot of code is copied instead of reused. That was the main problem in the previous async PRs, but here it is even worse, because the same code is duplicated across different sync implementations

No, I cannot accept this in its current state. I'm holding it for now because I still think this is a good start, but it requires hard polishing work to get my approval.

I want:

  • maximum reuse
  • readable diffs
    • small commits (my preference is not more than 5 changed files with significant changes)
    • try to minimize non-trivial changes in each commit (ideally not more than 100-300 changed lines)
  • each commit should leave project in a compilable state
  • (optionally, but very preferably) CI checks should pass on each commit

Mingun and others added 3 commits July 20, 2022 12:00
This commit only moves code without significant changes (the only changes are:
- corrected imports
- added imports to the doc comments which had become inaccessible
)
Main code moved from `read_namespaced_event_into` to `resolve_namespaced_event_inner`
@999eagle
Contributor Author

999eagle commented Jul 21, 2022

Sorry for the silence, I'm also working on other things besides this. So if I understand correctly you want

  • every commit to compile and
  • either every commit to pass tests or tests to be fixed in the next commit

yet at the same time a somewhat big refactor of existing code where each commit doesn't change more than 100-300 lines. It would've been helpful to be told these requirements in any of your earlier reviews.

Quite frankly though, I'm really not sure what this would look like. In your rebased branch even the first commit has a diff of over 700 lines added and removed (not even including the move of reader.rs to reader/mod.rs!) and I don't think this can be meaningfully split into smaller commits. I agree that smaller commits may be easier to review, but the commits by themselves just don't make that much sense on their own unless seen in the context of the entire merge.

I also tried to reuse more code instead of copying, but read_namespaced_event_inner was pretty much the only good candidate for deduplication. Other reuse would require either

  • a big (and in my opinion less readable and debuggable) macro, which would become even worse for async support, or
  • an even bigger refactoring for the entire Reader to better separate byte-reading (and optional buffering) and decoding from parsing bytes into events
    to allow for the small but required changes between unbuffered and undecoded access to &str, buffered Read access and buffered AsyncRead access.

If you have concrete suggestions on how to continue here I'm open to that, but as of right now I honestly don't think your rebased branch could be improved to your standards, for the reasons I've given above.

@Mingun
Collaborator

Mingun commented Jul 21, 2022

Quite frankly though, I'm really not sure how this would look like. In your rebased branch even the first commit has a diff over over 700 lines added and removed

This is why I say "non-trivial changes". Simply moving code from one place to another with minimal changes (only to guarantee compilation) is of course acceptable. Frankly speaking, even 100 changed lines in a file is a very big change that is usually too hard to understand unless the changes come in big chunks. I personally try to stick to no more than 50 changed lines in such cases, but this is just a recommendation.

All these requirements are aimed at maintaining high code quality. Even high test coverage does not guarantee against errors, so I prefer that the average code reader can clearly understand what exactly has changed in the code without relying on technical tools. For example, I just found an error in my own code that was, as I thought, very well covered by tests. When I started refactoring, I accidentally encountered an error which had previously been hidden (#434).

  • an even bigger refactoring for the entire Reader to better separate byte-reading (and optional buffering) and decoding from parsing bytes into events

I was talking about the code that was already separated in a similar way, where all buffer-related methods were decoupled into the XmlSource trait. All other parsing code used the trait to glue source-independent code to source-dependent code without duplication, but this PR changes that. Yes, we probably could not use the same trait for sync and async code (but why not? We can use Poll even in sync code if we need to unify return types. That variant should be investigated), but sync-buffered + sync-borrowed could share the same code, as could async-buffered + async-borrowed.
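As an illustration of what such an investigation might try: a single trait with Poll as the unified return type, implemented once per byte source, with the parsing code written once against the trait. All names here are hypothetical stand-ins, not quick-xml's actual XmlSource trait.

```rust
use std::io::{BufRead, Read};
use std::task::Poll;

// Hypothetical stand-in for XmlSource. A sync source always returns
// Ready; an async implementation could return Pending instead.
trait ByteSource {
    fn poll_byte(&mut self) -> Poll<Option<u8>>;
}

// Borrowed-slice source.
impl ByteSource for &[u8] {
    fn poll_byte(&mut self) -> Poll<Option<u8>> {
        let s: &[u8] = *self; // copy the slice reference out
        match s.split_first() {
            Some((&b, rest)) => {
                *self = rest;
                Poll::Ready(Some(b))
            }
            None => Poll::Ready(None),
        }
    }
}

// IO-backed source.
struct IoSource<R: BufRead> {
    inner: R,
}

impl<R: BufRead> ByteSource for IoSource<R> {
    fn poll_byte(&mut self) -> Poll<Option<u8>> {
        let mut byte = [0u8; 1];
        match self.inner.read(&mut byte) {
            Ok(0) | Err(_) => Poll::Ready(None),
            Ok(_) => Poll::Ready(Some(byte[0])),
        }
    }
}

// Source-independent code written once against the trait.
fn count_tags<S: ByteSource>(src: &mut S) -> usize {
    let mut n = 0;
    while let Poll::Ready(Some(b)) = src.poll_byte() {
        if b == b'<' {
            n += 1;
        }
    }
    n
}

fn main() {
    let mut slice: &[u8] = b"<a><b/></a>";
    assert_eq!(count_tags(&mut slice), 3);

    let mut io = IoSource { inner: std::io::Cursor::new(b"<a><b/></a>".to_vec()) };
    assert_eq!(count_tags(&mut io), 3);
}
```

Whether such a Poll-based trait scales to real async waking is exactly the open question; this only shows that the sync-borrowed and sync-buffered paths can share the parsing code through one trait.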

If you have concrete suggestions on how to continue here

Unfortunately no; that is why I say that this problem requires investigation. I don't have time to do that myself right now, but I'll return to this problem in the future. I feel that there is a better way than the one in this PR.

@dralley
Collaborator

dralley commented Jul 21, 2022

@Mingun I think if you were to compare the files side-by-side, manually, like this:

[Screenshot: side-by-side comparison of the old and new reader implementations]

it would quickly be apparent how little has actually changed. The new IoReader code would be near identical to the existing code if not for the shared inner implementations which have now been added in a few places. The same is true of SliceReader, except for a few small differences due to the location of the borrowing. The actual review burden here is not all that high.

As for the duplication of code that remains, given that we are planning to rewrite the IoReader / BufferedReader code anyway, I believe de-duplicating it now would actually be premature because we don't yet know if, how or where the implementations will need to diverge again. If we do end up needing to do that, this work will have been wasted. The saying "duplication is far cheaper than the wrong abstraction" is applicable here.

My request, if you have no fundamental objections to the architecture itself (which remains very much similar to what we have currently overall), is that we merge this, finish the decoding work, and then evaluate what the appropriate level of code sharing will be and perform the unification at that time - once it is more clear what can and cannot be shared.

@dralley
Collaborator

dralley commented Jul 21, 2022

Although @999eagle, if you could please address the failing test, and rename IoReader to BufferedReader (I see that the filename has changed, but not the struct name), and fix the commit messages to reflect this, that would be excellent.

@999eagle
Contributor Author

I've changed the struct name and fixed the failing test.

@dralley dralley changed the title Split Reader into SliceReader and IoReader Split Reader into SliceReader and BufferedReader Jul 22, 2022
@dralley
Collaborator

dralley commented Jul 22, 2022

Thanks!

The last thing I think this needs is a changelog entry. Could you please apply

diff --git a/Changelog.md b/Changelog.md
index c305325..e1cb297 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -137,6 +137,9 @@
 - [#423]: All escaping functions now accepts and returns strings instead of byte slices
 - [#423]: Removed `BytesText::from_plain` because it internally did escaping of a byte array,
   but since now escaping works on strings. Use `BytesText::from_plain_str` instead
+- [#425]: Split the internal implementation of `Reader` into multiple files to better separate the
+  buffered and unbuffered implementations. The buffered methods, e.g. `read_event_into(&mut buf)`,
+  will no longer be available when reading from a slice.
 
 ### New Tests
 
@@ -167,6 +170,7 @@
 [#418]: https://github.com/tafia/quick-xml/pull/418
 [#421]: https://github.com/tafia/quick-xml/pull/421
 [#423]: https://github.com/tafia/quick-xml/pull/423
+[#425]: https://github.com/tafia/quick-xml/pull/425
 
 ## 0.23.0 -- 2022-05-08
 

@Mingun I have given the entire PR a thorough review, including manual comparisons of individual functions to the previous implementations. Since these files are about to be partially rewritten anyway, and this PR is effectively blocking that work, would you be willing to accept merging this? Deduplication is absolutely something that should be done if possible, but now is not the right time; it would only make the exploration and refactoring I want to do over the next week more difficult. We may even have new opportunities to do so afterwards that aren't available now.

Specifically, I think once we have the internal buffer, depending on how the implementation works out, it might not make sense to continue using user-provided ones. If that's the case we can ditch another chunk of the API surface and potentially consolidate most if not all of the implementation.

@dralley
Collaborator

dralley commented Jul 22, 2022

And @999eagle, I plan to put up a new draft PR in short order with part of those changes; if you have any suggestions about how to adjust the check! macro I would appreciate them.

@999eagle
Contributor Author

@dralley I've added an entry to the changelog. All I'd change in the check! macro would be something similar to this commit to allow for async/await to be used in the tests.

@dralley
Collaborator

dralley commented Jul 23, 2022

What I ended up doing was just moving those tests outside of the macro, since they don't make sense for both implementations.

@Mingun
Collaborator

Mingun commented Jul 24, 2022

First, let me apologize that this PR is taking such a long review, but I really think that all changes should have a rational explanation behind them. For now it is unclear to me why we should remove XmlSource and the benefits that it provides. Those benefits include the ability to write code that is independent of the way event data is borrowed from the source, which allows the same code to be written for both kinds of sources.

Sharing exactly that implementation with an async reader would probably be a challenge and may be impossible using the same trait, but it should be possible using a macro. Something like:

macro_rules! impl_methods {
  ( $($async:ident, $await:ident)? ) => {
    $($async)? fn read_bytes_until(&mut self, ...) {
      match self.fill_buf() $(.$await)? {
        ...
      }
    }
  };
}

// Sync-based reader
impl<R> XmlSource for Reader<R> {
  impl_methods!();
}

// async in traits is impossible for now
impl<R> AsyncReader<R> {
  impl_methods!(async, await);
}

// If even would required
impl<R> AsyncXmlSource for AsyncReader<R> {
  fn read_bytes_until(&mut self, ...) -> Poll<...> {
    // Trivial call non-trait method, implemented by a macro
    self.read_bytes_until()
  }
}

So I want to investigate possible solutions next week.


@dralley, you said on another PR:

Everything which touches those files is effectively blocked on that PR, and the alternative is forcing 999eagle to rebase it over and over and over again, which doesn't feel like a great way to encourage further contributions. And they're not minor rebases but terribly painful ones.

Yes, unfortunately that is true. But in my justification, I must say that I said from the very beginning that the process would not be fast, precisely because I already had other changes planned that intersect with any possible changes for async support. I ask you not to take it to heart, it's just a matter of priorities. In my opinion, we can live without asynchronous support, but correct namespace support is more in demand. Less error-prone handling of different encodings is also a more important task, because incorrect use of the current API could easily lead to wrong results and even to security vulnerabilities. And while that can be avoided, it's hard right now.

Also, precisely so that rebasing is not a difficult task, I recommend breaking up commits into small parts and trying not to mix changes with code additions/moves. Conflicts in simple moves are trivially resolved, and after that the conflicts in the actual changes are much easier to solve.


Specifically I think once we have the internal buffer, depending on how the implementation works out it might not make sense to continue using user-provided ones. If that's the case we can ditch another chunk of the API surface and potentially consolidate most if not all of the implementation

Unifying different code is always more challenging than splitting it. Therefore, I want to understand whether separation is really necessary at this stage.


As a final note, I should say that some of my concerns are still not addressed. I've talked about the incorrect alignment of the comments in tests that should explain where the position in the input would be, for example, here:

quick-xml/src/reader/mod.rs

Lines 974 to 975 in c3a07b6

let mut input = input_from_bytes(b"".as_ref());
// ^= 0

The position of those indicators should be adjusted in the commit that adds input_from_bytes.

@dralley
Collaborator

dralley commented Jul 24, 2022

The issue is, at least regarding the decoding work, that if there's no inner struct then there's nowhere to put the buffer(s?) and other related data for decoding - so it would have to go into the main Reader struct, and it would be there even when it's not needed. So having separate structs makes sense and makes that work a bit easier.

Not doing so might not be terrible. After all, there's also the situation where the Reader was constructed from bytes of unknown encoding (but is found to be UTF-8, so copying the data wouldn't be necessary in theory), where you would have that waste anyway. And the cost of an unused vec is only 3x8 bytes, plus perhaps some extra for other metadata specific to that implementation. But all of that feels like it would make the architecture much less clean. It's just a gut feeling.
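The 3x8-byte figure is the stack header of an empty Vec (pointer, capacity, length) on a 64-bit target; it can be checked directly:

```rust
use std::mem::size_of;

fn main() {
    // An unused Vec<u8> costs only its stack header: ptr + capacity + len,
    // i.e. three words (3 x 8 bytes = 24 bytes on a 64-bit target).
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());

    // No heap allocation happens until something is pushed:
    let v: Vec<u8> = Vec::new();
    assert_eq!(v.capacity(), 0);
}
```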

@dralley
Collaborator

dralley commented Jul 24, 2022

@999eagle In practical terms, hold off on rebasing the PR further until there's a consensus on the architecture. I think what we may end up doing is pulling the commits one or two at a time - I've now merged 1 and 7, and 2 is no longer needed after some other changes @Mingun made; the rest will have to wait until after the experimentation, I suppose.

@999eagle
Contributor Author

Alright, I'll hold off on working on this until further notice from you.

I fully agree with you @dralley that multiple structs holding the underlying data source are pretty much a necessity. @Mingun, that exact change was actually proposed by you in this comment, so I'm not entirely sure why you're opposed to it now. I also think that the implementations of the (what's now called) BufferedReader and a potential new AsyncBufferedReader are very likely to be similar enough to allow them to be implemented through a single macro, but I think buffered and unbuffered access are too different for a macro-based implementation to be simple enough that it can be read and understood. I'd make that change when (or rather, if) async support is actually implemented, as it wouldn't make sense to extract an implementation into a macro when that macro would be invoked only once.

@dralley
Collaborator

dralley commented Jul 25, 2022

also think that implementations between the (what's now called) BufferedReader and a potential new AsyncBufferedReader are very likely to be similar enough to allow them to be implemented through a single macro, but I think buffered and unbuffered access are too different for a macro-based implementation to be simple enough

I don't think async makes sense in situations where no IO is necessary to begin with, so we only need to worry about the buffered implementation. Code which doesn't block on IO is perfectly OK to call from async code without needing to be async itself.

An idea that I haven't entirely thought through, and I'm not sure if it would work - it might be possible to invert the IO model and have the user perform the IO themselves, "feeding" data to the parser from external user code at the same time as they pull events from it, which would avoid the need to have any asynchronous IO inside the library at all. I'm not entirely sure what that API would look like, though. Just something to explore.

Perhaps something like:

    let mut reader = Reader::new_manual();
    reader.trim_text(true);
    let mut buf = Vec::new();

    loop {
        match reader.read_event_into(&mut buf) {
            // ...
            Ok(Event::Eof) => break,
            Err(XmlError::OutOfData) => {
                let buffer = file.async_fill_buf().await?;
                reader.feed(buffer);
               // Somehow communicate back the # of bytes the reader has processed
            }
            Err(e) => panic!("Error at position {}: {:?}", reader.buffer_position(), e),
            _ => (),
        }
    }

@Mingun
Collaborator

Mingun commented Jul 25, 2022

I don't think async makes sense in situations where no IO is necessary to begin with, so we only need to worry about the buffered implementation. Code which doesn't block on IO is perfectly OK to call from async code without needing to be async itself.

I was just going to write the same thing :)

@Mingun that exact change was actually proposed by you in this comment, so I'm not entirely sure why you're opposed to that now.

Yes, I know; I proposed it as an alternative to your very complicated first async implementation, which was supposed to solve the conflict problem with the implemented traits. But the more I studied asynchronous wisdom, the more I realized that in fact there is probably no conflict in the first place. As @dralley wrote, we actually do not need .read_event_async() at all, because when you have your XML in a byte buffer / string you don't need IO. So that leaves us with three possible implementations:

  • for &[u8] -- .read_event()
  • for BufRead -- .read_event_into()
  • for AsyncBufRead (by one for each asynchronous library) -- .read_event_into_async()

Also, I mentioned at the very beginning that I welcome experimentation in search of a better design. This is just one iteration of that; I looked into it and it seemed to me that it is not yet finished.

"feeding" data to the parser from external user code

While I think that this is possible to implement, I also think that it leads to an uglier design. Actually, async code tries to move the user away from the need to do "feeding". So I think we should focus on the traditional async design.

@Mingun Mingun mentioned this pull request Jul 31, 2022
@Mingun Mingun closed this in #450 Aug 14, 2022