Feature request: read headers only #37

Open
dojeda opened this issue Feb 26, 2019 · 12 comments

@dojeda commented Feb 26, 2019

I am sorry for filing so many issues at once. What I really wanted was to implement the feature explained in this issue, but while doing so I ran into the problems/discussion points filed in #34, #35 and #36.

The feature I would like to propose is an extension to load_xdf that is significantly faster but reads only the headers.

My use case is the following: I am reading and importing all my XDF files into another database/filesystem, and I wanted to save the information present in the headers in a database (hence my discussion on #36). The problem is that I have a lot of files, and some of them are big (about 3 GB, probably recordings that were started and then forgotten, but it could be a long recording session).

The way I plan to implement this is to use the tag that identifies the chunk type, together with the chunk size in bytes (https://github.com/sccn/xdf/wiki/Specifications#chunk). When the tag is not of type FileHeader (1) or StreamHeader (2), I will move the file pointer to the beginning of the next chunk.

I managed to achieve this with the following code:

def load_xdf(filename,
             ...,
             headers_only=False):

            ...

            # read [Tag]
            tag = struct.unpack('<H', f.read(2))[0]
            log_str = ' Read tag: {} at {} bytes, length={}'.format(tag, f.tell(), chunklen)
            if tag in [2, 3, 4, 6]:
                StreamId = struct.unpack('<I', f.read(4))[0]
                log_str += ', StreamId={}'.format(StreamId)

            logger.debug(log_str)
            # ^^^ keeping this code to show where the modification goes
            # (chunklen was read just before the tag, earlier in the loop)

            # Quick read of headers only: when the chunk is not a header,
            # move the file pointer to the beginning of the next chunk
            if headers_only and tag not in (1, 2):
                offset = 2   # we already read 2 bytes for the tag
                if tag in (2, 3, 4, 6):
                    # in these cases, we also read 4 bytes for the StreamId
                    offset += 4
                # move chunklen - offset bytes forward, relative to the
                # current position (whence=1)
                f.seek(chunklen - offset, 1)
                continue

With this modification, I can cut the read time of a 3 GB file from about 2.5 minutes down to 5 seconds. Considering I have several hundred files (not that big, though), I am saving quite a lot of time.
If you guys agree with this modification, I would gladly make a PR for it.

dojeda added a commit to dojeda/xdf that referenced this issue Feb 26, 2019
@cboulay (Collaborator) commented Feb 26, 2019

In general I very much like this idea.

However, I've never used it myself: I've been told that LabRecorder supports adding a new stream after recording has already started, which would put the header for that stream somewhere in the middle of the file. Do you think you can accommodate that?

@dojeda (Author) commented Feb 26, 2019

Thanks for the input @cboulay.

I agree with this idea: headers can be in any part of the file. In my very first approach, I used the _scan_forward function to move between chunk boundaries (which is why I filed #34).
It worked well for my case because my headers are always at the beginning of the file. However, if a header came anywhere between two boundaries, I would have missed it.

The second approach, the one I am proposing above, accommodates your use case: the file pointer is simply moved to the beginning of the next chunk.

Please correct me if I'm wrong: if one adds a new stream after the recording has already started, it would write a new chunk with tag 2 (StreamHeader) containing its header info.
However, it would not work if the StreamHeader chunk were embedded inside the data chunk of another stream; I believe that case would lead to a corrupted file.
In other words, are the threads that write to the file protected by some mutex?
I'll try reading the LabRecorder source code to confirm this.

@dmedine (Contributor) commented Feb 26, 2019 via email

@dojeda (Author) commented Feb 27, 2019

I am not saying it would be useful to have headers embedded in data. I was just commenting on (and probably misunderstanding) Chadwick's remark about having new streams in the middle of the file. As I said before, this case is handled by my proposal since I follow the specification: each chunk is read (at the very minimum its size and tag), but the data part is ignored when it is not a header chunk.

@cbrnr (Contributor) commented Apr 4, 2019

I've implemented this functionality in parse_xdf (currently only in this Gist, which may later become part of some package, depending on where it fits best).

@dojeda (Author) commented Apr 4, 2019 via email

@cbrnr (Contributor) commented Apr 4, 2019

Yes, we should definitely consolidate our efforts - it doesn't make sense if everyone develops the same features over and over. I saw that you really only read the headers without parsing the XML in the stream header. In my version, I do read the XML and parse certain things, but not the raw data samples. I found that reading the samples is the step that takes time; everything else is very fast.
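
For illustration, here is a minimal sketch of this kind of header-only pass (a hypothetical helper, not the actual parse_xdf code), assuming the chunk layout from the spec: a variable-length chunk size, a 2-byte tag, a 4-byte StreamId for StreamHeader chunks, and then the XML payload:

import struct
import xml.etree.ElementTree as ET

def read_stream_headers(filename):
    """Collect the StreamHeader XML of every stream, skipping all data."""
    headers = {}
    with open(filename, 'rb') as f:
        assert f.read(4) == b'XDF:', 'not an XDF file'
        while True:
            nbytes = f.read(1)  # how many bytes encode the chunk length
            if not nbytes:
                break  # end of file
            chunklen = int.from_bytes(f.read(ord(nbytes)), 'little')
            tag = struct.unpack('<H', f.read(2))[0]
            if tag == 2:  # StreamHeader: 4-byte StreamId, then XML
                stream_id = struct.unpack('<I', f.read(4))[0]
                payload = f.read(chunklen - 6)  # tag (2) + StreamId (4) already read
                headers[stream_id] = ET.fromstring(payload.decode('utf-8'))
            else:
                f.seek(chunklen - 2, 1)  # skip the rest of the chunk
    return headers

This also handles a StreamHeader appearing in the middle of the file, since every chunk is visited.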

Furthermore, you introduced an option to select an XML-to-dict converter - did you have problematic use cases where _xml2dict did not work?

@dojeda (Author) commented Apr 4, 2019

I agree with you on consolidating efforts.

I don't really understand why you think that I only read the headers without the XML in the stream. That is not what's going on in my commit from Feb 26th: dojeda@e209638.
When the tag is not a header (tag 1 or 2), the chunk content is never parsed: all data chunks are caught here and their payload is replaced with a file seek that moves the pointer forward. Header chunks, including their XML, are read in full.

I read your gist: how do you propose to integrate it with pyxdf?
I feel the original pyxdf code may not be the most organized, but I would strongly prefer to keep it as intact as possible for a couple of reasons: there is a lot of know-how in the original code (a complete rewrite may reintroduce old bugs and introduce new ones), and it keeps backward compatibility.

Concerning the XML-to-dict converter, I don't have a specific problem, but more of a disagreement on how this information is decoded and how that makes the code that uses it less readable and especially inconvenient.
I introduced a different approach to this problem as a possible implementation of #36. During my usage and maintenance of code that uses pyxdf, I find a lot of unnecessary list operations (see the original issue for an example) that are a consequence of pyxdf implementing its own XML-to-dict decoder (which is actually a hard task, because there is no bijection between XML and JSON). This is why I propose an option where the XML is decoded following a standard approach (the Parker convention). Unfortunately, there are some temporary references and some bookkeeping that fail with this new approach, and again, I want to keep backward compatibility, so this XML decoding approach is optional in my fork (and the original one is still kept for the bookkeeping code).
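
To make this concrete, here is a minimal sketch of a Parker-style conversion (a hypothetical helper, not the exact code in my fork): leaf text becomes the value, repeated sibling tags are collected into lists, and attributes are ignored:

import xml.etree.ElementTree as ET

def parker(element):
    if len(element) == 0:        # leaf element: its text is the value
        return element.text
    result = {}
    for child in element:
        value = parker(child)
        if child.tag in result:  # repeated tag: promote the value to a list
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

info = ET.fromstring('<info><name>EEG</name><channels>'
                     '<channel><label>Fp1</label></channel>'
                     '<channel><label>Fp2</label></channel>'
                     '</channels></info>')
print(parker(info))
# {'name': 'EEG', 'channels': {'channel': [{'label': 'Fp1'}, {'label': 'Fp2'}]}}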

Finally, I am still not filing a PR for the issues that I filed a month ago because only one of them got a bit of traction (this one in particular), and I still have a new feature to implement concerning keeping the original timestamps even when there is a clock sync and a dejittering operation. I really need this feature because I have found some files where the timestamps have some nasty jitter, and while the dejitter algorithm does a great job of reconstructing a sensible timeline, we need to apply our own correction in some cases (i.e. we want to use pyxdf's dejittering most of the time, and some other times we want to roll our own correction).
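
To sketch the intent (the option name and the extra field are hypothetical, just to illustrate):

# hypothetical API: keep the raw timestamps next to the processed ones
streams, header = load_xdf('recording.xdf', keep_original_timestamps=True)
raw = streams[0]['time_stamps_original']  # hypothetical: as written in the file
fixed = streams[0]['time_stamps']         # dejittered / clock-synced timeline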

@cbrnr (Contributor) commented Apr 5, 2019

> I don't really understand why you think that I only read the headers without the XML in the stream. That is not what's going on in my commit from Feb 26th: dojeda/xdf@e209638.
> When the tag is not a header (tag 1 or 2), the chunk content is never parsed: all data chunks are caught here and their payload is replaced with a file seek that moves the pointer forward. Header chunks, including their XML, are read in full.

Alright, I missed the StreamHeader (2) part.

> I read your gist: how do you propose to integrate it with pyxdf?
> I feel the original pyxdf code may not be the most organized, but I would strongly prefer to keep it as intact as possible for a couple of reasons: there is a lot of know-how in the original code (a complete rewrite may reintroduce old bugs and introduce new ones), and it keeps backward compatibility.

Good question, I don't know yet. We could either modify load_xdf (as per your suggestion) or include parse_xdf. I find the chunk-based way of aggregating the data useful (e.g. for debugging purposes), which I think cannot be done with load_xdf, so it might be worth having both functions.

Regarding the XML parser, I think this should be discussed in a separate issue. Same for the original timestamps.

Since this repo has moved to https://github.com/xdf-modules/xdf-Python, we should also continue our discussion of this issue there.

@dojeda (Author) commented Apr 5, 2019

I didn't know this repo had moved elsewhere... I have been filing issues for a couple of months now and did not find any information regarding this.
I can move them along with the discussion if needed...
Perhaps a maintainer can point to, or explain, the roadmap for pyxdf?
@chkothe

@cbrnr (Contributor) commented Apr 5, 2019

@cboulay is the current maintainer AFAIK

@cboulay (Collaborator) commented Apr 5, 2019

It hasn't moved yet. I made a pull request proposing the move, and for that pull request to work I had to copy the code into the component submodules. Please take a look at the PR and let me know what you think. So far I've only received a little bit of feedback, but it's been positive!

Note that sccn/xdf remains and is still the home. Any documents related to the XDF file spec will stay here. What has moved is the Python code, the MATLAB code, and the newly added C++ libxdf, though the Python and MATLAB code are still linked here via submodules.

This separation is desirable for several reasons:

  • we can collect 'unofficial' tools related to XDF (e.g. libxdf)
  • we can give access to contributors so they can update the code faster than has been possible (this is a big problem for xdf)
  • however, changes in the submodules won't be reflected in upstream sccn/xdf until someone updates the submodule pointers (which requires write access to sccn/xdf), so there's no risk of contributors breaking something 'official'.

There are also some conveniences for build systems and continuous integration but those are relatively minor as long as the only 'testable' part is Python.
