libz compression for squashfs reading? #48
I have no plans of adding it right now, but it shouldn’t be hard. Have you moved the squashfs code into its own repository yet? (re: gokrazy/internal#3)
No.
If only I knew where to start... ;-)
Are you aware of https://github.com/diskfs/go-diskfs/tree/squashfs/filesystem/squashfs? Just stumbled upon it recently. Maybe the two of you could join forces.
Wasn’t aware before, thanks for the pointer. Joining implementations is quite a bit of effort, though, with no tangible benefit. It is definitely useful to have multiple implementations for comparing and fixing bugs, though :)
Here’s a patch which decompresses the content of files that fit into a single block (< 8192 bytes in size) and prints it to stderr, just as a proof of concept. Maybe that helps put you on the right track? :)

diff --git i/internal/squashfs/reader.go w/internal/squashfs/reader.go
index 647e5e2..ded6f68 100644
--- i/internal/squashfs/reader.go
+++ w/internal/squashfs/reader.go
@@ -2,6 +2,7 @@ package squashfs
import (
"bytes"
+ "compress/zlib"
"encoding/binary"
"fmt"
"io"
@@ -271,8 +272,44 @@ func (r *Reader) FileReader(inode Inode) (*io.SectionReader, error) {
if err != nil {
return nil, err
}
- //log.Printf("i: %+v", i)
- // TODO(compression): read the blocksizes to read compressed blocks
+ log.Printf("i: %+v", i)
+
+ {
+ blockoffset, offset := r.inode(inode)
+ br, err := r.blockReader(r.super.InodeTableStart+blockoffset, offset)
+ if err != nil {
+ return nil, err
+ }
+ rih := i.(regInodeHeader)
+ if err := binary.Read(br, binary.LittleEndian, &rih); err != nil {
+ return nil, err
+ }
+
+ // TODO: derive number of uint32s to read by dividing file size by block size
+ var sz uint32
+ if err := binary.Read(br, binary.LittleEndian, &sz); err != nil {
+ return nil, err
+ }
+ log.Printf("sz = %v", sz)
+
+ off := int64(rih.StartBlock) + int64(rih.Offset)
+ fr := io.NewSectionReader(r.r, off, int64(sz))
+ rd, err := zlib.NewReader(fr)
+ if err != nil {
+ return nil, err
+ }
+ // TODO: make this func return an io.ReadCloser, make callers call close
+ buf := make([]byte, rih.FileSize) // TODO: block size
+ n, err := rd.Read(buf)
+ // TODO: why are we getting io.EOF when reading precisely the file size?
+ log.Printf("Read = %v, %v", n, err)
+ log.Printf("content: %v (%v bytes)", string(buf[:n]), n)
+ if err != nil {
+ return nil, err
+ }
+ log.Printf("content: %v (%v bytes)", buf[:n], n)
+ }
+
switch ri := i.(type) {
case regInodeHeader:
off := int64(ri.StartBlock) + int64(ri.Offset)
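On the patch’s TODO about deriving the number of uint32 block sizes: in the squashfs on-disk format, a regular file’s inode is followed by one uint32 per data block, where bit 24 (1<<24) set means that block is stored uncompressed and the low 24 bits give its on-disk length. A minimal sketch of the arithmetic, under those format assumptions (the helper names here are hypothetical, not part of reader.go):

```go
package main

import "fmt"

// compressedBitBlock is the flag bit in a squashfs data-block size
// entry: if set, the block is stored uncompressed on disk.
const compressedBitBlock = 1 << 24

// blockCount returns how many full-or-partial data blocks a file of
// fileSize bytes occupies at the given block size (131072 is the
// mksquashfs default). Note: files whose tail lives in a fragment
// have one fewer block-size entry; that case is not handled here.
func blockCount(fileSize, blockSize uint32) int {
	return int((fileSize + blockSize - 1) / blockSize)
}

// decodeBlockSize splits a size entry into the on-disk length and a
// flag saying whether the block needs decompression.
func decodeBlockSize(entry uint32) (length uint32, compressed bool) {
	return entry & (compressedBitBlock - 1), entry&compressedBitBlock == 0
}

func main() {
	fmt.Println(blockCount(100000, 131072)) // fits in one block
	fmt.Println(blockCount(300000, 131072)) // spans three blocks
	length, compressed := decodeBlockSize(0x01000800)
	fmt.Println(length, compressed) // 2048-byte uncompressed block
}
```

As for the io.EOF TODO in the patch: a Go io.Reader is permitted to return n > 0 together with io.EOF on the same call, so getting io.EOF when reading exactly the file size is a success, not an error; treating io.EOF as fatal only when n == 0, or using io.ReadFull, avoids the spurious failure.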
So I've been working on my own project that's similar to @probonopd's project in that it's using AppImages, which use squashfs. When I tried to use your library with a squashfs that I created, it failed. I'm not too familiar with squashfs, and have only really been looking into this for a few hours, but to me it seems that it can't read a compressed inode table. Does that sound about right?
@CalebQ42 you could make an uncompressed squashfs with |
@probonopd I realized that right after I posted it, then I got distracted because I need to get to sleep :P |
Closing this ticket since there is now https://github.com/CalebQ42/squashfs. Thank you very much @CalebQ42. |
Hi, the squashfs Go implementation is very useful 👍
(distri/internal/squashfs/reader.go, line 275 in dff7503)
Will libz compression be eventually supported in squashfs reading? If so, I could probably make use of it for a Go implementation of the AppImage tools.
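Supporting gzip would presumably start by checking which codec the image uses: the squashfs superblock carries a uint16 compression id (1 = gzip/zlib, 2 = lzma, 3 = lzo, 4 = xz, 5 = lz4, 6 = zstd, per the on-disk format). A small sketch of mapping that field (the function is illustrative, not part of the library):

```go
package main

import "fmt"

// compressionName maps the squashfs superblock "compression" field
// to a human-readable codec name. Go's standard library covers only
// id 1 (via compress/zlib); the others need third-party packages.
func compressionName(id uint16) string {
	switch id {
	case 1:
		return "gzip (zlib)"
	case 2:
		return "lzma"
	case 3:
		return "lzo"
	case 4:
		return "xz"
	case 5:
		return "lz4"
	case 6:
		return "zstd"
	}
	return "unknown"
}

func main() {
	// id 1 is what mksquashfs writes by default.
	fmt.Println(compressionName(1))
}
```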