
x/crypto/openpgp: newly introduced buffer has caused memory util increases #37813

Closed
@ralfonso

Description


What version of Go are you using (go version)?

$ go version
go version go1.13.3 linux/amd64

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE=
GOENV="
GOEXE=""
GOFLAGS="-mod=vendor"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build787361652=/tmp/go-build -gno-record-gcc-switches"

What did you do?

I built my application with x/crypto code that includes the change introduced by https://go-review.googlesource.com/c/crypto/+/181121. This application uses x/crypto/openpgp to encrypt very large files (ranging from GiB to TiB). Specifically https://godoc.org/golang.org/x/crypto/openpgp#SymmetricallyEncrypt.

My application reads input from the network and uses io.Copy/io.CopyBuffer to pass data to the encryption io.Writer. The []byte used in these operations is very large (512 MiB), and we double-buffer so that we can fill from the network and flush to the PGP encryption writer simultaneously.
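
For context, the write path looks roughly like this (a simplified sketch with hypothetical names, not my actual code; the real application double buffers, but the openpgp writer sees the same very large writes):

package main

import (
	"io"
	"io/ioutil"
	"log"
	"math/rand"

	"golang.org/x/crypto/openpgp"
)

// encryptStream mirrors the write path described above: fill a large copy
// buffer from the source and flush it into the openpgp plaintext writer.
func encryptStream(dst io.Writer, src io.Reader, passphrase []byte) error {
	plaintext, err := openpgp.SymmetricallyEncrypt(dst, passphrase, nil, nil)
	if err != nil {
		return err
	}
	buf := make([]byte, 512*1024*1024) // 512 MiB copy buffer
	if _, err := io.CopyBuffer(plaintext, src, buf); err != nil {
		plaintext.Close()
		return err
	}
	return plaintext.Close()
}

func main() {
	// Stand-in for the network source: 1 GiB of pseudo-random data.
	src := io.LimitReader(rand.New(rand.NewSource(1)), 1<<30)
	if err := encryptStream(ioutil.Discard, src, []byte("secret")); err != nil {
		log.Fatal(err)
	}
}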

Since this change (golang/crypto@32487ec) added a new intermediate buffer to ensure that the first forwarded write is at least 512 bytes, I've observed much higher memory utilization in my application.

Repro:

(Apologies that this isn't a Go Playground link; I was unable to determine how to disable the garbage collector there, which seems necessary to demonstrate the heap growth.)

test file

package main

import (
	"io"
	"io/ioutil"
	"math/rand"
	"runtime"
	"testing"
	"time"

	"golang.org/x/crypto/openpgp"
)

func TestOpenPGPMemoryUtil(t *testing.T) {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	buf := make([]byte, 512*1024*1024)
	_, err := io.ReadFull(r, buf)
	if err != nil {
		panic("err")
	}

	wr, err := openpgp.SymmetricallyEncrypt(ioutil.Discard, []byte("secret"), nil, nil)
	if err != nil {
		panic(err)
	}

	n, err := wr.Write(buf)
	if err != nil {
		panic(err)
	}

	ms := new(runtime.MemStats)
	runtime.ReadMemStats(ms)
	t.Logf("n: %d, heap: %d", n, ms.HeapAlloc)
}
# start with a new project in a working Go path containing the above test file.
$ go mod init

$ go get golang.org/x/crypto@2aa609cf4a9d7d1126360de73b55b6002f9e052a
go: finding golang.org/x/crypto 2aa609cf4a9d7d1126360de73b55b6002f9e052a

$ GOGC=off go test -v 
=== RUN   TestOpenPGPMemoryUtil
--- PASS: TestOpenPGPMemoryUtil (5.74s)
    main_test.go:34: n: 536870912, heap: 1073933776
PASS
ok      github.com/ralfonso/openpgp-repro       5.750s

$ go get golang.org/x/crypto@32487eceac714ab927b55a454631e9d449a81b55
go: finding golang.org/x/crypto 32487eceac714ab927b55a454631e9d449a81b55

$ GOGC=off go test -v
=== RUN   TestOpenPGPMemoryUtil
--- PASS: TestOpenPGPMemoryUtil (6.00s)
    main_test.go:34: n: 536870912, heap: 2147659296
PASS
ok      github.com/ralfonso/openpgp-repro       6.013s

What did you expect to see?

No change.

What did you see instead?

A doubling of memory utilization, because the []byte buffer inside partialLengthWriter is unbounded.

Further investigation

It appears that partialLengthWriter.Write() tries to be intelligent about the first write, with logic to bypass the buffer if the incoming []byte is at least the required 512 bytes. However, I believe this is defeated by the setup of the symmetrically encrypted writer, which writes a single byte here. As a result, the first user call to Write() always sees this condition evaluate to true (since len(w.buf) == 1) and grows w.buf to the entire size of the provided p.
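
To illustrate, here is a minimal toy of that condition as I understand it (this is not the actual x/crypto source, just the shape of the behavior; the names are made up):

package main

import "fmt"

const minFirstPartialWrite = 512

// toyPartialWriter mimics, in simplified form, the staging buffer described
// above. The buffer has no upper bound on its size.
type toyPartialWriter struct {
	buf []byte
}

func (w *toyPartialWriter) Write(p []byte) (int, error) {
	if len(w.buf) > 0 || len(p) < minFirstPartialWrite {
		// The 1-byte setup write leaves buf non-empty, so the first user
		// Write lands here and buf grows to roughly len(p).
		w.buf = append(w.buf, p...)
		return len(p), nil
	}
	// A large first write with an empty buffer would be forwarded directly.
	return len(p), nil
}

func main() {
	w := &toyPartialWriter{}
	w.Write([]byte{0xC0})                // the single byte written during setup
	w.Write(make([]byte, 512*1024*1024)) // the application's 512 MiB write
	fmt.Printf("staging buffer grew to %d bytes\n", len(w.buf))
}

Running this prints a staging buffer of 536870913 bytes, which matches the extra ~512 MiB of heap in the repro above.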

My humble suggestion is to add a constant such as maxFirstPartialWrite, mirroring the new minFirstPartialWrite, that limits the size of the intermediate buffer.
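
A rough sketch of what I mean (hypothetical constant name and a toy type, not a patch against the real code):

package main

import "fmt"

const (
	minFirstPartialWrite = 512
	maxFirstPartialWrite = 512 // hypothetical cap on the staging buffer
)

// cappedWriter stages only enough bytes to satisfy the 512-byte minimum for
// the first chunk; the remainder of a large write is forwarded directly
// instead of being copied into the buffer.
type cappedWriter struct {
	buf []byte
}

func (w *cappedWriter) Write(p []byte) (n int, err error) {
	if len(w.buf) < minFirstPartialWrite {
		take := maxFirstPartialWrite - len(w.buf)
		if take > len(p) {
			take = len(p)
		}
		w.buf = append(w.buf, p[:take]...)
		n, p = take, p[take:]
		if len(w.buf) < minFirstPartialWrite {
			return n, nil // still accumulating the first chunk
		}
		// ... flush w.buf as the first partial-length chunk here ...
	}
	// ... forward the remainder of p directly, without buffering it ...
	return n + len(p), nil
}

func main() {
	w := &cappedWriter{}
	w.Write([]byte{0xC0})                // the 1-byte setup write
	w.Write(make([]byte, 512*1024*1024)) // the 512 MiB user write
	fmt.Printf("staging buffer is only %d bytes\n", len(w.buf))
}

With a cap like this, the staging buffer stops at 512 bytes no matter how large the caller's writes are.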
